DiffServ Aware Video Streaming over 3G Wireless Networks • Julio Orozco, David Ros • Novembre Project, Sophia Antipolis, 26/11/2004
Agenda • Context • The DiffServ-aware streaming approach • Quality Assessment • Performance Evaluation
Agenda • Context • Overview • Technical challenges • Requirements • The DiffServ-aware streaming approach • Quality Assessment • Performance Evaluation
Example Scenario • I’m at the airport and have a two-hour wait ahead … • Real Madrid faces Milan in the Champions League final … • After a simple procedure, I start watching the match … • Transmitted from an Internet server … • On my mobile terminal … • With decent quality and an affordable price.
Example Scenario [diagram: video server, Internet, UMTS network, video client (mobile terminal)]
Technical Challenges • UMTS radio link • High and variable delay/jitter • Variable and limited bandwidth • Heterogeneous architecture • Internet + UMTS • Business/billing models
Requirements • Video compression • Highly efficient • Error resilient • Network architecture • Affordable QoS • Integration • Video information <-> Network • Internet <-> UMTS
Agenda • Context • The DiffServ-aware streaming approach • Concept • H.264 • DiffServ architecture • Semantic mapping • Quality Assessment • Performance Evaluation
Our Approach • DiffServ-aware streaming of H.264 video • Pseudo-subjective quality assessment • Goals • Reduce visual distortion caused by network losses (induced by variable bandwidth and delay) • Validate performance in terms of visual quality
DiffServ Aware Streaming of H.264 Video • H.264 • State-of-the-art standard codec • High efficiency • Improved network adaptation and error resilience • DiffServ IP network • Simple, scalable QoS at the IP level • Mapping • Video semantics <-> DiffServ packet priorities
H.264 Video Codec • State of the art (May 2003) • High compression efficiency • 50% rate gain against MPEG-2 • 30% against MPEG-4 • Designed for network applications • Network adaptation layer (NAL) • Novel error resilience features
DiffServ Architecture • AF prioritized discard • Three packet priorities per class • Green • Yellow • Red • Under congestion, red packets are discarded first, then yellow packets, and finally green packets.
AF Prioritized Discard • RIO algorithm
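To make the prioritized-discard idea concrete, here is a minimal sketch, assuming a single EWMA queue average and per-color RED-style thresholds. The threshold values and helper names (THRESHOLDS, drop_probability, accept) are illustrative rather than taken from the presentation, and real RIO variants keep separate averages per color.

```python
import random

# Sketch of RIO-style prioritized discard within one AF class.
# Thresholds are illustrative, not the values used in this work.
THRESHOLDS = {                 # (min_th, max_th, max_p) in packets
    "green":  (40, 70, 0.02),
    "yellow": (25, 50, 0.05),
    "red":    (10, 30, 0.10),  # lowest thresholds: red is discarded first
}

def update_avg(avg_q, inst_q, wq=0.0017):
    """EWMA of the queue size; wq controls reactiveness (see the Wq discussion)."""
    return (1.0 - wq) * avg_q + wq * inst_q

def drop_probability(avg_q, color):
    min_th, max_th, max_p = THRESHOLDS[color]
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def accept(avg_q, color):
    """Return True if the arriving packet of the given color is enqueued."""
    return random.random() >= drop_probability(avg_q, color)

# Per-packet usage (illustrative): update the average, then test acceptance.
avg = 0.0
for q_len, color in [(10, "green"), (35, "red"), (60, "yellow")]:
    avg = update_avg(avg, q_len)
    enqueue = accept(avg, color)
```

Because the red thresholds are the lowest, red packets start being dropped while green traffic is still untouched, which is precisely the behaviour the semantic mapping exploits.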
Semantic Mapping • Original idea (MPEG-2) • Video is transported in a single AF class • AF packet priorities <-> MPEG frame types • Reduces visual distortion caused by losses
DiffServ Mapping of H.264 • General strategy • Map coarse syntax elements in a single AF class • Take advantage of H.264 advanced network adaptation and error resilience features • Slices • Flexible Macroblock Ordering • Data Partitioning
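As an illustration of what such a mapping could look like, the sketch below assigns hypothetical H.264 element categories to the three drop precedences of a single AF class. The exact strategies evaluated in the project (for instance those based on data partitioning or FMO) may differ; the dictionaries and the mark() function are assumptions for the example.

```python
# One possible coarse mapping from H.264 stream elements to AF drop
# precedences; illustrative only, not the project's actual mapping.
PRIORITY_BY_ELEMENT = {
    "parameter_set": "green",   # SPS/PPS: loss breaks the whole stream
    "idr_slice":     "green",   # intra refresh points
    "p_slice":       "yellow",  # reference slices
    "b_slice":       "red",     # non-reference slices, least visual impact
}

DSCP = {"green": "AF11", "yellow": "AF12", "red": "AF13"}  # one AF class

def mark(element_type: str) -> str:
    """Return the DSCP to set on packets carrying this element."""
    color = PRIORITY_BY_ELEMENT.get(element_type, "red")
    return DSCP[color]
```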
Agenda • Context • The DiffServ-aware streaming approach • Quality Assessment • Motivation • Classical methods • Pseudo-subjective assessment • Performance Evaluation
Motivation • Streaming in Internet/UMTS • Distortion = f(compression, network losses) • Network losses = f(congestion, rate, delay, jitter)
Motivation • We need to measure the quality of the streamed signals • Does DiffServ-Aware Streaming yield a better quality? • Which mapping strategy is better? (from a perceived-quality point of view)
Classical Quality Assessment • Subjective: reflects human perception; difficult; expensive; not feasible in real time • Objective: automated; repeatable; does not necessarily reflect human perception; requires access to the original signal; can be computer-intensive
Pseudo-Subjective Assessment • Novel Approach • Based on Neural Networks • Link network and coding parameters to human perception • MOS = f (network, coding)
Pseudo-Subjective Assessment • Methodology • Identification of the quality-affecting parameters • Generation of distorted samples and recording of the parameter values • Subjective assessment of the distorted samples • NN training and validation
Performance Evaluation: Roadmap • Specification • Preliminary evaluation (NS-2 simulation, wired scenario, objective quality assessment) • Development of UMTS simulation models/tools • UMTS scenario (NS-2 simulation) • Pseudo-subjective quality assessment 1 (Markov-chain loss simulator, subjective assessment, neural network training) • Pseudo-subjective quality assessment 2 (prediction with the trained neural network)
Roadmap (current step: Preliminary Evaluation)
Preliminary Evaluation • Goal • Verify that our proposal effectively yields a better visual quality than plain best-effort • NS-2 simulation • Wireline scenario • Objective quality assessment • ITS impairment metric (ANSI standard)
Preliminary Evaluation • Results: visual impairment
Roadmap (current step: Development of UMTS Simulation Models & Tools)
Development of UMTS Simulation Models & Tools • Goal • NS link object with variable bandwidth and delay • Tradeoff between simplicity and realism
Target Abstraction [diagram: video server and Internet, UMTS backbone (GGSN, SGSN), RAN and mobiles; abstracted as a video source plus background sources feeding a DiffServ queue at rtr0, followed by a UMTS link to the mobile terminal] • UMTS link • Low multiplexing (1-5 flows) • Variable bandwidth • Variable delay
Bandwidth Oscillation • A single mobile in HS-DSCH [plot: bandwidth (bit/s) vs. time (s)]
Approach • Markov-chain model • One state per bandwidth level • Transitions possible between all states [state diagram: states 0, 1, ..., n with transition probabilities P(i,j) between every pair of states]
Model Definition • We need to define: • Number and values of bandwidth levels • Transition period • Transition probability matrix [plot: bandwidth (bit/s) vs. time (s)]

Transition probability matrix (rows: from state, columns: to state):
       0        1        2       ...   n
0      P(0,0)   P(0,1)   P(0,2)  ...   P(0,n)
1      P(1,0)   P(1,1)   P(1,2)  ...   P(1,n)
2      P(2,0)   P(2,1)   P(2,2)  ...   P(2,n)
...
n      P(n,0)   P(n,1)   P(n,2)  ...   P(n,n)
Model Solution • Trace-based measurement • Run simulations with the Eurane UMTS extensions • Measure transitions • Three "variability" scenarios • Low, medium, high • Combination of number of users, speed and distance • One transition matrix per scenario
First Model • Transition period: 20 ms • 12 bandwidth levels (kbit/s): 0, 208, 318, 400, 680, 920, 1,272, 1,848, 2,136, 2,632, 3,040, 3,600 • Mean bandwidth: 290 kbit/s
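A minimal sketch of how such a chain can be sampled, assuming the trace-derived transition matrix is available as a row-stochastic list of lists with matrix[i][j] = P(i, j); the function name and interface are illustrative.

```python
import random

# Sketch of the first bandwidth model: 12 levels, one state each, a new
# state drawn every 20 ms. The transition matrix is measured from Eurane
# traces (one matrix per variability scenario) and is not reproduced here.
LEVELS_KBPS = [0, 208, 318, 400, 680, 920, 1272, 1848, 2136, 2632, 3040, 3600]
PERIOD_S = 0.020  # 20 ms transition period

def bandwidth_process(matrix, duration_s, start_state=0, seed=None):
    """Yield (time_s, bandwidth_kbps) samples of the Markov chain."""
    rng = random.Random(seed)
    state, t = start_state, 0.0
    while t < duration_s:
        yield t, LEVELS_KBPS[state]
        # Draw the next state according to row `state` of the matrix.
        state = rng.choices(range(len(LEVELS_KBPS)), weights=matrix[state])[0]
        t += PERIOD_S
```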
Model Implementation • Main issue in NS-2: • Packet scheduling when bandwidth goes to zero • Solved!
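The slide does not detail how the zero-bandwidth scheduling issue was solved; one plausible approach, sketched below under that assumption, is to serialize the head-of-line packet piecewise across bandwidth states so that a 0 kbit/s state only adds delay.

```python
def transmit_time(packet_bits, bandwidth_states, period_s=0.020):
    """Time to serialize one packet across successive bandwidth states (bit/s).

    Zero-bandwidth states add delay without progress, so the packet is
    effectively frozen until the chain leaves the 0 kbit/s level.
    """
    sent, t = 0.0, 0.0
    for bw in bandwidth_states:             # one value per 20 ms period
        if bw > 0:
            remaining = packet_bits - sent
            if remaining <= bw * period_s:  # packet finishes within this period
                return t + remaining / bw
            sent += bw * period_s
        t += period_s                       # zero bandwidth: only time passes
    return t                                # sketch: states exhausted
```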
Roadmap (current step: Pseudo-Subjective Quality Assessment 1)
Quality-Affecting Parameters • Per-color packet loss rate • Green • Yellow • Red • Mean green loss burst size • Coding/mapping strategy
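For illustration, these parameters could be computed from a per-packet trace as sketched below; the (color, lost) trace format and the function names are assumptions, not the project's actual tooling.

```python
from itertools import groupby

# Sketch: computing the quality-affecting parameters from a per-packet trace.
# Each trace entry is assumed to be (color, lost), with color in
# {"green", "yellow", "red"} and lost a boolean.
def loss_rates(trace):
    rates = {}
    for color in ("green", "yellow", "red"):
        pkts = [lost for c, lost in trace if c == color]
        rates[color] = sum(pkts) / len(pkts) if pkts else 0.0
    return rates

def mean_green_burst_size(trace):
    """Average length of consecutive runs of lost green packets."""
    green = [lost for c, lost in trace if c == "green"]
    bursts = [len(list(run)) for lost, run in groupby(green) if lost]
    return sum(bursts) / len(bursts) if bursts else 0.0
```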
Example Generation • 100 distorted clips • Each clip is associated with a combination of parameter values • Losses generated with a Markov-chain loss simulator (sketched below)
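A minimal sketch of such a loss simulator, assuming a two-state (Gilbert-style) chain parameterized by a target loss rate and mean burst length; the real simulator may use more states and drive the three packet colors separately.

```python
import random

# Two-state Markov (Gilbert-style) loss simulator: derive the transition
# probabilities from the target loss rate and mean burst size, then walk
# the chain once per packet. Default parameter values are illustrative.
def gilbert_losses(n_packets, p_loss=0.02, mean_burst=2.0, seed=None):
    """Yield True for each lost packet, False otherwise."""
    rng = random.Random(seed)
    p_bad_to_good = 1.0 / mean_burst
    p_good_to_bad = p_loss * p_bad_to_good / (1.0 - p_loss)
    bad = False
    for _ in range(n_packets):
        yield bad
        bad = rng.random() < ((1 - p_bad_to_good) if bad else p_good_to_bad)
```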
Subjective Assessment • 20 assessors rated the 100 clips
Learning: Training the Neural Network [diagram: examples, i.e. network and source parameters together with the corresponding subjective grade, are fed to the training algorithm of a Random Neural Network]
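The actual learning step uses a Random Neural Network; as a stand-in to show the shape of the data flow, the sketch below trains a generic scikit-learn regressor on the parameter/MOS pairs. The feature layout, file names and 80/20 split are assumptions for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# One row per distorted clip (hypothetical layout):
# [green_loss, yellow_loss, red_loss, green_burst, mapping_id]
X = np.loadtxt("clip_parameters.csv", delimiter=",")  # hypothetical file
y = np.loadtxt("clip_mos.csv", delimiter=",")         # mean of the 20 grades

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000)
model.fit(X[:80], y[:80])           # train on 80 clips ...
print(model.score(X[80:], y[80:]))  # ... validate on the remaining 20
```

Prediction on new data (the second assessment step) then reduces to model.predict(new_parameters).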
Roadmap (current step: UMTS Scenario)
Topology [diagram] • Video clip: 318 kbit/s, 10 s, CIF, 15 fps, 60-byte packets • Video source and background sources (Pareto/TCP, mean rate 1 Mbit/s) connected through 10 Mbit/s links (5 ms, 10 ms and 15 ms delays) to a DiffServ RIO queue • UMTS link to the mobile terminal: varying bandwidth, 200 ms downlink delay, 200 ms uplink delay
Scenarios • Best Effort • AF • Two threshold models • Overlapped (G-RIO) • Scattered (RIO) • Three values of Wq • 0.0017 ("normal") • 0.5 • 1 (maximum reactiveness)
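A toy illustration of why Wq controls reactiveness: the RED/RIO average queue is the EWMA avg <- (1 - Wq) * avg + Wq * q, so Wq = 1 makes the discard decision track the instantaneous queue size. The queue samples below are made up.

```python
# Compare how fast the EWMA follows a sudden queue build-up for the three
# Wq settings used in the scenarios.
def ewma_trace(queue_samples, wq):
    avg, out = 0.0, []
    for q in queue_samples:
        avg = (1.0 - wq) * avg + wq * q
        out.append(avg)
    return out

burst = [0] * 5 + [50] * 5          # sudden congestion
for wq in (0.0017, 0.5, 1.0):
    print(wq, [round(a, 1) for a in ewma_trace(burst, wq)])
```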
Roadmap (current step: Pseudo-Subjective Quality Assessment 2)
Prediction: Using the Neural Network [diagram: new network data (simulation output) and source data are fed to the trained neural network, which outputs a predicted subjective score]
Conclusions • DiffServ-aware streaming of H.264 video can help reduce the visual distortion caused by drastic bandwidth variations in UMTS • RIO thresholds affect visual quality • RIO must be highly reactive: increasing Wq makes the discard decision follow the instantaneous rather than the average queue size
Perspectives • Introduce delay oscillations in the UMTS link model • Detailed study of RIO parameters • Introduce losses due to excessive delay
Questions • Thank you!