This paper discusses the buffer bloat effect caused by adaptive video flows and introduces SABRE, a client-based technique to mitigate this problem. Experimental results show significant reductions in queuing delays.
SABRE: A client-based technique for mitigating the buffer bloat effect of adaptive video flows
Ahmed Mansy, Mostafa Ammar (Georgia Tech), Bill Ver Steeg (Cisco)
What is buffer bloat?
• The TCP sender tries to fill the pipe by increasing its congestion window (cwnd)
• Ideally, cwnd should grow to the bandwidth-delay product, BDP = C x RTT (worked example below)
• TCP uses packet loss to detect congestion, and only then reduces its rate
• Large buffers increase queuing delays and also delay loss events
• The combination of TCP and large buffers therefore produces significantly high queuing delays
[Diagram: client and server connected through a bottleneck link of capacity C bps, with round-trip time RTT]
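As a worked example, using the 6 Mbps / 100 ms RTT bottleneck of the testbed later in this deck:

$$\text{BDP} = C \times \text{RTT} = 6\,\text{Mbps} \times 100\,\text{ms} = 600\,\text{kbit} \approx 75\,\text{KB} \approx 50 \text{ full-size (1500 B) packets}$$

A queue much larger than this (e.g., the 256-packet tail-drop queue used later) adds delay without adding throughput.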
DASH: Dynamic Adaptive Streaming over HTTP
• Video is split into short segments, each available at several bitrates (here 350, 600, 900, and 1200 kbps) listed in a manifest
• The DASH client fetches segments from a standard HTTP server and adapts the bitrate to the measured download rate
• Playback starts with an initial buffering phase; once the playout buffer fills (100%), the player enters a steady state with an On/Off download pattern (sketched below)
[Figure: download rate over time, showing the initial buffering phase followed by the On/Off steady state]
S. Akhshabi et al., "An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP", MMSys '11
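A minimal sketch of that steady-state On/Off loop (the helper names and buffer target are illustrative, not any player's actual code):

```python
import time

BUFFER_TARGET = 30.0   # seconds of video to keep buffered (assumed)

def on_off_loop(fetch_segment, buffered_seconds):
    """Steady-state loop of a traditional DASH player (sketch).

    fetch_segment() downloads one segment as fast as TCP allows;
    buffered_seconds() reports how much video sits in the playout buffer.
    """
    while True:
        if buffered_seconds() < BUFFER_TARGET:
            fetch_segment()   # "On" period: one fast, bursty download
        else:
            time.sleep(0.1)   # "Off" period: the link goes idle
```

Each "On" period pulls a whole segment at whatever rate TCP can achieve, which is exactly what makes the traffic bursty.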
Problem description: does DASH cause buffer bloat?
• Will the quality of VoIP calls be affected by competing DASH flows?
• If so, how can we solve this problem?
[Diagram: DASH and VoIP flows sharing the same bottleneck link]
Our approach
• To answer these questions, we perform experiments on a lab testbed to measure the buffer bloat effect of DASH flows
• We developed SABRE (Smooth Adaptive BitRatE), a client-based scheme to mitigate this problem
• We use the same testbed to evaluate our solution
Measuring the buffer bloat effect
• Finding: adaptive HTTP video flows have a significant effect on VoIP traffic
• Testbed: a DASH client downloads from an HTTP video server across a bottleneck emulator, sharing the link with over-the-top (OTT) VoIP traffic
• VoIP is emulated with iPerf as UDP traffic: 80 kbps, 150-byte packets, between an iPerf client and server
• Bottleneck: 6 Mbps (DSL-like) with a 256-packet tail-drop queue; access links are 1 Gbps; RTT is 100 ms
Understanding the problem: why do we get large bursts?
• TCP is bursty: data leaves the server's 1 Gbps link far faster than the 6 Mbps bottleneck can drain it, so each burst piles up in the bottleneck queue
Possible solutions
• Middlebox techniques: Active Queue Management (AQM) such as RED, BLUE, CoDel, etc.; RED is on every router but hard to tune
• Server techniques: rate limiting at the server to reduce burst size
• Our solution: smooth download driven by the client
Some hidden details
• At the client there are two data channels: (1) the OS fills the socket buffer with packets arriving from the server, and (2) the DASH player recv()s from the socket buffer into its playout buffer
• In traditional DASH players the download loop is simply while(true) recv, so channels 1 and 2 are coupled: the player drains the socket buffer as fast as data arrives
[Diagram: Server --HTTP GET--> OS socket buffer (1) --recv (2)--> DASH player playout buffer]
Idea
• TCP can send a burst of up to min(rwnd, cwnd)
• Since we cannot control cwnd, control rwnd instead
• rwnd is a function of the empty space in the receiver's socket buffer (see the formula below)
• Two objectives: keep the socket buffer almost full all the time, and do not starve the playout buffer
• The result is a smooth download that eliminates bursts
[Diagram: client socket buffer advertising rwnd back to the server; DASH player recv()s into the playout buffer]
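Written out (with B_sock, a symbol introduced here just for illustration, denoting the socket buffer size):

$$\text{rwnd} = B_{\text{sock}} - \text{unread bytes}, \qquad \text{burst size} \le \min(\text{rwnd}, \text{cwnd})$$

So if the player keeps the socket buffer almost full, rwnd stays small and the sender can never emit a large burst, regardless of how big cwnd grows.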
Keeping the socket buffer full: controlling the recv rate
• while(1) recv: the player drains the socket buffer immediately, so each HTTP GET finds an empty socket buffer (large rwnd) and the server answers with a bursty On/Off download
• while(timer) recv: the player reads at a paced rate, so the socket buffer stays full, rwnd stays small, and the download is smooth (sketch below)
[Timelines: GET S1, GET S2, ... under both loops; the while(1) case shows bursts and Off gaps, the while(timer) case a steady rate]
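A minimal sketch of the paced loop, assuming a plain TCP socket and a drain rate matched to the video bitrate (the chunk size and bitrate constants are assumptions):

```python
import socket
import time

CHUNK = 4096                # bytes read per timer tick (assumed)
VIDEO_BITRATE = 1_200_000   # bps of the current representation (assumed)

def paced_recv(sock: socket.socket, playout_buffer: bytearray) -> None:
    """Drain the socket at ~VIDEO_BITRATE instead of as fast as possible.

    Reading slowly keeps the OS socket buffer nearly full, so the
    advertised receive window (rwnd) stays small and the server can
    never push a large burst onto the bottleneck.
    """
    interval = CHUNK * 8 / VIDEO_BITRATE   # seconds between reads
    while True:
        data = sock.recv(CHUNK)
        if not data:                       # server closed the connection
            break
        playout_buffer.extend(data)
        time.sleep(interval)               # the pacing timer
```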
Keeping the socket buffer full: HTTP pipelining
• Reading at a paced rate still leaves Off gaps between segments while each new GET travels to the server
• HTTP pipelining keeps enough requests outstanding that the socket buffer is always full and rwnd stays small:
• #segments = 1 + (socket buffer size / segment size)  (worked example below)
[Timelines: sequential GETs (GET S1, then GET S2) leave Off gaps; pipelined GETs (GET S1, S2, then GET S3, GET S4, ...) keep the socket buffer full]
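A quick worked example with assumed numbers (not from the paper): with a 3 MB socket buffer and 1 MB segments,

$$\#\text{segments} = 1 + \frac{3\ \text{MB}}{1\ \text{MB}} = 4,$$

i.e., the client keeps four GETs outstanding, so new data is always arriving to replace whatever the player drains into the playout buffer.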
Still one more problem
• The socket buffer level drops temporarily when the available bandwidth falls below the video bitrate
• This results in larger values of rwnd
• That can lead to large bursts, and hence delay spikes
• Continuous monitoring of the socket buffer level can help (one possible mechanism is sketched below)
[Plot: socket buffer occupancy dipping as the available bandwidth drops below the video bitrate]
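One way a Linux client could observe the socket buffer level is the FIONREAD ioctl, which reports the unread bytes queued in the kernel. This is an assumed mechanism for the sketch, not necessarily what the paper's implementation uses:

```python
import fcntl
import socket
import struct
import termios

def socket_buffer_level(sock: socket.socket) -> float:
    """Return the fill fraction of the socket receive buffer (Linux)."""
    # Unread bytes currently queued in the kernel receive buffer.
    raw = fcntl.ioctl(sock.fileno(), termios.FIONREAD, b"\0" * 4)
    unread = struct.unpack("i", raw)[0]
    capacity = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    return unread / capacity

# If the level drops, the player should slow its recv pacing (and
# possibly down-shift bitrate) so the buffer refills and rwnd shrinks.
```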
Experimental results
• We implemented SABRE in the VLC DASH player
• Same testbed as before: 6 Mbps (DSL-like) bottleneck with a 256-packet tail-drop queue, 100 ms RTT, 1 Gbps access links, and OTT VoIP emulated with iPerf (80 kbps UDP, 150-byte packets)
Single DASH flow, constant available bandwidth
• On/Off: delay > 200 ms about 40% of the time
• SABRE: delay < 50 ms 100% of the time
[Plot: queuing delay distribution for SABRE vs. On/Off]
Video adaptation: how does SABRE react to variable bandwidth?
• When the socket buffer is full, the client cannot estimate the available bandwidth, so the player tries to up-shift to a higher bitrate
• If it cannot sustain that bitrate, the socket buffer gets drained; the player reduces its recv rate and down-shifts to a lower bitrate
• If the player can support the current bitrate, it shoots for a higher one again (a sketch of this loop follows)
[Annotated plot: recv rate vs. available bandwidth and video bitrate over time as the player probes up and down]
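A rough sketch of that probing loop (the bitrate ladder matches the manifest shown earlier; the fill-level thresholds are assumptions):

```python
BITRATES = [350_000, 600_000, 900_000, 1_200_000]   # bps, from the manifest

LOW, HIGH = 0.5, 0.9   # socket-buffer fill thresholds (assumed)

def adapt(level: float, index: int) -> int:
    """Pick the next bitrate index from the socket buffer fill level.

    A full socket buffer means the download keeps up with the drain
    rate (but hides the true available bandwidth), so probe upward;
    a draining buffer means the current bitrate is unsustainable.
    """
    if level >= HIGH and index < len(BITRATES) - 1:
        return index + 1    # buffer full: shoot for a higher bitrate
    if level <= LOW and index > 0:
        return index - 1    # buffer draining: down-shift
    return index            # otherwise hold the current bitrate
```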
Single DASH flow, variable available bandwidth
• The available bandwidth drops from 6 Mbps to 3 Mbps at T=180 s and returns to 6 Mbps at T=380 s
[Plot: rate over time for SABRE vs. On/Off across the bandwidth change]
Two clients
• Setup: two clients (C1, C2) share the bottleneck to the server; we compare two On/Off clients against two SABRE clients
• Finding: at least one On/Off DASH client in the mix significantly increases queuing delays
[Plots: queuing delay with two On/Off clients vs. two SABRE clients]
Summary
• The On/Off behavior of adaptive video players can have a significant buffer bloat effect: even a single On/Off client significantly increases queuing delays
• We designed and implemented a client-based technique, SABRE, to mitigate this problem
• Future work:
• Improve SABRE's adaptation logic for the case of a mix of On/Off and SABRE clients
• Investigate DASH-aware middlebox and server-based techniques
Thank you! Questions?
Can RED (Random Early Detection) help?
• Once the burst is on the wire, not much can be done! How can we eliminate large bursts before they happen?
• RED drops arriving packets with a probability that ramps from P=0 to P=1 as the average queue size grows from a min threshold to a max threshold
[Plot: loss probability vs. average queue size, rising from 0 at min to 1 at max]
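Written out, the drop curve the slide sketches, as a function of the average queue size q (the slide ramps all the way to 1; classic RED ramps only to a configurable max_p before dropping everything):

$$p(q) = \begin{cases} 0 & q < \text{min} \\ \dfrac{q - \text{min}}{\text{max} - \text{min}} & \text{min} \le q < \text{max} \\ 1 & q \ge \text{max} \end{cases}$$

Even so, RED acts only after the queue has already built up, which is why this work pursues a client-side fix instead.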