Multirate Congestion Control Using TCP Vegas Throughput Equations Anirban Mahanti Department of Computer Science University of Calgary Calgary, Alberta Canada T2N 1N4
Problem Overview • Context: live or scheduled multicast of popular content to thousands of clients • “Layered encoding” to serve heterogeneous clients • Employ a “multirate” congestion control protocol • Receiver-driven for scalability [Figure: video server multicasting over the Internet to clients behind heterogeneous access links: ADSL, dial-up, high-speed]
The Multirate CC Wish List • “TCP friendly” • Operate without inducing packet losses while probing for bandwidth • Receivers behind a common bottleneck link receive media of the same quality • Responsive to congestion, yet achieve consistent playback quality
TCP Friendliness for Multimedia Streams • TCP-friendly bandwidth share? • As much as a TCP flow under similar conditions (e.g., RLC Infocom’98) • Function of the number of receivers (e.g., WEBRC Sigcomm’02) • Equation-based approach • Fair sharing of bandwidth • Lower variation in reception rate compared to TCP-like AIMD approaches
Objective • Develop a new multirate congestion control protocol using the TCP Vegas throughput model – “Adaptive Vegas Multicast Rate Control” • Less oscillatory throughput? • Fewer packet losses? • Reduced RTT bias? • Prior work: Reno-like rate control (e.g., RLM Sigcomm’96, RLC, FLID-DL NGC’00, etc.)
TCP Reno Throughput Model • Reno (Mathis et al. ACM CCR 1997, Padhye et al. Sigcomm’98)
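The equation on this slide did not survive extraction. As a reconstruction of the cited models (the slide may have shown either or both forms), the simplified Mathis et al. rate and the fuller Padhye et al. model are:

```latex
% Simplified model (Mathis et al., ACM CCR 1997):
T \approx \frac{MSS}{RTT}\sqrt{\frac{3}{2p}}

% Full model with timeouts (Padhye et al., Sigcomm'98),
% b = packets acknowledged per ACK, T_0 = retransmission timeout:
T \approx \frac{MSS}{RTT\sqrt{\dfrac{2bp}{3}}
      + T_0 \min\!\left(1,\; 3\sqrt{\dfrac{3bp}{8}}\right) p\,(1 + 32p^2)}
```

where p is the loss event rate. Equation-based protocols plug measured p and RTT into such a formula to obtain a TCP-fair sending rate.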
TCP Vegas Window Evolution [Samios & Vernon ’03] [Figure: window size vs. time, showing (a) no-loss window evolution with a stable backlog and (b) window evolution between loss events]
TCP Vegas Throughput Model [Samios & Vernon ’03]
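The model’s equations were also lost in extraction. The full Samios & Vernon analysis covers the loss regime as well, but the core no-loss fixed point can be sketched: Vegas adjusts its window W so that the estimated bottleneck backlog stays between the thresholds α and β,

```latex
\alpha \;\le\; W\left(1 - \frac{baseRTT}{RTT}\right) \;\le\; \beta
```

With throughput T = W/RTT and queuing delay q = RTT − baseRTT, the backlog equals T·q, so in steady state α/q ≤ T ≤ β/q: the no-loss throughput is governed by the threshold parameters and the queuing delay, which is why adapting α, β and estimating queuing delay matter for a Vegas-based rate control.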
TCP Throughput Models: Summary • RTT bias • None when packet losses are negligible • In the presence of packet losses, some RTT bias, but lower than that of TCP Reno • Relative aggressiveness of TCP Vegas flows depends on: • Vegas threshold parameters! • Buffer space available at the bottleneck router! • How to adaptively set the TCP Vegas threshold parameters?
Online Estimation of Parameters: RTT • E.g., Exponential Weighted Moving Average for RTT • What “weights” should be used?
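As a minimal sketch of the EWMA estimator mentioned above (the default gain of 1/8 mirrors TCP’s classic srtt gain and is an assumption here, not the protocol’s chosen weight):

```python
def ewma(samples, weight=0.125):
    """Exponentially weighted moving average of RTT samples.

    weight is the gain given to each new sample; 1/8 is the
    TCP-style default (an assumption, not AVMRC's value).
    """
    est = None
    for s in samples:
        # Seed with the first sample, then blend in each new one.
        est = s if est is None else (1 - weight) * est + weight * s
    return est
```

A larger weight tracks delay changes faster but is noisier; the slide’s open question is exactly this trade-off.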
Average Loss Interval (ALI) Method [Figure: packet stream with loss intervals s1, s2, s3: runs of packets obtained between lost packets]
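The ALI method can be sketched as in TFRC: average the n most recent loss intervals with weights that discount older history, and take the loss event rate p as the inverse of that average. The linear weight decay below follows the TFRC-style scheme; treat it as an illustration rather than AVMRC’s exact weights.

```python
def average_loss_interval(intervals, n=8):
    """Weighted average of the n most recent loss intervals.

    intervals[0] is the most recent interval (packets received
    between consecutive loss events). The newest n/2 intervals get
    weight 1; older ones get linearly decreasing weights, so old
    history fades out smoothly (TFRC-style scheme, assumed here).
    """
    recent = intervals[:n]
    weights = [1.0 if i < n / 2 else 1.0 - (i + 1 - n / 2) / (n / 2 + 1)
               for i in range(len(recent))]
    return sum(w * s for w, s in zip(weights, recent)) / sum(weights)

def loss_event_rate(intervals, n=8):
    """p is the inverse of the average loss interval."""
    return 1.0 / average_loss_interval(intervals, n)
```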
Adaptive Vegas Multicast Rate Control • End-to-end protocol • Server transmits data for a media object using multiple multicast channels • Clients independently determine their reception rate using TCP Vegas model • subscribe to multiple multicast channels, such that client reception rate approximately matches estimated fair share
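The last bullet can be sketched as a greedy layer-subscription rule: join the most cumulative layers whose rate does not exceed the estimated fair share. The cumulative rates come from the evaluation setup later in the talk; `channels_to_join` is a hypothetical helper name.

```python
# Cumulative layer rates in Kbps (from the evaluation setup;
# each level is 1.5x the previous one).
CUM_RATES = [256, 384, 576, 864, 1296, 1944, 2916, 4374, 6561]

def channels_to_join(fair_share_kbps):
    """Number of multicast channels to subscribe to so that the
    cumulative reception rate approximately matches, without
    exceeding, the estimated TCP-fair share. The base layer is
    always kept so playback never stops entirely."""
    n = 0
    for rate in CUM_RATES:
        if rate <= fair_share_kbps:
            n += 1
        else:
            break
    return max(n, 1)
```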
AVMRC Overview Continued … • Dynamically vary Vegas threshold parameters • Short-term and long-term averages of loss event rate and delay • RTT approximated as average queuing delay along the path from server to client plus some “aggressiveness constant” • Clients are “weakly” synchronized
Time Slot: Protocol Invocation Granularity • How often should clients compute new throughput estimates? • Once every T seconds (a time slot) • T = ??? • Time slot dilemma • Longer slots for reliable estimates of RTT & p • Smaller slots to enable a quick channel drop in the event of an aggressive add!
AVMRC: Time Slot Dilemma • AVMRC default: T = 100 ms • Maintain short-term & long-term estimates • Smaller slots to enable quick channel drop based on short-term estimates • Channel adds governed by stable long-term estimates
AVMRC: Receiver Synchronization • Add operations can impede convergence to fair share • A quick drop by one client, however, does not impede convergence of other receivers • AVMRC solution: weak synchronization • Server inserts a marker in the data stream once every T seconds; is this enough? [Figure: clients A and B behind a common bottleneck; congestion caused by A’s add causes B to drop below its fair share]
AVMRC: Channel Add/Drop Frequency • Reception rate choices may be coarse-grained, resulting in client reception rate oscillations • Allow add operations only every Tadd = nT • Clusters channel additions behind a common bottleneck when nT is larger than network delay variations • Channel drops allowed every T seconds (time slot) [Figure: with subscription levels of 200, 300, and 500 Kb, a fair share between two levels causes the subscription to oscillate]
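The add/drop gating above can be sketched as follows (the value of n, here `N_ADD`, is a hypothetical choice; the protocol treats it as a tunable):

```python
T = 0.1       # slot duration in seconds (AVMRC default)
N_ADD = 10    # adds permitted every N_ADD-th slot (assumed value)

def allowed_actions(slot_index):
    """Subscription changes a client may make in a given slot.

    Drops are always permitted so congestion is relieved quickly;
    adds are restricted to every N_ADD-th slot, which clusters add
    attempts from receivers behind a common bottleneck."""
    return {"drop": True, "add": slot_index % N_ADD == 0}
```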
AVMRC: RTT Estimation • How to define RTT for multicast traffic? • Little or no reverse traffic • Obtain RTT by end-to-end control info. exchange • Use a fixed RTT (e.g., FLID-DL, RLC) • AVMRC default: Fixed RTT + Queuing Delay • Queuing Delay calculation doesn’t require synchronized clocks
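The clock-synchronization claim in the last bullet rests on a standard trick: the unknown sender/receiver clock offset is the same in every one-way-delay sample, so subtracting the minimum observed delay cancels it. A sketch:

```python
def queuing_delays(send_ts, recv_ts):
    """Per-packet queuing delay from sender and receiver timestamps.

    Clocks need not be synchronized: each one-way-delay sample is
    (true delay + clock offset), and the offset is constant, so
    subtracting the minimum sample cancels it, leaving the queuing
    delay relative to the least-queued packet."""
    owd = [r - s for s, r in zip(send_ts, recv_ts)]
    base = min(owd)  # approximates propagation delay + clock offset
    return [d - base for d in owd]
```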
Performance Evaluation – Goals • Explore properties of AVMRC • Compare AVMRC with an analogous protocol (RMRC) that uses the TCP Reno throughput model • Other AVMRC factors considered: • Synchronization policy • RTT estimation policy • Data transmission policy – bursty vs. smooth • Protocol reactivity • Evaluation using the Network Simulator (ns-2)
AVMRC: Default Protocol Parameters • Slot duration T = 0.1 s • RTT: fixed value (0.1 s) + variable queuing delay • ALI with n = 8 for loss event rate computation • Weak synchronization • Bursty transmissions, once every 0.1 s • Cumulative layered encoding with the following rates: 256, 384, 576, 864, 1296, 1944, 2916, 4374, 6561 Kbps • RMRC uses the same parameters
Network Model • Dumbbell topology with a single bottleneck • 3Mbps to 100Mbps • Drop-tail FIFO buffering • approx. 50 to 250 ms • Background traffic simulated • HTTP • FTP • UDP • Round-trip prop. delay in [20, 460]ms
No Background Traffic [Figure: throughput over time for (a) AVMRC and (b) RMRC]
No Background Traffic: Scalability (1) (Bottleneck = 3 Mbps, Buffer = 80 packets)
No Background Traffic: Scalability (2) (Bottleneck = 3 Mbps, Buffer = 80 packets)
UDP Background Traffic • Bottleneck = 3 Mbps, Buffer = 80 packets • If the bottleneck link is lightly loaded, AVMRC operates without inducing packet losses.
FTP Background Traffic • Bottleneck = 3Mbps, Buffer = 80 packets • AVMRC experiences no packet losses in a majority of the experiments
Dynamic Vegas Thresholds (1) • Bottleneck = 45Mbps, Buffer = 250 packets • Background Flows: 90% HTTP, 10% FTP; RTT in [20,420]ms
Dynamic Vegas Thresholds (2) • Scaling bottleneck link capacity & background traffic mix • Dynamic threshold works!
RTT Estimation Policy • Bottleneck capacity = 10 Mbps, Buffer = 150 packets, 90 Background HTTP sessions
Protocol Reactivity: Session Scalability • Bottleneck capacity = 3 Mbps, Buffer = 80 packets, no background traffic
Protocol Reactivity: HTTP Bkg. Traffic • Bottleneck capacity = 10 Mbps, Buffer = 150 packets, background traffic is HTTP
Conclusions & Future Work • AVMRC, a new multirate CC protocol based on TCP Vegas throughput model • Can operate without inducing losses • No feedback from source • No explicit coordination among clients • No constraints on data transmission policy • Fair sharing with TCP Reno • Dynamic TCP Vegas threshold estimation • Incremental deployment of Vegas? • Unicast rate control?
For Details … • Anirban Mahanti, “Scalable Reliable On-Demand Media Streaming Protocols”, Ph.D. Thesis, Dept. of Computer Science, Univ. of Saskatchewan, March 2004. • Anirban Mahanti, Derek L. Eager, and Mary K. Vernon, “Improving Multirate Congestion Control Using TCP Vegas Throughput Equations”, Computer Networks Journal, to appear, 2004. • Email: mahanti@cpsc.ucalgary.ca • http://www.cpsc.ucalgary.ca/~mahanti