2005.1.30, v0.2
CUBIC: A New TCP-Friendly High-Speed TCP Variant (2005.2)
Injong Rhee and Lisong Xu, Member, IEEE
Outline 1. Motivation 2. Introduction 3. Performance Evaluation 4. Discussion 5. Conclusion
1. Motivation • In the last few years, many TCP variants have been proposed to address the under-utilization problem caused by the slow growth of the TCP congestion window (e.g., FAST, HSTCP, STCP, HTCP, SQRT, Westwood, BIC). • While the window growth of these new protocols is scalable, fairness remains a major challenge (TCP friendliness, RTT fairness, and inter/intra-protocol fairness). • The crux of the problem is to find a "suitable" window growth function.
2. Introduction: CUBIC – A New TCP Variant • CUBIC is an enhanced version of BIC: it simplifies the BIC window control using a cubic function and improves BIC's TCP friendliness and RTT fairness. • The window growth function of CUBIC is based on real time (the elapsed time since the last loss event), so it is independent of RTT. • Real-time-based growth was first proposed by [Shorten and Leith, May 2003 Yale workshop], and also used later in [HTCP]. • Because window growth is independent of RTT, CUBIC achieves RTT fairness, and also TCP friendliness under low delays (cf. HTCP, SQRT).
2. Introduction: BIC • BIC performs very well overall in the evaluation of advanced TCP stacks on fast long-distance production networks by SLAC (Stanford Linear Accelerator Center). • Still, the growth function of BIC (and also of HSTCP and STCP) can be too aggressive toward TCP, especially under short RTTs or on low-speed networks. • BIC is currently the default TCP stack in Red Hat Linux 2.6. • Microsoft and Sun are considering including BIC in their OS stacks.
2. Introduction: CUBIC function

[Figure: the cubic growth curve accelerates, slows down as it approaches $W_{max}$, then accelerates again.]

$W(t) = C\,(t - K)^3 + W_{max}$, with $K = \sqrt[3]{W_{max}\,\beta / C}$

where $C$ is a scaling factor, $t$ is the elapsed time from the last window reduction, $W_{max}$ is the window size just before that reduction, and $\beta$ is a constant multiplicative decrease factor.
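For concreteness, here is a minimal C sketch of this growth function. The constants C = 0.4 and β = 0.2 are assumptions taken from common CUBIC settings rather than from these slides, and the name w_cubic is ours; this illustrates the formula, not the authors' implementation.

```c
#include <math.h>

/* Tunable constants: values assumed from common CUBIC settings
 * (C = 0.4, beta = 0.2), not necessarily those of this evaluation. */
#define CUBIC_C    0.4   /* scaling factor C */
#define CUBIC_BETA 0.2   /* multiplicative decrease factor beta */

/* Cubic window growth: W(t) = C*(t - K)^3 + W_max, where
 * K = cbrt(W_max * beta / C) is the time the window takes to grow
 * back to W_max after a loss. t is the elapsed time in seconds
 * since the last window reduction; windows are in packets. */
double w_cubic(double t, double w_max)
{
    double k = cbrt(w_max * CUBIC_BETA / CUBIC_C);
    return CUBIC_C * pow(t - k, 3.0) + w_max;
}
```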
2. Introduction: CUBIC – New TCP Mode • In short-RTT networks, the window growth of CUBIC is slower than that of TCP, since CUBIC's growth is independent of RTT. We therefore emulate the TCP window algorithm after a packet loss event. • Choosing the additive increase of AIMD so that its average sending rate equals that of TCP gives the size of the emulated TCP window after time $t$ from a window reduction: $W_{tcp}(t) = W_{max}(1-\beta) + 3\,\frac{\beta}{2-\beta}\,\frac{t}{RTT}$ • If $W(t) < W_{tcp}(t)$: window size $= W_{tcp}(t)$ (TCP mode). Otherwise: window size $= W(t)$.
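Continuing the sketch above (reusing w_cubic and CUBIC_BETA), the TCP-mode decision could look like the following; w_tcp and cubic_cwnd are hypothetical names, with t and rtt in seconds. This is a sketch of the rule described on this slide, not the authors' code.

```c
/* Emulated TCP (AIMD) window t seconds after reducing from w_max,
 * per the TCP-friendliness formula above. */
double w_tcp(double t, double w_max, double rtt)
{
    return w_max * (1.0 - CUBIC_BETA)
         + 3.0 * (CUBIC_BETA / (2.0 - CUBIC_BETA)) * (t / rtt);
}

/* CUBIC runs in "TCP mode" whenever the cubic curve falls below
 * the emulated TCP window, as is typical on short-RTT paths. */
double cubic_cwnd(double t, double w_max, double rtt)
{
    double wc = w_cubic(t, w_max);
    double wt = w_tcp(t, w_max, rtt);
    return (wc < wt) ? wt : wc;
}
```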
3.1 Testbed (Dummynet) Setup • Sender 1 runs a high-speed TCP variant (e.g., CUBIC, BIC, FAST, HSTCP, STCP); Sender 2 runs high-speed TCP or TCP SACK. The senders are Linux machines on 1 Gbps links. • Sender 1, Sender 2, and two background traffic generators (one on each side) connect through Router 1 and Router 2 (FreeBSD, dummynet) to the Receiver; dummynet sets the RTT of each path between the senders and the receiver. • Background traffic generation is described on the next slide; the RTTs of the background flows follow an exponential distribution (next slide). • The bottleneck point between the routers is 800 Mbps.
3.1 Testbed Setup: Background Traffic Generation • TCP flow RTT: exponential distribution. The mean one-way delay is set to 66 ms; the resulting CDF closely matches the CDF of the RTT samples reported in "Variability in TCP Round-trip Times" by J. Aikat, J. Kaur, F.D. Smith, and K. Jeffay, ACM SIGCOMM Internet Measurement Conference, 2003. • Inter-arrival time between two successive TCP connections: exponential distribution (as observed by Floyd and Paxson). This is the parameter we used to control the background traffic load. • TCP flow duration: lognormal (body) and Pareto (tail) distribution, using the parameters from "Generating Representative Web Workloads for Network and Server Performance Evaluation" by Paul Barford and Mark Crovella, SIGMETRICS 1998. A sampling sketch follows below.
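As a rough illustration of how such background traffic could be sampled, here is a self-contained C sketch. The 66 ms mean and the distribution families come from the slide; the lognormal and Pareto parameters, the load-controlling inter-arrival mean, and all function names are placeholders, since the slide defers the exact values to Barford and Crovella.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Uniform sample in (0, 1]. */
static double u01(void) { return (rand() + 1.0) / ((double)RAND_MAX + 2.0); }

/* Exponential sample with the given mean (inverse-CDF method). */
static double rand_exp(double mean) { return -mean * log(u01()); }

/* Pareto sample with shape a and scale xm (inverse-CDF method). */
static double rand_pareto(double a, double xm) { return xm / pow(u01(), 1.0 / a); }

/* Lognormal sample via Box-Muller; mu and sigma are in log space. */
static double rand_lognormal(double mu, double sigma)
{
    double z = sqrt(-2.0 * log(u01())) * cos(2.0 * M_PI * u01());
    return exp(mu + sigma * z);
}

int main(void)
{
    /* One-way delay: exponential with a 66 ms mean (from the slide). */
    double delay_ms = rand_exp(66.0);

    /* Connection inter-arrival time: exponential; its mean is the knob
     * that controls the offered background load (placeholder value). */
    double interarrival_s = rand_exp(0.05);

    /* Flow duration: lognormal body with a Pareto tail; the body/tail
     * split and the parameters here are placeholders. */
    double duration_s = (u01() < 0.9) ? rand_lognormal(0.0, 1.0)
                                      : rand_pareto(1.2, 10.0);

    printf("delay=%.1f ms interarrival=%.3f s duration=%.2f s\n",
           delay_ms, interarrival_s, duration_s);
    return 0;
}
```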
3.2 TCP Friendliness • NS simulation: RTT 10 ms, 20 Mbps to 1 Gbps
3.2 TCP Friendliness (cont.) • NS simulation: RTT 100 ms, 20 Mbps to 1 Gbps
3.2 TCP Friendliness (cont.) • Dummynet testbed: RTT 5 ms, 800 Mbps, router buffer 100% of the BDP, with 80 to 200 Mbps background traffic. [Figure: "TCP Friendliness on short RTT - 5 ms"; link utilization (%) vs. background traffic, 80 to 200 Mbps.]
3.2 TCP Friendliness (cont.) • Dummynet testbed: RTT 10 ms, 800 Mbps, router buffer 100% of the BDP, with 80 to 200 Mbps background traffic. [Figure: "TCP Friendliness on short RTT - 10 ms"; link utilization (%) vs. background traffic, 80 to 200 Mbps.]
3.2 TCP Friendliness (cont.) • Dummynet testbed: RTT 100 ms, 800 Mbps, router buffer 100% of the BDP, with 80 to 200 Mbps background traffic. [Figure: "TCP Friendliness on long RTT - 100 ms"; link utilization (%) vs. background traffic, 80 to 200 Mbps.]
3.2 TCP Friendliness (cont.) • Dummynet testbed: RTT 200 ms, 800 Mbps, router buffer 100% of the BDP, with 80 to 200 Mbps background traffic. [Figure: "TCP Friendliness on long RTT - 200 ms"; link utilization (%) vs. background traffic, 80 to 200 Mbps.]
3.3 RTT Fairness • Dummynet testbed: RTTs of 40, 120, and 240 ms, 800 Mbps, router buffer 50% of the BDP, with 200 Mbps background traffic.
3.4 Stability: NS Simulation Setup • NS simulation: high-speed TCP variants on a 220 ms path, TCP SACK on a 20 ms path, 2.5 Gbps, router buffer 5% of the BDP.
3.4 Stability: NS Simulation Result (cont.) • NS simulation: high-speed TCP variants on a 220 ms path, TCP SACK on a 20 ms path, 2.5 Gbps, router buffer 5% of the BDP. * HTCP shows some stability issues (this needs to be confirmed with the original authors of HTCP).
3.4 Stability: NS Simulation Result (cont.) • Coefficient of variation of throughput in the stability test on NS simulation; the metric is sketched below.
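For reference, the coefficient of variation used here is the standard deviation of the throughput samples divided by their mean; lower values mean a more stable sending rate. A minimal C helper (our own sketch, not the authors' measurement code):

```c
#include <math.h>

/* Coefficient of variation of n throughput samples: stddev / mean. */
double coeff_of_variation(const double *x, int n)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += x[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (x[i] - mean) * (x[i] - mean);
    var /= n;                    /* population variance */
    return sqrt(var) / mean;
}
```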
3.4 Stability: Dummynet Testbed Setup (cont.) • Dummynet testbed: high-speed TCP variant flows on a 200 ms path, long-lived TCP SACK flows on a 20 ms path, 800 Mbps, router buffer 100% of the BDP, with 200 Mbps background traffic. • Topology: Sender 1 (Linux, high-speed TCP variant flows) and Sender 2 (long-lived TCP flows) connect over 1 Gbps links through Router 1 and Router 2 (FreeBSD) to the Receiver; dummynet adds an RTT of 95 ms for Sender 1, 5 ms for Sender 2, and 5 ms on a segment shared by both senders. The bottleneck is an 800 Mbps drop-tail link; the other router link is 1000 Mbps drop-tail. Background traffic generators attach on both sides, and background flow RTTs follow an exponential distribution.
3.4 Stability: Dummynet Testbed Result (cont.) [Figure: stability results for BIC, CUBIC, HSTCP, and STCP.]
3.4 Stability: Dummynet Testbed Result (cont.) [Figure: stability result for FAST.] * The throughput of FAST flows was lower than that of TCP, as in the TCP friendliness experiments, due to the small alpha parameter value.
3.5 Evaluation Summary • CUBIC and HTCP showed good TCP friendliness, especially on short-RTT networks; FAST needs alpha parameter tuning. • CUBIC and FAST showed good RTT fairness on both short- and long-RTT paths. • CUBIC showed the best stability; FAST again requires tuning of its alpha parameter.
4. Discussion • How to define TCP-friendliness. • How to measure stability and fairness. • The role of background traffic – what is the realistic traffic mix?
5. Conclusion • A real-time-based protocol seems a good idea. • CUBIC seems a good simplification of BIC, but is there any other choice for the window growth function? • What makes a cubic function better than others? Would any odd-order function do equally well?
References
[1] H. Bullot, R. Les Cottrell, and R. Hughes-Jones, "Evaluation of Advanced TCP Stacks on Fast Long-Distance Production Networks," Second International Workshop on Protocols for Fast Long-Distance Networks, February 16-17, 2004, Argonne, Illinois, USA.
[2] C. Jin, D. X. Wei, and S. H. Low, "FAST TCP: Motivation, Architecture, Algorithms, Performance," in Proceedings of IEEE INFOCOM 2004, March 2004.
[3] S. Floyd, "HighSpeed TCP for Large Congestion Windows," Internet Draft, draft-floyd-tcp-highspeed-01.txt, 2003.
[4] T. Kelly, "Scalable TCP: Improving Performance in Highspeed Wide Area Networks," ACM SIGCOMM Computer Communication Review, vol. 33, no. 2, pp. 83-91, April 2003.
[5] R. Shorten and D. Leith, "H-TCP: TCP for High-Speed and Long-Distance Networks," Second International Workshop on Protocols for Fast Long-Distance Networks, February 16-17, 2004, Argonne, Illinois, USA.
[6] T. Hatano, M. Fukuhara, H. Shigeno, and K. Okada, "TCP-friendly SQRT TCP for High Speed Networks," in Proceedings of APSITT 2003, pp. 455-460, November 2003.
[7] C. Casetti, M. Gerla, S. Mascolo, M. Y. Sanadidi, and R. Wang, "TCP Westwood: Bandwidth Estimation for Enhanced Transport over Wireless Links," in Proceedings of ACM MOBICOM 2001, pp. 287-297, Rome, Italy, July 16-21, 2001.
[8] L. Xu, K. Harfoush, and I. Rhee, "Binary Increase Congestion Control (BIC) for Fast Long-Distance Networks," in Proceedings of IEEE INFOCOM 2004, March 2004.