
Improving the Performance of TCP Vegas and TCP SACK: Investigations and Solutions


Presentation Transcript


  1. Improving the Performance of TCP Vegas and TCP SACK: Investigations and Solutions By Krishnan Nair Srijith Supervisor: A/P Dr. A.L. Ananda School of Computing National University of Singapore

  2. Outline • Research Objectives • Motivation • Background Study • Transmission Control Protocol (TCP) • TCP SACK • Section 1 - TCP variants over satellite links

  3. Outline (Cont.) • Section 2 - Solving issues of TCP Vegas (TCP Vegas-A) • Section 3 - Improving TCP SACK’s performance • Conclusion

  4. Research Objectives • Study performance of TCP over satellite links. • Study TCP Vegas and suggest mechanisms to overcome limitations. • Study TCP SACK and suggest mechanisms to overcome limitations.

  5. Motivation • TCP is the most widely used transport protocol. • TCP SACK was proposed to solve issues with New Reno when multiple packets are lost in a window. • However, under some conditions SACK too performs badly. • Overcoming this can enhance SACK’s efficiency.

  6. Motivation (Cont.) • TCP Vegas is very different from New Reno, the most commonly used variant of TCP. • Vegas shows greater efficiency, but there are several unresolved issues. • Solving these issues could produce a better alternative to New Reno.

  7. Transmission Control Protocol • The most widely used transport protocol, used in applications like FTP, Telnet etc. • It is a connection-oriented, reliable byte-stream service on top of the IP layer. • Uses a 3-way handshake to establish connections. • Each byte of data is assigned a unique sequence number which has to be acknowledged.
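As a small aside on the byte-level acknowledgement just described, a sketch (the helper name and the segment representation are illustrative assumptions, not from the thesis): a cumulative ACK can only advance to the first byte not yet received.

```python
def cumulative_ack(next_expected, received_segments):
    """Advance the ACK point over every in-order byte received.
    received_segments: iterable of (start_byte, length) pairs."""
    for start, length in sorted(received_segments):
        if start <= next_expected < start + length:
            next_expected = start + length   # contiguous: ACK moves forward
        elif start > next_expected:
            break                            # gap: cannot ACK past missing bytes
    return next_expected

# Bytes 0-1999 arrive, 2000-2999 is lost, 3000-3999 arrives:
print(cumulative_ack(0, [(0, 1000), (1000, 1000), (3000, 1000)]))  # 2000
```

The receiver keeps asking for byte 2000 until it arrives, even though later bytes were delivered; this is exactly the limitation SACK (slide 9) addresses.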

  8. TCP (Cont.) • Major control mechanisms of TCP: • Slow Start • Used to estimate the bandwidth available to a new connection • Congestion Avoidance • Used to avoid losing packets and, if and when packets are lost, to deal with the situation
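A minimal sketch of the two mechanisms just listed, under simplifying assumptions not taken from the thesis (cwnd counted in whole segments, a caller-supplied ssthresh, and a New Reno-style halving on loss):

```python
def on_ack(cwnd, ssthresh):
    """Grow cwnd on each ACK: exponentially below ssthresh (slow start,
    roughly doubling per RTT), linearly above it (congestion avoidance)."""
    if cwnd < ssthresh:
        return cwnd + 1          # slow start: +1 segment per ACK
    return cwnd + 1.0 / cwnd     # congestion avoidance: ~+1 segment per RTT

def on_loss(cwnd):
    """Treat loss as congestion: halve the threshold and resume from it."""
    ssthresh = max(cwnd // 2, 2)
    return ssthresh, ssthresh    # (new cwnd, new ssthresh)

cwnd = 1
for _ in range(7):               # seven ACKs take cwnd from 1 to 8
    cwnd = on_ack(cwnd, 8)
print(cwnd)                      # 8
```

After reaching ssthresh the growth switches to the slow linear ramp, which is what makes long-RTT satellite links (slide 11) so costly for TCP.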

  9. TCP SACK • Was proposed to overcome New Reno’s problems when multiple packets are lost within a single window. • In SACK, the TCP receiver informs the sender of the packets that were successfully received. • This allows selective retransmission of the lost packets alone.
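The selective-retransmission idea can be sketched in a few lines (illustrative only; real SACK options carry byte ranges, not segment numbers):

```python
def segments_to_retransmit(sent, sacked):
    """With SACK the sender resends only what was never acknowledged,
    cumulatively or selectively."""
    return sorted(set(sent) - set(sacked))

# Segments 3 and 5 are lost; SACK blocks report that 4 and 6 arrived, so
# only the two gaps are resent (a cumulative-ACK sender sees only "3 missing"):
print(segments_to_retransmit([1, 2, 3, 4, 5, 6], [1, 2, 4, 6]))  # [3, 5]
```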

  10. Section 1 • Studied performance of TCP New Reno and SACK over satellite link. • Paper:- • “Effectiveness of TCP SACK, TCP HACK and TCP Trunk over Satellite Links” - Proceedings of IEEE International Conference on Communications (ICC 2002), Vol.5, pp. 3038 - 3043, New York, April 28 - May 2, 2002.

  11. TCP over Satellite • There are several factors that limit the efficiency of TCP over satellite links. • Long RTT • Increases time in slow start mode, decreasing throughput. • Large bandwidth-delay product • Small window sizes cause under-utilization. • High bit error rates • TCP assumes congestion and decreases its window.
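The bandwidth-delay product mentioned above can be computed directly; a window smaller than it cannot keep the link full. Taking 10 Mbps as 10 × 1024 kbps reproduces the 652.8 KB figure quoted later for the 510 ms emulated link (that unit convention is an inference, not stated on the slides):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bytes that must be in flight to keep the link fully utilized."""
    return bandwidth_bps * rtt_seconds / 8   # bits -> bytes

# 10 Mbps taken as 10 * 1024 kbps, RTT 510 ms:
print(round(bdp_bytes(10 * 1024 * 1000, 0.510)))  # 652800 bytes = 652.8 KB
```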

  12. Experimental Setup [Figure: Experiment testbed 1: Server connected via an Error/Delay Box and a Router to Client 1 and Client 2]

  13. Experimental Setup (Cont.) [Figure: Experiment testbed 2: Server connected over a satellite link, through a Router, to Client 1 and Client 2]

  14. Results - SACK • Emulator setup with no corruption • An RTT of 510 ms was introduced by the error/delay box to simulate the long latency of a 10 Mbps satellite link. • TCP maximum window size was varied from 32 KB to 1024 KB. • Files of different sizes were sent from client to server.

  15. Results - SACK (Contd.) [Figure: Goodput for 1 MB and 10 MB file transfers at different window sizes, no corruption]

  16. Results – SACK (Contd.) • Goodput generally increases with window size. • However, for the 1024 KB window size the goodput decreases in both cases, and more so for New Reno. • This is because when the window size is set larger than the bandwidth-delay product of the link (652.8 KB), congestion sets in and the goodput falls.

  17. Results – SACK (Contd.) • Emulator setup with corruption • Packet error rates of 0.5%, 1.0% and 2% were introduced. • RTT was kept at 510 ms. • File transfers of 1 MB and 10 MB were carried out with varying window sizes.

  18. Results – SACK (Contd.) [Figures: Goodput at 1% corruption; goodput for the 10 MB file at different corruption levels]

  19. Results – SACK (Contd.) • Again, the 10 MB file transfer goodput decreases when the window size is increased beyond 652.8 KB, because congestion is now present in addition to corruption. • SACK handles this situation better and provides higher goodput.

  20. Results - SACK (Contd.) • Satellite Link • The goodput increases as the window size is increased, as long as the window size is kept below the bandwidth-delay product. • SACK performs better than New Reno for both file sizes and for all window sizes used.

Goodput in KBps for 1 MB and 10 MB file transfers at varying window sizes, satellite link:

Window size | 1MB New Reno | 1MB SACK | 10MB New Reno | 10MB SACK
64KB        | 13           | 14       | 16.5          | 17.6
128KB       | 13.75        | 15       | 16.5          | 18.5
256KB       | 12.5         | 13       | 15.75         | 17.75

  21. Summary • The performance of TCP SACK was compared with New Reno in a GEO satellite environment. • It was shown that SACK performs better than New Reno unless the level of corruption is very high.

  22. Section 2 • Studied the limitations of TCP Vegas and proposed changes to overcome them (TCP Vegas-A). • Paper:- • “TCP Vegas-A: Solving the Fairness and Rerouting Issues of TCP Vegas” - accepted for Proceedings of 22nd IEEE International Performance, Computing, and Communications Conference (IPCCC) 2003, Phoenix, Arizona, 9 - 11 April, 2003.

  23. TCP Vegas • Proposed by Brakmo et al. as a different approach to TCP congestion control. • It uses a bandwidth estimation scheme based on fine-grained measurement of RTTs. • The adjustment of cwnd in TCP Vegas is governed by the following algorithm:

  24. TCP Vegas (Cont.) • Calculate: • expected_rate = cwnd/base_rtt • actual_rate = cwnd/rtt • diff = expected_rate - actual_rate • Then update: cwnd = cwnd + 1 if diff < α; cwnd = cwnd - 1 if diff > β; cwnd unchanged otherwise (α = 1, β = 3)
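The rules above transcribe directly into a short function. Keeping diff in rate units (segments per second) follows the slide's own formulas; some Vegas descriptions instead scale diff by baseRTT into packet units.

```python
ALPHA, BETA = 1, 3   # the fixed thresholds the slide gives

def vegas_update(cwnd, base_rtt, rtt):
    """One Vegas congestion-avoidance step, per the slide's rules."""
    expected_rate = cwnd / base_rtt   # throughput if the path were empty
    actual_rate = cwnd / rtt          # throughput actually achieved
    diff = expected_rate - actual_rate
    if diff < ALPHA:
        return cwnd + 1               # path underused: probe for more
    if diff > BETA:
        return cwnd - 1               # queue building up: back off
    return cwnd                       # between alpha and beta: hold steady

print(vegas_update(10, 0.1, 0.1))    # 11: no queueing delay, so grow
print(vegas_update(10, 0.1, 0.2))    # 9: RTT doubled, so shrink
```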

  25. Issues with TCP Vegas • Fairness • Vegas uses a conservative scheme, while New Reno is more aggressive. • New Reno thus obtains more bandwidth than Vegas when competing against it. • Furthermore, New Reno aims to fill up the link, which Vegas interprets as a sign of congestion, causing it to reduce cwnd.

  26. Issues with Vegas (Cont.) • Vegas+ was proposed by Hasegawa et al. to tackle this issue. • However, this method assumes that an increase in RTT is always due to the presence of competing traffic. • Furthermore, it introduces another parameter, count(max), whose chosen value is not explained.

  27. Issues with TCP Vegas (Cont.) • Re-routing • Vegas calculates the expected rate using the smallest RTT observed on the connection (baseRTT). • When the route changes during the connection, the true minimum RTT can change, but Vegas cannot adapt if the new smallest RTT is larger than the original one, since it cannot tell whether the increase is due to congestion or a route change.
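The failure mode described above can be shown numerically (the RTT values here are hypothetical, and diff is computed in the slide-24 rate units):

```python
def vegas_diff(cwnd, base_rtt, rtt):
    """diff = expected_rate - actual_rate, as defined on slide 24."""
    return cwnd / base_rtt - cwnd / rtt

# Before the reroute the propagation RTT really was 40 ms. Afterwards the
# new path's RTT is 200 ms, but Vegas still holds base_rtt = 40 ms:
stale = vegas_diff(12, 0.040, 0.200)   # ~240: far above beta, so cwnd keeps shrinking
fresh = vegas_diff(12, 0.200, 0.200)   # 0: the reading a refreshed baseRTT would give
```

With the stale baseRTT, diff stays far above β no matter how small cwnd gets relative to the new path, so Vegas starves itself on a longer but uncongested route.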

  28. Issues with Vegas (Cont.) • Vegas assumes the RTT increase is due to congestion and decreases cwnd, just the opposite of what it should be doing. • La et al. proposed a modification to Vegas to counter this problem, but their solution adds more variables (K, N, L, δ and γ) whose optimum values are still open to debate.

  29. Issues with Vegas (Cont.) • Unfair treatment of old connections • It has been shown that Vegas is inherently unfair towards older connections. • The critical window size that triggers a reduction in cwnd is smaller for older connections and larger for newer ones. • Similarly, the critical cwnd that triggers an increase in the congestion window is smaller for newer connections.
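Why this bias arises can be sketched from the slide-24 formulas: Vegas holds cwnd wherever α ≤ diff ≤ β, and that band sits at larger cwnd when baseRTT was measured on an already-queued path, as happens to a newer connection. The RTT values below are hypothetical:

```python
def vegas_band(base_rtt, rtt, alpha=1, beta=3):
    """The cwnd interval where alpha <= diff <= beta, solved from
    diff = cwnd/base_rtt - cwnd/rtt (the slide-24 definition)."""
    slope = 1 / base_rtt - 1 / rtt
    return alpha / slope, beta / slope

old = vegas_band(0.100, 0.150)   # older flow saw the true 100 ms baseRTT
new = vegas_band(0.120, 0.150)   # newer flow's baseRTT already includes queueing
# new's stable band sits at larger cwnd than old's, so the newer flow
# settles on a bigger share of the bottleneck.
```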

  30. Vegas-A: Solving Vegas’ Problems • To solve these issues, a modification to the algorithm is proposed, named Vegas-A. • The main idea is to make the values of the parameters α and β adaptive and not fixed at 1 and 3. • The modified algorithm is as follows:

  31. Vegas-A algorithm
if α < diff < β:
    if Th(t) > Th(t-rtt): cwnd = cwnd + 1, α = α + 1, β = β + 1
    else (i.e. Th(t) <= Th(t-rtt)): no update of cwnd, α, β
else if diff < α:
    if α > 1 and Th(t) > Th(t-rtt): cwnd = cwnd + 1
    else if α > 1 and Th(t) < Th(t-rtt): cwnd = cwnd - 1, α = α - 1, β = β - 1
    else if α = 1: cwnd = cwnd + 1

  32. Vegas-A Algorithm (Cont.)
else if diff > β:
    cwnd = cwnd - 1, α = α - 1, β = β - 1
else:
    no update of cwnd, α, β
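Slides 31 and 32 together define one update step; a direct transcription follows (state is passed explicitly and Th values are supplied by the caller; the α > 1 case with unchanged throughput is treated as a no-op, which the slides do not spell out):

```python
def vegas_a_update(cwnd, alpha, beta, diff, th_now, th_prev):
    """One Vegas-A step; th_now/th_prev are Th(t) and Th(t - rtt).
    Returns the new (cwnd, alpha, beta)."""
    if alpha < diff < beta:
        if th_now > th_prev:
            return cwnd + 1, alpha + 1, beta + 1
        return cwnd, alpha, beta              # throughput not improving
    if diff < alpha:
        if alpha > 1 and th_now > th_prev:
            return cwnd + 1, alpha, beta
        if alpha > 1 and th_now < th_prev:
            return cwnd - 1, alpha - 1, beta - 1
        if alpha == 1:                        # back at Vegas' default band
            return cwnd + 1, alpha, beta
        return cwnd, alpha, beta              # alpha > 1, throughput flat (unspecified)
    if diff > beta:
        return cwnd - 1, alpha - 1, beta - 1
    return cwnd, alpha, beta                  # diff equals alpha or beta

print(vegas_a_update(10, 1, 3, 0.5, 5.0, 4.0))   # (11, 1, 3)
```

Unlike plain Vegas, the thresholds ratchet up while throughput keeps improving and drop back when it declines, which is what lets Vegas-A claim bandwidth from a competing New Reno flow.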

  33. Simulation of Vegas vs. Vegas-A • Simulations used Network Simulator (NS 2) • Wired and satellite (GEO and LEO) links were simulated. • NS 2 Vegas agent was modified to work as Vegas-A agent.

  34. Wired link simulation [Figure: Simulated wired network topology: sources S1 … Sx … Sn and destinations D1 … Dx … Dn connected through routers R1 and R2]

  35. Wired simulation (Cont.) • Re-routing condition • A route change was simulated by changing the RTT of S1-R1 from 20ms to 200ms, 20s into the simulation. • Bandwidth of S1-R1, R1-R2 and R2-D1 was 1Mbps, and RTTs of R1-R2 and R2-D1 were 10ms. • The simulation was run for 200 seconds.

  36. Re-routing simulation

  37. cwnd variation for Vegas and Vegas-A due to RTT change

  38. Throughput variation for Vegas due to RTT change

  39. Throughput variation for Vegas-A due to RTT change

  40. Bandwidth sharing with New Reno • S1 uses Vegas/Vegas-A while S2 uses New Reno. • S1-R1 and S2-R1 = 8Mbps, 20ms (RTT) • R2-D1 and R2-D2 = 8Mbps, 20ms (RTT) • R1-R2 = 800Kbps, 80ms (RTT) • S1 started at 0s and S2 at 10s.

  41. Throughput of TCP New Reno and Vegas over congested link

  42. Throughput of TCP New Reno and Vegas-A connections over congested link

  43. Competing against New Reno • When 3 Vegas/Vegas-A connections and New Reno were used, Vegas-A was again found to obtain a fairer share of the bandwidth compared to Vegas.

  44. Old vs. New Vegas/Vegas-A • 5 Vegas/Vegas-A connections were simulated starting at intervals of 50 seconds.

  45. Bias against high BW flows • It has been shown that Vegas is biased against connections with higher bandwidth. • Simulations were conducted to check whether Vegas-A fares better. • 3 sources: S1, S2, S3. • S1-R1 = 128Kbps, S2-R1 = 256Kbps, S3-R1 = 512Kbps, R1-R2 = 400Kbps

  46. High BW flows bias (Cont.) • The table below shows that Vegas-A does indeed perform better than Vegas.

  47. Retaining properties of Vegas • While trying to overcome the problems of Vegas, Vegas-A should not lose properties of Vegas. • One Vegas/Vegas-A connection simulated • S1-R1=1Mbps, 45ms RTT • R1-R2=250Kbps, 45ms RTT • R2-D1=1Mbps, 10ms RTT

  48. Retaining properties of Vegas (Cont.) Comparison of New Reno, Vegas and Vegas-A connections over a 100ms RTT link

  49. Retaining properties of Vegas(Cont.) • The effect of changing buffer size on the performance of New Reno, Vegas and Vegas-A was studied next. • RTT was set to 40ms and bottleneck link BW was set to 500Kbps.

  50. Retaining properties of Vegas(Cont.) Comparison of New Reno, Vegas and Vegas-A connections with different router buffer queue size
