
Modeling of Web/TCP Transfer Latency



  1. Modeling of Web/TCP Transfer Latency Yujian Peter Li January 22, 2004 M. Sc. Committee: Dr. Carey Williamson Dr. Wayne Eberly Dr. Elena Braverman Department of Computer Science, University of Calgary

  2. Outline • Motivation and Objectives • TCP Overview and Related Work • The Proposed TCP Transfer Latency Model • Model Validation by Simulation • Extending the Proposed Model to CATNIP TCP • Conclusions

  3. Motivation • Web response time is largely determined by TCP performance • Understanding the sensitivity of TCP to network conditions helps to improve TCP performance • No prior work on modeling CATNIP TCP

  4. Objectives • To survey and compare existing TCP models • To develop an accurate model for short-lived TCP flows • To model CATNIP TCP

  5. TCP Overview – Characteristics • Connection-oriented • Flow control • Reliable, in-order byte stream • Congestion control [Figure: packet exchange between Web Browser and Web Server – SYN, SYN/ACK, ACK, DATA, FIN, FIN/ACK, ACK]

  6. TCP Overview – Congestion Control • When intermediate nodes (routers) become overloaded, the condition is called congestion. • The mechanisms that deal with this problem are collectively called congestion control.

  7. TCP Overview – Congestion Control Slow Start & Congestion Avoidance • Slow start: cwnd = cwnd + 1 for every received ACK • Congestion avoidance: cwnd = cwnd + 1/cwnd for every received ACK
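The two per-ACK update rules above can be sketched in a few lines of Python. This is an illustrative toy, not the thesis model; the initial window and the ssthresh value of 16 segments are assumptions chosen for the example.

```python
# Illustrative sketch of TCP's per-ACK window updates (in segments).
# The starting cwnd and ssthresh values are assumptions for illustration.

def on_ack(cwnd, ssthresh):
    """Apply the per-ACK congestion window update."""
    if cwnd < ssthresh:
        return cwnd + 1          # slow start: window roughly doubles each RTT
    return cwnd + 1.0 / cwnd     # congestion avoidance: ~ +1 segment per RTT

cwnd, ssthresh = 1.0, 16.0
trace = []
for _ in range(30):              # process 30 ACKs
    trace.append(cwnd)
    cwnd = on_ack(cwnd, ssthresh)

print(trace[:6])                 # linear growth per ACK while in slow start
```

Running the loop shows the regime change: cwnd climbs by one segment per ACK until it reaches ssthresh, after which each ACK adds only 1/cwnd, producing the familiar linear-per-RTT growth of congestion avoidance.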

  8. Related Work • TCP Steady State Throughput Model [Padhye et al. 1998] • TCP Response Time Models • Cardwell-00 Model [Cardwell et al. 2000] • Padhye Model [Padhye et al. 1998] • Cardwell-98 Model [Cardwell et al. 1998] • Sikdar Model [Sikdar et al. 2001]

  9. The Proposed TCP Response Time Model Assumptions • Bernoulli packet loss model, i.e., each packet is lost independently with fixed probability p • Congestion avoidance algorithm ignored, i.e., cwnd always increases by one per received ACK (exponential window growth) • Packet loss is detected via RTO or triple duplicate ACKs • The effect of delayed ACK, Tdelay, is added when necessary
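The Bernoulli loss assumption is easy to check with a small Monte-Carlo sketch: each packet is dropped independently with probability p, so the empirical loss rate over many packets should converge to p. The packet count and seed below are arbitrary choices for the example.

```python
# Sketch of the Bernoulli packet-loss assumption: each packet is lost
# independently with fixed probability p. Numbers are illustrative only.
import random

def transmit(num_packets, p, rng):
    """Return the number of lost packets out of num_packets."""
    return sum(1 for _ in range(num_packets) if rng.random() < p)

rng = random.Random(42)              # fixed seed so the sketch is repeatable
losses = transmit(100_000, 0.03, rng)
rate = losses / 100_000
print(f"empirical loss rate: {rate:.4f}")   # should be close to p = 0.03
```

Independence is the key modeling simplification here: real Internet losses are often bursty (correlated within an RTT), which is one reason the model's accuracy must be validated against simulation.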

  10. The Proposed Model (Cont’d) Congestion Window Evolution

  11. Simulation Experiments Network Topology

  12. Simulation Experiments Metric & Experimental Factors • Performance Metric: Data Transfer Time, the time from when the sender sends the first data packet until the sender receives the ACK of the last data packet. • Experimental factors and levels
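The metric definition above reduces to a simple timestamp difference. A minimal sketch, with illustrative timestamps (the values are invented for the example):

```python
# Sketch of the Data Transfer Time metric: the interval from the first
# data packet's send time to the arrival of the ACK for the last data
# packet. Timestamps (in seconds) are illustrative only.

def data_transfer_time(send_times, ack_times):
    """send_times[i]: when packet i was first sent; ack_times[i]: when its ACK arrived."""
    return ack_times[-1] - send_times[0]

t = data_transfer_time([0.00, 0.05, 0.10], [0.20, 0.25, 0.62])
print(t)   # 0.62 s for this 3-packet example
```

Note that the metric deliberately excludes connection setup (the SYN exchange) and teardown, isolating the data transfer phase that the latency model describes.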

  13. Simulation Results Short-lived Flows [Figures: results for p=3% and p=10%]

  14. CATNIP TCP C. Williamson and Q. Wu: “A Case for Context-Aware TCP/IP”. ACM Performance Evaluation Review, Vol. 29, No. 4, pp. 11-23, March 2002. • Convey application-layer context information to TCP/IP • Not all packet losses are created equal [Figure: protocol stack – HTTP (document size) → TCP (packet loss priority) → IP]
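The idea of passing context down the stack could be sketched as follows. This is a hypothetical illustration of the concept only: the 4 KB size threshold and the priority labels are assumptions invented for the example, not values from the CATNIP paper.

```python
# Hypothetical sketch of CATNIP-style context passing: the application
# layer tells TCP the document size, and TCP maps it to an IP-layer loss
# priority. The threshold and labels are assumptions for illustration.

def loss_priority(document_size_bytes, small_doc_threshold=4096):
    """Mark packets of small documents for protection from loss."""
    if document_size_bytes <= small_doc_threshold:
        return "protect"   # small page: one lost packet stalls the whole transfer
    return "normal"        # large transfer: individual losses are better tolerated

print(loss_priority(1500))     # small HTML page
print(loss_priority(200_000))  # large object
```

The sketch captures why "not all packet losses are created equal": a loss in a short flow is more likely to force an RTO (the window is too small for triple duplicate ACKs), so protecting small documents disproportionately reduces latency.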

  15. CATNIP TCP vs. Partial CATNIP TCP

  16. CATNIP TCP vs. Partial CATNIP TCP [Figures: PDF and CDF of results for p=3%, p=5%, and p=10%]

  17. Modeling Partial CATNIP TCP Short-lived Flows [Figures: results for p=3%, p′=0% and for p=10%, p′=0%]

  18. Conclusions • The proposed TCP latency model fits the simulation results better than earlier models. • The differences between Partial CATNIP and CATNIP are minimal when p&lt;10%. • The Partial CATNIP TCP model also matches the simulation results well. • Partial CATNIP TCP improves TCP latency compared to TCP Reno: for short-lived flows, it is about 10% faster than TCP Reno in most cases. • CATNIP TCP is a suitable approach to improving TCP performance.
