Modeling of Web/TCP Transfer Latency
Yujian Peter Li
January 22, 2004
M.Sc. Committee: Dr. Carey Williamson, Dr. Wayne Eberly, Dr. Elena Braverman
Department of Computer Science, University of Calgary
Outline
• Motivation and Objectives
• TCP Overview and Related Work
• The Proposed TCP Transfer Latency Model
• Model Validation by Simulation
• Extending the Proposed Model to CATNIP TCP
• Conclusions
Motivation
• Web response time is largely determined by TCP performance
• Understanding TCP's sensitivity to network conditions helps improve TCP performance
• No prior work on modeling CATNIP TCP
Objectives
• To survey and compare existing TCP models
• To develop an accurate model for short-lived TCP flows
• To model CATNIP TCP
TCP Overview – Characteristics
• Connection-oriented
• Flow control
• Reliable, in-order byte stream
• Congestion control
[Figure: packet exchange between Web browser and Web server — SYN, SYN/ACK, ACK (connection setup); DATA; FIN, FIN/ACK, ACK (connection teardown)]
TCP Overview – Congestion Control
• When intermediate nodes (routers) become overloaded, the condition is called congestion.
• The mechanisms that address this problem are called congestion control.
TCP Overview – Congestion Control
Slow Start & Congestion Avoidance
• Slow start: cwnd = cwnd + 1 for every received ACK (window doubles each RTT)
• Congestion avoidance: cwnd = cwnd + 1/cwnd for every received ACK (window grows by about one packet each RTT)
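The two window-growth rules above can be sketched in a few lines of Python (an illustrative simplification; the function and variable names are mine, and the window is kept in packet units):

```python
def grow_cwnd(cwnd, ssthresh):
    """Return the new congestion window (in packets) after one ACK.

    Below ssthresh, slow start adds one packet per ACK, which doubles
    cwnd every RTT; above it, congestion avoidance adds 1/cwnd per ACK,
    which grows cwnd by roughly one packet every RTT.
    """
    if cwnd < ssthresh:
        return cwnd + 1          # slow start
    return cwnd + 1.0 / cwnd     # congestion avoidance


# Trace the window through both phases.
cwnd, ssthresh = 1.0, 8
history = []
for _ in range(12):
    cwnd = grow_cwnd(cwnd, ssthresh)
    history.append(round(cwnd, 2))
print(history)
```

The printed trace shows the sharp slow-start ramp followed by the much flatter congestion-avoidance increments, which is the contrast the proposed latency model exploits by ignoring the latter for short flows.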
Related Work
• TCP steady-state throughput model [Padhye et al. 1998]
• TCP response time models:
• Padhye Model [Padhye et al. 1998]
• Cardwell-98 Model [Cardwell et al. 1998]
• Cardwell-00 Model [Cardwell et al. 2000]
• Sikdar Model [Sikdar et al. 2001]
The Proposed TCP Response Time Model – Assumptions
• Bernoulli packet loss: each packet is lost independently with fixed probability p
• Congestion avoidance is ignored: cwnd always increases by one upon each received ACK, so the window grows exponentially (slow start only)
• Packet loss can be detected via RTO or triple duplicate ACKs
• The delayed-ACK penalty, Tdelay, is added when necessary
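A tiny Monte Carlo sketch of the first two assumptions (Bernoulli loss plus slow-start-only growth) shows how they translate into RTT rounds for a short flow. This is my own toy aggregation of per-ACK growth into per-round doubling, not the thesis model itself, and the 64-packet window cap is an arbitrary choice:

```python
import random


def rounds_to_send(n_packets, p, seed=None):
    """Count RTT rounds needed to deliver n_packets when each packet is
    independently lost with probability p (Bernoulli loss), the window
    doubles each round (slow start), and lost packets are retransmitted.
    """
    rng = random.Random(seed)
    remaining, cwnd, rounds = n_packets, 1, 0
    while remaining > 0:
        sent = min(cwnd, remaining)
        delivered = sum(rng.random() >= p for _ in range(sent))
        remaining -= delivered
        cwnd = min(cwnd * 2, 64)   # doubling per RTT, capped window
        rounds += 1
    return rounds


# With p = 0 the count is deterministic: windows 1, 2, 4, 8 deliver
# 15 packets in 4 rounds, so a 10-packet flow also takes 4 rounds.
print(rounds_to_send(10, 0.0))
```

Multiplying the round count by the RTT (plus RTO penalties, which this sketch omits) is the kind of quantity a short-flow latency model must predict.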
The Proposed Model (Cont'd)
[Figure: congestion window evolution]
Simulation Experiments – Network Topology
[Figure: simulated network topology]
Simulation Experiments – Metric & Experimental Factors
• Performance metric: data transfer time, measured from when the sender sends the first data packet until the sender receives the ACK for the last data packet
• Experimental factors and levels [table not reproduced in this transcription]
Simulation Results – Short-lived Flows
[Figures: model vs. simulation results for p = 3% and p = 10%]
CATNIP TCP
C. Williamson and Q. Wu, "A Case for Context-Aware TCP/IP", ACM Performance Evaluation Review, Vol. 29, No. 4, pp. 11-23, March 2002.
• Conveys application-layer context information to TCP/IP
• Not all packet losses are created equal
[Figure: protocol stack — HTTP passes document size to TCP, which assigns a packet loss priority used by IP]
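The "not all losses are created equal" idea can be illustrated with a toy marking rule: packets whose loss is most costly to response time get higher priority. The thresholds, names, and policy below are hypothetical placeholders of mine, not the actual CATNIP policy from the paper:

```python
def loss_priority(doc_size_bytes, pkt_index, mss=1460):
    """Toy CATNIP-style rule using application-layer context (document
    size) to pick a loss priority for one packet of a transfer.

    Small documents, and the first/last packets of larger transfers,
    are marked high priority because losing them hurts latency most
    (the last packet's loss is often recovered only via a timeout).
    """
    total_pkts = max(1, -(-doc_size_bytes // mss))  # ceiling division
    if total_pkts <= 3:
        return "high"        # short flow: every loss is expensive
    if pkt_index in (0, total_pkts - 1):
        return "high"        # first or final packet of the transfer
    return "normal"


print(loss_priority(1000, 0))      # one-packet document
print(loss_priority(100000, 5))    # middle of a long transfer
```

The point of the sketch is only the information flow: the application layer (HTTP) knows the document size, so passing it down lets lower layers make smarter drop decisions than treating every packet identically.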
CATNIP TCP vs. Partial CATNIP TCP
[Figures: PDF and CDF of transfer time for p = 3%, p = 5%, and p = 10%]
Modeling Partial CATNIP TCP – Short-lived Flows
[Figures: model vs. simulation results for (p = 3%, p' = 0%) and (p = 10%, p' = 0%)]
Conclusions
• The proposed TCP latency model fits the simulation results better than earlier models.
• The differences between Partial CATNIP TCP and full CATNIP TCP are minimal when p < 10%.
• The Partial CATNIP TCP model also matches the simulation results well.
• Partial CATNIP TCP improves latency over TCP Reno: for short-lived flows, it is about 10% faster in most cases.
• CATNIP TCP is a promising approach for improving TCP performance.