Impact of Background Traffic on Performance of High-speed TCPs
Injong Rhee, North Carolina State University
http://www.csc.ncsu.edu/faculty/rhee/
Collaborators: Sangtae Ha, Lisong Xu, Long Le
Microsoft Workshop
Background
• Experiment with Linux 2.6.19
• Iperf (1 TCP-SACK flow)
• 1 Gbit/s backbone link: NC (USA) – Korea – Japan (special thanks to the research team in Japan)
• Slow window growth of Reno-style TCP results in under-utilization
[Path topology: NC – Korea 202 ms, Korea – Japan 48 ms]
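A rough back-of-the-envelope check (not from the slides) of why Reno-style growth under-utilizes this path, assuming a ~250 ms round trip (202 ms + 48 ms legs) and 1500-byte packets:

```python
# Illustrative only: bandwidth-delay product of the NC-Korea-Japan path and
# how long additive increase (1 MSS per RTT) needs to refill it after a loss.
link_bps = 1_000_000_000      # 1 Gbit/s backbone link
rtt_s = 0.250                 # ~202 ms + ~48 ms round-trip time (assumed)
mss_bytes = 1500              # assumed packet size

bdp_bytes = link_bps / 8 * rtt_s       # bandwidth-delay product
bdp_pkts = bdp_bytes / mss_bytes       # full window, in packets

# After a loss Reno halves cwnd and regains one MSS per RTT, so refilling
# the pipe takes roughly (bdp_pkts / 2) RTTs.
recovery_s = (bdp_pkts / 2) * rtt_s

print(f"BDP ~= {bdp_bytes / 1e6:.1f} MB ({bdp_pkts:.0f} packets)")
print(f"Reno needs ~{recovery_s / 60:.0f} minutes to refill the pipe after one loss")
```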
High-Speed TCP Variants
• Many high-speed TCP variants have been proposed: HSTCP, Scalable, BIC-TCP, CUBIC, H-TCP, FAST, Compound TCP, TCP-Africa, TCP-AReno, TCP-Westwood, and new protocols keep appearing.
• How can we evaluate these protocols? Which criteria?
Window growth patterns
[Figure: window size vs. time for HSTCP, H-TCP, Scalable, BIC-TCP, and CUBIC]
NS2-Linux [?], 400 Mbps, 160 ms one-way delay, 100% BDP buffer, no background traffic
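As a concrete example of one of these growth patterns, below is a minimal sketch of CUBIC's window growth function after a loss; C = 0.4 and beta = 0.2 are commonly cited CUBIC defaults and W_max is an illustrative value, not a setting from this experiment:

```python
# CUBIC's window growth after a loss event (sketch):
#   W(t) = C * (t - K)^3 + W_max,   K = (W_max * beta / C) ** (1/3)
# where W_max is the window at the last loss and beta the decrease factor.
C = 0.4          # scaling constant
beta = 0.2       # multiplicative decrease factor (commonly cited default)
W_max = 10_000   # segments at the last loss (illustrative)

K = (W_max * beta / C) ** (1.0 / 3.0)   # time (s) to return to W_max

for t in range(0, 26, 5):               # seconds since the loss
    w = C * (t - K) ** 3 + W_max
    print(f"t = {t:2d} s  cwnd ~ {w:7.0f} segments")
```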
Performance Criteria and Design Tradeoffs
• There are many performance criteria:
  • Fairness
    • Intra-protocol fairness
    • RTT-fairness
    • TCP-friendliness
  • Scalability (high link utilization)
  • Stability
• Not all protocols satisfy all of these goals.
• Instead, they make different design tradeoffs.
• For example, a protocol may give up convergence time to gain more stability, or vice versa.
Performance Evaluation Methodology
• Internet experiment
  • The most realistic tests, but
  • Hard to reproduce the results
  • Little visibility into what happened inside the network
• Simulation or dummynet emulation
  • Easily reproducible and verifiable
  • Main issue: are they realistic? How do we recreate Internet-like environments?
• Theoretical analysis
  • Provides important insights into protocol behavior
  • But relies on convenient assumptions and is less useful for comparison (e.g., captures only first-order behaviors).
Testbed emulation - recreating the Internet environment
• Topology
  • We can't model the complexity of the entire network.
  • Thus, most evaluations focus on environments with one or a few hops (e.g., a dumbbell).
• Workload
  • To compensate, focus on injecting realistic background traffic into the bottleneck link.
  • Since arriving flows have already traversed many hops, mimicking the traffic pattern seen at one core router partially emulates the wider topology.
  • Not perfect, as it does not let us observe protocol behavior under multiple bottlenecks.
  • But this can be overcome with a "parking-lot" topology, assuming there are only a few bottleneck links.
Realistic background traffic
• Hard to prove its realism, but we can at least make the statistics match.
• Measure traffic on a real Internet link and extract its statistical patterns, such as flow sizes, arrival rates, and transmission rates.
• A highly detailed recreation of Internet traffic based on these statistical patterns is possible.
  • Tools: HARPOON, Tmix, etc.
• A quick-and-dirty alternative: just emulate the patterns generally observed in the Internet (see the sketch below).
  • Arrivals -- exponential, heavy-tailed
  • Flow sizes -- a varied form of heavy tail (different body and tail)
  • RTT variations -- log-normal
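As a minimal sketch of the quick-and-dirty approach, the snippet below samples flow start times and per-flow RTTs from the distributions named above; the arrival rate, median RTT, and sigma are placeholders, not measured parameters:

```python
# Sketch of the "quick and dirty" background-traffic patterns above.
# All distribution parameters are placeholders, not measured values.
import math
import random

def flow_start_times(rate_per_s, duration_s):
    """Poisson flow arrivals: exponential inter-arrival times."""
    t, starts = 0.0, []
    while t < duration_s:
        t += random.expovariate(rate_per_s)
        starts.append(t)
    return starts

def flow_rtt_ms(median_ms=80.0, sigma=0.8):
    """Per-flow RTT drawn from a log-normal distribution."""
    return random.lognormvariate(math.log(median_ms), sigma)

starts = flow_start_times(rate_per_s=0.6, duration_s=600)
rtts = [flow_rtt_ms() for _ in starts]
```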
Our work
• We study the impact of background traffic patterns on the performance of these protocols.
• This is important for understanding their behavior in Internet-like environments.
• It also sheds light on the different tradeoffs that different protocols make.
Testbed (Dummynet) Setup
• A total of 18 servers generate background traffic and send and receive the protocol flows.
• Background traffic is pushed in both the forward and backward directions.
• Long-lived flows: Iperf; short-lived flows: Surge (a web traffic generator).
• The RTT of each flow is chosen randomly from an input distribution.
• Experimental parameters: RTT (40 ms to 320 ms), buffer sizes (1 MB to 8 MB).
Five different types of background traffic
• Type I:
  • Surge (log-normal body 93%, Pareto tail 7%)
  • Exponential arrivals (rate 0.2)
• Type II:
  • Surge (log-normal body 70%, Pareto tail 30%)
  • Minimum file size for the tail: 1 MB
  • Exponential arrivals (rate 0.6)
• Type III:
  • Type I (90%) + P2P traffic (10%)
  • P2P traffic: Pareto, minimum 3 MB
• Type IV:
  • 100% log-normal body
• Type V:
  • Type II + 12 long-lived Iperf flows
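A minimal flow-size sampler mirroring the Type I and Type II mixes might look like the sketch below; only the body/tail split and the 1 MB tail minimum come from the slide, while the log-normal body parameters, the Pareto shape, and the Type I tail minimum are assumed placeholders:

```python
# Heavy-tailed flow sizes: log-normal "body" with probability (1 - p_tail),
# Pareto tail otherwise.  Placeholder parameters except where noted.
import math
import random

def flow_size_bytes(p_tail, tail_min_bytes, tail_shape=1.2,
                    body_median_bytes=10_000, body_sigma=1.0):
    if random.random() < p_tail:
        # Pareto tail: values >= tail_min_bytes
        return tail_min_bytes * random.paretovariate(tail_shape)
    return random.lognormvariate(math.log(body_median_bytes), body_sigma)

type1_size = lambda: flow_size_bytes(p_tail=0.07, tail_min_bytes=100_000)    # tail min assumed
type2_size = lambda: flow_size_bytes(p_tail=0.30, tail_min_bytes=1_000_000)  # 1 MB from slide
```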
Link utilization and stability
[Figures: no background traffic (buffer 1 MB) vs. Type II traffic (buffer 1 MB)]
Some protocols suffer reduced utilization when the rate variance of the background traffic increases.
Link utilization, stability and loss synchronization
[Figures: utilization of high-speed TCP flows and background traffic, no background vs. Type II]
High rate variations of protocol flows may cause loss synchronization and low utilization.
Stability vs. link utilization
[Figure: protocol stability, measured as CoV (standard deviation divided by mean), vs. link utilization]
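For reference, the stability metric plotted here can be computed as below; the throughput samples are illustrative, not measured results:

```python
# CoV (coefficient of variation) of a flow's throughput time series:
# standard deviation divided by mean.  Lower CoV = more stable.
import statistics

def cov(throughput_samples_mbps):
    mean = statistics.fmean(throughput_samples_mbps)
    return statistics.pstdev(throughput_samples_mbps) / mean if mean else float("inf")

print(cov([380, 420, 350, 410, 390]))   # illustrative per-interval samples
```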
Link utilization and stability under various traffic types (H-TCP)
[Figures: link utilization and CoV]
Fairness (measured as a throughput ratio)
[Figures: TCP friendliness (RTT 42 ms; 2 MB buffer), intra-protocol fairness (RTT 82 ms), RTT-fairness (flow 1: 42 ms; flow 2: 162 ms)]
Generally, H-TCP shows excellent fairness regardless of traffic type.
All protocols improve fairness with more variance in the background traffic, but the size of the traffic makes the biggest difference (Type V).
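A minimal sketch of the throughput-ratio metric used here (plus, as an alternative not used on these slides, Jain's index for many same-protocol flows); all numbers are illustrative:

```python
# Fairness as a throughput ratio: long-term throughput of one flow divided
# by another's (TCP-friendliness: high-speed vs. TCP-SACK; RTT-fairness:
# short-RTT vs. long-RTT flow).  1.0 means equal shares.
def throughput_ratio(flow_a_mbps, flow_b_mbps):
    return flow_a_mbps / flow_b_mbps

# Alternative metric (not used on these slides): Jain's fairness index.
def jains_index(rates_mbps):
    n = len(rates_mbps)
    return sum(rates_mbps) ** 2 / (n * sum(r * r for r in rates_mbps))

print(throughput_ratio(620.0, 310.0))                 # illustrative values
print(jains_index([240.0, 255.0, 250.0, 245.0]))      # illustrative values
```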
TCP friendliness
[Figures: no background vs. Type V]
Generally, all protocols improve fairness with Type V background traffic.
TCP-friendliness: another look
• Type II traffic with varying numbers of high-speed flows (320 ms RTT).
• We measured the throughput of the Type II traffic.
• We do not find much difference in throughput.
Convergence speed
[Figures: CUBIC and H-TCP, no background traffic vs. Type II]
Conclusion
• The type of background traffic reveals "the beast" in disguise, e.g.:
  • Some protocols trade convergence speed for higher stability.
  • Some protocols trade stability for faster convergence and fairness.
• The rate variance of background traffic affects both stability and link utilization.
• All protocols improve fairness and convergence speed with more background traffic (size matters more than variance).
Intra-protocol fairness
[Figures: no background (2 MB buffer) vs. Type V (2 MB buffer)]
Intra-protocol fairness (FAST)
[Figure: Type I traffic, 1 MB buffer]
Incorrect estimation of the minimum RTT causes different flows to run at different rates.
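This sensitivity follows from FAST's delay-based equilibrium, in which each flow keeps roughly alpha packets queued, so its rate is about alpha / (RTT - baseRTT estimate). The sketch below illustrates this with assumed values of alpha and the delays; it is not the testbed configuration:

```python
# Two FAST-like flows sharing a bottleneck with a standing queue.  A flow
# that joins late measures its baseRTT with the queue already built, so it
# over-estimates baseRTT, under-estimates its queueing delay, and settles
# at a higher equilibrium rate:  x ~= alpha / (RTT - baseRTT_estimate).
alpha_pkts = 200          # packets each flow tries to keep queued (assumed)
prop_rtt_s = 0.100        # true propagation RTT (assumed)
queue_delay_s = 0.020     # standing queueing delay (assumed)
rtt_s = prop_rtt_s + queue_delay_s

for base_est_s in (0.100, 0.110):    # correct vs. inflated baseRTT estimate
    rate_pps = alpha_pkts / (rtt_s - base_est_s)
    print(f"baseRTT estimate {base_est_s * 1000:.0f} ms -> ~{rate_pps:.0f} pkt/s")
```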
Link utilization vs. buffer size
• As the buffer space increases, stability gets better (320 ms RTT).
Impact of buffer sizes
• Buffer size 1 MB to 8 MB, four high-speed flows with the same RTT (320 ms).
• As the buffer size increases, the CoV of all protocols decreases.
Impact of congestion
• Buffer size 2 MB, two high-speed flows with the same RTT (40 ms to 320 ms), plus a dozen long-lived TCP flows.
• Convex protocols show large variations (the convex ordering still holds).