Modeling and Taming Parallel TCP on the Wide Area Network Dong Lu, Yi Qiao, Peter Dinda, Fabian Bustamante Department of Computer Science Northwestern University
Summary • Parallel TCP flows are frequently used • What number of parallel flows will give the highest throughput with less than a p% impact on cross traffic? --- the "Maximum Nondisruptive Throughput" • Our answer to this question: • Active probing at two parallelism levels • Modeling and predicting parallel TCP throughput at other parallelism levels • Estimating the impact on cross traffic and proposing a parallelism level that bounds that impact
Outline • Motivation • Modeling Parallel TCP throughput • Two probes at different parallelism levels • Evaluation via wide area experiments • Taming parallel TCP • Estimating the impact on cross traffic • Evaluation via simulations
Motivation • Parallel TCP flows are broadly used to achieve higher throughput on the current Internet; GridFTP is one example. However, • No practical mechanism exists to predict their throughput • No previous work estimates and controls the negative impact on cross traffic throughput (taming parallel TCP)
Motivation • Danger of using too many parallel TCP flows • Congesting the end-to-end path and significantly disturbing cross traffic • Diminishing returns, or even lower throughput
Our solution: TameParallelTCP()
struct ParallelTCPChar {
    int num_flows;
    double max_nondisruptive_thru;
    double cross_traffic_impact;
};
ParallelTCPChar * TameParallelTCP(Address dest, double maximpact /* percentage */);
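As a usage sketch only (the header name, the hostname, and the treatment of Address as a plain string are illustrative assumptions, not part of the published interface):

#include <stdio.h>
#include "tameparalleltcp.h"   /* assumed header declaring the interface above */

int main(void) {
    /* Ask for the parallelism level whose estimated impact on cross
       traffic stays below 10%. Destination host is hypothetical. */
    ParallelTCPChar *pc = TameParallelTCP("some.remote.host.edu", 10.0);
    if (pc) {
        printf("flows=%d  throughput=%.2f  cross-traffic impact=%.1f%%\n",
               pc->num_flows, pc->max_nondisruptive_thru, pc->cross_traffic_impact);
    }
    return 0;
}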
Outline • Motivation • Modeling Parallel TCP throughput • Two probes at different parallelism levels • Evaluation via wide area experiments • Taming parallel TCP • Estimating the impact on cross traffic • Evaluation via simulations
Modeling Parallel TCP throughput Single TCP throughput model [Mathis, et al, Sigcomm CCR’97] Parallel TCP throughput upper bound model [Hacker, et al, IPDPS’02] • Upper bound tight only in uncongested networks • Hard to obtain future loss rate: what is the loss rate if I add 20 parallel TCP flows?
Modeling Parallel TCP throughput
• Single TCP throughput model [Mathis, et al, Sigcomm CCR'97]:
  BW = (MSS / RTT) · √(3 / (2·b·p)) = (MSS · c1) / (RTT · √p)   Eq(1)
• Parallel TCP throughput upper bound model [Hacker, et al, IPDPS'02]:
  BW_n ≤ (MSS · c1 / RTT) · Σ_{i=1..n} 1/√(p_i) ≤ (n · MSS · c1) / (RTT · √p)   Eq(2)
• Parallel TCP throughput model (Ours):
  BW_n = (n · MSS · c1) / (RTT · √(p_n))   Eq(3)
• n: number of parallel flows; p, p_n: loss rate; RTT: round trip time; MSS: max segment size; b and c1 = √(3/(2b)): constants
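To make the units concrete, a small illustrative evaluation of Eq(3); the numbers below are made up for the example and are not measurements from the paper:

#include <math.h>
#include <stdio.h>

/* Eq(3): aggregate throughput of n parallel flows, in bits/second.
   mss_bytes: maximum segment size, rtt_s: round trip time in seconds,
   p: shared per-flow loss rate at this parallelism level, b: packets per ACK. */
static double parallel_tcp_bw(int n, double mss_bytes, double rtt_s,
                              double p, double b) {
    double c1 = sqrt(3.0 / (2.0 * b));
    return n * (mss_bytes * 8.0) * c1 / (rtt_s * sqrt(p));
}

int main(void) {
    /* Example: MSS = 1460 B, RTT = 50 ms, p = 0.001, b = 2, n = 4 flows. */
    printf("%.1f Mbps\n", parallel_tcp_bw(4, 1460, 0.05, 0.001, 2) / 1e6);
    return 0;
}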
Assumptions • Parallel TCP flows share the same loss rate p; the loss rate increases with the parallelism level • Supported by previous research • MSS remains stable after TCP connection setup • TCP throughput shows transient stability • Supported by previous research • Our associated work to appear in ICDCS'05 • Our model does NOT require knowledge of RTT, MSS, p, b, or c1
Modeling and predicting loss rate
• Rewriting Eq(3): define the combined loss term p'_n = p_n · (RTT / (MSS · c1))², so that BW_n = n / √(p'_n)   Eq(4)
• Two probes at different parallelism levels w and v: after probing we know BW_w and BW_v, hence p'_w = (w / BW_w)² and p'_v = (v / BW_v)²   Eq(5)
• We don't need to know RTT, MSS, p, b, or c1 individually
• If we know how p'_n varies with n, then we can calculate BW_n at other parallelism levels based on the two probes
• Empirically, we use a partial polynomial f(n) with two coefficients, a and b, to approximate p'_n; the two probes determine a and b   Eq(6)
Predicting throughput at level m
• Combining Eq(4), Eq(5), and Eq(6): once a and b are fitted from the two probes, the predicted throughput at any parallelism level m is BW_m = m / √(f(m))   Eq(7)
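A minimal sketch of the two-probe fit and prediction, assuming (purely for illustration) a linear partial polynomial f(n) = a·n + b; the exact polynomial form used by the actual tool may differ, and the probe results below are hypothetical:

#include <math.h>
#include <stdio.h>

/* Two coefficients of the assumed partial polynomial f(n) = a*n + b. */
typedef struct { double a, b; } LossFit;

static LossFit fit_from_probes(int w, double bw_w, int v, double bw_v) {
    /* Eq(4)/Eq(5): p'_n = (n / BW_n)^2 for each probed level. */
    double pw = (w / bw_w) * (w / bw_w);
    double pv = (v / bw_v) * (v / bw_v);
    LossFit fit;
    fit.a = (pv - pw) / (double)(v - w);   /* slope  */
    fit.b = pw - fit.a * w;                /* offset */
    return fit;
}

static double predict_bw(const LossFit *fit, int m) {
    /* Eq(7): BW_m = m / sqrt(f(m)). */
    return m / sqrt(fit->a * m + fit->b);
}

int main(void) {
    /* Hypothetical probe results: 1 flow -> 6 Mbps, 5 flows -> 20 Mbps. */
    LossFit fit = fit_from_probes(1, 6.0, 5, 20.0);
    for (int m = 1; m <= 30; m += 5)
        printf("n=%2d  predicted BW = %.1f Mbps\n", m, predict_bw(&fit, m));
    return 0;
}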
Experiments setup • Testbed: PlanetLab • 41 randomly chosen pairs of hosts (41 end-to-end paths) • Throughput test tool: iperf (see the example command below) • Methodology: a test consists of measuring parallel TCP throughput at increasing parallelism levels (1–30) • Repeat each test 10 times on each path
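For reference, one probe of this kind can be taken with an iperf client invocation along the following lines (the destination host is a placeholder and exact option spellings may vary between iperf versions); here 8 parallel flows are measured for 30 seconds:

    iperf -c <destination-host> -P 8 -t 30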
A random wide area example (plot: measured vs. predicted parallel TCP throughput across parallelism levels; legend: Measurement, Prediction)
Prediction Errors Unrelated To Parallelism Level (plot: mean relative prediction error, y-axis from -0.1 to 0.1, versus parallelism level (number of parallel TCP flows), x-axis from 0 to 30)
Outline • Motivation • Modeling Parallel TCP throughput • Two probes at different parallelism levels • Evaluation via wide area experiments • Taming parallel TCP • Estimating the impact on cross traffic • Evaluation via simulations
Maximum Nondisruptive Throughput (MNT) • The highest throughput achievable with less than a p% impact on cross traffic
Our solution: TameParallelTCP()
struct ParallelTCPChar {        /* function return */
    int num_flows;
    double max_nondisruptive_thru;
    double cross_traffic_impact;
};
ParallelTCPChar * TameParallelTCP(Address dest, double maximpact /* user specified */);
Challenges • The available bandwidth on the bottleneck link is unknown • The number of cross traffic flows and their loss rates are unknown • Overhead considerations
Assumptions • TCP flows share the same loss rate on the bottleneck link if: • The cross traffic flows have RTTs similar to our parallel TCP flows • The router on the bottleneck link uses Random Early Detection (RED)-like queue management policies
Estimating the impact on cross traffic
• Recall that after two probes, we get the values of a and b for f(n)
• We set n1 = 1 and n2 = "number of parallel TCP flows under consideration"
• Then with Eq(10), we can calculate relc, the relative reduction in cross traffic throughput
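One way to sketch this computation, reusing the illustrative linear f(n) from the prediction sketch; the closed form below follows from the shared-loss-rate assumption (a cross-traffic flow's throughput scales as 1/√(p_n), and p_n is proportional to f(n)) and is offered only as an illustrative stand-in for Eq(10):

#include <math.h>

typedef struct { double a, b; } LossFit;   /* same illustrative fit as in the prediction sketch */

/* Estimated fractional reduction in a cross-traffic flow's throughput when
   our parallelism level grows from n1 to n2. With a shared loss rate,
   cross-traffic throughput ~ 1/sqrt(p_n), giving relc = 1 - sqrt(f(n1)/f(n2)),
   with f(n) = a*n + b as the assumed illustrative form. */
static double cross_traffic_impact(const LossFit *fit, int n1, int n2) {
    double f1 = fit->a * n1 + fit->b;
    double f2 = fit->a * n2 + fit->b;
    return 1.0 - sqrt(f1 / f2);
}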
Simulation setup • Why do we need simulations? • Detailed information on cross traffic • Ns2 based simulations • TCP Reno • Each simulation is repeated 10 times
Simulation topologies (diagrams of Topo 1 and Topo 2: parallel TCP flows competing with cross traffic through RED bottleneck routers)
Low, slightly biased prediction errors (CDF plot: Probability(error < x) versus relative prediction error, x from -0.6 to 0.6)
Implementing TameParallelTCP()
TameParallelTCP() {
    Send two probes at different parallelism levels;
    Estimate the loss rate curve;
    Estimate the throughput at different parallelism levels;
    Estimate the impact on cross traffic at different parallelism levels;
    Propose a parallelism level with estimated impact < maximpact;
    Return struct ParallelTCPChar;
}
struct ParallelTCPChar {
    int num_flows;
    double max_nondisruptive_thru;
    double cross_traffic_impact;
};
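A condensed sketch of how these steps could fit together, reusing the illustrative helpers from the earlier sketches (LossFit, fit_from_probes, predict_bw, cross_traffic_impact) and the document's ParallelTCPChar struct (assumed here to be typedef'd); probe_throughput() is a hypothetical measurement hook (e.g., an iperf run), Address is treated as a hostname string, and the probe levels 1 and 5 are arbitrary choices:

#include <stdlib.h>

#define MAX_LEVEL 30   /* highest parallelism level considered, as in the experiments */

/* Hypothetical hook: measure aggregate throughput (Mbps) of n parallel flows to dest. */
double probe_throughput(const char *dest, int n);

ParallelTCPChar *TameParallelTCP(const char *dest, double maximpact) {
    /* 1. Two probes at different parallelism levels. */
    double bw_w = probe_throughput(dest, 1);
    double bw_v = probe_throughput(dest, 5);
    LossFit fit = fit_from_probes(1, bw_w, 5, bw_v);

    /* 2-5. Estimate throughput and cross-traffic impact at each level, then keep
       the highest-throughput level whose impact stays under maximpact. */
    ParallelTCPChar *best = calloc(1, sizeof *best);
    if (!best) return NULL;
    for (int n = 1; n <= MAX_LEVEL; n++) {
        double bw     = predict_bw(&fit, n);
        double impact = 100.0 * cross_traffic_impact(&fit, 1, n);   /* percent */
        if (impact < maximpact && bw > best->max_nondisruptive_thru) {
            best->num_flows = n;
            best->max_nondisruptive_thru = bw;
            best->cross_traffic_impact = impact;
        }
    }
    /* num_flows == 0 means no level met the impact bound. */
    return best;
}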
Conclusions • We have shown how to estimate parallel TCP throughput and its impact on cross traffic by sending two probes • Our evaluation using both wide area experiments and ns2 based simulations shows the effectiveness of our approach • Future work • How to relax our assumptions about the cross traffic?
For more information • Tool available at: • http://plab.cs.northwestern.edu/Clairvoyance • Dong Lu, Northwestern Univ. http://www.cs.northwestern.edu/~donglu • Related work on sequential TCP characterization and prediction • Dong Lu, Yi Qiao, Peter Dinda, Fabian Bustamante, "Characterizing and Predicting TCP Throughput on the Wide Area Network", ICDCS 2005.