This presentation covers the current status of TCP alternatives such as FAST TCP, Scalable TCP, and Highspeed TCP. It discusses results from SLAC, related Internet2 work, and future plans for TransPAC, and addresses questions on throughput, RTT, Linux, and New Reno, offering insight into how these TCP alternatives perform and their potential impact on network optimization.
Status of FAST TCP and other TCP alternatives John Hicks TransPAC HPCC Engineer Indiana University APAN Meeting – Hawaii 30-January-2004
Overview • Brief introduction to TCP alternatives • Results from SLAC • Internet2 information • TransPAC work • Future plans • Questions
TCP Reno single stream [Figure: throughput (Mbps, 0–700) and RTT (ms) vs. time over a 1200 s run on a ~70 ms RTT path; Linux 2.4 New Reno] • Low performance on fast long-distance paths • AIMD (add a = 1 packet to cwnd per RTT; decrease cwnd by factor b = 0.5 on congestion) Information courtesy of Les Cottrell from the SLAC group at Stanford
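The AIMD rule on this slide can be sketched as a minimal Python model (cwnd in packets, not kernel code; the `event` strings are illustrative):

```python
def reno_update(cwnd, event, a=1.0, b=0.5):
    """One TCP Reno congestion-avoidance step (cwnd in packets).

    'ack': additive increase of a/cwnd per ACK (~ +a packets per RTT).
    'loss': multiplicative decrease of cwnd by factor b.
    """
    if event == "ack":
        return cwnd + a / cwnd
    if event == "loss":
        return max(1.0, cwnd * (1.0 - b))
    return cwnd
```

The slow per-RTT additive growth combined with halving on every loss is what makes a single Reno stream struggle on long fat paths: recovering a large window takes many RTTs.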
Parallel TCP Reno • TCP Reno with 16 streams • Parallel streams are heavily used in HENP and elsewhere to achieve the needed performance, so this is today’s de facto baseline • However, it is hard to optimize both the window size AND the number of streams, since the optimal values vary with network capacity, routes, and utilization Information courtesy of Les Cottrell from the SLAC group at Stanford
FAST TCP • Based on TCP Vegas • Uses both queuing delay and packet losses as congestion measures • Developed at Caltech by Steven Low and collaborators • Beta code available soon Information courtesy of Les Cottrell from the SLAC group at Stanford
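FAST TCP’s delay-based idea can be sketched with the published per-RTT window update; the `alpha` (target queued packets) and `gamma` (smoothing) values below are illustrative defaults, not Caltech’s tuning:

```python
def fast_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    """One per-RTT FAST TCP window update (sketch of the published rule).

    Queueing delay (rtt - base_rtt) is the congestion signal: growth
    stops at the window where (base_rtt / rtt) * w + alpha == w,
    i.e. roughly `alpha` packets sit queued in the path at equilibrium.
    """
    target = (base_rtt / rtt) * w + alpha
    return min(2.0 * w, (1.0 - gamma) * w + gamma * target)
```

Because the signal is delay rather than loss, the window converges smoothly instead of sawtoothing, which is the behavior SLAC's tests probe on long paths.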
Scalable TCP • Uses exponential increase everywhere (in slow start and congestion avoidance) • Multiplicative decrease factor b = 0.125 • Introduced by Tom Kelly of Cambridge Information courtesy of Les Cottrell from the SLAC group at Stanford
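Scalable TCP’s update is simple enough to state directly: a fixed increment a = 0.01 per ACK (so the per-RTT increase is proportional to cwnd, i.e. exponential) and decrease factor b = 0.125 on congestion. A minimal sketch:

```python
def scalable_update(cwnd, event, a=0.01, b=0.125):
    """One Scalable TCP step (cwnd in packets).

    'ack': +a packets per ACK, giving a multiplicative ~(1+a) per-RTT
    increase regardless of cwnd size. 'loss': decrease by factor b.
    """
    if event == "ack":
        return cwnd + a
    if event == "loss":
        return max(1.0, cwnd * (1.0 - b))
    return cwnd
```

The key contrast with Reno: recovery time after a loss is a fixed number of RTTs, independent of window size, which is why it scales to fast long-distance paths.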
Highspeed TCP • Behaves like Reno for small values of cwnd • Above a chosen value of cwnd (default 38) a more aggressive function is used • Uses a table to indicate by how much to increase cwnd when an ACK is received • Available with web100 • Introduced by Sally Floyd Information courtesy of Les Cottrell from the SLAC group at Stanford
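The table-driven increase can be sketched as follows; the thresholds and increments here are a small illustrative excerpt in the spirit of Floyd’s schedule (see RFC 3649 for the real table), not the full set of values:

```python
# Illustrative Highspeed TCP increase schedule: Reno-like (a = 1) below
# the threshold cwnd of 38 packets, then a larger table-driven per-RTT
# increment a(cwnd) above it. Toy excerpt, NOT the complete RFC 3649 table.
HS_TABLE = [(38, 1.0), (118, 2.0), (221, 3.0)]  # (cwnd threshold, increment a)

def hs_increment(cwnd):
    """Per-RTT additive increment a(cwnd): pick the increment of the
    largest table threshold that cwnd has reached."""
    a = 1.0
    for threshold, inc in HS_TABLE:
        if cwnd >= threshold:
            a = inc
    return a
```

Behaving exactly like Reno below the threshold is what makes the protocol safe to deploy alongside standard TCP on uncongested or low-speed paths.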
Highspeed TCP Low Priority • Mixture of HS-TCP with TCP-LP (Low Priority) • Backs off early in the face of congestion by looking at RTT • Idea is to provide scavenger service without router modifications • From Rice University Information courtesy of Les Cottrell from the SLAC group at Stanford
Binary Increase Control TCP (BIC TCP) • Combines: • An additive increase used for large cwnd • A binary increase used for small cwnd • Developed by Injong Rhee at NC State University Information courtesy of Les Cottrell from the SLAC group at Stanford
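BIC’s combination of the two growth modes can be sketched as a single step function; `w_max` is the window where the last loss occurred and the `s_max` cap is an illustrative parameter choice, not NC State’s tuning:

```python
def bic_step(cwnd, w_max, s_max=32.0):
    """One BIC-TCP growth step (sketch).

    Below w_max: binary-search jump halfway toward w_max, but capped at
    s_max, so distant targets are approached additively (large cwnd)
    and nearby targets logarithmically (small remaining gap).
    Past w_max: probe upward additively for a new maximum.
    """
    if cwnd < w_max:
        return cwnd + min((w_max - cwnd) / 2.0, s_max)  # binary increase
    return cwnd + s_max  # max probing beyond the last known maximum
```

The binary search converges quickly to just under the last loss point and then lingers there, which is consistent with the stable behavior SLAC reports for BIC below.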
Hamilton TCP (H-TCP) • Similar to HS-TCP in switching to an aggressive mode after a threshold • Uses a heterogeneous AIMD algorithm • Developed at the Hamilton Institute, Ireland Information courtesy of Les Cottrell from the SLAC group at Stanford
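H-TCP’s "heterogeneous AIMD" keys the increase factor to elapsed time since the last congestion event rather than to cwnd. A sketch of the published increase function (the 1-second threshold and polynomial coefficients follow the H-TCP authors’ formulation; treat the exact constants as an assumption):

```python
def htcp_alpha(delta, delta_l=1.0):
    """H-TCP per-RTT additive-increase factor (sketch).

    delta: seconds since the last congestion event. Reno-like (alpha = 1)
    while delta <= delta_l, then growing polynomially, so long-running
    loss-free flows ramp up aggressively while short ones stay Reno-friendly.
    """
    if delta <= delta_l:
        return 1.0
    t = delta - delta_l
    return 1.0 + 10.0 * t + (t / 2.0) ** 2
```

Basing aggressiveness on time rather than window size means two flows with different RTTs get comparable treatment, the fairness argument behind the design.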
SLAC TCP Testing • TCP only • No rate-based transport protocols (e.g. SABUL, UDT, RBUDP) at the moment • No iSCSI or FC over IP • Sender-side mods only; the HENP model is a few big senders and many smaller receivers • Simplifies deployment: only a few hosts at a few sending sites • No DRS • Runs on production nets • No router mods (XCP/ECN), no jumbo frames Information courtesy of Les Cottrell from the SLAC group at Stanford
SLAC preliminary test results • Advanced stacks behave like single-stream TCP Reno on short distances for paths up to Gbits/s, especially if window size is limited • Single-stream TCP Reno has low performance and is unstable on long distances • P-TCP is very aggressive and impacts the RTT badly • HSTCP-LP is too gentle; this can be important for providing scavenger service without router modifications. By design it backs off quickly, but otherwise performs well • FAST TCP is badly handicapped by reverse traffic • S-TCP is very aggressive on long distances • HS-TCP is very gentle and, like H-TCP, has lower throughput than the other protocols • BIC TCP performs very well in almost all cases Information courtesy of Les Cottrell from the SLAC group at Stanford
SLAC preliminary test results • With the optimal window, all stacks are within ~20% of one another, except single-stream Reno on medium and long distances • P-TCP & S-TCP get the best throughput Information courtesy of Les Cottrell from the SLAC group at Stanford
Internet2 information • Stanislav Shalunov developed i2perf (http://www.internet2.edu/~shalunov/i2perf) • i2perf was initially developed for FAST TCP to measure RTT • Testing topology (10-29-2003) • Reno, from Seattle to Atlanta, RTT = 57.6 ms • FAST, from Raleigh to Atlanta, RTT = 23.7 ms • FAST, from Seattle to Atlanta, RTT = 57.4 ms • FAST, from Pittsburgh to Atlanta, RTT = 26.9 ms • FAST testing planned over TransPAC (waiting on kernel mods from Caltech) • More information at http://www.internet2.edu/~shalunov/talks
TransPAC Work • Set up a test from SURFnet to APAN • Setup took 3 days due to time-zone differences • Purpose of this test was to establish contact personnel and identify equipment needs • Only standard tests were done • Kernel mods and TCP alternatives require more time to set up • More testing planned
Future Plans • Possible future testing to and from the following sites: • Indiana University • MIT (David Lapsley) • L. A. • other Abilene locations • StarLight • APAN (Tokyo XP and others?) • Looking for groups interested in testing TCP • Contact me to help coordinate testing over TransPAC
For More Information • FAST TCP • http://netlab.caltech.edu/FAST/ • Scalable TCP • http://www-lce.eng.cam.ac.uk/~ctk21/scalable/ • Highspeed TCP • http://www.icir.org/floyd/hstcp.html • Highspeed TCP Low-Priority • http://dsd.lbl.gov/DIDC/PFLDnet2004/papers/Kuzmanovic.pdf • Binary Increase Control (BIC) TCP • http://www.csc.ncsu.edu/faculty/rhee/export/bitcp/
Even More Information • Hamilton TCP • http://www.hamilton.ie/net/main.htm?tcp • SLAC TCP • http://www-iepm.slac.stanford.edu/monitoring/bulk/fast/ • Internet2 (Stanislav Shalunov) • http://www.internet2.edu/~shalunov/ • TransPAC • http://www.transpac.org • APAN NOC • http://www.jp.apan.net/noc/
Questions and discussion John Hicks Indiana University jhicks@iu.edu