Impact of Bottleneck Queue on Long Distant TCP Transfer
August 25, 2005
NOC-Network Engineering Session, Advanced Network Conference in Taipei
Masaki Hirabaru (NICT) <masaki@nict.go.jp> and Jin Tanaka (KDDI) <tanaka@kddnet.ad.jp>
APAN Requirements on Transport
• Advanced ► high speed
• International ► long distance
• Difficulty in congestion avoidance grows in proportion to the bandwidth-delay product (BWDP)
• Single TCP flow; fairness is not considered here
Long-Distance Rover Control
• At least 7 minutes one-way delay between Earth and Mars: images travel from Mars to Earth, commands from Earth to Mars
• By the time the operator saw the collision, it was too late to react
Long-Distance End-to-End Congestion Control
• Sender (JP) to receiver (US) over a path with 200 ms round-trip delay
• Flows A and B merge at a bottleneck of capacity C; the queue overflows when A + B > C, and the loss feedback takes a full round trip to reach the sender
• BWDP: the amount of data sent but not yet acknowledged
  - 64 Kbps x 200 ms = 1600 B ~ 1 packet
  - 1 Gbps x 200 ms = 25 MB ~ 16,700 packets (1500 B MTU)
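The two BWDP figures above come straight from multiplying the bottleneck rate by the round-trip time. A minimal sketch in Python (the helper name and 1500-byte MTU are assumptions for illustration):

```python
# Minimal sketch: bandwidth-delay product (BWDP) for the two cases on the slide.
# Assumes a 1500-byte MTU; the helper name is illustrative, not from the talk.

MTU_BYTES = 1500

def bwdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Data in flight needed to fill the pipe: rate x RTT, in bytes."""
    return rate_bps * rtt_s / 8

for rate_bps in (64e3, 1e9):
    bwdp = bwdp_bytes(rate_bps, rtt_s=0.2)
    print(f"{rate_bps/1e6:g} Mbps x 200 ms -> {bwdp:,.0f} B "
          f"~ {bwdp / MTU_BYTES:,.0f} packets")
# Prints ~1,600 B (~1 packet) and ~25,000,000 B (~16,667 packets).
```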
Analyzing Advanced TCP Dynamic Behavior in a Real Network (example: Tokyo to Indianapolis at 1 Gbps with HighSpeed TCP)
[Graphs of throughput, RTT, window sizes, and packet losses, generated with Web100; data taken during the e-VLBI demonstration at the Internet2 Member Meeting in October 2003]
TCP Performance Measurement in a Testbed (focus on the bottleneck queue)
• Sender and receiver (Linux TCP, 1500 B MTU) connected over GbE through a dummynet box (FreeBSD 5.1) that emulates the bottleneck
• Emulated RTT 200 ms (100 ms one-way); only 800 Mbps available through dummynet
• Observed delay = queuing delay (q) + trip delay (t), with 1/2 RTT < t < RTT; queue overflow causes loss
TCP’s Way of Rate Control (slow start)
• The sender transmits each window as a line-rate (1 Gbps) burst; slow start doubles the window every RTT (200 ms), so the burst length grows 20 ms, 40 ms, 80 ms, 160 ms
• A 20 ms burst per 200 ms RTT is only a 100 Mbps average rate, yet by roughly 150 Mbps average the bursts already overflow a 1000-packet bottleneck queue
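A rough back-of-envelope model of that overshoot, under the common assumption (not spelled out on the slide) that ack-clocked slow start delivers packets to the bottleneck at about twice its drain rate, so the queue peaks near half the congestion window:

```python
# Rough slow-start model (an assumed simplification, not the talk's exact analysis):
# ack-clocked slow start feeds the bottleneck at ~2x its drain rate, so the
# standing queue peaks near cwnd/2; loss occurs when that exceeds the buffer.

MTU_BITS = 1500 * 8
RTT_S = 0.2
QUEUE_PKTS = 1000                          # bottleneck buffer from the slide
BDP_PKTS = int(1e9 * RTT_S / MTU_BITS)     # ~16,700 packets at 1 Gbps

cwnd = 10                                  # initial window, in packets (assumed)
rtt_n = 0
while cwnd / 2 <= QUEUE_PKTS and cwnd < BDP_PKTS:
    cwnd *= 2                              # slow start doubles the window each RTT
    rtt_n += 1

avg_mbps = cwnd * MTU_BITS / RTT_S / 1e6
print(f"overflow after ~{rtt_n} RTTs, cwnd ~{cwnd} packets, "
      f"average rate ~{avg_mbps:.0f} Mbps (far below 1000 Mbps)")
# Prints an average rate around 150 Mbps, matching the slide's point.
```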
Bottleneck Bandwidth and Queue Size: TCP Burstiness
[Measurement plots for four advanced TCP variants: (a) HighSpeed, (b) Scalable, (c) BIC, (d) FAST]
Measuring Bottleneck Queue Sizes
• Send a packet train from sender to receiver through the bottleneck (capacity C) while cross traffic is injected; measure each packet's delay until one is lost
• Queue size = C x (Delay_max - Delay_min)
• [Table of switch/router queue size measurement results; bottleneck set to 100 Mbps for the measurement]
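A hedged sketch of that formula in Python; the function name and delay samples are made up for illustration, in practice the samples come from timestamping the packet train up to the first loss:

```python
# Minimal sketch of the slide's formula: queue size = C x (Delay_max - Delay_min).

def queue_size_packets(capacity_bps: float, delays_s: list[float],
                       mtu_bytes: int = 1500) -> float:
    """Estimate bottleneck queue size from delay samples of a packet train."""
    backlog_bits = capacity_bps * (max(delays_s) - min(delays_s))
    return backlog_bits / (mtu_bytes * 8)

# Example: 100 Mbps bottleneck, delay rising from 100 ms to 112 ms before loss.
print(queue_size_packets(100e6, [0.100, 0.104, 0.109, 0.112]))  # ~100 packets
```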
Typical Bottleneck Cases
• a) Switch feeding a router, stepping down from 1 Gbps (or 10 G) to 100 Mbps (or 1 G); interface queues on the order of ~100 packets at the switch and ~1000 packets at the router
• b-1) Switch and router interconnected with VLANs
• b-2) Switch/router where untagged 10 G LAN-PHY Ethernet feeds a 9.5 G WAN-PHY link (802.1q tagged)
Solutions by Advanced TCPs
How can we foresee a collision (queue overflow)?
• Loss-based ► helped by AQM (Active Queue Management): Reno, Scalable, HighSpeed, BIC, …
• Delay-based: Vegas, FAST (see the sketch after this list)
• Explicit router notification: ECN, XCP, Quick Start, SIRENS, MaxNet
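As a rough illustration of the delay-based idea (a simplified sketch, not the talk's code and not real Vegas/FAST; the thresholds and helper are assumed), the flow estimates how many packets it has sitting in the bottleneck queue from its base and current RTTs and backs off before the queue overflows:

```python
# Vegas/FAST-style delay-based window adjustment (simplified, assumed parameters).
# Estimated backlog this flow keeps in the queue ~ cwnd * (1 - base_rtt / rtt).

LOW_PKTS = 1       # below this estimated backlog, probe for more bandwidth
HIGH_PKTS = 3      # above this estimated backlog, ease off before loss

def adjust_cwnd(cwnd_pkts: float, base_rtt_s: float, rtt_s: float) -> float:
    """One per-RTT window adjustment based on estimated queued packets."""
    queued = cwnd_pkts * (1.0 - base_rtt_s / rtt_s)
    if queued > HIGH_PKTS:
        return cwnd_pkts - 1       # RTT is inflating: the queue is building
    if queued < LOW_PKTS:
        return cwnd_pkts + 1       # path looks empty: grow the window
    return cwnd_pkts

print(adjust_cwnd(100, base_rtt_s=0.200, rtt_s=0.212))  # backlog ~5.7 -> back off
```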
Queue Management Methods
• FIFO (First In, First Out): packets are queued in arrival order and dropped only when the buffer is completely full (tail drop)
• RED (Random Early Detection): packets are dropped probabilistically once the average queue length exceeds a threshold, before the buffer fills
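A minimal sketch of the RED drop decision; the thresholds are illustrative, and real RED tracks an exponentially weighted average queue length rather than the raw value used here:

```python
import random

# Simplified RED drop decision (illustrative parameters, not a router's defaults).
MIN_TH = 200      # packets: below this, never drop early
MAX_TH = 800      # packets: at or above this, drop every arriving packet
MAX_P = 0.1       # drop probability as the average queue approaches MAX_TH

def red_drop(avg_queue_pkts: float) -> bool:
    """Return True if the arriving packet should be dropped."""
    if avg_queue_pkts < MIN_TH:
        return False
    if avg_queue_pkts >= MAX_TH:
        return True
    # Drop probability rises linearly between the two thresholds.
    p = MAX_P * (avg_queue_pkts - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

print(sum(red_drop(500) for _ in range(10000)) / 10000)  # ~0.05 at mid-queue
```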
PAUSE and HOLB (Head-of-Line Blocking)
• In the switch, packets headed for a fast (empty) output sit blocked in the shared input queue behind packets waiting for a slow (full) output
• Note: Ethernet flow control (the 802.3x PAUSE frame) may produce head-of-line blocking, reducing performance at a backbone switch (a small simulation of this effect follows)
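The note can be reproduced with a toy model. The sketch below uses entirely hypothetical rates and queue sizes (nothing here is a measurement from the talk): a shared input FIFO feeds a slow and a fast output, and PAUSE-style backpressure on the full slow port also stalls traffic headed for the idle fast port, while a tail-dropping switch keeps the fast port busy.

```python
from collections import deque

# Toy head-of-line blocking demo (illustrative model, not a real switch):
# one shared input FIFO feeds two outputs; the "slow" output drains 1 pkt/tick,
# while the "fast" output always has room. With backpressure (PAUSE-like
# behaviour) the head packet is never dropped, so while it waits for the full
# slow port it also blocks the fast-port packets queued behind it.

SLOW_Q_CAP = 4
TICKS = 100

def run(backpressure: bool) -> int:
    """Return packets delivered to the fast output over TICKS ticks."""
    input_q = deque()
    slow_q = 0
    fast_delivered = 0
    for _ in range(TICKS):
        # 10 arrivals per tick, alternating slow/fast destinations
        for i in range(10):
            input_q.append("fast" if i % 2 else "slow")
        moved = 0
        while input_q and moved < 10:          # switch forwards up to 10 pkt/tick
            dst = input_q[0]
            if dst == "slow" and slow_q >= SLOW_Q_CAP:
                if backpressure:
                    break                      # PAUSE-like: hold the head, block everything behind it
                input_q.popleft()              # no backpressure: tail-drop the slow packet instead
                continue
            input_q.popleft()
            moved += 1
            if dst == "slow":
                slow_q += 1
            else:
                fast_delivered += 1            # fast output is never the limit here
        slow_q = max(0, slow_q - 1)            # slow output drains 1 packet per tick
        while len(input_q) > 100:              # keep the shared input FIFO bounded
            input_q.pop()
    return fast_delivered

print("fast-port packets with backpressure:   ", run(True))    # ~100
print("fast-port packets without backpressure:", run(False))   # ~500
```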
Summary
• Add an interface to a router, or
• Use a switch with an appropriate interface queue
• Let's consider making use of AQM on a router
Future Plan
• 10 Gbps congestion experiments through TransPAC2 and JGN II with large delay (>= 100 ms)