Congestion Control Jennifer Rexford, Advanced Computer Networks http://www.cs.princeton.edu/courses/archive/fall08/cos561/ Tuesdays/Thursdays, 1:30pm-2:50pm
Goals For Today’s Class • TCP reliable delivery • Detecting loss and retransmitting data • TCP congestion control • Additive increase, multiplicative decrease • Slow start at beginning and after a timeout • Inefficiency of TCP, and new TCP variants • Discussion of congestion control • What should be the goals? • What support should the network provide? • How to be robust to greedy and malicious users?
TCP: Reliable Delivery • Detect missing data: sequence number • Used to detect a gap in the stream of bytes • ... and for putting the data back in order • Detect bit errors: checksum • Used to detect corrupted data at the receiver • …leading the receiver to drop the packet • Recover from lost data: retransmission • Sender retransmits lost or corrupted data • Two main ways to detect lost packets • Retransmission timeout and fast retransmission
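To make the error-detection step concrete, here is a minimal sketch of the 16-bit ones'-complement checksum that TCP's checksum is based on (the pseudo-header and checksum-field handling are omitted, and the function name is just illustrative):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum, the basis of the TCP checksum."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)     # fold any carry back into the low 16 bits
    return ~total & 0xFFFF                           # ones' complement of the running sum
```

The receiver recomputes this sum over what it actually received; a mismatch means the segment was corrupted in transit, so the receiver drops it and retransmission recovers the data.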
TCP Congestion Control: Van Jacobson Paper • Conservation of packets • For a connection in equilibrium • A new packet should enter the network only as an old one leaves • New mechanisms in TCP • Slow start for new connections • Starting with a small congestion window • Accurate round-trip time estimation • Including RTT variance when setting timeout values • Exponential backoff for repeated retransmissions • Congestion avoidance by adaptive window sizing • Additive increase, multiplicative decrease
Automatic Repeat reQuest (ARQ) • Automatic Repeat reQuest • Receiver sends an acknowledgment (ACK) when it receives a packet • Sender waits for the ACK and times out if it does not arrive within some time period • Simplest ARQ protocol • Stop and wait • Send a packet, stop and wait until the ACK arrives [Figure: sender/receiver timeline showing a packet, its ACK, and the retransmission timeout]
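A minimal sketch of the stop-and-wait loop described above, assuming hypothetical send_packet and recv_ack helpers supplied by the caller (real TCP pipelines many segments with a sliding window instead):

```python
def stop_and_wait_send(packets, send_packet, recv_ack, timeout=1.0):
    """Send one packet at a time, retransmitting until its ACK arrives."""
    for seq, data in enumerate(packets):
        while True:
            send_packet(seq, data)        # transmit (or retransmit) the packet
            ack = recv_ack(timeout)       # wait up to `timeout` seconds; None on timeout
            if ack == seq:                # got the ACK for this packet
                break                     # otherwise loop and retransmit
```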
Reasons for Retransmission [Figure: three sender/receiver timelines showing why a sender retransmits: the packet was lost; the ACK was lost, producing a duplicate packet at the receiver; or the timeout fired too early, producing duplicate packets]
How Long Should Sender Wait? • Sender sets a timeout to wait for an ACK • Too short: wasted retransmissions • Too long: excessive delays when a packet is lost • TCP sets the timeout as a function of the RTT • Expect the ACK to arrive after one round-trip time • … plus a fudge factor to account for queuing • But, how does the sender know the RTT? • Can estimate the RTT by watching the ACKs • Smooth estimate: keep a running average of the RTT • EstimatedRTT = a * EstimatedRTT + (1 – a) * SampleRTT • Compute timeout: TimeOut = 2 * EstimatedRTT
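A minimal sketch of the running-average estimate and timeout above; the weight a (commonly around 0.875) is an assumed constant, and the 2x factor follows the slide's formula rather than modern TCP's variance-based timeout:

```python
class RttEstimator:
    """Keeps EstimatedRTT as an exponentially weighted moving average of SampleRTT."""

    def __init__(self, a=0.875):
        self.a = a                      # smoothing weight on the old estimate
        self.estimated_rtt = None

    def add_sample(self, sample_rtt):
        if self.estimated_rtt is None:
            self.estimated_rtt = sample_rtt                      # first sample seeds the estimate
        else:
            self.estimated_rtt = (self.a * self.estimated_rtt
                                  + (1 - self.a) * sample_rtt)   # EstimatedRTT update

    def timeout(self):
        return 2 * self.estimated_rtt                            # TimeOut = 2 * EstimatedRTT
```

The Jacobson refinement mentioned earlier also tracks the RTT variance and sets the timeout to the estimate plus a multiple of that deviation, which behaves better when delay is highly variable.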
A Flaw in This Approach • An ACK doesn’t really acknowledge a transmission • Rather, it acknowledges receipt of the data • Consider a retransmission of a lost packet • If you assume the ACK goes with the 1st transmission • … the SampleRTT comes out way too large • Consider a duplicate packet • If you assume the ACK goes with the 2nd transmission • … the SampleRTT comes out way too small • Simple solution in the Karn/Partridge algorithm • Only collect samples for segments sent exactly once (never retransmitted)
Fast Retransmission • Better solution possible under sliding window • Although packet n might have been lost • … packets n+1, n+2, and so on might get through • Idea: have the receiver keep sending ACKs • Each ACK says the receiver is still awaiting the nth packet • And repeated ACKs suggest later packets have arrived • Sender can view the “duplicate ACKs” as an early hint • … that the nth packet must have been lost • … and perform the retransmission early • Fast retransmission • Sender retransmits data after the “triple duplicate ACK”
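A minimal sketch of the duplicate-ACK counting that triggers fast retransmission; the threshold of three duplicates matches the "triple duplicate ACK" rule above, while the class and callback names are just illustrative:

```python
DUP_ACK_THRESHOLD = 3

class FastRetransmitter:
    def __init__(self, retransmit):
        self.retransmit = retransmit    # callback that resends the segment the receiver is awaiting
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:
            self.dup_count += 1                        # another duplicate ACK for the same point
            if self.dup_count == DUP_ACK_THRESHOLD:    # "triple duplicate ACK"
                self.retransmit(ack_no)                # retransmit early, before the timeout fires
        else:
            self.last_ack = ack_no                     # new data acknowledged; reset the counter
            self.dup_count = 0
```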
Congestion is Unavoidable in IP • Best-effort delivery • Let everybody send • Try to deliver what you can • … and just drop the rest • If many packets arrive in short period of time • The node cannot keep up with the arriving traffic • … and the buffer may eventually overflow
The Problem of Congestion • What is congestion? • Load is higher than capacity • What do IP routers do? • Drop the excess packets • Why is this bad? • Wasted bandwidth for retransmissions • “Congestion collapse”: an increase in load that results in a decrease in useful work done [Figure: goodput vs. load, with goodput collapsing once load exceeds capacity]
Inferring From Implicit Feedback • What does the end host see? • Round-trip loss • Round-trip delay
Host Adapts Sending Rate Over Time • Congestion window • Maximum number of bytes to have in transit • I.e., # of bytes still awaiting acknowledgments • Upon detecting congestion • Decrease the window size (e.g., divide in half) • End host does its part to alleviate the congestion • Upon not detecting congestion • Increase the window size, a little at a time • And see if the packets are successfully delivered • End host learns whether conditions have changed
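This adjustment is additive increase, multiplicative decrease (AIMD); here is a minimal sketch in bytes with an assumed MSS (real TCP folds this into per-ACK bookkeeping, but the shape of the rule is the same):

```python
MSS = 1460  # assumed maximum segment size, in bytes

def on_rtt_without_congestion(cwnd):
    """Additive increase: grow the window by one MSS per round-trip time."""
    return cwnd + MSS

def on_congestion_detected(cwnd):
    """Multiplicative decrease: halve the window, but keep at least one segment in flight."""
    return max(MSS, cwnd // 2)
```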
Leads to the TCP “Sawtooth” [Figure: congestion window size over time; the window grows steadily, is halved at each loss, and then grows again]
Receiver Window vs. Congestion Window • Flow control • Keep a fast sender from overwhelming a slow receiver • Congestion control • Keep a set of senders from overloading the network • Different concepts, but similar mechanisms • TCP flow control: receiver window • TCP congestion control: congestion window • TCP window: min{congestion window, receiver window}
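The last bullet as a small sketch: the sender may have at most min(congestion window, receiver window) bytes outstanding, so what it can transmit right now is that minimum less the bytes already in flight (the function and parameter names are illustrative):

```python
def sendable_bytes(congestion_window, receiver_window, bytes_in_flight):
    """How many more bytes the sender may put on the wire right now."""
    effective_window = min(congestion_window, receiver_window)   # the TCP window
    return max(0, effective_window - bytes_in_flight)
```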
How Should a New Flow Start? • Need to start with a small CWND to avoid overloading the network • But, it could take a long time to get started! [Figure: window vs. time under additive increase alone, ramping up very slowly]
“Slow Start” Phase • Start with a small congestion window • Initially, CWND is 1 Max Segment Size (MSS) • So, initial sending rate is MSS/RTT • That could be pretty wasteful • Might be much less than the actual bandwidth • Linear increase takes a long time to accelerate • Slow-start phase • Sender starts at a slow rate (hence the name) • … but increases the rate exponentially • … until the first loss event
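A minimal sketch of why the per-ACK rule gives exponential growth: during slow start every ACK adds one MSS to CWND, and since roughly CWND/MSS ACKs arrive per RTT, the window doubles each round trip until the first loss:

```python
def slow_start_on_ack(cwnd, mss=1460):
    """Slow start: each ACK grows the congestion window by one MSS."""
    return cwnd + mss

# Starting from one MSS, the window after k round trips is roughly 2**k segments,
# so a new flow reaches a large window in a handful of RTTs instead of hundreds.
```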
Slow Start and the TCP Sawtooth [Figure: window vs. time, with an exponential “slow start” ramp before the first loss and the sawtooth afterwards] Why is it called slow-start? Because TCP originally had no congestion control mechanism. The source would just start by sending a whole receiver window’s worth of data.
Two Kinds of Loss in TCP • Timeout • Packet n is lost and detected via a timeout • E.g., because all packets in flight were lost • After timeout, blasting away for the entire CWND would trigger a very large burst in traffic • So, better to start over with a low CWND • Triple duplicate ACK • Packet n is lost, but packets n+1, n+2, etc. arrive • Receiver sends duplicate acknowledgments • And the sender retransmits packet n quickly • Do a multiplicative decrease and keep going
Repeating Slow Start After Timeout • Slow-start restart: go back to a CWND of 1, but take advantage of knowing the previous value of CWND • Slow start operates until the window reaches half of the previous CWND, then additive increase takes over [Figure: window vs. time; after a timeout the window drops to 1, grows exponentially up to half the old CWND, and then grows linearly]
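A minimal sketch tying the two loss reactions together, using the conventional slow-start threshold (ssthresh) to record "half of the previous CWND"; the initial threshold and the variable names are assumptions, not a specific implementation:

```python
class CongestionWindow:
    def __init__(self, mss=1460):
        self.mss = mss
        self.cwnd = mss                  # start with one segment (slow start)
        self.ssthresh = 64 * 1024        # assumed initial threshold

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += self.mss                            # slow start: exponential growth
        else:
            self.cwnd += self.mss * self.mss // self.cwnd    # congestion avoidance: ~1 MSS per RTT

    def on_triple_dup_ack(self):
        self.ssthresh = max(self.cwnd // 2, 2 * self.mss)    # remember half the old window
        self.cwnd = self.ssthresh                            # multiplicative decrease, keep going

    def on_timeout(self):
        self.ssthresh = max(self.cwnd // 2, 2 * self.mss)    # slow start will run up to this point
        self.cwnd = self.mss                                 # go back to a CWND of 1 segment
```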
TCP Achieves Some Notion of Fairness • Effective utilization is not the only goal • We also want to be fair to the various flows • … but what the heck does that mean? • Simple definition: equal shares of the bandwidth • N flows that each get 1/N of the bandwidth? • But, what if the flows traverse different paths? • E.g., bandwidth ends up shared in inverse proportion to the RTT
Limitations on TCP Performance • Round-trip time • Throughput proportional to 1/RTT • Receiver window • Throughput is limited by window/RTT • Slow start and additive increase • Certain number of RTTs needed to send the data, even in the absence of any congestion • Packet loss and congestion window decreases • Throughput proportional to 1/sqrt(loss) • Duplicate ACKs don’t happen for short transfers and bursty loss, and timeout losses are expensive
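The "1/sqrt(loss)" relationship is usually written as the simplified steady-state throughput model (the Mathis et al. paper linked under "Measurement and Modeling of TCP" below), where p is the per-packet loss probability and C is a constant around sqrt(3/2):

\[
\text{Throughput} \approx \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}
\]

Both limitations above appear directly: throughput falls in proportion to 1/RTT and to 1/sqrt(p).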
Questions About Congestion Control • What should be the goal? • Efficient use of network resources? • Fair division of the network resources across flows? (With or without consideration of RTTs?) • Minimizing the time for flows to complete? • How should sources infer congestion? • Packet loss (as in TCP Reno)? • Packet delay (as in TCP Vegas and FAST TCP)? • Probing to measure available bandwidth? • How should sources adapt sending rates? • Additive increase, multiplicative decrease? • Explicit instruction from the network?
Questions About Router Support • Should routers help sources infer congestion? • Only implicitly by dropping and delaying packets? • Dropping packets early to warn of congestion? • Should routers give explicit feedback? • Marking packets early to warn of congestion? • Multiple bits to signal the level of congestion? • Should routers help in adapting sending rates? • Explicit assignment of sending rates to sources? • Should routers schedule packets at the flow level? • From FIFO queuing to weighted fair queuing? • Should routers move traffic to other paths? • Load-sensitive routing to alleviate congestion?
Measurement and Modeling of TCP • Rich area of research • Measurement of congestion control “in the wild” • Simulation of variants of TCP congestion control • Modeling of throughput as a function of loss • Reverse engineering and design using optimization theory and control theory • Models of fairness from the economics literature • Some examples • http://conferences.sigcomm.org/sigcomm/1998/tp/abs_25.html • http://ccr.sigcomm.org/archive/1997/jul97/ccr-9707-mathis.html • http://www.statslab.cam.ac.uk/~frank/rate.pdf • http://netlab.caltech.edu/publications/fast-network05.pdf • http://netlab.caltech.edu/publications/FAST-ToN-final-060209-2007.pdf • http://www.ana.lcs.mit.edu/dina/XCP/ • http://yuba.stanford.edu/rcp/
What About Cheating? • Some folks are more fair than others • Running multiple TCP connections in parallel • Modifying the TCP implementation in the OS • Using the User Datagram Protocol (UDP) • Using WAN accelerators [Figure: end hosts connected across the Internet through WAN-accelerator appliances, with the local appliance acknowledging data early]
What to Do About Cheating? • What is the impact? • Good guys slow down to make room for others • Bad guys get an unfair share of the bandwidth • Possible solutions? • Routers detect cheating and drop excess packets? • Peer pressure (accountability framework)? • Pricing, so heavy users have to pay more? • Move congestion control to the network? • Let senders battle it out (decongestion control)? • Move to a resource reservation model?