
ECE 4450:427/527 - Computer Networks Spring 2014


Presentation Transcript


  1. ECE 4450:427/527 - Computer Networks, Spring 2014. Dr. Nghi Tran, Department of Electrical & Computer Engineering. Lecture 7.2: Resource Allocation and Congestion Control

  2. Problem Statement • We have seen enough layers of the protocol hierarchy to understand how data can be transferred among processes across heterogeneous networks • Problem: how do we effectively and fairly allocate resources among a collection of competing users? • This is the subject of resource allocation and congestion control

  3. Resource • Resources • Bandwidth of the links • Buffers at the routers and switches

  4. Congestion • Packets contend at a router for the use of a link, with each contending packet placed in a queue waiting for its turn to be transmitted over the link • When too many packets are contending for the same link: the queue overflows, packets get dropped, and the network is congested! • The network should provide a congestion control mechanism to deal with such a situation

  5. Congestion Control & Resource Allocation • Congestion control and resource allocation are two sides of the same coin • If the network takes an active role in allocating resources, congestion may be avoided, and there is no need for congestion control • On the other hand, we can always let the sources send as much data as they want and then recover from congestion when it occurs • This is the easier approach, but it can be disruptive because many packets may be discarded by the network before congestion can be controlled

  6. Congestion Control & Resource Allocation • Congestion control and resource allocation involve both hosts and network elements such as routers • In network elements, e.g., routers, various queuing disciplines can be used to control the order in which packets get transmitted and which packets get dropped: queuing issues • At the hosts' end, the congestion control mechanism paces how fast sources are allowed to send packets: TCP congestion control

  7. Queuing: FIFO • FIFO: a simple queuing model • First come, first served • Drop packets if the buffer is full (tail drop) • FIFO is the scheduling policy; tail drop is the drop policy • Problem? All flows are treated the same: a fast-rate flow fills up the buffer, and a slow-rate flow may not get service
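A minimal sketch of FIFO with tail drop, assuming a buffer measured in packets (the class and parameter names here are illustrative, not from the slides): packets are served in arrival order, and an arriving packet is simply dropped when the buffer is full.

    from collections import deque

    class FifoQueue:
        """FIFO scheduling with tail-drop policy; illustrative sketch only."""
        def __init__(self, capacity):
            self.capacity = capacity        # maximum number of queued packets
            self.buffer = deque()

        def enqueue(self, packet):
            if len(self.buffer) >= self.capacity:
                return False                # buffer full: tail drop
            self.buffer.append(packet)      # otherwise queue in arrival order
            return True

        def dequeue(self):
            return self.buffer.popleft() if self.buffer else None

Note that nothing here distinguishes flows, which is exactly the fairness problem the next slides address.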

  8. Fair Queuing • Attempt to neutralize the advantage of high-rate flows • Maintain a queue for each flow • Service the queues in a round-robin (RR) fashion • Drop packets when a queue is full • What about packet sizes? Not all flows use the same packet size. For example, if a router is managing two flows, one with 1000-byte packets and the other with 500-byte packets, a simple round-robin will give the first flow two-thirds of the link's bandwidth and the second flow only one-third

  9. Fair Queuing • What we really want is bit-by-bit round-robin; that is, the router transmits a bit from flow 1, then a bit from flow 2, and so on • Of course, we cannot actually interleave the bits from different packets • The FQ mechanism therefore simulates this behavior by first determining when a given packet would finish being transmitted if it were sent using bit-by-bit round-robin, and then using this finishing time to sequence the packets for transmission

  10. Fair Queuing • To understand the algorithm for approximating bit-by-bit round-robin, consider the behavior of a single flow • For this flow, assume a clock ticks once each time one bit is transmitted. Then: • Pi: the length of packet i, i.e., its number of clock ticks • Si: the time when the router starts to transmit packet i • Fi: the time when the router finishes transmitting packet i • Clearly, Fi = Si + Pi

  11. Fair Queuing • When do we start transmitting packet i? • It depends on whether packet i arrived before or after the router finished transmitting packet i-1 of the flow • Let Ai denote the time that packet i arrives at the router • Then Si = max(Fi-1, Ai) • Fi = max(Fi-1, Ai) + Pi

  12. Fair Queuing • Now, for every flow, we calculate Fi for each packet that arrives using this formula • We then treat all the Fi as timestamps • The next packet to transmit is always the packet with the lowest timestamp: the packet that should finish transmission before all others (a sketch of this bookkeeping follows)
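A minimal sketch of the finish-time bookkeeping from slides 10-12 (the names FairQueue, arrive, and select_next are illustrative, not from the slides). Each flow's finish tag follows Fi = max(Fi-1, Ai) + Pi, and the packet with the smallest tag is transmitted next. Keeping a separate clock per flow is a simplification of real fair queuing, which uses a shared virtual clock, but it matches the single-flow formulas on slide 11.

    class FairQueue:
        """Per-flow finish-time (Fi) bookkeeping; illustrative sketch only."""
        def __init__(self):
            self.last_finish = {}   # flow_id -> Fi of that flow's previous packet
            self.queues = {}        # flow_id -> list of (finish_tag, packet_len)

        def arrive(self, flow_id, packet_len, arrival_time):
            prev_f = self.last_finish.get(flow_id, 0)
            start = max(prev_f, arrival_time)     # Si = max(Fi-1, Ai)
            finish = start + packet_len           # Fi = Si + Pi (in clock ticks)
            self.last_finish[flow_id] = finish
            self.queues.setdefault(flow_id, []).append((finish, packet_len))

        def select_next(self):
            """Return (flow_id, packet_len) of the packet with the lowest finish tag."""
            best = None
            for flow_id, q in self.queues.items():
                if q and (best is None or q[0][0] < best[1]):
                    best = (flow_id, q[0][0])
            if best is None:
                return None
            finish, length = self.queues[best[0]].pop(0)
            return best[0], length

With 1000-byte and 500-byte flows as on slide 8, this scheme interleaves packets so that each flow finishes roughly the same number of "ticks" of service, instead of giving the large-packet flow two-thirds of the link.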

  13. Fair Queuing • Figure: an example of fair queuing in action: (a) packets with earlier finishing times are sent first; (b) sending of a packet already in progress is completed

  14. TCP Congestion Control • The idea: each source determines how much capacity is available in the network, so that it knows how many packets it can safely have in transit. But how? • TCP maintains a variable for each connection, called CongestionWindow, which is used by the source to limit how much data it is allowed to have in transit at a given time • The congestion window is congestion control's counterpart to flow control's advertised window

  15. TCP Congestion Control • TCP's effective window is revised as follows: • MaxWindow = MIN(CongestionWindow, AdvertisedWindow) • EffectiveWindow = MaxWindow − (LastByteSent − LastByteAcked) • That is, MaxWindow replaces AdvertisedWindow in the calculation of EffectiveWindow • But how do we determine CongestionWindow? Remember that we obtain AdvertisedWindow from the receiver!
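A small sketch of the window calculation on slide 15 (the function and variable names mirror the slide's terms, but the code itself is only illustrative): the sender may put at most EffectiveWindow additional bytes in flight.

    def effective_window(congestion_window, advertised_window,
                         last_byte_sent, last_byte_acked):
        """EffectiveWindow = MaxWindow - (LastByteSent - LastByteAcked)."""
        max_window = min(congestion_window, advertised_window)
        outstanding = last_byte_sent - last_byte_acked   # bytes already in flight
        return max(0, max_window - outstanding)          # never negative

    # Example: cwnd 8000 B, advertised 12000 B, 3000 B unACKed -> 5000 B may be sent
    print(effective_window(8000, 12000, 10000, 7000))    # 5000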

  16. TCP Congestion Control • The answer is that the TCP source sets CongestionWindow based on the level of congestion it perceives to exist in the network • This involves decreasing the congestion window when the level of congestion goes up and increasing the congestion window when the level of congestion goes down • This mechanism is commonly called additive increase/multiplicative decrease (AIMD)

  17. TCP Congestion Control: AIMD • How does the source determine that the network is congested and that it should decrease the congestion window? • TCP interprets timeouts as a sign of congestion and reduces the rate at which it is transmitting • Specifically, each time a timeout occurs, the source sets CongestionWindow to half of its previous value • This halving of CongestionWindow for each timeout corresponds to the "multiplicative decrease" part of AIMD

  18. TCP Congestion Control: AIMD • We also need to be able to increase the congestion window to take advantage of newly available capacity in the network • This is the "additive increase" part of AIMD, and it works as follows • Every time the source successfully sends a CongestionWindow's worth of packets (that is, each packet sent out during the last RTT has been ACKed), it adds the equivalent of 1 packet to CongestionWindow (a sketch combining both rules follows)
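A minimal AIMD sketch combining slides 17 and 18, with the window counted in whole packets for clarity (an illustrative simplification; real TCP tracks the window in bytes and adds roughly one MSS-worth of data per RTT):

    class AimdWindow:
        """Additive increase / multiplicative decrease, in packets; sketch only."""
        def __init__(self):
            self.cwnd = 1.0                      # CongestionWindow, in packets

        def on_rtt_all_acked(self):
            """Additive increase: a full window's worth of packets was ACKed."""
            self.cwnd += 1.0                     # add one packet per RTT

        def on_timeout(self):
            """Multiplicative decrease: a timeout is taken as a sign of congestion."""
            self.cwnd = max(1.0, self.cwnd / 2)  # halve, but keep at least 1 packet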

  19. TCP Congestion Control: AIMD • Additive increase / multiplicative decrease • Figure: packets in transit during additive increase, with one packet being added each RTT

  20. TCP Congestion Control: Slow Start • The additive increase mechanism just described is the right approach to use when the source is operating close to the available capacity of the network, but it takes too long to ramp up a connection when it is starting from scratch • TCP therefore provides a second mechanism, ironically called slow start, that is used to increase the congestion window rapidly from a cold start • Slow start effectively increases the congestion window exponentially, rather than linearly

  21. TCP Congestion Control: Slow Start • Specifically, the source starts out by setting CongestionWindow to one packet • When the ACK for this packet arrives, TCP adds 1 to CongestionWindow and then sends two packets • Upon receiving the corresponding two ACKs, TCP increments CongestionWindow by 2 (one for each ACK) and next sends four packets • The end result is that TCP effectively doubles the number of packets it has in transit every RTT
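A sketch of the slow-start growth described on slide 21 (illustrative only, with the window again counted in packets): adding one packet to the window per ACK received means the window doubles every RTT.

    def slow_start_progression(rtts):
        """Window size (in packets) at the start of each RTT during slow start; sketch only."""
        cwnd = 1
        sizes = []
        for _ in range(rtts):
            sizes.append(cwnd)
            cwnd += cwnd      # one increment per ACK received => window doubles each RTT
        return sizes

    print(slow_start_progression(5))   # [1, 2, 4, 8, 16]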

  22. TCP Congestion Control: Slow Start • Figure: packets in transit during slow start

  23. Congestion Control: Fast Retransmit • The mechanisms described so far were part of the original proposal to add congestion control to TCP • It was soon discovered, however, that the coarse-grained implementation of TCP timeouts led to long periods of time during which the connection went dead while waiting for a timer to expire • Because of this, a new mechanism called fast retransmit was added to TCP: the receiver sends duplicate ACKs, and the transmitter retransmits when it sees 3 duplicate ACKs

  24. Congestion Control: Fast Retransmit • Fast retransmit: waiting for a timeout may be expensive • The arrival of duplicate ACKs indicates out-of-order packet arrival • If three duplicate acknowledgments are received, retransmit (a sketch follows)
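A minimal sketch of the duplicate-ACK trigger on slides 23-24 (the class and method names are illustrative; real TCP also pairs this with fast recovery, which these slides do not cover): after three duplicate ACKs for the same sequence number, the sender retransmits without waiting for the timeout.

    DUP_ACK_THRESHOLD = 3   # retransmit after three duplicate ACKs

    class FastRetransmitSender:
        """Count duplicate ACKs and trigger an early retransmission; sketch only."""
        def __init__(self):
            self.last_ack = None
            self.dup_count = 0

        def on_ack(self, ack_num):
            if ack_num == self.last_ack:
                self.dup_count += 1
                if self.dup_count == DUP_ACK_THRESHOLD:
                    self.retransmit(ack_num)     # resend the missing segment now
            else:
                self.last_ack = ack_num          # new data ACKed: reset the counter
                self.dup_count = 0

        def retransmit(self, seq_num):
            print(f"fast retransmit: resending segment starting at {seq_num}")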

  25. Congestion Control: Further Discussions • TCP: try to control congestion when it happens • An alternative is to try to avoid congestion in the first place • The goal is to predict congestion and take early precautions • Congestion avoidance mechanisms (not widely adopted): • DECbit scheme • Random early detection (RED) • Source-based congestion avoidance
