
EE 122: Router Support for Congestion Control: RED and Fair Queueing

This article discusses traditional router mechanisms affecting congestion management, such as scheduling and buffer management, and introduces two congestion control techniques: Random Early Detection (RED) and Fair Queueing (FQ).


Presentation Transcript


  1. EE 122: Router Support for Congestion Control: RED and Fair Queueing
  Ion Stoica
  Oct. 30 – Nov. 4, 2002

  2. Router Support For Congestion Management
  • Traditional Internet
    • Congestion control mechanisms at end-systems, mainly implemented in TCP
    • Routers play little role
  • Router mechanisms affecting congestion management
    • Scheduling
    • Buffer management
  • Traditional routers
    • FIFO
    • Tail drop

  3. Router Packet Processing
  • Scheduling:
    • Decide when to send a packet
    • Decide which packet to transmit
  • Buffer management:
    • Decide when to drop a packet
    • Decide which packet to drop

  4. FIFO: First-In First-Out
  • Maintain a queue to store all packets
  • When to send?
    • When there is at least one packet in the queue
  • Which packet to send?
    • The packet at the head of the queue
  [Figure: arriving packets join the tail of the queue; the packet at the head is the next to transmit]

  5. Tail-drop Buffer Management
  • When to drop?
    • When the buffer is full
  • Which packet to drop?
    • The arriving packet (see the sketch below)
  [Figure: the queue is full, so the arriving packet is dropped at the tail]
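A minimal sketch of FIFO scheduling combined with tail-drop buffer management, as described on slides 4 and 5. The class name, packet representation, and capacity parameter are illustrative assumptions, not part of the lecture:

```python
from collections import deque

class FIFOTailDropQueue:
    """FIFO scheduling with tail-drop buffer management (illustrative sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity          # maximum number of queued packets (assumed unit: packets)
        self.queue = deque()

    def enqueue(self, packet):
        # Tail drop: when the buffer is full, the arriving packet is dropped
        if len(self.queue) >= self.capacity:
            return False                  # packet dropped
        self.queue.append(packet)         # otherwise, queue it at the tail
        return True

    def dequeue(self):
        # FIFO: transmit the packet at the head of the queue, if any
        return self.queue.popleft() if self.queue else None
```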

  6. Drawbacks of FIFO with Tail-drop
  • Synchronizing effect for multiple TCP flows
  • Buffer lock-out by misbehaving flows
  • Bursts of multiple consecutive packet drops

  7. Synchronizing Effect for Multiple TCP Flows
  • When the buffer overflows, many/all flows experience losses
  • Flows that experience losses scale down their transmission (by reducing the window size)
  • This causes the link to be underutilized
  • ... and the process repeats
  • Net result: inefficient link utilization

  8. Buffer Lock-out by Misbehaving Flows
  • "Aggressive" flows can squeeze out TCP flows
  • Example: one 10 Mbps bottleneck link shared by
    • 1 UDP flow that sends at 10 Mbps and does not implement any congestion control
    • 31 TCP flows
  [Figure: UDP flow #1 sending at 10 Mbps and TCP flows #2 through #32 sharing a 10 Mbps bottleneck link]

  9. Burst of Packet Losses
  • A TCP flow sends a burst of packets that arrive at a router when the buffer is full
  • All packets in the burst are lost
  • This may cause an RTO to trigger: cwnd → 1
  • For example, with TCP Reno, three packet losses in a window will almost surely trigger an RTO

  10. FIFO Router with Two TCP Flows

  11. Random Early Detection (RED) Routers [Floyd & Jacobson '93]
  • Goals
    • Avoid synchronizing multiple TCP flows
    • Avoid bursts of packet drops

  12. RED
  • FIFO scheduling
  • Buffer management:
    • Probabilistically discard packets
    • The probability is computed as a function of the average queue length (why average?)
  [Figure: discard probability as a function of the average queue length, rising between the thresholds min_th and max_th]

  13. RED (cont'd)
  • min_th – minimum threshold
  • max_th – maximum threshold
  • avg_len – average queue length
    • avg_len = (1 - w)*avg_len + w*sample_len

  14. RED (cont'd)
  • If (avg_len < min_th) → enqueue packet
  • If (avg_len > max_th) → drop packet
  • If (avg_len >= min_th and avg_len < max_th) → drop packet with probability P (enqueue with probability 1 – P)

  15. RED (cont'd)
  • P = max_P*(avg_len – min_th)/(max_th – min_th) (see the sketch below)
  • Improvements to spread the drops (see textbook)
  [Figure: discard probability grows linearly from 0 at min_th to max_P at max_th; P is the value at the current avg_len]
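A minimal sketch of the RED enqueue decision from slides 13–15, assuming the EWMA is updated on every packet arrival and ignoring refinements such as the idle-time correction and drop-spreading; the class name and parameter defaults are illustrative:

```python
import random

class REDQueue:
    """Sketch of RED buffer management: probabilistic early drop driven by an
    EWMA of the queue length (slides 13-15)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.002):
        self.min_th, self.max_th = min_th, max_th  # thresholds (in packets)
        self.max_p = max_p                         # drop probability at max_th
        self.w = w                                 # EWMA weight
        self.avg_len = 0.0
        self.queue = []

    def enqueue(self, packet):
        # avg_len = (1 - w)*avg_len + w*sample_len
        self.avg_len = (1 - self.w) * self.avg_len + self.w * len(self.queue)

        if self.avg_len < self.min_th:
            self.queue.append(packet)              # below min_th: always enqueue
            return True
        if self.avg_len > self.max_th:
            return False                           # above max_th: always drop
        # Between the thresholds: drop with probability P, enqueue with 1 - P
        p = self.max_p * (self.avg_len - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p:
            return False
        self.queue.append(packet)
        return True
```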

  16. RED Advantages
  • Absorbs bursts better
  • Avoids synchronization
  • Signals end systems earlier

  17. RED Router with Two TCP Sessions

  18. Problems with RED
  • No protection: if a flow misbehaves, it will hurt the other flows
  • Example: 1 UDP flow (10 Mbps) and 31 TCP flows sharing a 10 Mbps link
  [Figure: UDP flow #1 sending at 10 Mbps and TCP flows #2 through #32 sharing a 10 Mbps bottleneck link]

  19. Solution?
  • Round-robin among different flows [Nagle '87]
    • One queue per flow

  20. Round-Robin Discussion
  • Advantages: protection among flows
    • Misbehaving flows will not affect the performance of well-behaving flows (a misbehaving flow is one that does not implement any congestion control)
    • FIFO does not have such a property
  • Disadvantages:
    • More complex than FIFO: per-flow queue/state
    • Biased toward large packets – a flow receives service proportional to its number of packets (when is this bad?); see the sketch below
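A compressed sketch of Nagle-style packet-by-packet round robin with one queue per flow. The class and method names are illustrative assumptions; note that serving one packet per flow per round gives flows with larger packets more bandwidth, which is the bias mentioned above:

```python
from collections import deque

class RoundRobinScheduler:
    """Sketch of per-flow round robin: one queue per flow, one packet served
    per active flow per round (service is per packet, not per byte)."""

    def __init__(self):
        self.flow_queues = {}             # flow_id -> deque of packets
        self.active = deque()             # round-robin order of backlogged flows

    def enqueue(self, flow_id, packet):
        queue = self.flow_queues.setdefault(flow_id, deque())
        if not queue:
            self.active.append(flow_id)   # flow becomes backlogged: add to the round
        queue.append(packet)

    def dequeue(self):
        if not self.active:
            return None
        flow_id = self.active.popleft()
        packet = self.flow_queues[flow_id].popleft()
        if self.flow_queues[flow_id]:
            self.active.append(flow_id)   # still backlogged: keep it in the round
        return packet
```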

  21. Solution?
  • Bit-by-bit round robin
  • Can you do this in practice?
    • No, packets cannot be preempted (why?)
    • ... but we can approximate it

  22. Fair Queueing (FQ) [DKS '89]
  • Define a fluid flow system: a system in which flows are served bit-by-bit
  • Then serve packets in increasing order of their deadlines
  • Advantage
    • Each flow will receive exactly its fair rate
  • Note: FQ achieves max-min fairness

  23. Max-Min Fairness
  • Denote
    • C – link capacity
    • N – number of flows
    • ri – arrival rate of flow i
  • Max-min fair rate computation:
    1. compute C/N
    2. if there are flows i such that ri <= C/N, update C and N (remove those flows and subtract their rates from C)
    3. if not, f = C/N; terminate
    4. go to 1
  • A flow can receive at most the fair rate, i.e., min(f, ri)

  24. Example
  • C = 10; r1 = 8, r2 = 6, r3 = 2; N = 3
  • C/3 = 3.33 → r3 <= 3.33, so C = C – r3 = 8; N = 2
  • C/2 = 4; f = 4
  • Allocations: min(8, 4) = 4, min(6, 4) = 4, min(2, 4) = 2 (see the sketch below)
  [Figure: link of capacity 10 shared by flows with arrival rates 8, 6, and 2, which receive 4, 4, and 2]
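A short sketch of the max-min fair rate computation from slide 23, checked against the example above; the function name and the handling of the case where every flow's demand is satisfied are illustrative assumptions:

```python
def max_min_fair_rate(capacity, rates):
    """Return the max-min fair share f for a link of the given capacity
    shared by flows with the given arrival rates (sketch of slide 23)."""
    C = float(capacity)
    remaining = list(rates)
    f = C
    while remaining:
        f = C / len(remaining)                      # step 1: compute C/N
        satisfied = [r for r in remaining if r <= f]
        if not satisfied:                           # step 3: no flow fits under f,
            break                                   # so f is the max-min fair share
        C -= sum(satisfied)                         # step 2: update C and N
        remaining = [r for r in remaining if r > f]
    return f

# Example from slide 24: C = 10 and rates 8, 6, 2 give f = 4,
# so the allocations are min(8, 4) = 4, min(6, 4) = 4, min(2, 4) = 2.
f = max_min_fair_rate(10, [8, 6, 2])
print(f, [min(f, r) for r in [8, 6, 2]])            # 4.0 [4.0, 4.0, 2]
```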

  25. Implementing Fair Queueing
  • Idea: serve packets in the order in which they would have finished transmission in the fluid flow system

  26. Example
  [Figure: arrival traffic for flows 1 and 2, the service order in the fluid flow system, and the resulting transmission order in the packet system]

  27. System Virtual Time: V(t)
  • Measure service, instead of time
  • Slope of V(t) – the rate at which every active flow receives service, i.e., C/N(t)
    • C – link capacity
    • N(t) – number of active flows in the fluid flow system at time t
  [Figure: V(t) versus time for the fluid flow service example on slide 26]

  28. Fair Queueing Implementation
  • Define
    • F(i, k) – finishing time of packet k of flow i (in the system virtual time reference)
    • a(i, k) – arrival time of packet k of flow i
    • L(i, k) – length of packet k of flow i
  • The finishing time of packet k+1 of flow i is (see the sketch below):
    • F(i, k+1) = max(F(i, k), V(a(i, k+1))) + L(i, k+1)
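A compressed sketch of the per-packet bookkeeping implied by this formula. Tracking the exact fluid-system virtual time V(t) is more involved, so here V(a) is simply passed in by the caller; the class name and packet representation are illustrative assumptions:

```python
import heapq

class FairQueueing:
    """Sketch of FQ packet selection: stamp each arriving packet with its
    fluid-system finishing time F(i, k+1) = max(F(i, k), V(a(i, k+1))) + L(i, k+1),
    then transmit packets in increasing order of that stamp."""

    def __init__(self):
        self.last_finish = {}     # flow_id -> finishing time of the flow's last packet
        self.heap = []            # (finish_time, seq, flow_id, packet)
        self.seq = 0              # tie-breaker for equal finishing times

    def enqueue(self, flow_id, length, v_arrival, packet):
        # v_arrival is V(a(i, k+1)), the system virtual time when the packet arrives
        start = max(self.last_finish.get(flow_id, 0.0), v_arrival)
        finish = start + length
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow_id, packet))
        self.seq += 1

    def dequeue(self):
        # Serve the packet that would finish first in the fluid flow system
        if not self.heap:
            return None
        _, _, _, packet = heapq.heappop(self.heap)
        return packet
```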

  29. FQ Advantages
  • FQ protects well-behaved flows from ill-behaved flows
  • Example: 1 UDP flow (10 Mbps) and 31 TCP flows sharing a 10 Mbps link

  30. Big Picture
  • FQ does not eliminate congestion – it just manages the congestion
  • You need both end-host congestion control and router support for congestion control
    • End-host congestion control to adapt
    • Router congestion control to protect/isolate
  • Don't forget buffer management: you still need to drop packets in case of congestion. Which packets would you drop in FQ?
    • One possibility: a packet from the longest queue
