
Congestion Control and Traffic Management in High Speed Networks


Presentation Transcript


  1. Congestion Control and Traffic Management in High Speed Networks Carey Williamson, University of Calgary

  2. Introduction • The goal of congestion control is to regulate traffic flow so as to avoid saturating or overloading intermediate nodes in the network

  3. Congestion: Effects • Congestion is undesirable because it can cause: - increased delay, due to queueing within the network - packet loss, due to buffer overflow - reduced throughput, due to packet loss and retransmission • Analogy: “rush hour” traffic

  4. Congestion: Causes • The basic cause of congestion is that the input traffic demands exceed the capacity of the network • In typical packet switching networks, this can occur quite easily when: - output links are slower than inputs - multiple traffic sources compete for the same output link at the same time

  5. Buffering: A Solution? • Buffering in switches can help alleviate short-term or transient congestion problems, but... • Under sustained overload, buffers will still fill up, and packets will be lost - buffering only defers the congestion problem • More buffering means more queueing delay - beyond a certain point, more buffering makes the congestion problem worse, because of increased delay and retransmission

  6. Motivation • The congestion control problem is even more acute in high speed networks • Faster link speeds mean that congestion can happen faster than before - e.g., time to fill a 64 kilobyte buffer: @ 64 kbps: 8.2 seconds, @ 10 Mbps: 52 milliseconds, @ 1 Gbps: 0.52 milliseconds

  7. Motivation (Cont’d) • Buffer requirements increase with link speeds - e.g., to store 1 second’s worth of traffic: @ 64 kbps: 8 kilobytes, @ 10 Mbps: 1.25 Mbytes, @ 1 Gbps: 125 Mbytes
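
The figures on slides 6 and 7 follow from two simple calculations: buffer size divided by line rate, and line rate times holding time. A minimal Python check reproduces them, assuming a 64 kilobyte buffer means 65,536 bytes and that link rates are decimal (e.g., 1 Gbps = 10^9 bit/s):

    # Check of the buffer fill times (slide 6) and per-second storage (slide 7).
    # Assumptions: 64 kilobytes = 65,536 bytes; link rates are decimal bits per second.
    BUFFER_BITS = 64 * 1024 * 8  # a 64 KB switch buffer, expressed in bits

    for label, rate_bps in [("64 kbps", 64e3), ("10 Mbps", 10e6), ("1 Gbps", 1e9)]:
        fill_time_s = BUFFER_BITS / rate_bps    # time for line-rate traffic to fill the buffer
        one_second_mb = rate_bps / 8 / 1e6      # megabytes needed to hold 1 second of traffic
        print(f"{label:>8}: buffer fills in {fill_time_s * 1e3:10.2f} ms, "
              f"1 s of traffic = {one_second_mb:8.3f} MB")
    # Output: ~8192 ms / 0.008 MB at 64 kbps, ~52 ms / 1.25 MB at 10 Mbps,
    # ~0.52 ms / 125 MB at 1 Gbps, matching the slides.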

  8. Motivation (Cont’d) • Heterogeneity of link speeds - just because you add new high speed links to a network doesn’t mean that the old low speed links go away - interconnecting high speed and lower speed networks creates congestion problems at the point of interconnect

  9. Motivation (Cont’d) • Traffic is bursty - high peak-to-mean ratios, high peak rates - e.g., data traffic: 10-to-1, 1-10 Mbps - e.g., video traffic: 20-to-1, 5-100 Mbps - several channels can be statistically multiplexed, but if too many are active at the same time, congestion is inevitable
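
As a rough illustration of that last bullet, a binomial sketch of independent on-off sources shows how quickly the overload probability becomes non-negligible. Only the 10-to-1 peak-to-mean ratio comes from the slide; the source count, per-source peak rate, and link capacity below are assumed for illustration:

    from math import comb

    # Illustrative statistical-multiplexing sketch: n on-off data sources, each with a
    # 10 Mbps peak rate and a 10:1 peak-to-mean ratio, so a source is transmitting at
    # its peak with probability p = 0.1 at any instant (sources assumed independent).
    n, p = 50, 0.1            # number of multiplexed sources, probability a source is "on"
    peak_mbps = 10            # peak rate per source (Mbps)
    link_mbps = 100           # link capacity (Mbps); overload when > 10 sources peak at once

    max_active = link_mbps // peak_mbps
    overload_prob = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                        for k in range(max_active + 1, n + 1))
    print(f"mean offered load: {n * p * peak_mbps:.0f} Mbps on a {link_mbps} Mbps link")
    print(f"P(instantaneous load exceeds capacity) = {overload_prob:.4f}")  # just under 1%

Even at 50% average utilization, these bursty sources exceed the link capacity just under 1% of the time, which is why the slide calls congestion inevitable when too many channels burst together.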

  10. Motivation (Cont’d) • Reaction time is bounded by the propagation delay - in a high-speed wide-area network, the delay x bandwidth product is HUGE!!! - d x b tells you how many bits fit in the “pipe” between you and the receiver - by the time you realize that the network is congested, you may have already sent another Mbit or more of data!!!
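
To make the d x b point concrete, here is the same calculation in a few lines of Python; the 25 ms one-way propagation delay (a 50 ms round trip) is an assumed wide-area figure, not a number from the slide:

    # Delay x bandwidth: bits already "in the pipe" before any feedback can reach the sender.
    # Assumption: 25 ms one-way wide-area propagation delay, i.e. a 50 ms round-trip time.
    rtt_s = 2 * 0.025

    for label, rate_bps in [("10 Mbps", 10e6), ("1 Gbps", 1e9)]:
        in_flight_mbit = rate_bps * rtt_s / 1e6   # bits sent during one round trip
        print(f"{label:>7}: {in_flight_mbit:6.1f} Mbit sent before feedback arrives")
    # 0.5 Mbit at 10 Mbps, 50 Mbit at 1 Gbps -- well past the "Mbit or more" on the slide.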

  11. Reactive versus Preventive • There are two fundamental approaches to congestion control: reactive approaches and preventive approaches • Reactive: feedback-based • attempt to detect congestion, or the onset of congestion, and take action to resolve the problem before things get worse • Preventive: reservation-based • prevent congestion from ever happening in the first place, by reserving resources

  12. Reactive versus Preventive (Cont’d) • Most of the Internet approaches are reactive schemes: • TCP Slow Start • Random Early Detection (RED) gateways • Source Quench • The large d x b product means that many of these approaches are not applicable to high speed networks • Most ATM congestion control strategies are preventive, reservation-based

  13. Congestion Control in ATM • When people discuss congestion control in the context of high speed ATM networks, they usually distinguish between call-level controls and cell-level controls

  14. Call-Level Control • An example of the call-level approach to congestion control is call admission control (to be discussed later this semester) • Tries to prevent congestion by not allowing new calls or connections into the network unless the network has sufficient capacity to support them

  15. Call-Level Control (Cont’d) • At time of call setup (connection establishment) you request the resources that you need for the duration of the call (e.g., bandwidth, buffers) • If available, your call proceeds • If not, your call is blocked • E.g., telephone network, busy signal
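
A minimal sketch of the admission decision just described, assuming a single hypothetical link with a fixed capacity and calls characterized only by a requested peak bandwidth (real ATM call admission control also considers buffers, QOS targets, and statistical guarantees, as discussed later):

    # Hypothetical peak-rate call admission control on a single link (illustration only).
    class Link:
        def __init__(self, capacity_mbps: float):
            self.capacity_mbps = capacity_mbps
            self.reserved_mbps = 0.0

        def admit(self, requested_mbps: float) -> bool:
            """Accept the call if enough unreserved capacity remains; otherwise block it (busy signal)."""
            if self.reserved_mbps + requested_mbps <= self.capacity_mbps:
                self.reserved_mbps += requested_mbps
                return True
            return False

        def release(self, reserved_mbps: float) -> None:
            """Return reserved bandwidth to the pool when a call ends."""
            self.reserved_mbps = max(0.0, self.reserved_mbps - reserved_mbps)

    link = Link(capacity_mbps=155.0)     # e.g. an OC-3 access link
    print(link.admit(100.0))             # True:  call proceeds
    print(link.admit(100.0))             # False: call blocked, not enough capacity left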

  16. Call-Level Control (Cont’d) • Tradeoff: aggressive vs conservative • Want to accept enough calls to have reasonably high network utilization, but don’t want to accept so many calls that you have a high probability of network congestion (which might compromise the QOS requirements that you are trying to meet)

  17. Call-Level Control (Cont’d) • Problems: - can be unfair: denial of service, long access delays - hard to specify resource requirements and QOS parameters precisely: sources may not know them, or may cheat - congestion can still occur

  18. Cell-Level Control • Also called input rate control • Control the input rate of traffic sources to prevent, reduce, or control the level of congestion • Many possible mechanisms: - traffic shaping, traffic policing, UPC - leaky bucket (token bucket) - cell tagging (colouring), cell discarding - cell scheduling disciplines

  19. Congestion Control in ATM • There is actually a complete spectrum of traffic control functions, ranging from the very short-term (e.g., traffic shaping, cell discarding) to the very long-term (e.g., network provisioning) • See [Gilbert et al 1991]

  20.-28. ATM Traffic Control Schemes, by time scale (figure built up incrementally across slides 20-28: control schemes arranged along a time-scale axis running from short term, on the order of microseconds, to long term, months or years)

      Time scale                    Control schemes
      Cell time (short term, usec)  Usage parameter control, priority control, traffic shaping, cell discarding
      Propagation delay time        Explicit congestion notification, fast reservation protocol, node-to-node flow control
      Call duration                 Admission control, call routing and load balancing
      Long term (months, years)     Resource provisioning

  29. ATM Traffic Control Schemes • Preventive controls: - resource provisioning - connection admission control - call routing and load balancing - usage parameter control - priority control - traffic shaping - fast reservation protocol

  30. ATM Traffic Control Schemes • Reactive controls: - adaptive admission control - call routing and load balancing - adaptive usage parameter control - explicit congestion notification (forward or backward) - node-to-node flow control - selective cell discarding

  31. Leaky Bucket • One of the cell-level control mechanisms that has been proposed is the leaky bucket (a.k.a. token bucket) • Has been proposed as a traffic policing mechanism for Usage Parameter Control (UPC), to check conformance of a source to its traffic descriptor • Can also be used as a traffic shaper

  32. Leaky Bucket (Cont’d) • Think of a bucket (pail) with a small hole in the bottom • You fill the bucket with water • Water drips out the bottom at a nice constant rate: drip, drip, drip...

  33.-44. Leaky Bucket (Cont’d) (figure built up incrementally across slides 33-44: an empty bucket with a hole in the bottom is filled with water; a storage area holds the drips waiting to go, and the hole releases a constant-rate stream of drips, all nicely spaced and periodic)

  45. Leaky Bucket (Cont’d) • A leaky bucket flow control mechanism is then a software realization of this very simple idea • Packets (cells) waiting for transmission arrive according to some (perhaps unknown) arrival distribution • Tokens arrive periodically (deterministically) • A cell must have a token to enter the network (a code sketch follows the figure below)

  46.-50. Leaky Bucket (Cont’d) (figure animated across slides 46-50: cells generated by a traffic source with rate X arrive at the bucket, tokens arrive at rate r tokens/sec, and each cell must obtain a token before it is released to the network; numbered cells 1-5 are shown entering the network one at a time as tokens become available)
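
The mechanism described on slide 45 and animated on slides 46-50 can be written down in a few lines. This is a minimal sketch under stated assumptions (a token pool of fixed depth, continuous token accrual at rate r, and non-conforming cells simply dropped), i.e. a policer; a shaper would instead queue the cell until a token arrives:

    # Minimal token-bucket policer sketch for the leaky bucket of slides 45-50.
    # Assumptions: tokens accrue continuously at `rate` tokens/sec up to `depth`;
    # one token admits one cell; a cell that finds no token is dropped (or tagged).
    class TokenBucket:
        def __init__(self, rate: float, depth: float):
            self.rate = rate          # r: token arrival rate (tokens per second)
            self.depth = depth        # maximum number of tokens the bucket can hold
            self.tokens = depth       # start with a full bucket
            self.last_time = 0.0

        def conforms(self, arrival_time: float) -> bool:
            """Return True if a cell arriving at arrival_time gets a token and may enter the network."""
            # Credit the tokens accumulated since the previous cell, capped at the bucket depth.
            elapsed = arrival_time - self.last_time
            self.tokens = min(self.depth, self.tokens + elapsed * self.rate)
            self.last_time = arrival_time
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False              # non-conforming cell

    bucket = TokenBucket(rate=2.0, depth=3.0)   # 2 cells/sec sustained, bursts of up to 3 cells
    for t in [0.0, 0.1, 0.2, 0.3, 2.0]:         # a burst of four cells, then one later cell
        print(f"cell at t={t:.1f}s -> {'enters network' if bucket.conforms(t) else 'dropped'}")

With token rate r and bucket depth b, the long-run admitted rate is at most r cells per second while bursts of up to b cells can pass back to back, which is what makes the leaky bucket usable both for UPC policing and for traffic shaping (slide 31).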
