QoS in the Internet: Scheduling Algorithms and Active Queue Management
Principles for QoS Guarantees
• Consider a 1 Mbps phone application and an FTP application sharing a 1.5 Mbps link.
• Bursts of FTP traffic can congest the router and cause audio packets to be dropped.
• We want to give priority to audio over FTP.
• PRINCIPLE 1: Packet marking is needed for a router to distinguish between different classes, along with a new router policy to treat packets accordingly.
Principles for QoS Guarantees (more)
• Applications misbehave (e.g., the audio source sends at a rate higher than the 1 Mbps assumed above).
• PRINCIPLE 2: Provide protection (isolation) for one class from other classes (fairness).
QoS Metrics: What Are We Trying to Control?
• Four metrics describe a packet's transmission through a network: bandwidth, delay, jitter, and loss.
• Using a pipe analogy (the path as perceived by a packet), for each packet:
• Bandwidth is the perceived width of the pipe.
• Delay is the perceived length of the pipe.
• Jitter is the perceived variation in the length of the pipe.
• Loss is the perceived leakiness of the pipe.
[Diagram: a pipe from A to B illustrating bandwidth as width and delay as length]
Internet QoS Overview
• Integrated Services
• Differentiated Services
• MPLS
• Traffic Engineering
QoS: State Information
• No state vs. soft state vs. hard state:
• IP: no state (packet switched).
• DiffServ: no state inside the network; flow information is kept at the edges.
• IntServ/RSVP: soft state.
• ATM, circuit switching: hard state (dedicated circuit).
QoS Router
[Diagram: classifier, then per-flow policers/shapers, then per-flow queues under queue management, then a scheduler]
Queuing Disciplines
[Diagram: a classifier sorts flows 1…n into classes 1-4; class-based scheduling and buffer management serve the per-class queues, first come first served within each class]
DiffServ
[Diagram: a DiffServ domain with classification/conditioning at the edge and per-hop behavior (PHB, e.g., LLQ/WRED) in the core; service classes Premium, Gold, Silver, Bronze]
Differentiated Services (DS) Field
• The DS field reuses the first 6 bits of the former Type of Service (TOS) byte to determine the PHB.
[Diagram: IPv4 header with the TOS byte replaced by the DS field (bits 0-5): Version, HLen, TOS, Length, Identification, Flags, Fragment offset, TTL, Protocol, Header checksum, Source address, Destination address, Data]
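The bit layout above implies a simple extraction rule. Below is a small illustrative sketch (not from the slides; Python, hypothetical helper names) recovering the DSCP from the old TOS byte; the remaining 2 bits are now used for ECN:

```python
def dscp_from_tos(tos_byte: int) -> int:
    return tos_byte >> 2          # upper 6 bits of the old TOS byte = DSCP

def ecn_from_tos(tos_byte: int) -> int:
    return tos_byte & 0x3         # lower 2 bits = ECN

print(dscp_from_tos(0xB8))        # 46, the standard EF (Expedited Forwarding) code point
```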
Integrated Services: RSVP and Traffic Flow Example
• The PATH message leaves the IP address of the previous hop (Phop) in each router; it contains the Sender Tspec, Sender Template, and Adspec.
• A RESV message containing a flowspec and a filterspec must travel the exact reverse path. The flowspec (Tspec/Rspec) defines the QoS and the traffic characteristics being requested.
• Admission/policy control determines whether the node has sufficient available resources to handle the request. If the request is granted, bandwidth and buffer space are reserved at each router.
• RSVP maintains soft-state information (DstAddr, Protocol, DstPort) in the routers.
• Data packets receive multi-field (MF) classification and are placed in the appropriate queue; the scheduler then serves these queues.
[Diagram: sender A through routers R1-R4 to receiver B; PATH messages flow downstream, RESV messages take the exact reverse path, reserving buffer and bandwidth at each hop]
IntServ Mechanisms
• Per-flow classification, per-flow buffer management, and per-flow scheduling are applied at every router on the path from sender to receiver.
[Diagrams: sender, routers, receiver, one panel per mechanism]
Round Robin (RR)
• RR avoids starvation.
• Assume all sessions have the same weight and the same packet length.
[Diagram: rounds #1 and #2 of service across queues A, B, and C]
RR with Variable Packet Length
• With variable-length packets, RR no longer shares bandwidth equally: sessions with longer packets receive more service per round, even though the weights are equal.
[Diagram: rounds #1 and #2 across queues A, B, and C]
Solution
[Diagram: service pattern #1 to #4 across queues A, B, and C]
Weighted Round Robin (WRR)
• Example: WA = 3, WB = 1, WC = 4, so each round serves 3 packets from A, 1 from B, and 4 from C (round length = 8).
[Diagram: rounds #1 and #2]
WRR with Non-Integer Weights
• Weights WA = 1.4, WB = 0.2, WC = 0.8 are normalized to integers: WA = 7, WB = 1, WC = 4 (round length = 12).
Weighted Round Robin (summary)
• Serve a packet from each non-empty queue in turn.
• Provides protection against starvation and is easy to implement in hardware.
• Unfair if packets are of different lengths or weights are not equal. What is the solution?
• Different weights, fixed packet size: serve more than one packet per visit, after normalizing to obtain integer weights (a minimal sketch follows).
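As referenced above, a minimal WRR sketch under the stated assumptions (integer weights, fixed packet size; queue contents are illustrative):

```python
from collections import deque

def wrr(queues, weights):
    """Serve up to weights[i] packets from queue i in each round."""
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    yield q.popleft()

queues = [deque(["A1", "A2", "A3", "A4"]),
          deque(["B1"]),
          deque(["C1", "C2", "C3", "C4", "C5"])]
print(list(wrr(queues, [3, 1, 4])))
# Round 1: A1 A2 A3 B1 C1 C2 C3 C4  |  Round 2: A4 C5
```

With WA = 3, WB = 1, WC = 4 this reproduces the round length of 8 from the earlier slide.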
Problems with Weighted Round Robin
• Different weights, variable-size packets: normalize the weights by the mean packet size (see the worked computation after this list).
• Example: weights {0.5, 0.75, 1.0} and mean packet sizes {50, 500, 1500} give normalized weights {0.5/50, 0.75/500, 1.0/1500} = {0.01, 0.0015, 0.000666…}, which normalize again to the integers {60, 9, 4}.
• With variable-size packets, the mean packet size must be known in advance.
• Fairness is provided only at time scales longer than one round of the schedule.
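The normalization arithmetic in the example can be checked mechanically; a small sketch using exact fractions (variable names are illustrative):

```python
from fractions import Fraction
from math import lcm

weights = [Fraction(1, 2), Fraction(3, 4), Fraction(1, 1)]   # 0.5, 0.75, 1.0
mean_sizes = [50, 500, 1500]

per_size = [w / s for w, s in zip(weights, mean_sizes)]      # 1/100, 3/2000, 1/1500
scaled = [x / min(per_size) for x in per_size]               # 15, 9/4, 1
factor = lcm(*(f.denominator for f in scaled))               # 4
print([int(f * factor) for f in scaled])                     # [60, 9, 4]
```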
Max-Min Fairness
• An allocation is fair if it satisfies max-min fairness:
• each connection gets no more than what it wants, and
• the excess, if any, is shared equally.
Max-Min Fairness: A Common Way to Allocate Flows
N flows share a link of rate C. Flow f wishes to send at rate W(f) and is allocated rate R(f).
1. Pick the flow f with the smallest requested rate.
2. If W(f) ≤ C/N, set R(f) = W(f); otherwise set R(f) = C/N.
3. Set N = N − 1 and C = C − R(f).
4. If N > 0, go to step 1.
Max-Min Fairness: An Example
Four flows share a link of capacity C = 1 at router R1, with demands W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, W(f4) = 5.
• Round 1: set R(f1) = 0.1
• Round 2: set R(f2) = 0.9/3 = 0.3
• Round 3: set R(f4) = 0.6/2 = 0.3
• Round 4: set R(f3) = 0.3/1 = 0.3
A sketch reproducing this computation follows.
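A short sketch of the procedure from the previous slide (function and flow names are illustrative):

```python
def max_min(demands, capacity):
    """Allocate rates by the max-min procedure: smallest demand first."""
    alloc, remaining, C = {}, dict(demands), capacity
    while remaining:
        share = C / len(remaining)                 # fair share C/N
        f = min(remaining, key=remaining.get)      # smallest requested rate
        alloc[f] = min(remaining.pop(f), share)    # grant W(f) or the fair share
        C -= alloc[f]
    return alloc

print(max_min({"f1": 0.1, "f2": 0.5, "f3": 10, "f4": 5}, 1.0))
# {'f1': 0.1, 'f2': 0.3, 'f4': 0.3, 'f3': 0.3}
```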
Fair Queueing
• Packets belonging to a flow are placed in a FIFO; this is called "per-flow queueing".
• The FIFOs are scheduled one bit at a time, in round-robin fashion.
• This is called bit-by-bit fair queueing.
[Diagram: flows 1…N are classified into per-flow FIFOs and served by a bit-by-bit round-robin scheduler]
Weighted Bit-by-Bit Fair Queueing
• Likewise, flows can be allocated different rates by serving a different number of bits from each flow during each round; this is also called Generalized Processor Sharing (GPS).
• Example: the allocations from the max-min example (R(f1) = 0.1, R(f2) = R(f3) = R(f4) = 0.3 on the C = 1 link at R1) yield the service order … f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, …
Understanding Bit-by-Bit WFQ
Worked example: 4 queues share 4 bits/sec, with weights 3:2:2:1. Arriving packets (lengths in bits): A1 = 4, A2 = 2; B1 = 3; C1 = 1, C2 = 1, C3 = 2; D1 = 1, D2 = 2.
• Round 1: D1, C2, and C1 depart at R = 1.
• Round 2: B1, A2, and A1 depart at R = 2; D2 and C3 also depart at R = 2.
• Packet-by-packet WFQ then transmits whole packets sorted by their bit-by-bit finish times.
[Figure: per-round, bit-level service of the four queues and the resulting departure order]
Packetized Weighted Fair Queueing (WFQ)
Problem: we need to serve a whole packet at a time.
Solution:
1. Determine the time at which a packet p would complete if the flows were served bit by bit. Call this the packet's finishing time, Fp.
2. Serve packets in order of increasing finishing time.
Also called "Packetized Generalized Processor Sharing (PGPS)". A minimal sketch of the finish-time bookkeeping follows.
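A minimal sketch of that bookkeeping, under the simplifying assumption that the GPS virtual time is supplied by the caller (a real implementation must track virtual time precisely; all names are illustrative):

```python
import heapq

class WFQ:
    """Sketch: packets are served in order of bit-by-bit finishing time."""
    def __init__(self, weights):
        self.weights = weights                       # flow -> weight
        self.last_finish = {f: 0.0 for f in weights}
        self.heap = []                               # (Fp, seq, pkt)
        self.seq = 0                                 # FIFO tie-breaker

    def enqueue(self, flow, pkt, length, virtual_now):
        # Fp = max(flow's previous finish, current virtual time) + length/weight
        start = max(self.last_finish[flow], virtual_now)
        fp = start + length / self.weights[flow]
        self.last_finish[flow] = fp
        heapq.heappush(self.heap, (fp, self.seq, pkt))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

q = WFQ({"A": 3, "B": 2, "C": 2, "D": 1})
for flow, pkt, bits in [("A", "A1", 4), ("B", "B1", 3), ("C", "C1", 1),
                        ("C", "C2", 1), ("D", "D1", 1), ("D", "D2", 2)]:
    q.enqueue(flow, pkt, bits, virtual_now=0.0)
print([q.dequeue() for _ in range(6)])
# ['C1', 'C2', 'D1', 'A1', 'B1', 'D2']
```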
WFQ Is Complex
• There may be hundreds to millions of flows; the linecard needs to manage one FIFO queue per flow.
• A finishing time must be calculated for each arriving packet.
• Packets must be sorted by their finishing (departure) time.
• Much of the effort in QoS scheduling research goes into practical algorithms that approximate WFQ.
[Diagram: packets arriving at an egress linecard are (1) assigned a finishing time Fp, (2) queued per flow 1…N, and (3) departed by selecting the smallest Fp]
When Can We Guarantee Delays?
• Theorem: if flows are leaky-bucket constrained and all nodes employ GPS (WFQ), then the network can guarantee worst-case delay bounds to sessions.
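The single-node case of this result (the Parekh-Gallager bound, stated here from standard sources rather than from this deck) has a simple closed form for a session constrained by a leaky bucket with burst σᵢ and token rate ρᵢ:

```latex
% Single-node GPS/WFQ delay bound (fluid model, packetization ignored):
% session i, leaky-bucket constrained by (\sigma_i, \rho_i), is
% guaranteed the service rate g_i and waits at most its burst
% divided by that guaranteed rate.
D_i^{\max} \;\le\; \frac{\sigma_i}{g_i},
\qquad
g_i \;=\; \frac{w_i}{\sum_j w_j}\, C \;\ge\; \rho_i .
```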
Queuing Disciplines
• Each router must implement some queuing discipline.
• Queuing allocates both bandwidth and buffer space:
• Bandwidth: which packet to serve (transmit) next; this is scheduling.
• Buffer space: which packet to drop next (when required); this is buffer management.
• Queuing affects the delay of a packet (QoS).
Queuing Disciplines
[Diagram: traffic sources feed traffic classes A, B, and C; buffer management decides drops, scheduling decides service order]
Active Queue Management
• Advantages:
• reduces packet losses (due to queue overflow);
• reduces queuing delay.
[Diagram: many TCP sources share a router queue on the way to their sinks; with drop-tail the queue overflows and drops, while AQM drops or marks early so congestion notification reaches the senders via ACKs before the queue fills]
Packet Drop Dimensions
• Aggregation: single class, class-based queuing, or per-connection state.
• Drop position: tail, head, or random location.
• Drop timing: overflow drop vs. early drop.
Typical Internet Queuing
• FIFO + drop-tail is the simplest choice and is used widely in the Internet.
• FIFO (first-in, first-out) implies a single class of traffic.
• Drop-tail: arriving packets are dropped when the queue is full, regardless of flow or importance.
• Important distinction: FIFO is the scheduling discipline; drop-tail is the drop policy (buffer management). A minimal sketch of the distinction follows.
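A minimal sketch separating the two roles (class and method names are illustrative):

```python
from collections import deque

class DropTailFIFO:
    """Sketch: FIFO fixes the service order; drop-tail fixes the drop policy."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            return False              # drop-tail: the arriving packet is lost
        self.q.append(pkt)            # FIFO: service order equals arrival order
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```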
FIFO + Drop-Tail Problems
• FIFO issues (irrespective of the aggregation level):
• No isolation between flows: the full burden falls on end-to-end control (e.g., TCP).
• No policing: send more packets, get more service.
• Drop-tail issues:
• Routers are forced to keep large queues to maintain high utilization.
• Larger buffers mean larger steady-state queues and delays.
• Synchronization: end hosts react to the same events, because packets tend to be lost in bursts.
• Lock-out: as a side effect of burstiness and synchronization, a few flows can monopolize queue space.
Synchronization Problem
• Caused by TCP congestion avoidance.
[Figure: cwnd vs. time, with slow start up to W* followed by the congestion-avoidance sawtooth rising from W*/2 by one segment per RTT]
Synchronization Problem
• All TCP connections reduce their transmission rates when the queue overflows at its maximum size and packets are dropped.
• The connections then increase their rates again via slow start and congestion avoidance.
• When the queue overflows again, all connections cut their rates at the same time, making the aggregate network traffic fluctuate.
[Figure: total queue size oscillating over time]
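A toy sketch of this fluctuation, assuming all flows share one drop-tail queue and detect loss in the same RTT, so they halve their windows together (all parameters are illustrative):

```python
def aimd_sync(n_flows=3, capacity=60, rtts=25):
    cwnd = [10] * n_flows
    for t in range(rtts):
        if sum(cwnd) > capacity:            # shared queue overflows
            cwnd = [c // 2 for c in cwnd]   # synchronized multiplicative decrease
        else:
            cwnd = [c + 1 for c in cwnd]    # additive increase, one segment per RTT
        print(t, cwnd, "aggregate =", sum(cwnd))

aimd_sync()
# The aggregate load sawtooths between roughly capacity/2 and capacity;
# the link sits underutilized right after every synchronized back-off.
```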
Global Synchronization Problem
• Can result in very low throughput during periods of congestion.
[Figure: queue length repeatedly hitting the maximum queue length and draining]
Global Synchronization Problem
• TCP congestion control with drop-tail queues leads to:
• synchronization, which causes bandwidth under-utilization;
• persistently full queues, which cause large queueing delays;
• no (weighted) fairness across traffic flows, since the mechanism inherently assumes responsive flows.
[Figure: the rates of flows 1 and 2 oscillating in lockstep around the bottleneck rate as the aggregate load varies]
Lock-Out Problem
• In some situations, tail drop allows a single connection or a few (possibly misbehaving, e.g., UDP) flows to monopolize queue space, preventing other connections from getting room in the queue. This "lock-out" phenomenon is often the result of synchronization.
Bias Against Bursty Traffic
• During dropping, bursty traffic tends to be dropped in bunches, which is unfair to bursty connections.
Active Queue Management: Goals
• Solve the lock-out and full-queue problems:
• no lock-out behavior;
• no global synchronization;
• no bias against bursty flows.
• Provide better QoS at a router:
• low steady-state delay;
• lower packet dropping.
RED (Random Early Detection)
• FIFO scheduling.
• Buffer management: probabilistically discard packets, with the probability computed as a function of the average queue length.
[Figure: discard probability vs. average queue length, with thresholds min_th and max_th]
RED Operation
[Figure: drop probability P(drop) is 0 below minthresh, rises linearly to MaxP at maxthresh, and jumps to 1.0 beyond maxthresh]
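A sketch of RED's per-arrival drop decision, with parameter names following the figure above (minthresh, maxthresh, MaxP) plus an assumed EWMA weight wq; default values here are illustrative:

```python
import random

class RED:
    """Sketch of RED's drop decision on each packet arrival."""
    def __init__(self, min_th=5.0, max_th=15.0, max_p=0.1, wq=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.wq = max_p, wq
        self.avg = 0.0                        # EWMA of the queue length

    def should_drop(self, queue_len):
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.min_th:
            return False                      # accept: queue is short on average
        if self.avg >= self.max_th:
            return True                       # drop: persistent congestion
        # linear ramp from 0 to max_p between the two thresholds
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```

The full algorithm also scales this probability by the number of packets accepted since the last drop, spreading drops more evenly; that refinement is omitted here for brevity.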