Using Edge-To-Edge Feedback Control to Make Assured Service More Assured in DiffServ Networks
K.R.R. Kumar, A.L. Ananda, Lillykutty Jacob
Centre for Internet Research, School of Computing, National University of Singapore
Outline • Introduction • Need for QoS • Solutions • TCP over DiffServ • Issues • CATC • Key Observations • Design Considerations • Topology • Edge-to-Edge Feedback Architecture • Marking Algorithm • Simulation Details • Results and Analysis • Deployment • Inferences and Future work
Introduction • Need for QoS • Exponential growth in Internet traffic has resulted in deterioration of QoS. • Over-provisioning of networks could be a solution. • A better solution: an intelligent network service with better resource allocation and management methods.
Solutions • Integrated Services (IntServ) • Per-flow QoS. • Not scalable. • Differentiated Services (DiffServ) • QoS for aggregated flows. • Scalable. • The philosophy: keep the core simple (AQM) and push complexity to the edges.
DiffServ: Logical view of a packet classifier and traffic conditioner (packets pass through Classifier, Meter, Marker and Shaper/Dropper, and are then forwarded or dropped).
DiffServ cont’d.. • Per-Hop Behaviours • Expedited Forwarding: deterministic QoS • Assured Forwarding: statistical QoS • Classifier • Traffic Conditioner • Meter: Token Bucket (TB) or Time Sliding Window (TSW); see the sketch below • Marker • Shaper/Dropper
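The avg_rate used by the marking algorithm later in the deck is produced by the meter; the slides name Token Bucket and Time Sliding Window (TSW) as options but do not give their internals. Below is a minimal sketch of the standard TSW rate estimator (in Python, for illustration); the TSWMeter class name and the win_length parameter are assumptions, not taken from the paper.

import time

class TSWMeter:
    # Time Sliding Window rate estimator (sketch).
    def __init__(self, cir_bps, win_length_s=1.0):
        self.avg_rate = cir_bps        # start the estimate at the target rate
        self.win_length = win_length_s # averaging window in seconds
        self.t_front = time.time()     # time of the last packet arrival

    def estimate(self, pkt_size_bytes, now=None):
        # Update and return avg_rate (in bits/s) on each packet arrival.
        now = time.time() if now is None else now
        bytes_in_win = self.avg_rate * self.win_length / 8.0
        new_bytes = bytes_in_win + pkt_size_bytes
        self.avg_rate = new_bytes * 8.0 / (now - self.t_front + self.win_length)
        self.t_front = now
        return self.avg_rate

The estimate decays towards the instantaneous rate over roughly one window length, which lets the marker react to short bursts without over-reacting to a single packet.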
TCP over DiffServ • Recent measurements show that TCP flows are in the majority (approx. 95% of the byte share). • TCP flows are much more sensitive to transient congestion. • Unresponsive flows such as UDP starve TCP traffic. • Bandwidth assurance is affected by the size of the target rate. • Biased against flows with • longer RTTs • smaller window sizes
Congestion Aware Traffic Conditioner (CATC) • Key Observations • The marker, one of the major building blocks of a traffic conditioner, drives resource allocation. • A proper understanding of transient congestion in the network helps. • Edge routers have a better view of the domain traffic. • An early indication of congestion lets packets be prioritized in advance. • Existing feedback mechanisms are end-to-end, e.g. ECN.
CATC cont’d.. • Design Considerations • Markers should • Be least sensitive to marker or TCP parameters. • Be transparent to end hosts. • Maintain optimum marking. • Minimize synchronization. • Be fair to different target rates. • Be congestion aware.
Edge-to-Edge Feedback Architecture • Two edge routers • Control sender (CS) and control receiver (CR) • Upstream: • At CS: • CS sends control packets (CPs) at a regular interval, the control packet interval (cpi). • CPs are given the highest priority. • At Core: • Core routers maintain the drop status of best-effort packets. • This information is kept as a status flag for at most cpi time. • The CP's congestion notification (CN) bit is set or reset based on the status flag. • At CR: • Responds to an incoming CP with the CN bit set by setting the congestion echo (CE) bit of the outgoing acknowledgement.
Feedback arch. cont’d.. • Downstream • At CS: • Maintains a parameter, the congestion factor (cf). • cf is set to 1 or 0 based on the status of the CE bit in the received acknowledgement (see the sketch below).
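A minimal sketch of the edge-to-edge control loop described on the two slides above (in Python, for illustration). The ControlSender, CoreRouter and ControlReceiver class names are hypothetical, and CPs and acknowledgements are modelled as plain dictionaries since the slides do not define packet formats.

CPI = 0.5  # control packet interval (cpi) in seconds; illustrative value

class CoreRouter:
    # Records whether any best-effort packet was dropped during the last cpi
    # and stamps that information into passing control packets.
    def __init__(self):
        self.drop_flag = False

    def on_best_effort_drop(self):
        self.drop_flag = True

    def forward_control_packet(self, cp):
        if self.drop_flag:
            cp["CN"] = 1          # set the congestion notification bit
        self.drop_flag = False    # the flag is held for at most one cpi

class ControlReceiver:
    # Echoes congestion back to the control sender.
    def make_ack(self, cp):
        return {"CE": 1 if cp.get("CN") else 0}   # congestion echo bit

class ControlSender:
    # Emits a CP every cpi and maintains the congestion factor cf.
    def __init__(self):
        self.cf = 0

    def make_control_packet(self):
        return {"CN": 0, "priority": "highest"}

    def on_ack(self, ack):
        self.cf = 1 if ack.get("CE") else 0   # cf follows the CE bit

# Example round trip: a core drop within the last cpi ends up as cf = 1 at the CS.
cs, core, cr = ControlSender(), CoreRouter(), ControlReceiver()
core.on_best_effort_drop()
cp = cs.make_control_packet()
core.forward_control_packet(cp)
cs.on_ack(cr.make_ack(cp))
assert cs.cf == 1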
Marking algorithm
For each packet arrival
if avg_rate ≤ cir then
  mp = mp + (1 - avg_rate/cir) * (1 + cf*(cir/cir_max));
  mark the packet using:
    cp 11 w.p. mp (marked packets)
    cp 00 w.p. (1-mp) (unmarked packets)
Marking Algo. cont’d..
else if avg_rate > cir then
  mp = mp + (1 - avg_rate/cir) * (1 - cf*(cir/cir_max));
  mark the packet using:
    cp 11 w.p. mp (marked packets)
    cp 00 w.p. (1-mp) (unmarked packets)
Marking Algo. cont’d.. where
avg_rate = the rate estimate on each packet arrival
mp = marking probability (≤ 1)
cir = committed information rate (target rate)
cf = congestion factor
cir_max = maximum committed information rate
Also, cp denotes 'codepoint' and w.p. denotes 'with probability'.
Algo cont’d.. • Marking probability computation is based on: • cir • avg_rate • cf • cir_max, the maximum among all cirs.
Algo. cont’d.. • The effect on mp: • (i) The flow component (1 - avg_rate/cir) continually compares the observed average rate with the target rate, pulling the achieved rate towards the target. • (ii) The network component cf*(cir/cir_max) gives a dynamic indication of the congestion level in the network. The marking-probability increment is made in proportion to the target rate by weighting cf with the factor cir/cir_max, which mitigates the bias due to different target rates. A sketch of the full marker follows below.
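Putting the two pseudocode slides together, a minimal Python sketch of the CATC marker. The CATCMarker class name, the clamping of mp to [0, 1], and the random draw used for probabilistic marking are assumptions for illustration and are not spelled out in the slides.

import random

class CATCMarker:
    # Congestion-aware marker: mp is updated per packet from the flow
    # component (1 - avg_rate/cir) and the network component cf*(cir/cir_max).

    CP_MARKED = 0b11    # "marked" (in-profile) codepoint
    CP_UNMARKED = 0b00  # "unmarked" (out-of-profile) codepoint

    def __init__(self, cir, cir_max):
        self.cir = cir          # committed information rate (target rate)
        self.cir_max = cir_max  # maximum cir among all aggregates
        self.mp = 0.0           # marking probability

    def mark(self, avg_rate, cf):
        # avg_rate: rate estimate on this packet arrival; cf: congestion factor (0 or 1).
        flow = 1.0 - avg_rate / self.cir
        weight = cf * (self.cir / self.cir_max)
        if avg_rate <= self.cir:
            self.mp += flow * (1.0 + weight)
        else:
            self.mp += flow * (1.0 - weight)
        self.mp = min(1.0, max(0.0, self.mp))   # keep mp a valid probability
        return self.CP_MARKED if random.random() < self.mp else self.CP_UNMARKED

When the aggregate is below its target, mp grows, and it grows faster under congestion (cf = 1); when the aggregate exceeds its target, mp shrinks, but the shrinkage is damped under congestion, so in-profile traffic keeps its protection.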
Simulation Details • ns-2 (2.1b7a) simulator on Red Hat 7.0. • Nortel's DiffServ module modified to implement our architecture. • Core routers use a RIO-like mechanism. • FTP bulk data transfer used for TCP traffic.
Simulation details cont’d.. • Experiments conducted: • Assured services (AS) for aggregates. • AS in the under- and well-subscribed cases. • AS in the over-subscribed case. • Protection from best-effort (BE) UDP flows. • Effect of UDP flows with assured (target) rates.
Analysis: CATC • Achieves the target rates in the under- and well-subscribed cases. • Maintains the achieved rate close to the target rate. • Total link utilization remains more or less constant throughout.
Analysis: CATC • Achieves goodput close to the target rates. • Succeeds in taking the share of BE TCP and UDP flows in the worst-case scenario. • The average link utilization remains good. • The AS UDP flow gets its assured rate.
Deployment • MPLS over DiffServ. • The marker can be placed anywhere (owing to its insensitivity to marker parameters).
Inferences and Future Work • The architecture is transparent to TCP sources and hence requires no modifications at the end hosts. • The edge-to-edge feedback control loop helps the marker take proactive measures to maintain the assured service effectively, especially during periods of congestion. • A single feedback control loop serves an aggregated flow; hence the architecture scales to any number of flows between the two edge gateways. • The architecture adapts to changes in load and network conditions. • The marking algorithm accommodates bursts in the flows.
Future Work • Extend the present architecture to account for drops in the priority queues. • Develop a new algorithm to incorporate this.