OverQoS: An Overlay-based Architecture for Enhancing Internet QoS L. Subramanian*, I. Stoica*, H. Balakrishnan+, R. Katz* (*UC Berkeley, +MIT) USENIX NSDI’04, 2004
Outline • Introduction • OverQoS Architecture • Controlled-Loss Virtual Link (CLVL) • OverQoS Implementation • Two Sample Applications • Evaluation • Conclusions
Introduction • Today’s Internet still provides only a best-effort service; the main reason is that earlier QoS proposals (e.g., IntServ, DiffServ) require all network elements to implement QoS mechanisms • The authors propose OverQoS, an overlay-based QoS architecture for enhancing Internet QoS
Introduction (cont.) • Enhancements: • Smoothing losses • Reduce or even eliminate the loss bursts by smoothing packet losses across time • Packet prioritization • Protect important packets • Statistical Bandwidth and Loss Guarantees
OverQoS Architecture (1/3) • Assumptions • The placement of overlay nodes is pre-specified • The end-to-end path on top of an overlay network is fixed • Using existing approaches like RON to determine the overlay path. • Terms • Virtual link – The IP path between two overlay nodes • Bundle – A stream of application data packets carried across the virtual link
OverQoS Architecture (2/3) • Overlay-based QoS challenges • Node Placement and Cross Traffic • Fairness • Should not hurt the cross traffic • Stability • Many virtual links overlapping on congested physical links should be able to co-exist
OverQoS Architecture (3/3) • A Solution builds on two principles • Bundle loss control • Using controlled-loss virtual link (CLVL) to bound the loss rate • Resource management within a bundle • Control the loss and bandwidth allocations
Bundle Loss Control • The CLVL provides a loss-rate bound q • Uses a combination of FEC and ARQ • The bandwidth overhead should be minimized • The total traffic consists of: • The traffic of the bundle • The redundancy traffic • The available bandwidth for the flows in the bundle: c(t) = b(t) · (1 − r(t)), where b(t) is the traffic bound at time t and r(t) is the fraction of redundancy traffic
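A minimal sketch of the relation on this slide, with illustrative names: the bandwidth left for the bundle’s flows is whatever remains of the traffic bound b(t) after subtracting the redundancy (FEC/ARQ) share r(t).

```python
# Sketch of c(t) = b(t) * (1 - r(t)); names are illustrative,
# not taken from the OverQoS implementation.

def available_bandwidth(b: float, r: float) -> float:
    """Bandwidth left for bundle flows: b in e.g. Kbps, r a fraction in [0, 1)."""
    if not 0.0 <= r < 1.0:
        raise ValueError("redundancy fraction must be in [0, 1)")
    return b * (1.0 - r)

# e.g. a 1000 Kbps traffic bound with 10% redundancy leaves 900 Kbps for the bundle
c = available_bandwidth(1000.0, 0.10)
```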
Resource Management within a Bundle • If the traffic arrival rate is larger than the available bandwidth c, the extra traffic is dropped at the entry overlay node, according to packet priority • Statistical bandwidth guarantees • Choose cmin such that Pr(c < cmin) ≤ u, where u represents the probability of not meeting the bandwidth guarantee • Guarantees hold as long as the total allocated bandwidth is less than cmin
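One way to picture cmin is as an empirical quantile of measured available bandwidth: the largest value that c stayed above in all but a fraction u of the samples. The sketch below is an assumed illustration, not the paper’s estimator.

```python
# Hypothetical estimator of c_min from a history of available-bandwidth
# samples, so that Pr(c < c_min) <= u over the measurement window.

def estimate_cmin(history: list, u: float) -> float:
    """Return the empirical bandwidth value exceeded with probability >= 1 - u."""
    samples = sorted(history)
    allowed = int(u * len(samples))  # samples permitted to fall below c_min
    return samples[allowed]

# 10 bandwidth samples (Kbps); with u = 0.1 at most one sample may fall below
history = [300, 250, 400, 280, 350, 320, 310, 290, 330, 270]
cmin = estimate_cmin(history, u=0.1)
```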
Overall picture • Application-OverQoS Interface • It needs to tunnel its packets through the overlay network using an OverQoS proxy • The proxy is responsible for signaling the application specific requirements to OverQoS • OverQoS proxy is application specific
Discussion • End-to-end Recovery vs. Overlay CLVL • Using FEC to apply end-to-end loss control is far more expensive than on an aggregate level • With a better distribution of overlay nodes, they expect the overlay links to have much smaller RTTs than end-to-end RTTs • ARQ recovery is better in overlay-level • Delay guarantees • Overlay has no control in queuing delays • Over-provisioning • Overlay are the right platform for translating intra domain QoS to end-to-end QoS guarantees
Controlled-Loss Virtual Link (CLVL) • Estimating b • Based on an N-TCP pipe abstraction, which provides N times the throughput of a single TCP connection • Use MulTCP to emulate this behavior • N is equal to the number of flows in the bundle • Node Architecture • q: target loss rate • c: available bandwidth • p: loss rate • b: maximum sending rate
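A MulTCP-style sender emulates N TCP flows with a single AIMD loop: it grows its window N times faster than one TCP and backs off by a factor 1/(2N) instead of 1/2 on loss. The sketch below follows the standard MulTCP window rule; variable names and the toy trace are illustrative, not from the OverQoS code.

```python
# Sketch of a MulTCP-style AIMD window update emulating an N-TCP pipe.

def multcp_step(cwnd: float, n: int, loss: bool) -> float:
    """One congestion-window update, in packets.

    Without loss, grow by N packets per RTT (N times TCP's additive
    increase); on loss, shrink by a factor 1/(2N) instead of 1/2, so
    long-run throughput approximates N concurrent TCP flows.
    """
    if loss:
        return max(1.0, cwnd * (1.0 - 1.0 / (2.0 * n)))
    return cwnd + n  # additive increase per RTT

# Toy trace: 10 loss-free RTTs followed by one loss event, N = 4
cwnd = 10.0
for _ in range(10):
    cwnd = multcp_step(cwnd, n=4, loss=False)
cwnd_after_loss = multcp_step(cwnd, n=4, loss=True)
```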
Controlled-Loss Virtual Link (CLVL) (cont.) • Achieving the target loss rate q • FEC vs. ARQ trade-off • Bandwidth overhead vs. packet recovery time • FEC+ARQ based CLVL • Restrict the number of retransmissions to at most one • Choose the redundancy factors r1 (first round) and r2 (retransmission round) so that the expected packet loss rate after two rounds is at most q, while minimizing the expected bandwidth overhead • The optimal solution is r1 = 0, i.e., pure ARQ in the first round
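To see why r1 = 0 can still meet a tight target, consider a toy model with independent per-packet loss probability p and a single retransmission: a packet is lost only if both the original and its retransmission are dropped. This ignores the second-round FEC, so it is an upper bound on the loss the full FEC+ARQ scheme achieves; the numbers below are illustrative.

```python
# Toy model of the r1 = 0 (pure-ARQ first round) case, assuming
# independent packet losses. Not the paper's analysis, just a sanity check.

def residual_loss(p: float) -> float:
    """Expected loss after one retransmission round with no FEC: p^2."""
    return p * p

def arq_overhead(p: float) -> float:
    """Expected bandwidth overhead: a fraction p of packets is resent."""
    return p

q = 0.001  # target loss-rate bound (0.1%, as in the implementation)
p = 0.02   # raw virtual-link loss rate
meets_target = residual_loss(p) <= q
```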
OverQoS Implementation • Application-dependent proxy • Choosing parameters • N: the average number of flows observed over a long period of time • q = 0.1% • Startup phase • A slow-start-like phase is used to estimate the initial value of b • FEC implementation • With small window sizes (n < 1000), coding is not a bottleneck
Streaming Media Application • Two enhancements • For streaming audio, quality can be enhanced by converting bursty losses into smooth losses • For MPEG streaming, recovering packets preferentially can improve quality • Does not consume any additional bandwidth • Retransmits an important lost packet and drops a later, less important packet
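The bandwidth-neutral trade described on this slide can be sketched as follows (an illustration, not the OverQoS code): when an important packet (e.g., from an MPEG I-frame) is lost, retransmit it and drop one queued lower-priority packet (e.g., from a B-frame), so total traffic on the virtual link stays unchanged.

```python
# Hypothetical priority-swap recovery: resend an important lost packet,
# sacrificing one queued packet of strictly lower priority in exchange.

from collections import deque

def recover_important(lost_pkt: dict, queue: deque) -> list:
    """Return the packets to send; drop one low-priority packet as payment."""
    for pkt in list(queue):
        if pkt["priority"] < lost_pkt["priority"]:
            queue.remove(pkt)        # sacrifice a less important packet
            break
    return [lost_pkt] + list(queue)  # retransmission plus the remaining queue

q = deque([
    {"id": 1, "priority": 2},  # I-frame data: high priority
    {"id": 2, "priority": 0},  # B-frame data: low priority
])
sent = recover_important({"id": 99, "priority": 2}, q)
```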
Streaming Media Application: Evaluation • Streaming Audio • Average loss rates: Mazu–Korea 2%, Intel–Lulea 3% • Perceptual Evaluation of Speech Quality (PESQ, 5 is ideal) increases by 0.15–0.2 • MPEG streaming • Not only improves the quality in the average case but also the minimum quality of a stream
Counterstrike Application • Problem • Clients are unable to connect to the server • Bursty losses cause skips or disconnections • OverQoS alleviates the problem of bursty losses by: • Recovering from bursty network losses using an FEC+ARQ based CLVL • Smoothly dropping data packets equivalent in size to the burst at the overlay node • Identifying control packets by packet size and never dropping them
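The drop policy above can be sketched as a small filter (an assumed illustration; the 100-byte control-packet cutoff is a hypothetical value, not from the paper): after repairing a burst of `burst_size` packets, drop the next `burst_size` data packets at the overlay node, while packets small enough to be control packets always pass through.

```python
# Hypothetical smooth-drop filter compensating for a repaired loss burst.

CONTROL_MAX_BYTES = 100  # assumed size cutoff for game control packets

def smooth_drop(packets: list, burst_size: int) -> list:
    """Forward packets (given as byte sizes), dropping `burst_size` data packets."""
    out, dropped = [], 0
    for size in packets:
        if dropped < burst_size and size > CONTROL_MAX_BYTES:
            dropped += 1          # pay back the repaired burst with a data drop
            continue
        out.append(size)          # control-sized packets are never dropped
    return out

forwarded = smooth_drop([60, 500, 80, 900, 700], burst_size=2)
```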
Counterstrike Application: Evaluation • Sequence-number plot illustrating the smoothing of packet losses using OverQoS (10% loss rate) • Smoothing losses works well only when the bursty loss periods are short enough to be compensated for • Unable to achieve the target loss rate during congestion periods with very high loss rates
Evaluation • Methodology • Wide-Area Evaluation Testbed • RON and PlanetLab – use 19 diverse nodes • Simulation Environment • Ns-2 – a single congested link of 10 Mbps where they vary the background traffic • Long lived TCP connections • Self similar traffic • Web traffic
Statistical Loss Guarantees (q = 0.1%) • Simulations • Wide-Area Evaluation • Achieves the target on 80 of the 83 virtual links • Causes of failure on the other 3 virtual links: • Short outages – periods (< 5 s) during which all packets are lost • Bi-modal loss distributions – bursty losses
Statistical Bandwidth Guarantees • Stability of cmin • Monitored 83 unique virtual links with N-TCP, N = 10; the average sending rate of N-TCP is between 120 Kbps and 2 Mbps • cmin is calculated over a history of 200 seconds, for u = 0.01 and u = 0.005 • The value of cmin is very stable: it deviates no more than 10% around its mean • cmin is greater than 100 Kbps for more than 80% of the links • With the violation probability set to 1%, the actual value is no more than 1.3%
OverQoS Cost • Overhead Characteristics • The burstier the background traffic, the higher the amount of FEC required to recover from the losses • The difference between the average loss rate and the FEC+ARQ overhead is the amount of FEC used in the second round
OverQoS Cost (cont.) • Delay Characteristics • Two reasons for increased delay • The recovery process • Supporting in-sequence delivery of packets • Three different ordering models • No packet ordering • End-to-end ordering • Hop-by-hop ordering • End-to-end ordering performs better than hop-by-hop • Adding new OverQoS nodes increases delay only marginally
Fairness and Stability • Three OverQoS bundles (with N=2, N=4, N=8) compete on a shared bottleneck under two different scenarios • No cross-traffic • Cross-traffic consisting of five long lived TCPs • Three OverQoS bundles co-exist with each other and with the background traffic • The ratio of throughputs of the three bundles is preserved
Conclusions • OverQoS can enhance Internet QoS without any support from the underlying IP network • OverQoS achieves the three enhancements with little (≈5%) or no extra bandwidth • Future work • Combine admission control and path selection • Determine the “optimal” placement of OverQoS nodes in the network