A Distributed Algorithm for Weighted Max-Min Fairness in MPLS Networks
Fabian Skivée (Fabian.Skivee@ulg.ac.be), Guy Leduc (Guy.Leduc@ulg.ac.be)
Research Unit in Networking - University of Liège - 2004
Outline
• Introduction
• Weight Proportional Max-Min policy
• Proposed distributed WPMM algorithm
• Algorithm integration with RSVP
• Simulation results
• Conclusion
Introduction
• Our goal: share the available bandwidth among all the LSPs according to their weights.
• Consider a set of LSPs, each carrying many TCP connections.
• Without an explicit policy, more aggressive LSPs (those carrying more flows) get more than their fair share, independently of their reservations.
• The classical max-min rate allocation policy is widely accepted as an optimal criterion for sharing network bandwidth among user flows.
• Its extension with weights is the WPMM (Weighted Proportional Max-Min) policy.
Application
• The fair rate allocated to an LSP can be used at the ingress by a three-colour marker:
  • green: rate under the reserved rate
  • yellow: rate between the reserved rate and the fair rate
  • red: rate above the fair rate
• In case of congestion, core routers discard red packets first and possibly, during transient periods, some yellow packets, for example with a WRED policy. (A sketch of this marking rule follows.)
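A minimal sketch of this three-colour rule, assuming a single measured rate per LSP; the function name and arguments are ours, and a real marker would meter traffic with token buckets rather than compare scalar rates.

```python
# Minimal sketch of the three-colour marking rule above (illustrative names;
# a real marker meters traffic with token buckets, not a scalar rate).

def colour(rate: float, reserved_rate: float, fair_rate: float) -> str:
    """Colour an LSP's traffic against its reserved rate RR and fair rate FR."""
    if rate <= reserved_rate:
        return "green"   # within the reservation: kept even under congestion
    if rate <= fair_rate:
        return "yellow"  # above RR but within the fair share: dropped after red
    return "red"         # above the fair share: discarded first on congestion

# Example with RR = 10 Mb/s and FR = 25 Mb/s:
assert colour(8.0, 10.0, 25.0) == "green"
assert colour(20.0, 10.0, 25.0) == "yellow"
assert colour(30.0, 10.0, 25.0) == "red"
```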
Weight Proportional Max-Min policy
• L: a set of links
• S: a set of LSPs
• Each LSP s has:
  • a reserved rate RRs
  • a fair rate FRs
  • a maximal rate MRs
  • a weight ws
• Admission control: Σ RRs ≤ Cl (over the LSPs s crossing link l)
• A fair share allocates an LSP with a "small" demand what it wants, and distributes the unused resources to the "big" LSPs in proportion to their weights. (A sketch of this model follows.)
[Figure: bandwidth (BW) of three LSPs with weights w1, w2, w3, showing the shared BW on top of the reservations, the resulting fair rates, the expected BW, and the maximal rates MR1, MR2, MR3]
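As a minimal sketch of this model (the data layout and names are ours, not the paper's), each LSP can be represented as follows, with admission control checking the reservations on every link:

```python
# Illustrative model of the WPMM setting above; field names are ours.
from dataclasses import dataclass, field

@dataclass
class LSP:
    rr: float              # reserved rate RR_s
    mr: float              # maximal rate MR_s
    w: float               # weight w_s
    fr: float = 0.0        # fair rate FR_s, computed by the algorithm
    path: list = field(default_factory=list)  # ids of the links crossed

def admissible(capacities: dict, lsps: list) -> bool:
    """Admission control: on every link l, the sum of the reserved rates
    of the LSPs crossing l must not exceed the capacity C_l."""
    return all(sum(s.rr for s in lsps if l in s.path) <= c
               for l, c in capacities.items())
```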
Weight-proportional allocation policy
The centralized Water Filling algorithm computes the exact fair rate of each LSP (a sketch follows):
Step 1: allocate to each LSP its reserved rate
Step 2: increase the rate of each unfrozen LSP proportionally to its weight until a link becomes fully utilized
Step 3: freeze all the LSPs crossing this link and go back to step 2, until all the LSPs are frozen
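A compact sketch of these three steps; all names are illustrative, and the maximal rates MR are ignored for brevity.

```python
# Sketch of the centralized Water-Filling computation (illustrative only;
# maximal rates MR are ignored for brevity). LSPs are dicts with keys
# 'rr' (reserved rate), 'w' (weight) and 'path' (ids of the links crossed).

def water_filling(capacities: dict, lsps: list) -> list:
    """Returns the WPMM fair rate of each LSP."""
    fr = [s['rr'] for s in lsps]          # Step 1: start at the reserved rates
    frozen = [False] * len(lsps)

    while not all(frozen):
        # Step 2: the next link to saturate is the one offering the smallest
        # extra share per unit of weight to its active (unfrozen) LSPs.
        best = None
        for l, cap in capacities.items():
            active = [i for i, s in enumerate(lsps)
                      if l in s['path'] and not frozen[i]]
            if not active:
                continue
            load = sum(fr[i] for i, s in enumerate(lsps) if l in s['path'])
            delta = (cap - load) / sum(lsps[i]['w'] for i in active)
            if best is None or delta < best[0]:
                best = (delta, l)
        if best is None:
            break                         # no remaining link carries an active LSP
        delta, bottleneck = best
        for i, s in enumerate(lsps):      # grow every active LSP by w_s * delta ...
            if not frozen[i]:
                fr[i] += s['w'] * delta
        for i, s in enumerate(lsps):      # Step 3: freeze the saturated link's LSPs
            if bottleneck in s['path']:
                frozen[i] = True
    return fr

# Example: one 10 Mb/s link, two LSPs with RR = 2 and weights 1 and 3.
# The 6 units of spare capacity split 1:3, giving fair rates [3.5, 6.5].
print(water_filling({'l1': 10.0}, [{'rr': 2.0, 'w': 1.0, 'path': ['l1']},
                                   {'rr': 2.0, 'w': 3.0, 'path': ['l1']}]))
```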
Proposed distributed WPMM algorithm
• We propose an algorithm that converges quickly to the WPMM policy through distributed and asynchronous iterations.
• We use the RSVP signaling protocol to convey information through the network.
• We add 4 fields to the PATH and RESV packets:
  • RR, W: given at the creation of the LSP
  • explicit fair rate (ER): the fair rate allocated by the network to this LSP
  • bottleneck (BN): id of the LSP's bottleneck link
Proposed distributed WPMM algorithm
• Periodically, the ingress sends a PATH packet.
• Each router computes a local fair share for the LSP and overwrites the ER and BN fields if its local fair rate is smaller than the ER currently carried by the packet.
• Upon receiving a PATH packet, the egress router sends back a RESV packet.
• Each router on the backward path updates its information with the ER and BN parameters of the RESV packet. (A sketch of the forward processing follows.)
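A hypothetical sketch of the per-hop PATH processing; the field names mirror the 4 added fields, but the packet layout and function names are ours, not RSVP's actual object encoding.

```python
# Hypothetical sketch of per-hop PATH handling; encoding is ours.
from dataclasses import dataclass

@dataclass
class PathPacket:
    lsp_id: int
    rr: float      # reserved rate, set at LSP creation
    w: float       # weight, set at LSP creation
    er: float      # explicit fair rate proposed so far along the path
    bn: int        # id of the link currently believed to be the bottleneck

def process_path(pkt: PathPacket, link_id: int, local_fair_rate: float) -> PathPacket:
    """If this link offers the LSP less than the fair rate accumulated so
    far, the link becomes the LSP's new bottleneck."""
    if local_fair_rate < pkt.er:
        pkt.er = local_fair_rate
        pkt.bn = link_id
    return pkt
```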
Local fair rate computation
• $B_l$: the set of LSPs bottlenecked at link $l$
• $U_l$: the set of LSPs not bottlenecked at link $l$
• $\mu_l$: the additional fair share per unit of weight for the LSPs bottlenecked at link $l$
• The local fair rate for LSP $i$ at link $l$ is defined by $FR_i^l = RR_i + w_i\,\mu_l$
• $\mu_l$ is computed by
$$\mu_l = \frac{C_l - \sum_{s \in U_l} FR_s - \sum_{s \in B_l} RR_s}{\sum_{s \in B_l} w_s}$$
(RR: reserved rate, W: weight, FR: fair rate, C: link capacity)
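The same computation as a short sketch (the data layout is ours): the link first serves the fair rates of the LSPs bottlenecked elsewhere and the reservations of its own bottlenecked LSPs, then splits the residue per unit of weight.

```python
# Sketch of the local fair rate computation at link l, following the
# formulas above; LSPs are dicts with keys 'rr', 'w' and 'fr'.

def additional_share(capacity: float, B_l: list, U_l: list) -> float:
    """mu_l: extra share per unit of weight for the LSPs bottlenecked at l."""
    residue = (capacity
               - sum(s['fr'] for s in U_l)    # LSPs bottlenecked elsewhere keep FR
               - sum(s['rr'] for s in B_l))   # reservations are served first
    return residue / sum(s['w'] for s in B_l)

def local_fair_rate(lsp: dict, capacity: float, B_l: list, U_l: list) -> float:
    """FR_i^l = RR_i + w_i * mu_l for an LSP i in B_l."""
    return lsp['rr'] + lsp['w'] * additional_share(capacity, B_l, U_l)
```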
Bottleneck-consistency
The key concept of this algorithm is the bottleneck marking strategy: every LSP not bottlenecked at a link must have a bottleneck elsewhere or reach its maximal rate, so its allocated fair rate is smaller than the one proposed by the current link. Formally, $U_l$ is bottleneck-consistent if
$$\forall s \in U_l : FR_s < RR_s + w_s\,\mu_l$$
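One way a link could enforce this condition (our interpretation, not a procedure taken from the paper): re-mark any violating LSP as bottlenecked at this link and recompute $\mu_l$, repeating until the marking is consistent.

```python
# Hypothetical sketch of restoring bottleneck-consistency at a link: any LSP
# in U_l whose fair rate reaches what this link would offer it is re-marked
# as bottlenecked here, which changes mu_l, so the check is repeated.

def restore_consistency(capacity: float, B_l: list, U_l: list):
    """LSPs are dicts with keys 'rr', 'w', 'fr'. Moves violators of
    FR_s < RR_s + w_s * mu_l from U_l into B_l until none remain."""
    changed = True
    while changed and B_l:
        mu = ((capacity
               - sum(s['fr'] for s in U_l)
               - sum(s['rr'] for s in B_l))
              / sum(s['w'] for s in B_l))
        violators = [s for s in U_l if s['fr'] >= s['rr'] + s['w'] * mu]
        for s in violators:
            U_l.remove(s)
            B_l.append(s)
        changed = bool(violators)
    return B_l, U_l
```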
Improvements provided by our algorithm
• There is no similar work for MPLS networks.
• There are, however, interesting proposals for ATM.
• A naïve solution is to adapt Hou's work to the MPLS context.
• With the flexibility provided by MPLS, we improve on this naïve solution by:
  • updating the routers on the backward path, so that they all have the same information as the ingress
  • adding a new parameter, BN, that explicitly conveys the bottleneck link of the path; this information considerably improves the convergence time
Algorithm integration with RSVP
• The RSVP signaling protocol is widespread in MPLS networks for LSP establishment.
• RSVP Refresh Overhead Reduction Extensions (RFC 2961):
  • if two successive PATH (or RESV) packets are identical, the upstream node only sends a refresh PATH
  • the downstream node refreshes the LSP entry but does not process the whole PATH packet
• By associating a special bit (NRi) with each LSP i, we can determine whether the LSP's values have changed, and so keep this RSVP mechanism. (A sketch follows.)
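A toy sketch of that decision, assuming a per-LSP changed/not-changed bit; the message shapes are ours, and the real RFC 2961 mechanism uses Message_Id objects and summary refreshes rather than these dictionaries.

```python
# Illustrative sketch of the refresh decision: a per-LSP bit NR_i records
# whether the LSP's values changed since the last PATH. The real mechanism
# (RFC 2961) uses Message_Id objects and summary refreshes; this is a toy.

def next_message(nr_changed: bool, full_path: dict) -> dict:
    """Send a full PATH only when something changed; otherwise a cheap
    refresh that the downstream node does not fully reprocess."""
    if nr_changed:
        return full_path                                  # processed hop by hop
    return {"type": "refresh", "lsp_id": full_path["lsp_id"]}
```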
Simulation results
• We built a dedicated simulator and compare our algorithm and Hou's solution (adapted to MPLS networks) against the WPMM allocation vector computed by Water-Filling.
• An iteration consists of simulating the RSVP protocol for each LSP in the topology.
• We stop when the mean relative error between the last rate vector and the WPMM rate allocation is below a fixed precision (sketched below).
• We ran extensive simulations on 63 topologies of 20 to 100 nodes, with between 20 and 1000 LSPs.
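The stopping rule as a short sketch; the precision threshold below is an arbitrary placeholder, since the slides only say the precision is fixed.

```python
# Sketch of the convergence test used to stop the simulations; the
# precision value below is a placeholder of ours, not the paper's.

def mean_relative_error(rates: list, wpmm: list) -> float:
    """Mean relative error between the current rate vector and the
    WPMM allocation computed by Water-Filling."""
    return sum(abs(r - w) / w for r, w in zip(rates, wpmm)) / len(rates)

def converged(rates: list, wpmm: list, precision: float = 1e-3) -> bool:
    return mean_relative_error(rates, wpmm) < precision
```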
Simulation results
• Our solution is nearly 3 times faster than Hou's solution.
[Chart: average number of iterations on the 63 topologies]
Simulation results
• To stabilize 90% of the LSPs, our solution takes 4 iterations (16 with Hou's).
• In the worst topology, our solution takes 36 iterations to converge (84 with Hou's).
Conclusion
• This distributed algorithm provides a scalable architecture for sharing the available bandwidth among all the LSPs according to their weights.
• Thanks to a new update scheme and explicit bottleneck link marking, our algorithm considerably improves performance (it is between 2 and 4 times faster).
• Compatibility with the RSVP refresh extensions (RFC 2961) keeps the signaling overhead low.
Thanks for your attention
This work was supported by:
• the European ATRIUM project
• the TOTEM project funded by the DGTRE (Walloon region)
Contact: Fabian.Skivee@ulg.ac.be