Optimization and Distributed Algorithms for Resource Allocation in Multi-hop Wireless Networks
R. Srikant
Department of ECE and CSL, University of Illinois at Urbana-Champaign
Motivation
• Objective: fair and efficient resource allocation in multi-hop wireless networks
• Questions:
• What is the optimal network architecture? Does it arise naturally from the objective?
• Are there distributed algorithms that implement the various layers of the protocol stack?
• Where approximations are necessary for implementability, can we quantify the degree of approximation?
• How easily does the model extend to other traffic models (multicast, network coding, etc.)?
• The network is designed for a fixed number of flows; is it stable under dynamic traffic? (Lin, Shroff, S.)
Closely Related Work
• Scheduling/Routing: Tassiulas-Ephremides; Tassiulas
• Resource allocation for the Internet: Kelly et al.; Low et al.; S.
• Resource allocation in wireless networks: Stolyar; Neely, Modiano & Li; Lin & Shroff
• Distributed algorithms:
• Lin & Rasool; Gupta, Lin & S.; Joo & Shroff; Sarkar et al. (slotted time)
• Kar et al.; Gupta-Stolyar (random access)
• Xiao-Johansson-Boyd; Chiang; Huang-Berry-Honig (power control)
• Extensions to network coding: Eryilmaz & Lun; Ho et al.; Chiang et al.
Outline
• A simple three-node example: Internet versus wireless networks
• Joint scheduling, routing and congestion control for multi-hop wireless networks (Eryilmaz, S.)
• Extensions to multicast traffic (Bui, Stolyar, S.)
• Low-complexity distributed MAC algorithm (Sanghavi, Bui, S.)
Three-Node Internet
• User 1 uses link a, User 2 uses link b, and User 0 uses both (ca = 1, cb = 1)
• maximize U0(x0) + U1(x1) + U2(x2)
subject to x0 + x1 ≤ ca, x0 + x2 ≤ cb
Solution
Functional Decomposition
• Lagrange multipliers (nodes): pl[t+1] = ( pl[t] + γ( Σ{i uses l} xi[t] − cl ) )⁺
• Congestion control (sources): xi[t] = argmax_x { Ui(x) − x · Σ{l on route of i} pl[t] }
• Lagrange multipliers ≈ queue lengths
• But these are not the true queue dynamics
• Still a reasonable model for the Internet
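The decomposition above can be sketched numerically for the three-node example. This is a minimal illustration, not the slides' exact algorithm: the log utilities, step size, and iteration count are assumptions of this sketch.

```python
# Dual-decomposition sketch for the three-node Internet example.
# Assumptions (not from the slides): log utilities U_i(x) = log x,
# step size 0.01, and 20000 iterations are illustrative choices.

def dual_decomposition(ca=1.0, cb=1.0, step=0.01, iters=20000):
    pa, pb = 1.0, 1.0  # Lagrange multipliers ("queue lengths") for links a, b
    for _ in range(iters):
        # Sources maximize U_i(x_i) - x_i * (sum of prices on the route).
        # With U_i = log, the maximizer is x_i = 1 / (route price).
        x0 = 1.0 / (pa + pb)   # user 0 crosses both links
        x1 = 1.0 / pa          # user 1 uses link a only
        x2 = 1.0 / pb          # user 2 uses link b only
        # Links update prices in proportion to excess demand (projected at 0).
        pa = max(pa + step * (x0 + x1 - ca), 0.0)
        pb = max(pb + step * (x0 + x2 - cb), 0.0)
    return x0, x1, x2

x0, x1, x2 = dual_decomposition()
print(round(x0, 2), round(x1, 2), round(x2, 2))  # ~ 0.33 0.67 0.67
```

With log utilities this converges to the proportionally fair point x0 = 1/3, x1 = x2 = 2/3: the two-hop flow gets less because it consumes both resources.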
Wireless Network
• Same topology, but links A and B share the wireless medium: let a be the fraction of time link A is used (cA = 1, cB = 1)
• maximize U0(x0) + U1(x1) + U2(x2)
subject to x0 + x1 ≤ a, x0 + x2 ≤ 1 − a, 0 ≤ a ≤ 1
Lagrange Multipliers
Decomposition
• Congestion control (sources and nodes)
• Max-weight MAC or scheduling (network): the solution is an extreme point of the rate region
• The earlier comment relating queue lengths and Lagrange multipliers applies here as well
Alternative Formulation
• Let a0 be the fraction of time link A is used for user 0 (similarly a1, b0, b2)
• maximize U0(x0) + U1(x1) + U2(x2)
subject to x0 ≤ a0, x0 ≤ b0, x1 ≤ a1, x2 ≤ b2, a0 + a1 + b0 + b2 ≤ 1
Decomposition
• Congestion control (per-flow queues)
• MAC or scheduling (back-pressure)
Resource Constraints and Queueing Dynamics
(figure: flows x0, x1, x2 served at per-link, per-user rates μa0, μa1, μb0, μb2, with relay queues pa0, pb0)
• Queue stability constraints: the arrival rate into a queue is the departure rate from the previous queue
• Still not precise: what happens if the previous queue is empty?
Differences in the Two Formulations
• Arrivals appear instantaneously at all nodes on the route, versus node-by-node queueing behavior
• Sources react to the sum of queue lengths along the route, versus reacting only to the entry queue length
• Why is it sufficient to react to only the entry queue length?
• Because of the back-pressure algorithm
Outline
• A simple three-node example: Internet versus wireless networks
• Joint scheduling, routing and congestion control for multi-hop wireless networks (Eryilmaz, S.)
• Extensions to multicast traffic (Bui, Stolyar, S.)
• Low-complexity distributed MAC algorithm (Sanghavi, Bui, S.)
Wireless Network Model
• The network is represented by a graph
• Λ = the set of link-rate vectors that are allowable in a time slot, i.e., μ[t] ∈ Λ, ∀t
(figure: a graph on nodes i, j, n, m, v, w; two different feasible schedules shown in slots 1 and 2)
Traffic Model
• F: the set of flows that share the network
• Each flow f is described by a source-destination pair (b(f), e(f)): no predefined routes
• Let xf denote the rate of flow f
• Let Λ̄ denote the set of flow-rate vectors for which the corresponding link rates lie in Λ
• Uf(xf) is a (strictly) concave function that measures the utility of flow f as a function of xf
(figure: flows f, g, h on the graph, with b(f) = i and e(f) = j)
Problem Statement
• Design a mechanism that
• guarantees stability of the queues,
• allocates flow rates {xf} that satisfy: {xf} ∈ argmax Σf Uf(xf) subject to {xf} ∈ Λ̄
• x* denotes the optimizer of the above problem; call it the fair allocation
Node Model
• Each node maintains a queue for each destination node
(figure: node n holds queues qn,j and qn,k, with per-destination service rates on its incoming and outgoing links)
• In general, the evolution of a queue length is described by
qn,d[t+1] = ( qn,d[t] − Σm μ(n,m)(d)[t] )⁺ + Σi μ(i,n)(d)[t] + xf[t]·1{b(f) = n, e(f) = d}
Primal-Dual Congestion Controller
• At the beginning of each time slot t, each flow f has access to the queue length at its first node, denoted by qb(f)[t]
• Congestion control:
xf[t+1] = [ xf[t] + α( K U′f(xf[t]) − qb(f)[t] ) ]⁺ or xf[t] = U′f⁻¹( qb(f)[t] / K )
• Increase the rate when the queue length is small
• Decrease the rate when the queue length is large
• K is a fixed parameter
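As a small illustration of the static (inverse-marginal-utility) rule: with log utility, U′(x) = 1/x, so reacting to the ingress queue gives x = K/q. The peak-rate cap and the numbers below are assumptions of this sketch, not from the slides.

```python
# Static congestion-control rule x = U'^{-1}(q/K) for U(x) = log x,
# i.e. x = K / q. The cap x_max is an illustrative assumption.
def source_rate(q_ingress, K=100.0, x_max=1.0):
    if q_ingress <= 0:
        return x_max              # empty ingress queue: send at peak rate
    return min(K / q_ingress, x_max)

print(source_rate(50))   # 1.0  (K/q = 2, capped at x_max)
print(source_rate(400))  # 0.25 (large queue -> small rate)
```

The rule captures both bullets above: a small queue yields a large rate, a large queue throttles the source.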
Back-pressure Scheduler
• Assign a weight to each edge; find a feasible set of edges with the maximum total weight
• The differential backlog of link (n,m) for destination d is W(n,m)(d)[t] = qn,d[t] − qm,d[t]
• The differential backlog of the link is W(n,m)max[t] = maxd ( W(n,m)(d)[t] )⁺: the maximum value among all destinations
• Then choose the rate vector μ[t] ∈ Λ that satisfies: μ[t] ∈ argmax{μ ∈ Λ} Σ(n,m) W(n,m)max[t] · μ(n,m)
An example:
• Queues at node n: (5, 7, 2); at node m: (1, 2, 5); at node k: (6, 8, 4)
• W(n,m)max = ( max{5−1, 7−2, 2−5} )⁺ = 5, with d(n,m) = 2
• W(n,k)max = ( max{5−6, 7−8, 2−4} )⁺ = 0, with d(n,k) = 2
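The slide's computation can be checked with a small helper (queue values copied from the slide; the helper itself is a sketch, not the paper's code):

```python
# Differential-backlog computation for the example on this slide.
# q[node][d] = per-destination backlog, values copied from the slide.

def w_max(q, n, m):
    """Return (max over d of (q_n,d - q_m,d)^+, arg-max destination)."""
    diffs = {d: q[n][d] - q[m][d] for d in q[n]}
    d_star = max(diffs, key=diffs.get)
    return max(diffs[d_star], 0), d_star

q = {'n': {1: 5, 2: 7, 3: 2},
     'm': {1: 1, 2: 2, 3: 5},
     'k': {1: 6, 2: 8, 3: 4}}

print(w_max(q, 'n', 'm'))     # (5, 2): serve destination 2 on link (n, m)
print(w_max(q, 'n', 'k')[0])  # 0: link (n, k) has no positive backpressure
```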
Queue Stability
• Define the Lyapunov function V(x, q) = Σf (xf − x*f)² + Σn,d (qn,d − q*n,d)², where q* ∈ K·Q* (Q* = the set of optimal Lagrange multipliers). Drift analysis results in
Theorem 1: For some finite constant c, lim sup(T→∞) (1/T) Σt E[ Σn,d qn,d[t] ] ≤ cK
Fair Allocation
Theorem 2: There exists a finite B such that, for all f, | lim(T→∞) (1/T) Σt E[ xf[t] ] − x*f | ≤ B/K
• For large K, the average rate allocation is close to fair
• Tradeoff between delays and fairness: larger K gives fairer rates but longer queues
Stochastic Models
• The set of allowable rates at each time instant can be time-varying
• No need to know the statistics of the channel: the capacity region is unknown, but the instantaneous capacity region is known
• Randomness in the arrival processes can also be modeled
• Proof: the conditional mean drift of the Lyapunov function has the same form as on the previous slide
• Result: Theorems 1 and 2 continue to hold
Stochastic model → Fluid model
• Intuition: an M/M/1-like queue where the arrival rate decreases with the queue length: in state q the arrival rate is K/q (rate K in states 0 and 1), and the service rate is 1
• The steady-state mean and the variance of this Markov chain are both Θ(K)
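The Θ(K) claim can be checked numerically. The sketch below computes the stationary distribution of the birth-death chain with the rates read off the slide (arrival rate K/q in state q, K in state 0; unit service rate); the truncation level is an implementation choice of this sketch.

```python
# Numerical check that the birth-death chain on this slide has Theta(K)
# stationary mean and variance. Detailed balance gives
#   pi_0 = 1,  pi_q = K^q / (q-1)!  for q >= 1  (unnormalized),
# computed in log space to avoid overflow.
from math import lgamma, exp, log

def stationary_moments(K, qmax=None):
    qmax = qmax or int(10 * K + 100)   # truncation (sketch choice)
    logpi = [0.0] + [q * log(K) - lgamma(q) for q in range(1, qmax)]
    mx = max(logpi)
    w = [exp(v - mx) for v in logpi]   # normalize safely
    Z = sum(w)
    mean = sum(q * wq for q, wq in enumerate(w)) / Z
    var = sum(q * q * wq for q, wq in enumerate(w)) / Z - mean ** 2
    return mean, var

for K in (10, 50, 100):
    m, v = stationary_moments(K)
    print(K, round(m / K, 2), round(v / K, 2))  # both ratios stay O(1)
```

Since pi_q ∝ K · K^(q−1)/(q−1)!, the state minus one is essentially Poisson(K), so both mean and variance scale linearly in K, matching the slide.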
Outline
• A simple three-node example: Internet versus wireless networks
• Joint scheduling, routing and congestion control for multi-hop wireless networks (Eryilmaz, S.)
• Extensions to multicast traffic (Bui, Stolyar, S.)
• Low-complexity distributed MAC algorithm (Sanghavi, Bui, S.)
Multi-rate multicast
(figure: one sender x feeding links with rates μA, μB, μC toward four receivers with rates x1, …, x4 and utilities U1(x1), …, U4(x4))
• One sender, four receivers
• Example of a constraint: each receiver's rate is bounded by the rate of every link on its path from the sender
• Receivers can receive at different rates
• Very important in wireless networks; otherwise, all rates would frequently drop to zero
Solution: Multi-rate multicast
• Constraint: as on the previous slide
• A fictitious queueing network sending fictitious packets in the opposite direction enforces the constraints
• The departures from the fictitious queues serve as tokens (credits) for the generation of real packets
QoS Control: Delays
• The source can send a packet for every token, or
• the source can generate 9 packets for every 10 tokens received
• Tokens inform the source of the amount of resources reserved for it
• The source can use this information but send at a smaller rate to reduce delays
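The "9 packets per 10 tokens" rule is just fractional token counting. A minimal sketch, where the 9/10 ratio is the slide's example and the integer bookkeeping is an implementation choice:

```python
# Token-counting sketch: generate num/den packets per token received.
# Sending below the token rate (num < den) trades throughput for delay.
def packets_from_tokens(tokens, num=9, den=10):
    credit, sent = 0, 0
    for _ in range(tokens):
        credit += num          # each token adds num/den of a packet credit
        if credit >= den:      # a full packet's worth of credit accumulated
            credit -= den
            sent += 1
    return sent

print(packets_from_tokens(10))  # 9 packets for every 10 tokens
```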
Outline
• A simple three-node example: Internet versus wireless networks
• Joint scheduling, routing and congestion control for multi-hop wireless networks (Eryilmaz, S.)
• Extensions to multicast traffic (Bui, Stolyar, S.)
• Low-complexity distributed MAC algorithm (Sanghavi, Bui, S.)
Limitations of the Approach
• Each source needs to know only its ingress queue length to perform congestion control (decentralized)
• Routing, MAC, power control, etc. are done using the back-pressure algorithm: centralized and infeasible in practice
• Question: are there decentralized approximations to the back-pressure algorithm that achieve a large fraction of the capacity region?
• Fix power levels
• Fix routing
• Focus only on scheduling (which links should be turned ON or OFF)
Primary Interference Model
• Wireless network == graph with nodes and edges; nodes == wireless devices
• Communication only between neighbors
• At any given time, a link can be "ON" or "OFF"
• Constraint: no two adjacent links can be "ON" at the same time (the ON links form a matching in the graph)
• (Corresponds to fixed power levels, orthogonalization, and pairwise-only communication: Hajek and Sasaki)
Scheduling Problem
• Decide which edges to turn ON at each time
• so as to "maximize data rates"
• while abiding by the interference constraints
• assume one-hop flows (easy extension)
• Each edge has an associated queue
• Stochastic packet arrivals to each queue (not controlled; easy extension to controlled arrivals)
• OFF == no service for the queue; ON == one packet served
Capacity Region
• λ = average arrival rate vector (one entry per edge; the vector has length |E|)
• λ ∈ Λ (the capacity region) if and only if λ is in the convex closure of all matchings
• Max-weight matching (with queue lengths as edge weights) renders the queues stable
(figure: a path with edge queue lengths 2, 3, 3, 5, 2)
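For intuition, the max-weight matching on the figure's 5-edge path can be computed by brute force. This exponential-time sketch is only for tiny examples; the queue lengths are the figure's, while the path layout is an assumption of this sketch.

```python
# Brute-force max-weight matching (exponential; fine for tiny graphs).
# Edge weights play the role of queue lengths in the max-weight scheduler.
from itertools import combinations

def is_matching(edges):
    nodes = [u for e in edges for u in e]
    return len(nodes) == len(set(nodes))   # no shared endpoints

def max_weight_matching(weights):
    """weights: dict {(u, v): queue_length}. Returns (matching, weight)."""
    edges = list(weights)
    best, best_w = [], 0
    for r in range(1, len(edges) + 1):
        for cand in combinations(edges, r):
            if is_matching(cand):
                w = sum(weights[e] for e in cand)
                if w > best_w:
                    best, best_w = list(cand), w
    return best, best_w

# Path 0-1-2-3-4-5 with queue lengths 2, 3, 3, 5, 2 (as in the figure).
q = {(0, 1): 2, (1, 2): 3, (2, 3): 3, (3, 4): 5, (4, 5): 2}
m, w = max_weight_matching(q)
print(w)  # 8: the edges with weights 3 and 5 beat the 3-edge matching (2+3+2)
```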
Existing Algorithms
• Max-weight matching takes polynomial, but centralized and network-wide, computation to find each new schedule
• Maximal matching achieves at least 1/2 of the capacity region; communication overhead scales with n
• Randomized algorithm (Tassiulas):
1) In each time slot, generate a random new matching that equals the max-weight matching with probability at least δ
2) Switch if the new matching is better than the old one
• This achieves the full capacity region. Needs a random generator and a network-wide comparison
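The maximal-matching alternative mentioned above is just a greedy single pass, sketched here on a hypothetical path graph (the example edges are illustrative, not from the slides):

```python
# Greedy maximal matching sketch: scan edges once, keep an edge if neither
# endpoint is already matched. O(|E|) work with only local checks -- the
# property that makes it attractive versus max-weight matching.
def maximal_matching(edges):
    matched, out = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            out.append((u, v))
            matched |= {u, v}
    return out

# Hypothetical path 0-1-2-3: greedy keeps (0,1) and (2,3).
print(maximal_matching([(0, 1), (1, 2), (2, 3)]))  # [(0, 1), (2, 3)]
```

A maximal matching can have as little as half the edges (or weight) of a maximum one, which is the intuition behind the 1/2-capacity guarantee.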
Communication Overheads
(figure: each slot splits into a scheduling phase followed by a service phase, repeating)
• Resources wasted in scheduling are not accounted for, and they grow with n
• "Capacity" results are only indicative of efficiency in the service part
• Growing overheads ⇒ what does "capacity region" mean?
Main Result
A constant-overhead algorithm that can achieve any fixed fraction of the capacity region.
• In particular, given any ε > 0, we have an algorithm that
• achieves a (1 − ε) fraction of the capacity region, and
• forms a new schedule in O(1/ε) handshake times
• (one handshake time = the time to exchange a control packet between neighbors)
Algorithm: Idea
Make local improvements to the existing schedule.
• A node that is not part of the matching initiates a "query" to possibly increase the weight of the previous matching
• The query is propagated on a path where links in the matching and links not in the matching alternate
• The query stops after a fixed number of steps (of order 1/ε)
• Compare the weight of the links not in the matching with the weight of the links in the matching
• Flip the status of the links on the path if the weight can increase
(figure: an alternating path with link weights 2, 1, 3, 1, 2)
Algorithm: Randomization
• Initially, each node randomly becomes "active", i.e., initiates a query; so there are multiple simultaneous requests in the network
• If a request reaches an active or dead node, the request "fails": no new active node, and the edge is not special
• If two requests collide at a node, both fail
• This process produces disjoint alternating paths and edges
• Net queue-length information is propagated along each path to its end
• The switch/no-switch decision is made at the end and relayed back
• All selected edges implement the switching decision
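The flip step at the end of a query can be sketched as follows. Edge names, the weights 2, 1, 3, 1, 2 (taken from the earlier slide's figure), and the list-based path representation are assumptions of this sketch:

```python
# Local-improvement step: given an alternating path whose edges alternate
# out-of-matching / in-matching, flip the path if that raises the total
# (queue-length) weight of the schedule.

def try_flip(matching, path_edges, weights):
    """matching: set of edges; path_edges: alternating path; returns the
    (possibly updated) matching."""
    new_edges = {e for e in path_edges if e not in matching}
    old_edges = {e for e in path_edges if e in matching}
    if sum(weights[e] for e in new_edges) > sum(weights[e] for e in old_edges):
        # Flip: drop the path's matched edges, add its unmatched ones.
        matching = (matching - old_edges) | new_edges
    return matching

# Toy alternating path with the earlier slide's weights 2, 1, 3, 1, 2:
w = {'e0': 2, 'e1': 1, 'e2': 3, 'e3': 1, 'e4': 2}
matching = {'e1', 'e3'}                # current schedule, weight 1 + 1 = 2
path = ['e0', 'e1', 'e2', 'e3', 'e4']  # alternating path found by a query
matching = try_flip(matching, path, w)
print(sorted(matching))  # ['e0', 'e2', 'e4'] -> weight 2 + 3 + 2 = 7
```

Because the flipped edges alternate along the path, the result is still a valid matching, and the comparison needs only the net queue-length information gathered along the path.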
Proof Sketch
Recall the randomized algorithm:
1) In each time slot, generate a random new matching M[t] that equals the max-weight matching with probability at least δ
2) Switch if the new matching is better than the old one
Our algorithm: a technique to generate this new matching, and switch if it is better.
Theorem 1: With probability at least δ > 0, the new matching M[t] generated by our algorithm satisfies w(M[t]) ≥ (1 − ε) w(max-weight matching)
Proof Sketch
So we approximately meet the criterion of Tassiulas, and this implies a corresponding rate region.
Theorem 2: Given any ε > 0, if there is an algorithm that generates M[t] such that P( w(M[t]) ≥ (1 − ε) w* ) ≥ δ, and switches when there is a gain, then that algorithm achieves a (1 − ε) fraction of the capacity region
Simulations
(simulation plots omitted)
Implications
• Theoretical:
• Constant-time algorithms that can achieve any a-priori intended fraction of the capacity region
• Precise accounting of overheads
• Practical:
• Allows the protocol to be designed independently of network size
• ε is a tunable parameter that allows selection of the best protocol given channel coherence times, data type, etc.
Open Problems
• Approximating back-pressure routing (packet-by-packet routing is complicated to implement)
• Distributed algorithms for more complicated interference models
• Distributed power control and scheduling
• Admission control and routing for inelastic flows
• Where are the biggest gains compared to the existing protocol stack?