
Wireless Link: Service Quality

EE206A (Spring 2002): Lecture #6. This lecture: channel state dependence; QoS and fairness in wireless.



  1. Wireless Link: Service Quality EE206A (Spring 2002): Lecture #6

  2. This Lecture • Channel state dependence • QoS and fairness in wireless

  3. Reading • Mandatory • [Bharghavan99] Bharghavan, V.; Lu, S.; Nandagopal, T. Fair queuing in wireless networks: issues and approaches. IEEE Personal Communications, vol. 6, no. 1, Feb. 1999, pp. 44-53. • [Vaidya00] Vaidya, N. H.; Bahl, P.; Gupta, S. Distributed fair scheduling in a wireless LAN. Proceedings of MobiCom 2000, the Sixth Annual International Conference on Mobile Computing and Networking, Boston, MA, USA, 6-11 Aug. 2000, pp. 167-178. • Recommended • [Bhagwat96] Bhagwat, P.; Bhattacharya, P.; Krishna, A.; Tripathi, S. K. Enhancing throughput over wireless LANs using channel state dependent packet scheduling. Proceedings of IEEE INFOCOM '96, San Francisco, CA, USA, 24-28 March 1996, pp. 1133-1140, vol. 3.

  4. Bursty Wireless Channel Errors • Burst errors due to fading, frequency collision etc.

  5. Location-dependent Channel Capacity and Errors • Contention and effective channel capacity are location dependent • Channel errors are location dependent • Due to interference, fading etc. • Bad interaction with how MAC schedules packets for transmission

  6. Problems with FIFO Scheduling in MAC [Bhagwat96] • Burst errors may be spatially selective • e.g. the link to only one receiver may be under interference or in a fade • During a burst, all retransmission attempts to that specific MH will fail • burst errors observed to be 50-100 ms long in WLANs • FIFO is effectively causing head-of-line blocking! • other MHs starve even though the links to them may be good • TCP to all MHs will increase RTT estimates, further increasing timeouts • poor resource utilization • fairness problem: MHs with bad links claim more link time & b/w • a “fair” MAC is not enough in the presence of errors on the link

  7. Channel State Dependent Scheduling • Primary culprits: • CSMA/CA MAC makes repeated attempts even when channel is bad • FIFO dispatcher continues to send packets without regard to channel state • Solution: defer scheduled transmissions until the next good period • transmit packets for other destinations (those marked good) meanwhile • burst periods for different MHs are independent • potential risk: TCP sender may time out • but TCP timers >> average burst durations • bad periods detected by radio feedback or multiple MAC transmit attempts • channels remain marked bad for an estimated burst interval length • round-robin scheduler (two sets: good & bad) worked best
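The deferral idea above can be sketched in a few lines of Python. This is an illustrative reconstruction, not code from [Bhagwat96]: the class name, the per-destination queues, and the fixed 75 ms mark-bad interval (mid-range of the 50-100 ms bursts quoted on the previous slide) are all assumptions.

```python
from collections import deque

class CSDPSScheduler:
    """Sketch of channel-state dependent scheduling: a round-robin
    dispatcher that skips destinations whose channel is marked bad,
    instead of blocking on them FIFO-style."""

    def __init__(self, bad_interval=0.075):  # illustrative ~75 ms estimate
        self.queues = {}       # destination -> deque of packets
        self.bad_until = {}    # destination -> time channel stays marked bad
        self.bad_interval = bad_interval

    def enqueue(self, dest, packet):
        self.queues.setdefault(dest, deque()).append(packet)

    def mark_bad(self, dest, now):
        # Called on radio feedback or repeated MAC transmit failures.
        self.bad_until[dest] = now + self.bad_interval

    def next_packet(self, now):
        # Round-robin over destinations whose channel is currently good.
        for dest in list(self.queues):
            if not self.queues[dest]:
                continue
            if self.bad_until.get(dest, 0.0) > now:
                continue  # deferred until the estimated burst is over
            pkt = self.queues[dest].popleft()
            # Move dest to the tail of the iteration order (round robin).
            self.queues[dest] = self.queues.pop(dest)
            return dest, pkt
        return None
```

While a destination is marked bad, its packets stay queued and other destinations are served, which is exactly the head-of-line-blocking fix described above.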

  8. Providing QoS Communication Link • QoS provided by a combination of: • resource reservation at the flow level • “fair” resource allocation / packet scheduling at the packet level • Easy to do in point-to-point links

  9. Providing QoS in Wireless Links is Much Harder… • Distributed • tied to the multiple access problem • User mobility • makes resource reservation hard • Channel errors • make resource reservation meaningless (no guarantees!) • make packet scheduling and fair resource allocation hard • what does fair mean in an error prone channel? • Time varying channel

  10. QoS Scheduling for Communication Links • Scheduling • Admission control (for “schedulability”) • Policing (for “isolation”) • Goals: • meet performance and fairness metrics • high resource utilization (as measured by resource operator) • easy to implement • small work per data item, scale slowly with # of flows or tasks • easy admission control decisions • Schedulable region: set of all possible combinations of performance bounds that a scheduler can simultaneously meet

  11. Fairness • Intuitively • each connection gets no more than what it wants • the excess, if any, is equally shared • Fairness is intuitively a good idea • Fairness also provides protection • traffic hogs cannot overrun others • automatically builds firewalls around heavy users • reverse is not true: protection may not lead to fairness Transfer half of excess Unsatisfied demand A B A B C C

  12. Max-min Fairness • Maximize the minimum share of task or flow whose demand is not fully satisfied • Resources are allocated in order of increasing demand, normalized by weight • No task or flow gets a share larger than its demand • Task or flows with unsatisfied demands get resource shared in proportion to their weights

  13. Example • Given • Four flows with demands 4, 2, 10, 4 and weights 2.5, 4, 0.5, and 1, and a link with capacity C = 16 • Steps • Normalize the weights so that the smallest is 1: 5, 8, 1, 2 • In each round give each flow a share proportional to its weight • Round 1: allocation is 5, 8, 1, 2 • results in 1 and 6 units of excess for flows 1 & 2, i.e. 7 in total • allocate this 7 to the flows still in deficit according to re-normalized weights • Round 2: allocation is 7*1/3 and 7*2/3 to flows 3 & 4 • results in 2.666 excess for flow 4 while flow 3 is still short • allocate this 2.666 to the flows still in deficit • Round 3: allocation is 2.666 for flow 3 • results in flow 3 with a total of 6, i.e. a deficit of 4
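The rounds above are an instance of progressive filling, which can be coded directly. A minimal Python sketch (function name and the float tolerance are mine, not from the lecture); running it on the slide's numbers reproduces the result that flows 1, 2, and 4 are satisfied while flow 3 ends with 6 units:

```python
def max_min_fair(capacity, demands, weights):
    """Weighted max-min fair allocation by progressive filling:
    repeatedly split the remaining capacity in proportion to the
    weights of still-unsatisfied flows, capping each at its demand."""
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))
    remaining = capacity
    while active:
        share = remaining / sum(weights[i] for i in active)
        capped = [i for i in active
                  if demands[i] - alloc[i] <= share * weights[i] + 1e-12]
        if not capped:
            # No flow saturates: hand out the rest in proportion to weight.
            for i in active:
                alloc[i] += share * weights[i]
            break
        for i in capped:
            remaining -= demands[i] - alloc[i]
            alloc[i] = demands[i]
            active.remove(i)
    return alloc

# The slide's example: demands 4, 2, 10, 4; weights 2.5, 4, 0.5, 1; C = 16.
print(max_min_fair(16, [4, 2, 10, 4], [2.5, 4, 0.5, 1]))  # [4.0, 2.0, 6.0, 4.0]
```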

  14. Policing • Three criteria: • (Long-term) Average (Sustained) Rate • 100 packets per sec or 6000 packets per min? • the crucial aspect is the interval length over which the rate is measured • Peak Rate • e.g., 6000 packets per minute average and 1500 packets per sec peak • (Max.) Burst Size • max. number of packets sent consecutively, i.e. over a short period of time

  15. Leaky Bucket Mechanism • Provides a means for limiting input to a specified Burst Size and Average Rate • Bucket can hold b tokens; tokens are generated at a rate of r tokens/sec unless the bucket is full • Over an interval of length t, the number of packets that are admitted is less than or equal to (r t + b) • How can one enforce a constraint on peak rate? Figure from: Kurose & Ross
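A minimal Python sketch of the token-bucket policer described above (class and method names are illustrative, not from the slides):

```python
class TokenBucket:
    """Token-bucket policer: the bucket holds at most b tokens and is
    refilled at r tokens/sec, so at most r*t + b packets are admitted
    in any interval of length t."""

    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b      # start with a full bucket
        self.last = 0.0

    def admit(self, now):
        # Refill according to elapsed time, capped at the bucket depth b.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

As for the slide's closing question: the usual textbook answer is to cascade a second bucket with rate equal to the peak rate and a depth of about one packet, so back-to-back packets are forced out no faster than the peak rate.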

  16. Sharing a Fixed Capacity Communication Link • Which packet to send when? • FIFO • Priority queuing: preemptive, non-preemptive • Round robin • Weighted fair queuing • EDF • Which packet to discard if buffers at sender are full? • What if senders not at the same place? • Need multiple access mechanism • Need distributed implementation

  17. Fundamental Choices • # of priority levels • a priority level served only if higher levels don’t need service (multilevel priority with exhaustive service) • Work conserving vs. non-work conserving • never idle when packets await service • why bother with non-work conserving? • Degree of aggregation • cost, amount of state, how much individualization • aggregate to a class • members of class have same performance requirement • no protection within class • Service order within a level • FCFS (bandwidth hogs win, no guarantee on delays) • In order of a service tag (both protection & delay can be ensured)

  18. Non-work-conserving Disciplines • Idea • Delay packet till eligible • Reduces delay-jitter => fewer buffers in network • E.g. traffic remains smooth as it proceeds through the network • How to choose eligibility time? • rate-jitter regulator: bounds maximum outgoing rate • delay-jitter regulator: compensates for variable delay at previous hop • Do we really need it? • one can remove delay-jitter at an endpoint instead • but it also reduces expensive switch memory • easy to compute end-to-end performance • sum of per-hop delay and delay jitter leads to tight end-to-end delay and delay-jitter bounds • wastes bandwidth • but can serve background traffic or tasks • increases mean delay • always punishes a misbehaving source • more complex to implement (more state)
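The rate-jitter regulator mentioned above can be sketched as a one-pass computation of eligibility times (a hypothetical helper, assuming only what the slide states: a bound on the maximum outgoing rate):

```python
def eligibility_times(arrivals, max_rate):
    """Rate-jitter regulator sketch: a packet becomes eligible no earlier
    than 1/max_rate after the previous packet's eligibility time, which
    bounds the outgoing rate and smooths bursts downstream."""
    eligible = []
    prev = float("-inf")
    for a in arrivals:
        t = max(a, prev + 1.0 / max_rate)  # hold the packet until eligible
        eligible.append(t)
        prev = t
    return eligible

# A burst of three back-to-back arrivals, regulated to 2 packets/sec:
print(eligibility_times([0.0, 0.0, 0.0], 2))  # [0.0, 0.5, 1.0]
```

Note the non-work-conserving behavior: the second and third packets sit in the buffer even though the link is idle, which is exactly why traffic stays smooth at the next hop.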

  19. The Conservation Law • The sum of the mean delays of the flows or tasks, each weighted by its mean utilization of the resource, is a constant if the scheduler is work-conserving • A work-conserving scheduler can only reallocate delays among the flows or tasks • A non-work-conserving scheduler can only have a higher value
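Stated symbolically, this is Kleinrock's conservation law; the symbols below follow the standard M/G/1 formulation rather than anything on the slide. With λᵢ the mean arrival rate of flow i, x̄ᵢ its mean service time, ρᵢ = λᵢ x̄ᵢ its utilization, and q̄ᵢ its mean queueing delay:

```latex
\sum_{i} \rho_i \, \bar{q}_i = \text{constant}, \qquad \rho_i = \lambda_i \bar{x}_i
```

Lowering one flow's weighted delay under a work-conserving discipline necessarily raises another's.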

  20. Priority Queuing • Flows classified according to priorities • Preemptive and non-preemptive versions • What can one say about schedulability? Figure from: Kurose & Ross

  21. Round Robin • Scan class queues serving one from each class that has a non-empty queue Figure from: Kurose & Ross

  22. Weighted Round Robin • Round-robin is unfair if packets are of different lengths or weights are not equal • Different weights, fixed packet size • serve more than one packet per visit, after normalizing to obtain integer weights • Different weights, variable size packets • normalize weights by mean packet size • e.g. weights {0.5, 0.75, 1.0}, mean packet sizes {50, 500, 1500} • normalized weights: {0.5/50, 0.75/500, 1.0/1500} = {0.01, 0.0015, 0.000666}; normalize again to integers: {60, 9, 4} • Problems • with variable size packets and different weights, need to know mean packet size in advance • fair only over time scales > round time • round time can be large • can lead to long periods of unfairness
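The two-step normalization above can be reproduced exactly with rational arithmetic. A small Python sketch (the function name is mine; exact fractions avoid the rounding that the decimal form {0.01, 0.0015, 0.000666} hides):

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

def wrr_packet_credits(weights, mean_sizes):
    """Normalize WRR weights by mean packet size, then rescale to
    integers, giving packets-per-round credits for each class."""
    per_byte = [Fraction(str(w)) / s for w, s in zip(weights, mean_sizes)]
    scale = lcm(*(f.denominator for f in per_byte))  # common denominator
    return [int(f * scale) for f in per_byte]

# The slide's example: weights {0.5, 0.75, 1.0}, mean sizes {50, 500, 1500}
print(wrr_packet_credits([0.5, 0.75, 1.0], [50, 500, 1500]))  # [60, 9, 4]
```

The large round that results (60 + 9 + 4 = 73 packets) illustrates the "round time can be large" problem on the same slide.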

  23. Generalized Processor Sharing (GPS) • Generalized Round Robin • In any time interval, allocates resource in proportion to the weights among the set of all backlogged connections (i.e. non empty queue) • Serves infinitesimal resource to each • Achieves max-min fairness • Provide a class with a differentiated amount of service over a given period of time • But is non-implementable Figure from: S. Keshav, Cornell

  24. Weighted Fair Queueing (WFQ) • Deals better with variable size packets and weights • GPS is fairest discipline • Find the finish time of a packet, had we been doing GPS • Then serve packets in order of their finish times • Note: finish time calculated on packet arrival Figure from: Kurose & Ross
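The finish-tag idea can be sketched for the simple case where every flow is continuously backlogged, so GPS virtual time never has to be resynchronized. This is a Python illustration with made-up flow names and packet sizes, not a full WFQ implementation:

```python
def wfq_order(flows):
    """WFQ sketch for continuously backlogged flows: compute each
    packet's GPS finish tag F = F_prev + L/w per flow, then transmit
    in tag order. (A complete implementation also tracks GPS virtual
    time for packets arriving to an idle flow; that is elided here.)"""
    tags = []
    for name, (weight, sizes) in flows.items():
        finish = 0.0
        for seq, length in enumerate(sizes):
            finish += length / weight   # GPS finish time of this packet
            tags.append((finish, name, seq))
    tags.sort()                         # serve in order of finish tags
    return [(name, seq) for _, name, seq in tags]

# Two backlogged flows with equal weights; B's packets are half the size
# of A's, so B gets roughly two transmissions per transmission of A:
print(wfq_order({"A": (1, [100, 100]), "B": (1, [50, 50, 50, 50])}))
```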

  25. Approximating GPS by WFQ • GPS service order • WFQ: select the first packet that finishes in GPS

  26. WFQ Performance • Turns out that WFQ also provides performance guarantees • Bandwidth bound • ratio of weights * link capacity • e.g. connections with weights 1, 2, 7; link capacity 10 • connections get at least 1, 2, 7 units of b/w each • End-to-end delay bound • assumes that the connection doesn’t send ‘too much’ (otherwise its packets will be stuck in queues) • more precisely, connection should be leaky-bucket regulated • # bits sent in time [t1, t2] <= r (t2 - t1) + b

  27. WFQ + Leaky Bucket Figure from: Kurose & Ross

  28. Parekh-Gallager Theorem • Let • a connection be allocated weights at each of K WFQ schedulers along its path, such that the bandwidth allocated at the k-th scheduler is gk • g = smallest gk • the connection be leaky-bucket regulated such that # bits sent in time [t1, t2] <= r (t2 - t1) + b • the k-th scheduler have a link rate r(k) • the largest packet allowed in the connection be Pc, and in the network be Pn
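The slide stops short of the bound itself, which presumably appeared as an equation image. Using only the symbols defined above, the theorem's end-to-end delay bound is commonly written (e.g., in Keshav's treatment, from which these slides draw figures) as:

```latex
D \;\le\; \frac{b}{g} \;+\; \sum_{k=1}^{K-1} \frac{P_c}{g} \;+\; \sum_{k=1}^{K} \frac{P_n}{r(k)}
```

Informally: the first term is the time to drain a full bucket at the connection's bottleneck allocation g, the second accounts for per-hop packetization of the connection's own packets, and the third for the non-preemptable transmission of the largest network packet at each hop's link rate.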

  29. Significance • Theorem shows that WFQ can provide end-to-end delay bounds • So WFQ provides both fairness and performance guarantees • Bound holds regardless of cross traffic behavior • Can be generalized for networks where schedulers are variants of WFQ, and the link service rate changes over time

  30. Problems • To get a delay bound, need to pick g • the lower the delay bounds, the larger g needs to be • large g => exclusion of more competitors from link • g can be very large, in some cases 80 times the peak rate! • Sources must be leaky-bucket regulated • but choosing leaky-bucket parameters is problematic • WFQ couples delay and bandwidth allocations • low delay requires allocating more bandwidth • wastes bandwidth for low-bandwidth low-delay sources

  31. Packet Fair Queuing (PFQ) in Wireless Links: The Problem • In fluid fair queuing, which PFQs approximate, during each infinitesimally small time window the channel bandwidth is distributed fairly among all the backlogged flows • i.e. among flows that have data to transmit during that time window • Traditional PFQ algorithms assume an error-free channel • or, at least, that either all flows can be scheduled or none • they don’t address fairness when a subset of backlogged flows cannot transmit because of a bad channel • In the wireless domain, a packet flow may experience location-dependent channel error and hence may not be able to transmit during a given time window • giving the channel to such flows is a waste, and also not fair in terms of the actual bandwidth they receive • Situation is as if we have a server that is servicing multiple queues and has an unpredictably time-varying service rate which is different for each queue

  32. Why WFQ Fails in Wireless Channels • B(t1,t2) is the set of flows that were backlogged in [t1,t2]

  33. Solution: Wireless Fair Queuing • Goal: make short bursts of location-dependent channel errors transparent to users • Approach: dynamic reassignment of channel allocation over small time scales • a backlogged flow f that perceives channel error during a time window [t1, t2] is compensated over a later time window [t1’, t2’] when f perceives a clean channel • Compensation mechanism: swap channel access & reclaim later • grant additional channel access to f during [t1’, t2’] in order to make up for the lost channel access during [t1, t2] • this additional channel access is granted to f at the expense of flows that were granted additional channel access during [t1, t2] while f was unable to transmit any data • Many different proposals with different swapping mechanisms and flows between which swapping takes place, and different compensation models

  34. Network and Channel Model • The channel capacity is dynamically varying. • Channel errors are location-dependent and bursty in nature. • There is contention for the channel among multiple mobile hosts. • Mobile hosts do not have global channel state (in terms of which other hosts contending for the same channel have packets to transmit, etc.). • Mechanism (possibly imperfect) for predicting channel state (good vs. bad) • Basestation schedules the slots for uplink

  35. Service Model • Goal: Wireless fair queuing seeks to provide the same service to flows in a wireless environment as traditional fair queuing does in wireline environments • What does that mean? • bounded delay access to each flow • providing full separation between flows • degree to which service of one flow is unaffected by the behavior and channel conditions of another flow • long-term fairness and instantaneous fairness among backlogged flows • fair queuing can provide both • However, location-dependent errors preclude providing both instantaneous and long-term fairness • long-term fairness by swapping between error-prone & error-free flows • but not instantaneous fairness even in the fluid model in wireless case • Less stringent QoS inevitable in wireless: need to compromise on complete separation between flows to improve efficiency

  36. Problem in Defining Fairness • Fair queuing model: a flow with nothing to transmit in [t, t+Δ] • is not allowed to reclaim channel capacity that would have been allocated if the flow had been backlogged at t • Wireless channel: a flow may be backlogged but unable to transmit due to channel error • should the flow be compensated at a later time? • should channel error be treated the same as, or differently from, an empty queue? • in the existing model, either all flows are permitted to transmit or none • Consider the scenario where flows f1 and f2 are both backlogged, but f1 perceives a channel error while f2 perceives a good channel • f2 will additionally receive the share of the channel that would have been granted to f1 in the error-free case • Question: should the fairness model readjust the service granted to f1 and f2 in a future time window in order to compensate f1? • the traditional fluid fair queuing model does not need to address this issue: in a wireline model, either all flows are permitted to transmit, or none

  37. So, What is a Reasonable Model for Wireless Fair Service? • Short-term fairness among flows that perceive a clean channel and long-term fairness for flows with bounded channel error. • Delay bounds for packets. • Short-term throughput bounds for flows with clean channels and long-term throughput bounds for all flows with bounded channel error. • Support for both delay sensitive and error sensitive data flows.

  38. Adapting PFQ to Wireless • Use an “error-free” fair service model with no channel error as reference • Monitor and estimate channel condition for backlogged flows, and exclude those flows that have bad channels from consideration • Calculate “lead” and “lag” for flows relative to the reference model • Compensate lagging flows that perceive a good channel at the expense of leading flows • Uplink flows: some mechanism for basestation to estimate the current state of uplink flows, channel conditions, and packet arrival times

  39. Many Algorithms… • Idealized Wireless Fair Queuing algorithm (IWFQ) [Lu97] • IWFQ-variant: Wireless Packet Scheduling (WPS) [Lu97] • CSDPS + Enhanced Class Based Queuing [Fragouli98] • Channel-condition Independent Fair Queuing algorithm (CIF-Q) [Ng98] • Server Based Fairness Approach (SBFA) [Ramanathan98] • Wireless Fair Service algorithm (WFS) [Lu98]

  40. Leading and Lagging Flows • Error-free service of a flow • the service it would have received at the same time instant if all channels had been error-free, under identical offered load • Leading flow • has received channel allocation in excess of its error-free service • Lagging flow • has received channel allocation less than its error-free service • Flow in sync • neither leading nor lagging • i.e. its channel allocation is exactly the same as its error-free service • How to compute lead and lag? • the wireless scheduling algorithm can explicitly simulate the error-free service and calculate the difference between the queue size of a flow in the error-free service and its queue size in reality
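The simulate-and-compare idea can be illustrated at slot granularity. A Python sketch (flow names, the round-robin reference schedule, and the sign convention — positive for lead, negative for lag — are all my own choices for illustration):

```python
def lead_lag(reference_slots, actual_slots, flows):
    """Compute each flow's lead (+) or lag (-) as the difference
    between slots actually received and slots in the error-free
    reference service, per the simulate-and-compare approach."""
    diff = {f: 0 for f in flows}
    for f in actual_slots:
        diff[f] += 1      # slot actually received
    for f in reference_slots:
        diff[f] -= 1      # slot owed under error-free service
    return diff

# Error-free round-robin reference vs. what really happened while f1's
# channel was bad in slot 0 and f2 transmitted in its place:
ref    = ["f1", "f2", "f3", "f1", "f2", "f3"]
actual = ["f2", "f2", "f3", "f1", "f2", "f3"]
print(lead_lag(ref, actual, ["f1", "f2", "f3"]))  # {'f1': -1, 'f2': 1, 'f3': 0}
```

Here f1 is lagging by one slot, f2 is leading by one, and f3 is in sync, which is the situation the compensation models on the following slides are designed to resolve.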

  41. Addressing Fairness • Difference between a non-backlogged flow and a backlogged flow that perceives channel error • A flow that is not backlogged does not get compensated for lost channel allocation. • However, a backlogged flow f that perceives a channel error is compensated in the future • when it perceives a clean channel • this compensation is provided at the expense of those flows that received  additional channel allocation when f was unable to transmit. • This compensation model makes channel errors transparent to the user to some extent, but only at the expense of separation of flows. • Achieving a trade-off between compensation and separation • bound the amount of compensation that a flow can receive at any time • Wireless fair queuing seeks to make short error bursts transparent to the user so that long-term throughput guarantees are ensured • but prolonged error bursts are exposed to the user

  42. Separation vs. Compensation • Let flows f1, f2, and f3 be three flows that share a wireless channel, with equal weights • Let f1 perceive a channel error during a time window [0, 1), and during this time window, let f2 receive all the additional channel allocation that was scheduled for f1 • for example, because f2 has packets to send at all times, while f3 has packets to send only at the exact time intervals determined by its rate • Now suppose that f1 perceives a clean channel during [1, 2]. What should the channel allocation be? • during [0, 1], the channel allocation was as follows: • W1[0, 1) = 0, W2[0, 1) = 2/3, W3[0,1) = 1/3. • thus, f2 received 1/3 units of additional channel allocation at the expense of f1, while f3 received exactly its contracted allocation. • during [1, 2], what should the channel allocation be?

  43. Two Questions • Is it acceptable for f3 to be impacted because f1 is being compensated, even though f3 did not receive any additional bandwidth? • in order to provide separation for flows that receive exactly their contracted channel allocation, flow f3 should not be impacted at all by the compensation model • in other words, compensation should only be between flows that lag their error-free service and flows that lead their error-free service • Over what time period should f1 be compensated for its loss? i.e., how long does it take for a lagging flow to recover from its lag? • a simple solution is to starve f2 in [1, 2] and allow f1 to catch up with the following allocation: W1[1, 2] = 2/3, W2[1, 2] = 0, W3[1, 2] = 1/3 • however, this may end up starving flows for long periods of time when a backlogged flow perceives channel error for a long time

  44. Possible Solution • One can bound the amount of compensation • Does not prevent pathological cases • a single backlogged flow among a large set of backlogged flows perceives a clean channel over a time window, and is then starved out for a long time till all the other lagging flows catch up • The compensation model must provide for a graceful degradation of service for leading flows while they give up their lead.

  45. Lead and Lag Models • Lag of a lagging flow: amount of additional service to which it is entitled in the future in order to compensate for lost service in the past • Lead of a leading flow: amount of additional service that the flow must relinquish in the future to compensate for additional service received in the past • Approaches to computing the lag and lead of a flow • 1. Lag is the difference between the error-free service & the real service received • a flow that falls behind its error-free service is compensated irrespective of whether its lost slots were utilized by other flows • Server Based Fairness Approach (SBFA) [Ramanathan98] uses this • 2. Lag is the # of slots allocated to the flow during which it could not transmit due to channel error, but another backlogged flow that had no channel error transmitted in its place and increased its lead • the lag of a flow is incremented upon a lost slot only if another flow that took this slot is prepared to relinquish a slot in the future • IWFQ [Lu97], WFS [Lu98], and CIF-Q [Ng98] use this • Lead and lag may be upper bounded by flow-specific parameters • an upper bound on lag is the maximum error burst that can be made transparent to the flow • an upper bound on lead is the maximum # of slots that a flow must relinquish in the future to compensate for additional service received in the past

  46. Compensation Models • Key component: determines how lagging flows make up their lag and how leading flows give up their lead. • The compensation model has to address three main issues: 1. When does a leading flow relinquish the slots that are allocated to it? 2. When are slots allocated for compensating lagging flows? 3. How are compensation slots allocated among lagging flows?

  47. Choices for When to Relinquish Slots • A leading flow relinquishes all slots till it becomes in sync • used by IWFQ [Lu97] • the problem with this approach is that a leading flow that has accumulated a large lead, because other flows perceive large error bursts, may end up being starved of channel access later, when all the lagging flows start to perceive clean channels • A leading flow relinquishes a fraction of the slots allocated to it • the fraction of slots relinquished may be constant, as in CIF-Q [Ng98], or proportional to the lead of the flow, as in WFS [Lu98] • advantage is that service degradation is graceful • in WFS, for example, the degradation in service decreases exponentially as the lead of a flow decreases • A leading flow never relinquishes its lead • a separate reserved portion of the channel bandwidth is dedicated to the compensation of lagging flows • SBFA [Ramanathan98] uses this approach

  48. Choices for Compensating Lagging Flows • Compensation slots are preferentially allocated till there is no lagging flow that perceives a clean channel • used in IWFQ [Lu97] • lagging flows have precedence in channel allocation over in-sync and leading flows • problem: may disturb in-sync flows and cause them to become lagging even if they perceive no channel error • Compensation slots are allocated only when leading flows relinquish slots • used in CIF-Q [Ng98] and WFS [Lu98] • explicit swapping: best suited for wireless fair service? • Compensation slots are allocated from a reserved fraction of the channel bandwidth that is set aside specifically to compensate lagging flows • used in SBFA [Ramanathan98] • problem: statically bounds the amount of compensation that can be granted

  49. Choices for Distributing Compensation Slots • The lagging flow with the largest lag is allocated the compensation slot • used in CIF-Q [Ng98] • The history of when flows became lagging is maintained, and the flows are compensated according to the order in which they became backlogged • used in IWFQ [Lu97] and SBFA [Ramanathan98] • The lagging flows are compensated fairly, i.e., each lagging flow receives the number of compensation slots in proportion to its lag • used in WFS [Lu98] • achieves the goal of short-term fairness in wireless fair service • but is computationally more expensive than the other two options
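The third option, proportional distribution, is easy to sketch. A Python illustration of WFS-style proportional sharing (the largest-remainder rounding step and all names are my own; [Lu98] does this inside the scheduler rather than as a standalone function, and positive lags are assumed):

```python
def distribute_compensation(lags, slots):
    """Sketch of WFS-style compensation: hand out `slots` compensation
    slots to lagging flows in proportion to their lags, using
    largest-remainder rounding to keep the counts integral."""
    total = sum(lags.values())          # assumes at least one positive lag
    shares = {f: slots * lag / total for f, lag in lags.items()}
    alloc = {f: int(s) for f, s in shares.items()}
    leftover = slots - sum(alloc.values())
    # Give leftover slots to the flows with the largest fractional parts.
    for f in sorted(shares, key=lambda f: shares[f] - alloc[f],
                    reverse=True)[:leftover]:
        alloc[f] += 1
    return alloc

# Three lagging flows share 4 compensation slots in proportion to lag:
print(distribute_compensation({"f1": 5, "f2": 3, "f3": 2}, 4))  # {'f1': 2, 'f2': 1, 'f3': 1}
```

This achieves the short-term fairness goal noted above, at the cost of recomputing the proportional split whenever lags change.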
