Bandwidth Scheduling and Provisioning in Access and Wide Area Networks Bin Wang Department of Computer Science and Engineering Wright State University Dayton, OH 45435
Outline • Bandwidth scheduling in Ethernet Passive Optical Network (EPON) • Sliding scheduled traffic model • Bandwidth scheduling over a point-to-point WDM link • Bandwidth provisioning in WDM networks • Look-ahead scheduling of a set of demands • Dynamic scheduling of a demand • Summary
Access Network - Passive Optical Networks • A single fiber is used to support multiple customers over distances up to about 20 km • No active equipment in the path → highly reliable • Optical line terminal (OLT) in the central office, connected to the rest of the Internet • Optical network unit (ONU) on the customer premises • Both upstream and downstream traffic share ONE fiber (1490 nm downstream, 1310 nm upstream) • EPON: Ethernet-based PON defined by IEEE 802.3ah • 1000 Mbps downstream and 1000 Mbps upstream
Why PON? • Reduced OpEx: passive network • High reliability • Reduced power expenses • Shorter installation times • Reduced CapEx: • 16-128 customers per fiber • 1 fiber + N transceivers • Scalable • CO equipment is shared → new customers can be added easily as the network grows • Bandwidth is shared → existing customer bandwidth can be changed on demand
Bandwidth Scheduling - Upstream • TDMA – a frame consists of N time slots • N ONUs • Each ONU is assigned a dedicated time slot • Traffic arriving at an ONU is buffered until that ONU's time slot comes around • Traffic is then sent upstream at the full link speed
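Below is a minimal sketch of how fixed TDMA slot assignment could be expressed; the frame length, slot count, and function names are illustrative assumptions, not values from EPON or these slides.

```python
# Sketch of fixed TDMA upstream slot assignment (illustrative parameters only).
FRAME_US = 2000          # frame length in microseconds (assumed value)
N_ONUS = 16              # number of ONUs sharing the fiber (assumed value)
SLOT_US = FRAME_US / N_ONUS

def slot_window(onu_id: int, frame_index: int) -> tuple[float, float]:
    """Return (start, end) of the upstream slot for onu_id in a given frame."""
    start = frame_index * FRAME_US + onu_id * SLOT_US
    return start, start + SLOT_US

# ONU 3 transmits in the same relative position of every frame:
print(slot_window(3, 0))   # (375.0, 500.0)
print(slot_window(3, 1))   # (2375.0, 2500.0)
```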
Pros and Cons • Pros • Simple • Dedicated bandwidth • Cons • Fixed frame (N time slots) • Potential long delay • No statistical sharing – low utilization • Loss due to buffer overflow; using a larger buffer increases delay
Dynamic Polling-based Bandwidth Scheduling • The OLT polls the ONUs to deliver data, encapsulated in Ethernet frames, from the ONUs to the OLT • To avoid the walk times associated with polling (due to the large RTT), polling requests and data transmissions must be properly scheduled • Interleaved polling with adaptive transmission cycle time
Dynamic Polling-based Bandwidth Scheduling • OLT maintains a polling table • # of bytes in each ONU's buffer • requested transmission window • RTT to each ONU • OLT issues a Grant message to an ONU • Granted transmission window • ONU transmits up to the granted transmission window • At the end of its transmission, the ONU issues a Request to the OLT • # of bytes now in the ONU's buffer
Dynamic Polling-based Bandwidth Scheduling • The OLT properly times the sending of the next Grant message to an ONU BEFORE the previous ONU's transmission has finished arriving, given • RTT to the ONUs • Transmission window of the previous Grant • Guard time needed • Such that • The next ONU's transmission is received by the OLT right AFTER the receipt of the last bit of the previous ONU's transmission
Dynamic Polling-based Bandwidth Scheduling • Upon receipt of transmission from ONU, OLT • Updates RTT to ONU • Updates # of bytes requested
Dynamic Polling-based Bandwidth Scheduling • The next ONU's transmission is received by the OLT right AFTER the receipt of the last bit of the previous ONU's transmission, plus some guard time
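The grant timing described above can be sketched as follows; the guard time value, line rate, and function names are assumptions for illustration, not parameters taken from IPACT or these slides.

```python
# Sketch of interleaved-polling grant timing at the OLT (all names/values assumed).
GUARD_US = 1.0  # guard time between consecutive ONU bursts (assumed)

def next_grant_send_time(prev_burst_end_at_olt: float, rtt_us: float) -> float:
    """Time at which the OLT should send the Grant so that the ONU's burst
    begins arriving right after the previous burst ends plus a guard time.
    The ONU's first bit reaches the OLT roughly one RTT after the Grant is sent."""
    return prev_burst_end_at_olt + GUARD_US - rtt_us

def burst_end_at_olt(burst_start_at_olt: float, granted_bytes: int,
                     line_rate_bps: float = 1e9) -> float:
    """When the last bit of a granted window arrives at the OLT (microseconds)."""
    return burst_start_at_olt + granted_bytes * 8 / line_rate_bps * 1e6

# Example: previous burst ends at t=500us, RTT to the next ONU is 200us
send_at = next_grant_send_time(500.0, 200.0)   # send the Grant at t=301us
```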
Maximum Transmission Window Size (W_i^max) • Fixed: based on the SLA for each ONU • Dynamic: based on network conditions • W_i^max determines • the guaranteed bandwidth available to ONU i • the maximum polling cycle • Large cycle → increased delay for all packets • Small cycle → more bandwidth wasted by guard times • The polling cycle is variable, depending on requested window sizes or network traffic conditions • Excess bandwidth is distributed to highly loaded ONUs
Maximum Transmission Window Size (W_i^max) • Fixed service • Ignores the requested window size and always grants the maximum window → TDMA • Limited service • Grants the requested # of bytes, but no more than W_i^max • Constant credit • Adds a constant credit to the requested window size • Granted window size = requested window size + x • Reduces average delay • Linear credit • Granted window size = requested window size + credit • Size of credit proportional to the requested window • A longer burst in the last cycle is likely to continue in the next cycle
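A hedged sketch of the four grant-sizing disciplines; the constant credit x and the linear proportionality factor are illustrative assumptions, and capping the credited grants at W_i^max is our own choice here.

```python
# Sketch of the grant-sizing disciplines (the credit constant and linear factor
# are illustrative assumptions, not values from the slides).
def grant_size(requested: int, w_max: int, policy: str,
               const_credit: int = 1000, linear_factor: float = 0.1) -> int:
    if policy == "fixed":            # ignore the request, always grant W_i^max (TDMA-like)
        return w_max
    if policy == "limited":          # grant what was requested, capped at W_i^max
        return min(requested, w_max)
    if policy == "constant_credit":  # requested + fixed credit x
        return min(requested + const_credit, w_max)
    if policy == "linear_credit":    # credit proportional to the request
        return min(int(requested * (1 + linear_factor)), w_max)
    raise ValueError(policy)
```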
Other Scheduling Algorithms for EPON • Differentiated services • QoS for multiple classes of services (voice, data, video, etc)
Traffic Models • Traffic models in the open literature: • static traffic model • all demands are known in advance and do not change over time • dynamic random traffic model • a demand arrives at a random time and lasts for a random amount of time • admissible set model • demands are drawn from some prescribed traffic matrices • incremental traffic model • demands arrive sequentially; once accommodated, a demand remains in the network indefinitely
Motivation • Many US DOE large-scale science applications must deliver Gbps throughput during scheduled time durations • These applications require provisioning of scheduled dedicated channels or bandwidth pipes at specific times for specified durations • Bandwidth leasing market • Customers need bandwidth only for a limited period of time • Limited-time leasing of bandwidth is possible in the future • These scheduled capacity demands are dynamic • demands last only during the specified intervals • they are not entirely random
New -- Sliding Scheduled Traffic Model • A demand (s, d, n, ℓ, r, τ) • s: source • d: destination (or a destination set) • n: capacity requirement • τ: duration, or lasting time • [ℓ, r]: time window during which the demand of duration τ must be satisfied • Example: (s, d, 1, 10:00, 13:00, 60 minutes)
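One possible way to represent a sliding scheduled demand in code; the class and field names are ours, and the feasible-start computation assumes the demand must start and finish inside [ℓ, r].

```python
from dataclasses import dataclass

@dataclass
class SlidingDemand:
    """Sliding scheduled demand (s, d, n, l, r, tau); field names are ours."""
    s: str        # source
    d: str        # destination (or destination set)
    n: int        # capacity requirement
    l: float      # window left edge
    r: float      # window right edge
    tau: float    # duration (lasting time)

    def feasible_starts(self) -> tuple[float, float]:
        """Range of start times that keep the whole demand inside [l, r]."""
        return self.l, self.r - self.tau

# The slide's example (s, d, 1, 10:00, 13:00, 60 minutes), with times in minutes:
demo = SlidingDemand("s", "d", 1, 600, 780, 60)
print(demo.feasible_starts())   # (600, 720), i.e. start anywhere from 10:00 to 12:00
```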
Bandwidth scheduling over a point-to-point WDM link under sliding scheduled traffic model
Problem Settings • A single fiber link with W wavelengths • Time is slotted, with T time slots per day: 0, 1, …, T-1 • Demands require lightpaths periodically • repeated every day • denoted by (a, b, L) – a discrete version of the sliding scheduled model • the lightpath starts in [a, b] and lasts L time slots (L < T) • the demand is satisfied within [a, b+L] • time flexibility defined as |[a, b]| - 1 • Lightpath service (w, s, L): wavelength w is used for a duration of L time slots starting from slot s each day (L < T)
Problem definition • Given a batch of lightpath demands, assign them lightpath services so that at any time there is at most one lightpath service per wavelength • W=2; T=8; Demands = (4,6,4) (3,3,2) (7,1,3) (1,3,4)
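A small sketch that checks one feasible answer to this example instance; the specific start slots and wavelengths below are our own guess at a valid assignment (circular, since services repeat daily), not necessarily the one intended on the slide.

```python
# Verify a candidate assignment for the example (W=2, T=8, demands (a,b,L)).
T, W = 8, 2
demands = [(4, 6, 4), (3, 3, 2), (7, 1, 3), (1, 3, 4)]   # (a, b, L)
assignment = [(5, 0), (3, 1), (7, 1), (1, 0)]            # (start slot, wavelength) per demand

def slots(start: int, length: int) -> set[int]:
    """Time slots occupied by a lightpath that may wrap around the day."""
    return {(start + i) % T for i in range(length)}

def allowed_starts(a: int, b: int) -> set[int]:
    """Start window [a, b], possibly wrapping past slot T-1."""
    span = (b - a) % T
    return {(a + i) % T for i in range(span + 1)}

used = [set() for _ in range(W)]
for (a, b, L), (s, w) in zip(demands, assignment):
    assert s in allowed_starts(a, b), "start outside the allowed window"
    occupied = slots(s, L)
    assert not (used[w] & occupied), "two lightpaths share a wavelength slot"
    used[w] |= occupied
print("assignment is valid")
```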
Traffic Constraint for Schedulability Conditions • Virtual Packet (VP) model • treat a demand (a, b, L) as a virtual packet that "arrives" at a and has a "transmission duration" (or work) of L • (σ, ρ): ρ is a measure of the average traffic (demand) rate, and σ is a measure of the traffic (demand) burstiness • ρ ≤ W • A(t): work of virtual packets that arrive at time t • (σ, ρ)-constrained traffic: the total work arriving in any interval [x, y] satisfies Σ_{t=x}^{y} A(t) ≤ σ + ρ·(y − x + 1)
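A sketch of checking the (σ, ρ) constraint on a slotted arrival sequence A(t); the interval form used here is our reading of the standard definition and may differ in off-by-one details from the slide's formula.

```python
# Check sum_{t=x}^{y} A(t) <= sigma + rho*(y-x+1) for every interval [x, y].
def is_sigma_rho_constrained(A: list[int], sigma: float, rho: float) -> bool:
    T = len(A)
    for x in range(T):
        total = 0
        for y in range(x, T):
            total += A[y]
            if total > sigma + rho * (y - x + 1):
                return False
    return True

# A(t): work (lasting time) of virtual packets arriving in each slot
print(is_sigma_rho_constrained([4, 0, 2, 3, 0, 0, 3, 0], sigma=3, rho=1.5))
```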
Theorem 1: Schedulability Condition when no lightpath wraps around at the end of [0, T-1] • Suppose the batch of lightpath requests is (σ, ρ)-constrained, Lmax is the maximum lasting time, and f is the flexibility threshold determined by σ, ρ, W, and Lmax • Assume A(t) = 0 for the last time slots of the day (i.e., there are no virtual packet arrivals near the end of [0, T-1]) • Then there is an assignment for the lightpath requests if their time flexibility is at least f
Theorem 2: General Schedulability Condition • Suppose the batch of lightpath requests is (σ, ρ)-constrained, and let f be the corresponding flexibility threshold • Then there is an assignment for the lightpath requests if their time flexibility is at least f
Heuristic Scheduling Algorithms • First Come First Serve (FCFS) • Lowest-valued wavelengths are used first • Demands with earlier arrival times are scheduled first; ties are broken randomly • Earliest Deadline First (EDF) • Lowest-valued wavelengths are used first • Demands with the earliest deadline, b+L, are scheduled first • b+L is the last possible time slot for the end of the lightpath • Both schemes tend to assign lightpaths close to time 0, which creates a peak bandwidth demand at time 0
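The ordering step of FCFS and EDF can be sketched as follows, treating the earliest allowed start a as the arrival time; the function names are ours.

```python
# Sketch of the ordering step in FCFS and EDF (demands are (a, b, L) tuples).
import random

def fcfs_order(demands):
    """Earlier allowed start time a first; ties broken randomly."""
    return sorted(demands, key=lambda d: (d[0], random.random()))

def edf_order(demands):
    """Earliest deadline b + L (last possible end slot) first."""
    return sorted(demands, key=lambda d: (d[1] + d[2], random.random()))
```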
Heuristic Scheduling Algorithms • Lowest wavelength, maximum duration (LWMD) • Wavelengths are filled with lightpath requests one wavelength at a time starting from wavelength 0 • When filling wavelength k, demands that have longer durations are scheduled first, ties broken randomly • (a, b, L): start times are considered in the start interval [a, b] beginning with a
Heuristic Scheduling Algorithms • Lowest wavelength, Fixed (LWFixed) • Wavelengths are filled with lightpath requests one wavelength at a time, starting from wavelength 0 • Wavelength k is filled starting from time t = 0 • Choose the longest unassigned request (a, b, L) that could start at time t and assign it starting from t • Continue to fill the wavelength from t+L • If no such request exists, continue filling the wavelength from time t+1 • May create a peak bandwidth demand at time 0
Heuristic Scheduling Algorithms • Lowest wavelength, Continuous (LWCont) • Wavelengths are filled with lightpath requests one wavelength at a time, starting from wavelength 0 • Wavelength k > 0 is filled starting at a time t that depends on how wavelength k-1 was filled • If wavelength k-1's last request was assigned time slots [x, y], then wavelength k is filled starting from y+1
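A hedged sketch of the lowest-wavelength filling idea behind LWFixed and LWCont; wrap-around at the end of the day is ignored to keep the sketch short, so this is an illustration of the idea rather than the exact published algorithms.

```python
# Sketch of LWFixed / LWCont wavelength filling (wrap-around at the end of the
# day is ignored here; the real algorithms handle it).
def fill_wavelength(demands, start_t, T):
    """Greedily fill one wavelength from slot start_t; return (assignments, last_end)."""
    assignments, t, last_end = [], start_t, start_t
    while t < T and demands:
        # longest unassigned request (a, b, L) allowed to start at slot t
        candidates = [d for d in demands if d[0] <= t <= d[1] and t + d[2] <= T]
        if candidates:
            d = max(candidates, key=lambda d: d[2])
            demands.remove(d)
            assignments.append((d, t))
            last_end = t + d[2] - 1
            t += d[2]
        else:
            t += 1
    return assignments, last_end

def lw_cont(demands, W, T):
    demands = list(demands)
    schedule, start_t = [], 0
    for w in range(W):
        assigned, last_end = fill_wavelength(demands, start_t, T)
        schedule.append(assigned)
        start_t = (last_end + 1) % T   # LWCont: the next wavelength continues here
        # (LWFixed would instead reset start_t to 0 for every wavelength)
    return schedule, demands           # any leftover demands are blocked
```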
Simulation • Call blocking rate • Ratio of # of calls blocked over # of calls • Traffic blocking rate • Ratio of the work of blocked lightpath requests over the work of all lightpath requests • Scenarios: • request duration evenly distributed in [1, 31] • expected duration of lightpaths = 16 • Earliest start time for a demand randomly distributed • Blocking scenario: W=30, T=64, 114 requests • Nonblocking scenario: W not limited, T=64, 128 requests
Blocking scenario - call blocking rate • FCFS and EDF have high blocking rates • LWCont has about the lowest blocking rate over all flexibility values • Blocking rates decrease as time flexibility increases
Blocking scenario - traffic blocking rate • FCFS and EDF have high blocking rates • LWCont has about the lowest blocking rate over all flexibility values • Blocking rates decrease as time flexibility increases
Nonblocking scenario • Minimal # of wavelengths needed so that there is no blocking • Cmin = ⌈M/T⌉ is a lower bound on the # of wavelengths needed, where M is the total work of the lightpath requests
Result Summary • Results are functions of the time flexibility f • FCFS and EDF have high blocking rates • LWCont has about the lowest blocking rate over all flexibility values • Blocking rates decrease as time flexibility increases, except for LWFixed when the time flexibility is around 32 • LWMD and LWCont require the minimal number of wavelengths • LWCont performs the best
Summary of Scheduling in P2P Link • Scheduling over a single WDM link under a flexible traffic model • Assigning periodic lightpath services which allow some time flexibility • Schedulability conditions for a set of demands • Heuristic scheduling algorithms
Bandwidth provisioning in WDM networks • Look-ahead scheduling of a set of demands with sub-wavelength capacity under sliding scheduled traffic model
Space-Time Traffic Grooming Problem • Given a set of sliding scheduled traffic demands M, • properly place demands within their time windows, • route and groom (by finding a route and assigning a proper wavelength to each demand in M) such that • non-blocking case • the network has enough resources to accommodate all the demands in M and meet their specifications (i.e., capacity and schedule requirements) • the goal is to minimize the total network resources used
Space-Time Traffic Grooming Problem • blocking case • the network does not have enough resources to accommodate all the demands as specified • the goals are • to minimize the number of demands to be rearranged (i.e., to minimize the subset of demands that may have their starting times changed in order to accommodate all the demands in the set M) • to minimize the total network resources used • demand priority can also be considered if necessary
Time Conflicts & Resource Conflicts • Temporal conflicts: time conflicts of a set of scheduled demands M • Demands may overlap in time • Demands that are disjoint in time allow resource reuse • Spatial conflicts: resource conflicts • Routes of demands may overlap • If not enough resources are available, conflicts result • Some demands may not be schedulable because of a lack of resources
Interval Graph Modeling of Time Conflict Reduction (figure) • Weak edge vs. strong edge • Tight node: 2 x 8 > 10 • Loose node: 2 x 5 < 25
Lemma: no strong edge connects two loose nodes. • Theorem: let v be a loose node and A(v) be the set of nodes connected to v by strong edges; then all nodes in A(v) are tight nodes and are pairwise connected by strong edges.
Time Conflict Reduction Algorithm • Use an interval graph to model time conflicts among scheduled demands • Identify time conflicts that can be avoided • Remove time conflicts in a greedy manner to obtain proper placement of demand intervals within their allowed time windows
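A simple illustration of the greedy idea: place each demand at the position inside its window that overlaps least with demands already placed. This is our own sketch of the approach, not the paper's exact conflict-reduction procedure.

```python
# Greedy time-conflict reduction sketch: demands are (l, r, tau) with window [l, r].
def overlap(s1, e1, s2, e2):
    return max(0, min(e1, e2) - max(s1, s2))

def reduce_conflicts(demands):
    """demands: list of (l, r, tau); returns the chosen start time for each demand."""
    placed = []
    chosen = [None] * len(demands)
    # place longer demands first: they have the least placement freedom
    for i in sorted(range(len(demands)), key=lambda i: -demands[i][2]):
        l, r, tau = demands[i]
        best_s, best_cost = l, float("inf")
        for s in range(l, r - tau + 1):   # candidate start slots within the window
            cost = sum(overlap(s, s + tau, ps, pe) for ps, pe in placed)
            if cost < best_cost:
                best_s, best_cost = s, cost
        placed.append((best_s, best_s + tau))
        chosen[i] = best_s
    return chosen

print(reduce_conflicts([(0, 10, 4), (2, 12, 5), (0, 6, 3)]))
```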
Performance of Time Conflict Reduction Algorithm • Demand length 10-90% of time window size • Weak time correlation among demands
Performance of Time Conflict Reduction Algorithm • Demand length 10-90% of time window size • Medium time correlation among demands
Performance of Time Conflict Reduction Algorithm • Demand length 10-90% of time window size • Strong time correlation among demands
Performance of Time Conflict Reduction Algorithm • Demand length 10-100% of time window size • Different time correlations among demands
Time Window Based Routing and Grooming Algorithm • Divide a set of scheduled demands into subsets called time windows • Demands in a time window have pairwise time conflicts • Schedule demands in a time window in order of decreasing resource requirements • using a modified Dijkstra's algorithm on a wavelength-layered graph • If a demand is blocked due to unavailability of resources, rearrange the schedule of the demand • Schedule the demand earlier or later in time, when the required resources are available
Demand Set Division (figure): demands r1-r7 are partitioned along the time axis into Time Window 1, Time Window 2, and Time Window 3
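A sketch of one way to divide placed demands into time windows whose members overlap pairwise, using a simple sweep over start times; this is our own illustration of the demand-set-division step, not necessarily the exact published procedure.

```python
# Divide placed demands (start, end) into time windows with pairwise time conflicts.
def divide_into_time_windows(demands):
    """demands: list of (start, end); returns a list of time windows (lists of demands)."""
    windows, current, min_end = [], [], float("inf")
    for d in sorted(demands, key=lambda d: d[0]):
        if current and d[0] >= min_end:      # d misses some demand in the current window
            windows.append(current)
            current, min_end = [], float("inf")
        current.append(d)
        min_end = min(min_end, d[1])
    if current:
        windows.append(current)
    return windows

# e.g. seven r1..r7-style intervals split into three time windows
print(divide_into_time_windows([(0, 4), (1, 5), (3, 8), (6, 9), (7, 12), (10, 14), (11, 13)]))
```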