This study explores the scheduling of elastic flows in IP networks, analyzing the mean delay at flow level. It compares scheduling disciplines such as FB, MLPS, and PS, and presents new mean-delay comparison results among MLPS disciplines.
Mean Delay Analysis of Multi Level Processor Sharing Disciplines
Samuli Aalto (TKK), Urtzi Ayesta (before CWI, now LAAS-CNRS)
Teletraffic application: scheduling elastic flows
• Consider a bottleneck link in an IP network
  • loaded with elastic flows, such as file transfers using TCP
  • if RTTs are of the same magnitude, then bandwidth sharing among the flows is approximately fair
• Intuition says that
  • favouring short flows reduces the total number of flows, and thus also the mean delay at flow level
• How to schedule flows, and how to analyse?
  • Guo and Matta (2002), Feng and Misra (2003), Avrachenkov et al. (2004), Aalto et al. (2004, 2005)
Queueing model
• Assume that
  • flows arrive according to a Poisson process with rate λ
  • the flow size distribution is of type DHR (decreasing hazard rate), such as hyperexponential or Pareto
• So we have an M/G/1 queue at flow level
  • customers in this queue are flows (not packets)
  • service time = the total number of packets to be sent
  • attained service = the number of packets sent
  • remaining service = the number of packets left
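The DHR property can be checked directly from the hazard rate; as a worked example, here is the standard calculation for a Pareto distribution (the parametrisation below is just an illustrative choice):

```latex
\[
  h(x) \;=\; \frac{f(x)}{1 - F(x)} , \qquad
  \text{DHR} \;\Longleftrightarrow\; h \text{ is decreasing.}
\]
\[
  \text{Pareto: } \; 1 - F(x) \;=\; \Bigl(\tfrac{k}{k+x}\Bigr)^{\alpha}
  \;\Longrightarrow\;
  h(x) \;=\; \frac{\alpha}{k + x} ,
\]
which is decreasing in $x$, so this Pareto distribution is DHR.
```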
Scheduling disciplines at flow level
• FB = Foreground-Background = LAS = Least Attained Service
  • choose a packet from the flow with the least packets sent
• MLPS = Multi Level Processor Sharing
  • choose a packet from a flow with fewer packets sent than a given threshold
• PS = Processor Sharing
  • without any specific scheduling policy at packet level, the elastic flows are assumed to divide the bottleneck link bandwidth evenly
• Reference model: M/G/1/PS
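As a concrete reading of these rules, here is a minimal sketch (not from the talk) of how the flow-level capacity shares could be computed for PS and FB from the attained services alone; the function name and the dict-based interface are illustrative assumptions.

```python
def service_shares(attained, discipline):
    """Split unit link capacity among active flows, given attained services.

    attained: dict flow_id -> service received so far
    discipline: "PS" (equal shares) or "FB" (all capacity to the flows with
    the least attained service, shared equally among ties).
    """
    if not attained:
        return {}
    if discipline == "PS":
        share = 1.0 / len(attained)
        return {i: share for i in attained}
    if discipline == "FB":
        least = min(attained.values())
        favored = [i for i, a in attained.items() if a == least]
        return {i: (1.0 / len(favored) if i in favored else 0.0)
                for i in attained}
    raise ValueError(f"unknown discipline: {discipline}")

# Example: flows 1 and 3 have attained the least service, so under FB
# they share the whole capacity while flow 2 gets nothing.
print(service_shares({1: 0.0, 2: 3.2, 3: 0.0}, "FB"))
print(service_shares({1: 0.0, 2: 3.2, 3: 0.0}, "PS"))
```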
Optimality results for M/G/1
• Schrage (1968)
  • if the remaining service times are known, then
  • SRPT is optimal: it minimizes the mean delay
• Yashkov (1987)
  • if only the attained service times are known, then
  • DHR implies that FB (which gives full priority to the youngest job) is optimal: it minimizes the mean delay
• Remark: in this study we consider work-conserving (WC) and non-anticipating (NA) service disciplines π, such as FB, MLPS, and PS (but not SRPT), for which only the attained service times are known
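Written out compactly (an informal restatement of the two optimality results, with $T^{\pi}$ denoting the delay under discipline $\pi$):

```latex
\[
  \textbf{Schrage:}\quad
  E\bigl[T^{\mathrm{SRPT}}\bigr] \;\le\; E\bigl[T^{\pi}\bigr]
  \quad\text{for every scheduling discipline } \pi ,
\]
\[
  \textbf{Yashkov:}\quad
  S \text{ of type DHR}
  \;\Longrightarrow\;
  E\bigl[T^{\mathrm{FB}}\bigr] \;\le\; E\bigl[T^{\pi}\bigr]
  \quad\text{for every WC and NA discipline } \pi .
\]
```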
MLPS disciplines
[Figure: example of the FCFS+PS(a) discipline with threshold a]
• Definition: MLPS discipline, Kleinrock, vol. 2 (1976)
  • based on the attained service times
  • N+1 levels defined by N thresholds a1 < … < aN
  • between levels, a strict priority is applied
  • within a level, an internal discipline (FB, PS, or FCFS) is applied
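To make the definition concrete, below is a small, hedged simulation sketch of the flow-level M/G/1 model under PS, FB, and a two-level PS+PS(a) discipline. It is not taken from the paper: the function names, the time-discretised dynamics, and the parameter values (λ = 0.5, Pareto shape 2.5 and scale 0.6, threshold a = 1.0) are illustrative assumptions, internal disciplines are restricted to PS and FB (FCFS levels are omitted for brevity), and longer horizons are needed for reliable comparisons.

```python
import random

def mlps_shares(attained, thresholds, internals):
    """MLPS share allocation at flow level.

    Levels are defined by thresholds a1 < ... < aN; strict priority is given
    to the lowest non-empty level (least attained service), and within that
    level an internal discipline is applied ("PS" or "FB").
    """
    if not attained:
        return {}
    bounds = list(thresholds) + [float("inf")]
    for level, upper in enumerate(bounds):
        lower = 0.0 if level == 0 else bounds[level - 1]
        in_level = [i for i, a in attained.items() if lower <= a < upper]
        if in_level:
            break
    if internals[level] == "FB":          # FB inside the level: least attained first
        least = min(attained[i] for i in in_level)
        in_level = [i for i in in_level if attained[i] == least]
    share = 1.0 / len(in_level)
    return {i: (share if i in in_level else 0.0) for i in attained}

def mean_delay(thresholds, internals, lam=0.5, alpha=2.5, k=0.6,
               horizon=2e4, dt=0.02):
    """Time-stepped simulation of the flow-level M/G/1 queue with Pareto sizes.

    Re-seeding gives every discipline the same arrival times and flow sizes,
    so the disciplines are compared on a common sample path.
    """
    random.seed(1)
    t, next_arrival = 0.0, random.expovariate(lam)
    flows = {}                      # flow id -> [attained, size, arrival time]
    delays, fid = [], 0
    while t < horizon:
        while next_arrival <= t:    # Poisson arrivals, Pareto(k, alpha) sizes
            flows[fid] = [0.0, k * random.paretovariate(alpha), next_arrival]
            fid += 1
            next_arrival += random.expovariate(lam)
        shares = mlps_shares({i: f[0] for i, f in flows.items()},
                             thresholds, internals)
        for i in list(flows):       # serve for one time step of length dt
            flows[i][0] += shares[i] * dt
            if flows[i][0] >= flows[i][1]:
                delays.append(t - flows[i][2])
                del flows[i]
        t += dt
    return sum(delays) / len(delays)

# PS, FB and PS+PS(a) expressed as special cases of MLPS:
print("PS       ", mean_delay(thresholds=[], internals=["PS"]))
print("FB       ", mean_delay(thresholds=[], internals=["FB"]))
print("PS+PS(a) ", mean_delay(thresholds=[1.0], internals=["PS", "PS"]))
```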
Earlier results: comparison to PS
• Aalto et al. (2004):
  • two levels, with FB and PS allowed as internal disciplines: such MLPS disciplines are better than PS with respect to the mean delay
• Aalto et al. (2005):
  • any number of levels, with FB and PS allowed as internal disciplines: again, such MLPS disciplines are better than PS
Idea of the proof
• Definition: Ux = unfinished truncated work with threshold x
  • the sum of the remaining truncated service times min{S, x} of those customers who have attained service less than x
• Kleinrock (1976), eq. (4.60), relates E[Ux] to the conditional mean delays E[T(x)], and hence to the mean delay E[T]; the chain is sketched below
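A sketch of how these pieces fit together, reconstructed here only in outline; the exact form of the identity should be checked against Kleinrock (1976), eq. (4.60), and Aalto et al. (2004, 2005).

```latex
\[
  E\bigl[U_x^{\pi}\bigr]
  \;=\;
  \lambda \int_0^x \bigl(1 - F(t)\bigr)\, E\bigl[T^{\pi}(t)\bigr]\, dt
  \qquad\text{(Kleinrock, eq. (4.60), for WC and NA } \pi\text{)}
\]
\[
  \Longrightarrow\quad
  E\bigl[T^{\pi}(x)\bigr]
  = \frac{1}{\lambda\,\bigl(1 - F(x)\bigr)}\,\frac{d}{dx}\,E\bigl[U_x^{\pi}\bigr] ,
  \qquad
  E\bigl[T^{\pi}\bigr]
  = \int_0^{\infty} E\bigl[T^{\pi}(x)\bigr]\, dF(x)
  = \frac{1}{\lambda}\int_0^{\infty} h(x)\, dE\bigl[U_x^{\pi}\bigr] ,
\]
so that, after integration by parts, a decreasing hazard rate $h$ together with
$E[U_x^{\pi}] \le E[U_x^{\pi'}]$ for all $x$ gives $E[T^{\pi}] \le E[T^{\pi'}]$.
```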
[Figure: mean unfinished truncated work E[Ux] for a bounded Pareto file size distribution]
The question we wanted to answer
• Given two MLPS disciplines, which one is better in the mean delay sense?
Locality result
[Figure: FCFS+PS(a) and PS+PS(a) with threshold a and truncation level x]
• Proposition 1:
  • for MLPS disciplines, the unfinished truncated work Ux within a level depends on the internal discipline of that level, but not on the internal disciplines of the other levels
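In symbols, with the convention $a_0 = 0$ and $a_{N+1} = \infty$ (an illustrative formalisation of the proposition):

```latex
\[
  a_{i-1} < x \le a_i
  \;\Longrightarrow\;
  E\bigl[U_x^{\pi}\bigr] \text{ depends on the MLPS discipline } \pi
  \text{ only through the internal discipline of level } i .
\]
```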
New results 1: comparison among MLPS disciplines
• Theorem 1:
  • any number of levels, with all internal disciplines allowed
  • MLPS derived from MLPS' by changing an internal discipline from PS to FB (or from FCFS to PS) is better than MLPS' with respect to the mean delay
• Proof based on the locality result and sample path arguments
  • prove that, for all x and t, Ux(t) under MLPS is at most Ux(t) under MLPS'
  • tedious but straightforward
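The way the sample-path bound feeds into the mean-delay comparison can be summarised as follows (a sketch combining Theorem 1 with the proof idea above):

```latex
\[
  U_x^{\mathrm{MLPS}}(t) \le U_x^{\mathrm{MLPS}'}(t) \;\;\forall\, x, t
  \;\Longrightarrow\;
  E\bigl[U_x^{\mathrm{MLPS}}\bigr] \le E\bigl[U_x^{\mathrm{MLPS}'}\bigr] \;\;\forall\, x
  \;\overset{\text{DHR}}{\Longrightarrow}\;
  E\bigl[T^{\mathrm{MLPS}}\bigr] \le E\bigl[T^{\mathrm{MLPS}'}\bigr] .
\]
```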
New results 2: comparison among MLPS disciplines
• Theorem 2:
  • any number of levels, with all internal disciplines allowed
  • MLPS derived from MLPS' by splitting any FCFS level (and copying the internal discipline) is better than MLPS' with respect to the mean delay
• Proof based on the locality result and mean value arguments
  • prove that, for all x, E[Ux] under MLPS is at most E[Ux] under MLPS'
  • an easy exercise since, in this case, we have explicit formulas for the mean conditional delay
New results 3: comparison among MLPS disciplines
• Theorem 3:
  • any number of levels, with all internal disciplines allowed
  • the internal discipline of the lowest level is PS
  • MLPS derived from MLPS' by splitting the lowest level (and copying the internal discipline) is better than MLPS' with respect to the mean delay
• Proof based on the locality result and mean value arguments
  • prove that, for all x, E[Ux] under MLPS is at most E[Ux] under MLPS'
  • in this case, we don't have explicit formulas for the mean conditional delay
  • however, by truncating the service times, the comparison simplifies to a comparison between PS+PS and PS
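Spelled out as a rough sketch of the reduction (under the reading that only the split lowest level can differ between the two disciplines; the precise argument is in the paper):

```latex
\[
  \text{To show: }\;
  E\bigl[U_x^{\mathrm{MLPS}}\bigr] \;\le\; E\bigl[U_x^{\mathrm{MLPS}'}\bigr]
  \quad \text{for all } x .
\]
By the locality result, the two sides can differ only for $x$ in the range of the
split lowest level; there, after truncating the service times, the comparison
becomes the known one between $\mathrm{PS{+}PS}$ and $\mathrm{PS}$.
```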
Note related to Theorem 3
• Splitting any higher PS level is still an open problem (contrary to what we thought in an earlier phase)!
• Conjecture 1:
  • let π = PS+PS(a) and π' = PS+PS(a') with a ≤ a'
  • for any x > a', the conjectured inequality between E[Ux] under π and under π' holds
• This would imply the result of Theorem 3 for any PS level
Recent results
• For IMRL (Increasing Mean Residual Lifetime) service times:
  • FB is not necessarily optimal with respect to the mean delay
  • there is a counterexample where FCFS+FB and PS+FB are better than FB
  • in fact, there is a class of service time distributions for which FCFS+FB is optimal
• If only FB and PS are allowed as internal disciplines, then
  • any such MLPS discipline is better than PS
• Note: DHR ⊂ IMRL