Analysis of SRPT Scheduling: Investigating Unfairness
Nikhil Bansal (joint work with Mor Harchol-Balter)
Motivation [Diagram: Client1, Client2, Client3 sending requests to a Server] Aim: a "good" scheduling policy • Low response times • Fair
Time Sharing (PS) Server shared equally among all jobs in the system: • Low response times • Fair • Does not require knowledge of job sizes Can we do better?
Shortest Remaining Processing Time (SRPT) Optimal for minimizing mean response time. Objections: • Requires knowledge of job sizes • Are the improvements significant? • Starvation of large jobs (the biggest fear)
Questions • Do small jobs do better while big jobs do worse? • How do the means compare? • The elephant-mice (heavy-tailed) property and its implications
M/G/1 Queue Framework [Diagram: arrivals enter a queue served by a single Server] • Poisson arrival process with rate λ • Job sizes S i.i.d. with general distribution F • Load ρ = λ·E[S]
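This setup can be sanity-checked with a tiny simulation: Poisson arrivals mean exponential inter-arrival times, and the work brought in per unit time converges to λ·E[S]. The choice of λ = 1 and exponential job sizes with mean 0.9 below is illustrative only.

```python
import random

# Tiny illustration of the M/G/1 framework: Poisson arrivals at rate lam
# (exponential inter-arrival times), i.i.d. job sizes S, offered load
# rho = lam * E[S]. Exponential sizes of mean 0.9 are an arbitrary choice.
random.seed(0)
lam, mean_size = 1.0, 0.9
n = 200_000

total_time = sum(random.expovariate(lam) for _ in range(n))              # span of arrivals
total_work = sum(random.expovariate(1.0 / mean_size) for _ in range(n))  # work brought in

rho_empirical = total_work / total_time
print(round(rho_empirical, 2))  # close to lam * mean_size = 0.9
```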
Queueing Formula for PS E[T(x)]: expected response time for a job of size x. [Kleinrock 71]: E[T(x)]_PS = x / (1 − ρ), so the slowdown E[T(x)]/x = 1/(1 − ρ) is identical for all job sizes!
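Kleinrock's PS result, E[T(x)] = x / (1 − ρ), is a one-liner to check; the load ρ = 0.5 below is an arbitrary illustrative value.

```python
# Minimal sketch of the PS formula E[T(x)] = x / (1 - rho).
def ps_response_time(x: float, rho: float) -> float:
    """Expected response time of a size-x job under PS in an M/G/1 queue."""
    return x / (1.0 - rho)

rho = 0.5
# The slowdown E[T(x)] / x = 1 / (1 - rho) is the same for every job size.
slowdowns = {x: ps_response_time(x, rho) / x for x in (0.1, 1.0, 10.0)}
assert len(set(slowdowns.values())) == 1
print(ps_response_time(1.0, rho))  # 2.0
```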
M/G/1 SRPT

E[T(x)]_SRPT = λ(∫₀ˣ t² f(t) dt + x²(1 − F(x))) / (2(1 − ρ(x))²) + ∫₀ˣ dt / (1 − ρ(t))

First term: waiting time E[W(x)]; second term: residence time E[R(x)]; here ρ(x) = λ∫₀ˣ t f(t) dt is the load due to jobs of size ≤ x. • Depends only on load up to x • Depends only on variance up to x • A job gains priority after it begins execution
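The SRPT formula transcribes directly into code. This is a sketch using plain trapezoidal integration; the exponential(1) density f and CDF F plugged in at the end are an illustrative choice, not part of the formula.

```python
import math

# Direct transcription of the M/G/1 SRPT response-time formula.
def trapz(g, a, b, n=2000):
    """Trapezoidal rule; adequate for these smooth integrands."""
    if b <= a:
        return 0.0
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + i * h) for i in range(1, n)))

def srpt_response_time(x, lam, f, F):
    rho = lambda t: lam * trapz(lambda s: s * f(s), 0.0, t, 400)  # load up to t
    # E[W(x)], waiting time: depends on load and variance of jobs up to size x
    waiting = lam * (trapz(lambda t: t * t * f(t), 0.0, x)
                     + x * x * (1.0 - F(x))) / (2.0 * (1.0 - rho(x)) ** 2)
    # E[R(x)], residence time: the job gains priority as it executes
    residence = trapz(lambda t: 1.0 / (1.0 - rho(t)), 0.0, x)
    return waiting + residence

f = lambda t: math.exp(-t)        # exponential(1) density
F = lambda t: 1.0 - math.exp(-t)  # exponential(1) CDF
print(srpt_response_time(1.0, 0.5, f, F))  # ~1.23, versus 2.0 under PS
```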
All-Can-Win under SRPT
Thm: Every job prefers SRPT to PS when load ≤ 1/2, for all job size distributions.
Proof sketch: We know E[T(x)]_PS = x/(1 − ρ). Key observation: ρ(x) ≤ ρ ≤ 1/2, and the resulting bound on E[T(x)]_SRPT holds for all x whenever load ≤ 0.5.
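The theorem can be sanity-checked numerically. The sketch below assumes exponential(1) job sizes at load ρ = 0.5 (an illustrative choice; the theorem covers every distribution with ρ ≤ 1/2), using closed forms for the exponential integrals.

```python
import math

lam = 0.5  # arrival rate; E[S] = 1, so rho = lam = 0.5
rho = lam

def rho_x(x):
    # load due to jobs of size <= x: lam * integral_0^x t e^{-t} dt, closed form
    return lam * (1.0 - (1.0 + x) * math.exp(-x))

def srpt_T(x, n=4000):
    # waiting time: lam * (int_0^x t^2 f + x^2 (1 - F)) / (2 (1 - rho(x))^2),
    # with the exponential integrals simplified to 2 - (2x + 2) e^{-x}
    wait = lam * (2.0 - (2.0 * x + 2.0) * math.exp(-x)) / (2.0 * (1.0 - rho_x(x)) ** 2)
    # residence time: trapezoidal integral of 1 / (1 - rho(t)) over [0, x]
    h = x / n
    res = h * (0.5 / (1.0 - rho_x(0.0)) + 0.5 / (1.0 - rho_x(x))
               + sum(1.0 / (1.0 - rho_x(i * h)) for i in range(1, n)))
    return wait + res

def ps_T(x):
    return x / (1.0 - rho)

for x in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0):
    assert srpt_T(x) <= ps_T(x)  # every job prefers SRPT at this load
print("all sampled job sizes prefer SRPT at load 0.5")
```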
What if load > 0.5?
The result still holds for a job of size x if ρ(x) ≤ 1/2, irrespective of the total load ρ.
The Heavy-Tailed Property (Elephant-Mice): 1% of the biggest jobs make up at least 50% of the load.
For a distribution with the HT property, >99% of jobs do better under SRPT. In fact, significantly better: under SRPT their slowdown is bounded by 4, while under PS it is 1/(1 − ρ), arbitrarily high as ρ → 1.
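The elephant-mice property is easy to exhibit with a Bounded Pareto job-size distribution, a standard model of heavy-tailed workloads; the parameters k = 1, p = 10⁶, α = 1.1 below are illustrative, not taken from the paper.

```python
import math

# Bounded Pareto(k, p, alpha): density proportional to x^{-alpha-1} on [k, p].
k, p, alpha = 1.0, 1e6, 1.1

def F(x):
    """CDF of the Bounded Pareto on [k, p]."""
    return (1.0 - (k / x) ** alpha) / (1.0 - (k / p) ** alpha)

def partial_load(x):
    """E[S * 1{S <= x}]: load contributed by jobs of size <= x (closed form)."""
    c = alpha * k ** alpha / (1.0 - (k / p) ** alpha)
    return c * (x ** (1.0 - alpha) - k ** (1.0 - alpha)) / (1.0 - alpha)

# size at the 99th job-size percentile, via geometric bisection (F is monotone)
lo, hi = k, p
for _ in range(200):
    mid = math.sqrt(lo * hi)
    lo, hi = (mid, hi) if F(mid) < 0.99 else (lo, mid)
x99 = lo

frac = 1.0 - partial_load(x99) / partial_load(p)
print(f"largest 1% of jobs carry {frac:.0%} of the load")  # over half
```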
The Very Largest Jobs • If load ≤ 0.5, all jobs favor SRPT. • At any load, >99% of jobs favor SRPT, given the HT property; moreover, the improvements are significant. What about the remaining 1% largest jobs?
1. Bounding-the-damage theorem [fill in…] 2. As … Implication: the mean slowdown of the largest 1% of jobs under SRPT is the same as under PS.
[Insert plots here: (1) Bounded Pareto (α = 1.1) at load 0.9, showing that all jobs do better; (2) exponential at load 0.9, showing that some jobs do worse.]
Other Scheduling Policies • Non-preemptive: First Come First Serve (FCFS), Random, Last Come First Serve (LCFS), Shortest Job First (SJF): very bad mean performance for HT workloads • Preemptive: Foreground-Background (FB), Preemptive LCFS: trivially worse or the same as PS
Overload [To add: why SRPT remains good under overload; we address this in the paper.]
Actual Implementation [To add: a plot or a couple of lines.]
Conclusions • Significant improvements in mean performance. • Big jobs prefer SRPT under low to moderate loads. • Big jobs prefer SRPT even under high loads for heavy-tailed distributions.
Under heavy-tailed distributions [Plot: load = 0.9, heavy-tailed distribution with α = 1.1; the very largest job is marked.]
Under light-tailed distributions [Plot: load = 0.9, exponential distribution.]