CS523 Operating Systems
Fred Kuhns
Applied Research Laboratory
Computer Science
Washington University
Priority-Driven Schedulers
• Assumptions, some of which will be relaxed at the end of this section:
  • Independent tasks
  • No sporadic or aperiodic tasks
  • Jobs are ready when released
  • Jobs may be preempted at any time
  • Jobs never suspend themselves
  • Scheduling decisions are made at job release and completion
  • Interrelease times may vary: use the minimum as the period
  • Uniprocessor environment
  • System overhead (including context-switch time) is negligible compared to job execution times and periods
  • Unlimited priority levels
CS523 – Operating Systems
What is a priority-driven scheduler?
• Event-driven, work-conserving schedulers
  • A scheduling decision is made when an event occurs: for example, a job is released or completes.
• Resources are allocated to the highest-priority job, and left idle only if no jobs are waiting (ready).
• Jobs are assigned explicit or implicit priorities
  • Consequently, FIFO and Round Robin may be considered priority-driven: priorities are defined by the queue ordering and may vary as a job executes.
• An algorithm is defined by the priorities used and a set of rules (preemption, priority changes, etc.).
Concepts
• Algorithms that consider a job's urgency generally perform better than those that do not.
  • As a job's deadline approaches, its urgency should increase. Contrast EDF with FIFO: FIFO's implicit priority is a job's position in the queue.
• Static versus dynamic assignment of workloads
  • For now we consider only the case where a set of tasks is statically assigned to a processor.
• Fixed versus dynamic priority assignment
  • Fixed priority: RM and DM
  • Dynamic task, fixed job: EDF
  • Dynamic job: LST (Least Slack Time First)
Concepts: Schedulable Utilization
• "A scheduling algorithm can feasibly schedule any set of periodic tasks on a processor if the total utilization of the tasks is equal to or less than the schedulable utilization of the algorithm."
• Schedulable utilization is necessarily ≤ 1.
• While dynamic-priority algorithms have better average performance, they are less predictable during overload; in fact, their worst-case behavior is more difficult to predict.
  • Consider how RM, with fixed priorities, performs during overload: the highest-priority tasks will generally complete on time while the lowest will not. Contrast this with EDF, which uses dynamic priorities.
Dynamic Algorithms
• Earliest Deadline First (EDF)
  • Assigns priorities to jobs according to absolute deadline: the sooner the deadline, the higher the priority.
  • Priority is assigned when the job is released; for example, an arriving job is placed in the ready queue in EDF order.
• Least Slack Time First (LST)
  • The job with the least slack time is scheduled next. Dynamic job priority.
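The EDF ready queue described above can be sketched with a binary heap keyed on absolute deadline. This is an illustration, not from the slides; the class and method names are assumed.

```python
import heapq

# Minimal sketch of an EDF ready queue (names assumed, not from the
# slides). The heap is ordered by absolute deadline, so the head of the
# queue is always the job with the earliest deadline, i.e. the highest
# EDF priority. The priority is fixed at release time.

class EDFQueue:
    def __init__(self):
        self._heap = []

    def release(self, job_name, absolute_deadline):
        # A job's priority is its absolute deadline, set at release.
        heapq.heappush(self._heap, (absolute_deadline, job_name))

    def dispatch(self):
        # Pop the job with the earliest absolute deadline.
        deadline, job = heapq.heappop(self._heap)
        return job

q = EDFQueue()
q.release("J1", absolute_deadline=10)
q.release("J2", absolute_deadline=4)
q.release("J3", absolute_deadline=7)
print(q.dispatch())  # J2: earliest deadline runs first
```

LST would differ in that slack (deadline minus remaining work minus current time) changes as jobs execute, so the ordering would have to be recomputed at each scheduling decision rather than fixed at release.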
Schedulable Utilization of EDF
• A system of independent, preemptable tasks with relative deadlines equal to their respective periods can be feasibly scheduled on one processor if and only if the total utilization is equal to or less than 1.
• How would you prove this?
• What does this tell us about a system scheduled with the EDF algorithm?
• What if relative deadlines are not equal to the periods?
  • In particular, if deadlines are less than the periods, use the notion of task density: δk = ek / min(Dk, pk).
Acceptance Test for EDF
• Schedulability test for the EDF algorithm: the total density Δ = Σk ek / min(Dk, pk) must satisfy Δ ≤ 1.
• If Dk ≥ pk for all k, then Δ = U and the test is both necessary and sufficient.
• Otherwise, if Dk < pk for some k, the test is only a sufficient condition.
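The density test above can be sketched in a few lines. The function name and tuple layout are assumed for illustration.

```python
# Hedged sketch of the EDF density acceptance test (names assumed).
# Each task is a tuple (e, p, D): execution time, period, relative
# deadline. Total density Delta = sum of e_k / min(D_k, p_k).
# Delta <= 1 is sufficient; when every D_k >= p_k it reduces to the
# utilization test U <= 1, which is also necessary.

def edf_density_test(tasks):
    delta = sum(e / min(D, p) for (e, p, D) in tasks)
    return delta <= 1

# D == p for every task: density equals utilization (U ~ 0.71 <= 1).
print(edf_density_test([(1, 4, 4), (2, 6, 6), (1, 8, 8)]))  # True
# A deadline shorter than the period raises that task's density
# (Delta = 2/3 + 1/3 + 1/4 = 1.25 > 1), so the test fails.
print(edf_density_test([(2, 4, 3), (2, 6, 6), (2, 8, 8)]))  # False
```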
Fixed-Priority Algorithms
• Rate Monotonic (RM)
  • Assigns priorities based on task period: the smaller the period, the higher the priority.
  • If pi < pk (rate of Ti > rate of Tk), then Ti has higher priority than Tk.
• Deadline Monotonic (DM)
  • Assigns priorities based on the task's relative deadline: the smaller the relative deadline, the higher the priority.
  • If Di < Dk, then Ti has higher priority than Tk.
Fixed-Priority Systems
• No fixed-priority algorithm is optimal.
• Special cases can achieve a utilization of 1:
  • A set of tasks is simply periodic if for every pair of tasks Ti and Tk with pi < pk, pk = m·pi for some integer m.
  • For the special case of simply periodic, independent, preemptable tasks with Di ≥ pi, a set of tasks is schedulable on a uniprocessor system iff U ≤ 1.
• Among fixed-priority algorithms, DM is optimal:
  • If a set of tasks can be scheduled using fixed priorities, then DM will produce a feasible schedule.
  • If Dk = x·pk for all k and some constant x, then RM is identical to DM.
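The "simply periodic" condition above is easy to check mechanically. A possible sketch (function name assumed, integer periods assumed):

```python
# Illustrative check of the "simply periodic" special case (name
# assumed): every longer period must be an integer multiple of every
# shorter one. For such task sets, fixed priorities achieve the full
# utilization bound U <= 1.

def simply_periodic(periods):
    ps = sorted(periods)
    # For each pair p_i < p_k, require p_k to be a multiple of p_i.
    return all(pk % pi == 0 for i, pi in enumerate(ps) for pk in ps[i + 1:])

print(simply_periodic([2, 4, 8, 16]))  # True: each period divides the next
print(simply_periodic([2, 4, 6]))      # False: 6 is not a multiple of 4
```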
Schedulability of Fixed-Priority Algorithms
• Sufficient utilization bounds exist for RM; let Dk = pk.
• Time-demand analysis (Dk ≤ pk)
  • Assumes worst-case task interactions: evaluates computational requirements at task critical instants.
• Simulation can be used
  • It is sufficient to simulate over the largest period when all tasks are in phase.
Critical Instant
• A central concept for fixed-priority systems.
• A critical instant of a task Ti is the instant in time at which a job of Ti has its maximum response time.
• If a task can be scheduled at its critical instant, then it will not miss a deadline.
• Occurs when a job of Ti is released concurrently with jobs of all higher-priority tasks:
  • T = {T1, T2, ..., Tn}, ordered by priority (i < j means Ti has higher priority than Tj)
  • Ti = {T1, T2, ..., Ti}
  • ri,c = rk,l for k = 1, 2, ..., i-1 and some c and l
Critical Instant
• A lower-priority task is preempted by higher-priority tasks, delaying its completion.
• Interference is maximized when all phases are equal.
[Figure: advancing the release time increases interference; total demand ei + 3ej]
Critical Instant
• Advancing the higher-priority task's phase.
• Critical time zone: the time interval between a critical instant and the job's completion.
[Figure: advancing the release time increases interference; total demand ei + 4ej]
Time Demand Analysis
• Computes the total demand for processor time by a job released at a critical instant of the task, together with all higher-priority tasks:
  wi(t) = ei + Σk=1..i-1 ⌈t/pk⌉·ek, for 0 < t ≤ pi
• If this worst-case response time is less than or equal to the job's deadline (wi(t) ≤ t for some t ≤ pi), then the task is schedulable.
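The recurrence above can be solved by fixed-point iteration: start with w = ei and repeatedly recompute the demand until it stops growing or overshoots the deadline. A sketch, with names assumed and Di = pi:

```python
import math

# Sketch of time-demand analysis for task T_i (names assumed).
# tasks: list of (e, p) tuples in decreasing priority order; i is the
# 0-based index of the task under test. Iterates
#   w <- e_i + sum_{k < i} ceil(w / p_k) * e_k
# until it converges (worst-case response time) or exceeds the
# deadline, taken here as D_i = p_i.

def worst_case_response(tasks, i):
    e_i, p_i = tasks[i]
    w = e_i
    while True:
        demand = e_i + sum(math.ceil(w / p_k) * e_k for e_k, p_k in tasks[:i])
        if demand == w:
            return w        # converged: worst-case response time
        if demand > p_i:
            return None     # demand never fits before the deadline
        w = demand

# Example task set: T1 = (1, 4), T2 = (2, 6), T3 = (3, 13).
tasks = [(1, 4), (2, 6), (3, 13)]
print(worst_case_response(tasks, 2))  # 10 <= 13, so T3 is schedulable
```

The iteration starts at w = 3 and climbs 3 → 6 → 7 → 9 → 10, where the demand equation is satisfied.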
Schedulable Utilization for RM and DM
• A system of n independent, preemptable tasks with relative deadlines equal to their respective periods can be feasibly scheduled on a processor according to the RM algorithm if its total utilization is less than or equal to URM(n). For the special case of Dk = pk:
  URM(n) = n(21/n - 1)
• This is a sufficient condition; that is, a set of tasks may fail this test but still be schedulable.
• Its advantage over time-demand analysis is reduced complexity for on-line implementations.
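The URM(n) test above amounts to one sum and one comparison, which is why it suits on-line admission control. A sketch (function name assumed):

```python
# Quick sketch of the sufficient RM utilization test (name assumed).
# tasks: list of (e, p) tuples with D_k = p_k.

def rm_utilization_test(tasks):
    n = len(tasks)
    u = sum(e / p for e, p in tasks)
    bound = n * (2 ** (1 / n) - 1)   # U_RM(n) = n(2^(1/n) - 1)
    return u <= bound

# U = 1/4 + 2/6 + 1/10 ~ 0.68 <= U_RM(3) ~ 0.78: passes.
print(rm_utilization_test([(1, 4), (2, 6), (1, 10)]))  # True
# U ~ 1.08 > U_RM(3): fails the (sufficient) test.
print(rm_utilization_test([(1, 2), (1, 3), (1, 4)]))   # False
```

Note that a `False` result is inconclusive: the set may still pass time-demand analysis.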
Practical Considerations
• Preemptive versus non-preemptive scheduling
• Self-suspension
• Context switches
• Limited priority levels
• Tick scheduling
• Variable priority
Blocking and Nonpreemptivity
• A higher-priority job is blocked by a lower-priority job during a nonpreemptable interval, resulting in a priority inversion.
• Must adjust the schedulability tests:
  • Let qi = maximum nonpreemptable execution time of Ti, qi ≤ ei.
  • Let bi(np) = max(qk), for i+1 ≤ k ≤ n.
• Fixed-priority systems: add the blocking term to the time-demand analysis:
  wi(t) = ei + bi(np) + Σk=1..i-1 ⌈t/pk⌉·ek
• And to the utilization test, checked at each priority level i:
  Σk=1..i (ek/pk) + bi(np)/pi ≤ URM(i)
• For deadline-driven (EDF) systems:
  Σk=1..n ek/min(Dk, pk) + bi(np)/min(Di, pi) ≤ 1
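The level-by-level utilization check with a blocking term can be sketched as follows. Names are assumed, and the blocking values `b[i]` are supplied directly rather than derived from the qk.

```python
# Hedged sketch of the blocking-adjusted RM utilization test (names
# assumed). tasks: list of (e, p) in decreasing priority order;
# b[i]: blocking term for T_i (e.g. the longest nonpreemptable section
# of any lower-priority task). Each level i is checked separately,
# since blocking affects each task differently.

def rm_test_with_blocking(tasks, b):
    for i, (e_i, p_i) in enumerate(tasks):
        n = i + 1
        u = sum(e_k / p_k for e_k, p_k in tasks[:n]) + b[i] / p_i
        if u > n * (2 ** (1 / n) - 1):   # compare against U_RM(i)
            return False
    return True

# T1 = (1, 4) blocked for up to 1 unit by T2 = (1, 6): still passes.
print(rm_test_with_blocking([(1, 4), (1, 6)], b=[1, 0]))  # True
```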
Nonpreemptivity
• EDF: in a system where jobs are scheduled on the EDF basis, a job Jk with relative deadline Dk can block a job Ji with relative deadline Di only if Dk is larger than Di.
• In other words, for Ji to be blocked we need rk < ri and dk > di.
Blocking Time Continued
• Self-suspension or self-blocking
  • Can you give an example?
• We can treat it as another blocking factor, bi(ss).
• If a higher-priority job self-suspends, its computation time may be deferred into the feasible interval of some lower-priority job.
• If a job of Ti can self-suspend at most Ki times, then the total possible blocking time is
  bi = bi(ss) + (Ki + 1)·bi(np)
  (the factor Ki + 1 accounts for the Ki self-suspensions plus scheduling on release).
Putting It Together with System Overhead
• Context switches; assume fixed job priorities.
  • Let CS = context-switch time: the time to "place" or "remove" a job on/from the processor. So 2·CS is the total context-switch overhead for a given job.
  • Account for this by increasing a job's execution time by 2·CS, or by 2(Ki + 1)·CS if it self-suspends Ki times.
• Updating our tests: replace each ei with ei + 2(Ki + 1)·CS and rerun the schedulability tests.
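Folding context-switch overhead into the tests is a simple transformation of the task parameters. A sketch (names assumed):

```python
# Illustration (names assumed) of charging context-switch overhead to
# each job: inflate e_i by 2*CS, or by 2*(K_i + 1)*CS when the job
# self-suspends K_i times, then rerun any schedulability test on the
# inflated task set.

def inflate_execution_times(tasks, cs, suspends=None):
    # tasks: list of (e, p); suspends[i] = K_i (defaults to 0)
    out = []
    for i, (e, p) in enumerate(tasks):
        k = suspends[i] if suspends else 0
        out.append((e + 2 * (k + 1) * cs, p))
    return out

# Two tasks, CS = 0.1, no self-suspension: each e grows by 0.2.
print(inflate_execution_times([(1.0, 4), (2.0, 6)], cs=0.1))
```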
Limited Priorities and Tick Scheduling
• Limited priorities
  • With N task priorities and S system priorities, if N ≠ S then a mapping must be provided.
  • When N > S, nondistinct priorities result (different task priorities map to the same system priority).
    • In the worst case, a job is delayed by all other jobs with the same priority.
  • Uniform mapping
  • Constant-ratio mapping and grid ratio
• Tick scheduling
  • Tick period = p0; the "tick" task has period p0 and an execution time that is a function of the queue lengths.
  • A job may wait on the scheduling queue even when runnable.
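A uniform mapping of N task priorities onto S system priorities can be sketched as follows; the function name and the exact rounding rule are assumptions for illustration, not from the slides.

```python
import math

# Illustrative uniform mapping (name and rounding rule assumed) of N
# task priorities onto S < N system priorities: roughly N/S consecutive
# task levels share each system level, which is why jobs of equal
# mapped priority can delay one another.

def uniform_map(task_priority, n_levels, s_levels):
    # task_priority in 1..n_levels (1 = highest); result in 1..s_levels
    return math.ceil(task_priority * s_levels / n_levels)

# 8 task levels onto 4 system levels: adjacent pairs collapse together.
print([uniform_map(tp, 8, 4) for tp in range(1, 9)])
# [1, 1, 2, 2, 3, 3, 4, 4]
```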