
Implications of Classical Scheduling Results For Real-Time Systems


Presentation Transcript


  1. Implications of Classical Scheduling Results For Real-Time Systems John A. Stankovic, Marco Spuri, Marco Di Natale and Giorgio Buttazzo

  2. Introduction • Classical scheduling theory • Vast amount of literature • Not always directly applicable to RT systems • Summarize implications and new results • Provide important insight into making good design choices • Address common problems and design issues

  3. Contents • Preliminaries • Uni-processor systems • Preemptive vs. Non-preemptive • Precedence constraints • Shared resources • Overload • Multiprocessor systems • Static vs. Dynamic • Preemptive vs. Non-preemptive • Anomalies • An analogy

  4. Static vs. Dynamic scheduling • Static • The algorithm has complete preliminary knowledge of the task set: constraints, deadlines, computation times, release times • e.g. laboratory experiments, process control • Dynamic • The algorithm has complete knowledge of the current state, but nothing about the future • e.g. multi-agent problems

  5. On-line vs. Off-line • On-line: the schedule is computed at run time • Decisions are based on the current conditions • Off-line: the schedule is computed entirely in advance • Preliminary analysis (what we should expect) • A scheduling algorithm can be applied to both static and dynamic scheduling, and can be used on- or off-line • Dynamic case: static scheduling can be applied off-line to the worst case

  6. Metrics • Carefully choose the metric • Minimize: • Sum of completion times • Weighted sum of completion times • Schedule length • Number of required processors • Maximum lateness (useful) • Number of tasks that miss their deadlines (usually used) • Deadlines are usually included as constraints

  7. Problem with the Lmax property • Minimizing maximum lateness does not minimize the number of missed deadlines: the Lmax-optimal schedule may leave many tasks slightly late, while another schedule misses only one deadline (by a larger amount)

  8. Complexity theory • P: a polynomial-time algorithm exists to solve the problem • NP: a proposed solution (proof) can be verified in polynomial time, but no polynomial-time solving algorithm is known • NP-Complete: R ∈ NP-Complete if R ∈ NP and every NP problem can be polynomially transformed to R • NP-Hard: R ∈ NP-Hard if every NP problem is polynomially transformable to R

  9. Uni-processor systems • Problem definition syntax: α | β | γ • α: machine environment (number of processors) • β: job characteristics (preemption, constraints, ...) • γ: optimality criterion (max lateness, etc.) • Example: 1 | pmtn, rj | Lmax means one processor, preemptable jobs with release times, minimize maximum lateness

  10. Independent tasks 1. Preemption vs Nonpreemption • First thing to consider: use of preemption • Problem: 1 | nopmtn | Lmax • Single machine, no preemption, minimize maximum lateness • Jackson’s Rule: any sequence that puts the jobs in order of nondecreasing due dates is optimal • EDF algorithm: Earliest Deadline First (∈ P), sketched below
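Jackson’s rule amounts to a single sort by due date, which is exactly nonpreemptive EDF. A minimal Python sketch; the Job fields and names are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    c: float  # computation time
    d: float  # due date

def jackson(jobs):
    """1 | nopmtn | Lmax: run jobs in nondecreasing due-date order (EDF)."""
    schedule, t, lmax = [], 0.0, float("-inf")
    for job in sorted(jobs, key=lambda j: j.d):
        t += job.c                   # completion time of this job
        lmax = max(lmax, t - job.d)  # its lateness
        schedule.append(job.name)
    return schedule, lmax

# All jobs available at time 0
print(jackson([Job("A", 2, 5), Job("B", 1, 2), Job("C", 3, 9)]))
# -> (['B', 'A', 'C'], -1.0): every job finishes early, Lmax is minimized
```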

  11. Independent tasks 2. Release times • Release time: a task has release time ri if its execution cannot start before time ri • 1 | nopmtn, rj | Lmax is NP-hard • 1 | pmtn, rj | Lmax ∈ P • Jackson’s Rule modified: any sequence that at any instant schedules the job with the earliest due date among all eligible jobs is optimal with respect to Lmax

  12. Independent tasks 3. EDF and LLF • Proof of Jackson’s rule is by an interchange argument (not discussed in the paper) • Usually allowing preemption decreases complexity • EDF and LLF algorithms are optimal in these cases • LLF = Least Laxity First (laxity = “slack time”) • Laxity = d − t − c, where d is the deadline, t the current time, and c the remaining computation time
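Given the laxity definition above, an LLF dispatcher simply picks the ready task with the least laxity. A small hedged sketch in Python; the dictionary layout is illustrative:

```python
def laxity(task, t):
    # Laxity (slack) at time t: d - t - c, with c the *remaining*
    # computation time and d the absolute deadline.
    return task["d"] - t - task["c"]

def llf_pick(ready, t):
    # LLF dispatches the ready task with the least laxity.
    return min(ready, key=lambda task: laxity(task, t))

ready = [{"name": "A", "c": 2, "d": 7}, {"name": "B", "c": 1, "d": 4}]
print(llf_pick(ready, t=1)["name"])  # B: laxity 2 vs. A's laxity 4
```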

  13. Independent tasks 4. Rate monotonic approach • Rate-monotonic approach (Liu and Layland) • Shorter period – higher priority • A set of n independent periodic jobs can be scheduled by the rate monotonic policy if Σ pi/Ti ≤ n(2^(1/n) − 1) • pi: worst case execution time; Ti: period • The bound tends to ln 2, so 69% utilization can always be achieved • Both rate-monotonic and EDF are broadly used
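The Liu and Layland bound translates directly into a sufficient (not necessary) schedulability test. A sketch in Python, assuming tasks are given as (pi, Ti) pairs:

```python
def rm_schedulable(tasks):
    # Sufficient test: sum(p_i / T_i) <= n * (2**(1/n) - 1).
    # Task sets that fail the test may still be schedulable.
    n = len(tasks)
    u = sum(p / T for p, T in tasks)
    return u <= n * (2 ** (1 / n) - 1)

# Three periodic tasks as (p_i, T_i); utilization 0.625 <= bound 0.7798
print(rm_schedulable([(1, 4), (1, 8), (2, 8)]))  # True
```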

  14. Precedence constraints 1. • Tasks are not independent anymore • i → j means that task i must precede task j • A precedence graph G(V,E) can be constructed • 1 | prec, nopmtn | Lmax • Lawler’s algorithm solves it in O(n²) (∈ P) • But all tasks must have identical release times

  15. Precedence constraints 2. • If we introduce release times, the problem becomes NP-hard: 1 | prec, nopmtn, ri | Lmax • The general case cannot be solved in polynomial time • BUT: a polynomial algorithm exists when the precedence graph is a series-parallel graph

  16. Precedence constraints 3. Series-parallel graphs 1. • Graphs that can be constructed from single-node graphs with two operators: series composition and parallel composition • Equivalently: a graph is series-parallel if its transitive closure does not contain a Z-graph

  17. Precedence constraints 4. Series-parallel graphs 2. • Series-parallel graphs only contain intrees OR outtrees, but not both of them • The precedence problem can then be solved with Lawler’s algorithm in O(|N| + |A|) (∈ P) • |N| – number of nodes; |A| – number of edges

  18. Precedence constraints 5. • Bad news: Z-graphs almost always occur in RT systems • For example: an asynchronous send followed by a synchronous receive • Preemption again reduces the complexity of the scheduling problem: 1 | prec, pmtn, ri | Lmax is solvable in O(n²) • Baker’s algorithm (not discussed)

  19. Precedence constraints 6. • Another idea is to encode the precedences into the deadlines and release times, and use EDF • Blazewicz: EDF is optimal for this case if we revise the deadlines and release times of tasks as follows (a sketch follows below): • d*i = min(di, min over successors j of (d*j − Cj)), computed step by step starting from tasks having no successor • r*i = max(ri, max over predecessors j of (r*j + Cj)), computed step by step starting from tasks having no predecessor
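Read as code, the revision tightens deadlines backward and release times forward over the precedence graph in topological order. A hedged Python sketch of this transformation; the data layout and helper are my own, not the paper’s:

```python
def revise(tasks, preds):
    # tasks: {name: {"r": release, "d": deadline, "c": wcet}}
    # preds: {name: set of immediate predecessors}
    succs = {i: {j for j in tasks if i in preds[j]} for i in tasks}

    # Deadlines: tasks with no successor first (reverse topological order).
    for i in _topo_order(succs):
        ds = [tasks[j]["d"] - tasks[j]["c"] for j in succs[i]]
        tasks[i]["d"] = min([tasks[i]["d"]] + ds)

    # Release times: tasks with no predecessor first (topological order).
    for i in _topo_order(preds):
        rs = [tasks[j]["r"] + tasks[j]["c"] for j in preds[i]]
        tasks[i]["r"] = max([tasks[i]["r"]] + rs)
    return tasks

def _topo_order(deps):
    # Every name appears after all the names in deps[name].
    seen, order = set(), []
    def visit(i):
        if i not in seen:
            seen.add(i)
            for j in deps[i]:
                visit(j)
            order.append(i)
    for i in deps:
        visit(i)
    return order

tasks = {"A": {"r": 0, "d": 10, "c": 2}, "B": {"r": 0, "d": 10, "c": 3}}
preds = {"A": set(), "B": {"A"}}
print(revise(tasks, preds))
# A's deadline tightens to 7 (10 - 3); B's release moves to 2 (0 + 2)
```

After the revision, plain EDF on the modified parameters respects the original precedence constraints.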

  20. Precedence constraints 7. • Still no shared resources are taken into account • The general problem of scheduling tasks with precedence constraints and resource conflicts is NP-hard • Solutions usually use heuristics and branch-and-bound methods

  21. Shared resources 1. • The problem is handled with mutual exclusion primitives • Several additional problems arise • Mok: • When there are mutual exclusion constraints, it is impossible to find a totally on-line optimal run-time scheduler • It is even worse: the problem of deciding whether it is possible to schedule a set of periodic processes which use semaphores (only to enforce mutual exclusion) is NP-hard

  22. Shared resources 2. • Even deciding whether a solution exists is NP-hard • Proof: polynomial transformation from the 3-partition problem (a.k.a. Karp reduction) • 3-partition problem: • Given a multiset of 3m integers • Divide it into m groups of three with equal sums

  23. Shared resources 3. • Mok also points out: the reason for the NP-hardness is the differing possible computation times of the mutually exclusive blocks • Confirmation: with unit-length jobs, 1 | nopmtn, rj, pj=1 | Lmax and 1 | nopmtn, prec, rj, pj=1 | Cmax are both ∈ P

  24. Shared resources 4. • Somehow the algorithm should force critical sections of the same length • Sha and Baker found efficient suboptimal solutions guaranteeing a minimum level of performance • Kernelized monitor: use a time quantum on the processor longer than the longest critical section (sketch below)
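The kernelized-monitor condition reduces to a comparison between the quantum and the longest critical section. A tiny illustrative check, assuming critical-section lengths are known in advance:

```python
def quantum_ok(quantum, critical_sections):
    # Preemption is allowed only at quantum boundaries, so no critical
    # section is ever cut if the quantum covers the longest one.
    return quantum >= max(critical_sections)

print(quantum_ok(4, [1, 2, 3]))  # True: every critical section fits
print(quantum_ok(2, [1, 2, 3]))  # False: the 3-unit section could be cut
```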

  25. Shared resources 5. • Mok: If a feasible schedule exists for an instance of the process model with precedence constraints and critical sections, then the kernelized monitor scheduler can be used to produce a feasible schedule

  26. Shared resources 6. • Rate-monotonic approach – Priority Ceiling Protocol (PCP) • Each mutex is assigned a priority ceiling: the highest priority of any task that may lock it • A task may acquire a mutex only if its priority exceeds the ceilings of all mutexes currently locked by other tasks (sketch below) • Proved to be deadlock-free • Prevents unbounded priority inversion (a job can block only once) • Chen and Lin extended PCP to work with EDF
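A simplified sketch of the PCP locking test described above (larger number = higher priority; the data layout is illustrative and omits the rest of the protocol):

```python
def pcp_can_lock(task_priority, mutexes_held_by_others):
    # A task may lock a (free) mutex only if its priority is strictly
    # higher than the ceilings of all mutexes locked by other tasks.
    system_ceiling = max(
        (m["ceiling"] for m in mutexes_held_by_others), default=-1)
    return task_priority > system_ceiling

held = [{"name": "m1", "ceiling": 5}]  # m1 may be locked by a prio-5 task
print(pcp_can_lock(7, held))  # True: priority 7 exceeds the ceiling
print(pcp_can_lock(4, held))  # False: the task blocks (bounded inversion)
```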

  27. Shared resources 7. • Stack Resource Policy (SRP) • A more general solution by Baker • A job should not be permitted to start • until the resources currently available are sufficient to meet its maximum requirements • and until the resources currently available are sufficient to meet the maximum requirements of any single job that might preempt it • The first property prevents deadlocks, the second prevents multiple priority inversions

  28. Shared resources - summary • It is very important to deal with the problem of shared resources • The classical results are usually applicable to RT systems, but only in uniprocessor systems

  29. Overload and value 1. • If a large transient overload occurs, we still want a good suboptimal schedule • Some tasks should meet their deadlines under all conditions • We associate values with tasks, so that we can define our preferences

  30. Overload and value 2. • EDF (and LLF) algorithms perform very poorly in overloaded conditions • EDF gives the highest priority to tasks with the closest deadline, so a “domino effect” may occur • In the worst case all tasks miss their deadlines, even though a suboptimal schedule existed in which most of them would have completed in time

  31. Overload and value 3. • We use different metrics; Lmax ≤ 0 would express that every task meets its deadline • Task sets with values: wi • Smith’s rule: an optimal schedule for 1 || ΣwjCj is given by any sequence that puts jobs in order of nondecreasing ratios pj/wj (sketch below)
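Smith’s rule is again a single sort. A minimal sketch, with jobs given as (pj, wj) pairs:

```python
def smith(jobs):
    # 1 || sum(w_j * C_j): order by nondecreasing p_j / w_j and
    # accumulate the weighted sum of completion times.
    total, t = 0.0, 0.0
    for p, w in sorted(jobs, key=lambda job: job[0] / job[1]):
        t += p             # completion time C_j
        total += w * t
    return total

# The short, heavily weighted job goes first: C = 1 and 4, value 4*1 + 1*4
print(smith([(3, 1), (1, 4)]))  # 8.0
```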

  32. Overload and value 4. • This solution does not work in general • All of these problems are NP-hard: • 1 | prec | ΣwjCj • 1 | dj | ΣwjCj • 1 | prec | ΣCj • 1 | prec, pj=1 | ΣwjCj • These are solved by polynomial algorithms: • 1 | chain | ΣCj • 1 | series-parallel | ΣCj • 1 | dj | ΣCj

  33. Overload and value 5. • Baruah: there is an upper bound on the performance of any on-line, preemptive algorithm working under overload conditions • Competitive factor: ratio of the cumulative value accomplished by the algorithm to that of the clairvoyant scheduler • No on-line scheduling algorithm exists with a competitive factor greater than 0.25

  34. Overload and value 6. • (Figure: the achievable competitive factor as a function of the load; it is 1 for load up to 1, then drops through 0.385 to 0.25 as the load approaches 2)

  35. Summary of uni-processor results • Huge amount of theoretical results • Many deployed algorithms are based on EDF or rate-monotonic scheduling • Operation under overload and fault-tolerant scheduling are the fields where additional research is necessary

  36. Multi-processor RT scheduling • Far fewer results are available in this field • Almost all of the problems are NP-hard • The most important goal is to develop clever heuristics • There are serious anomalies that should be avoided • Processors are considered to be identical

  37. Deterministic (static) scheduling 1. Non-preemptive • Multiprocessor scheduling results usually consider tasks with constant execution time • Theorems for non-preemptive, partially ordered tasks with resource constraints and one single(!) deadline show that these cases are almost always NP-hard • The following theorems consider arbitrary partial orders, forest partial orders and independent tasks • Forest partial order: the precedence graph is a forest (intrees or outtrees)

  38. Deterministic (static) scheduling 2.Non-preemptive

  39. Deterministic (static) scheduling 3. Non-preemptive • These cases are still far simpler than a typical embedded system scheduling problem, which has • no unit tasks • more shared resources • tasks with different deadlines (!) • Heuristic algorithms must be used

  40. Deterministic (static) scheduling 4. Preemptive • Introducing preemption usually makes the problem easier, but consider P | pmtn | ΣwjCj • McNaughton: for any instance of the multiprocessor scheduling problem, there exists a schedule with no preemption for which the weighted sum of completion times is as small as for any schedule with a finite number of preemptions • There is no advantage to preemption in this case • We should rather minimize overhead (such as context switches) and not use preemption

  41. Deterministic (static) scheduling 5. Preemptive • Lawler: the multiprocessor problem of scheduling P processors, with task preemption allowed, where we try to minimize the number of late tasks, is NP-hard: P | pmtn | ΣUj • Uj = 1 if task j is late, 0 otherwise • Solutions always require heuristics!

  42. Dynamic scheduling 1. • There are very few theoretical results in this field • Consider the EDF algorithm (which is optimal in the uni-processor case) • Mok: earliest deadline first scheduling is not optimal in the multiprocessor case • Example follows

  43. Example of EDF in the multiprocessor case • Ti(Ci, di): T1(1,1), T2(1,2), T3(3,3.5) • EDF: P1 runs T1 in [0,1] and T3 in [1,4]; P2 runs T2 in [0,1] → T3 completes at 4 > 3.5 and misses its deadline • Optimal: P1 runs T1 in [0,1] and T2 in [1,2]; P2 runs T3 in [0,3] → all deadlines met
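The example can be replayed with a few lines of Python, using a deliberately simplified non-preemptive dispatcher (all tasks released at t = 0):

```python
def edf_two_proc(tasks):
    # Dispatch the pending task with the earliest deadline whenever a
    # processor becomes free.  tasks: list of (name, C, d).
    procs = [0.0, 0.0]  # time at which each processor becomes free
    finish = {}
    for name, c, d in sorted(tasks, key=lambda t: t[2]):
        p = min(range(2), key=lambda i: procs[i])  # earliest-free CPU
        procs[p] += c
        finish[name] = (procs[p], procs[p] <= d)   # (end time, met?)
    return finish

print(edf_two_proc([("T1", 1, 1), ("T2", 1, 2), ("T3", 3, 3.5)]))
# {'T1': (1.0, True), 'T2': (1.0, True), 'T3': (4.0, False)}
# T3 misses under EDF, while the optimal allocation above meets all three
```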

  44. Dynamic scheduling 2. • Mok: for two or more processors, no deadline scheduling algorithm can be optimal without complete a priori knowledge of • deadlines • computation times • start times of tasks • This implies that none of the classical scheduling algorithms can be optimal when used on-line

  45. Dynamic scheduling 3. • Possibilities: • Analyse the worst case scenario: if a schedule exists for it, then every run-time situation can be scheduled • Use well-developed heuristics – this can substantially increase computational requirements (sometimes additional hardware is required) • Baruah: no on-line scheduling algorithm can guarantee a cumulative value greater than one-half for the dual-processor case

  46. Multiprocessing anomalies 1. • Richard’s anomalies • Optimal schedule, fixed number of processors, fixed execution times, precedence constraints • Graham: for the stated problem, changing the priority list, increasing the number of processors, reducing execution times, or weakening the precedence constraints can increase the schedule length.

  47. Multiprocessing anomalies 2. • Weakening the constraints can ruin the schedule • Example (Gantt charts in the original figure): tasks statically allocated as P1: T1, T2; P2: T3, T4, T5 • Decreasing C1 (T1’s execution time) lets later tasks start earlier, yet the overall schedule becomes longer

  48. Multiprocessing anomalies 3. • Richard’s anomalies prove that it is not always sufficient to schedule for the worst case • We can overcome these anomalies by having tasks simply idle if they finish earlier than their allocated computation time • This can be really inefficient • However, there are solutions for this [Shen]

  49. Similarity to bin-packing • Bin-packing is a famous algorithmic problem • There are N bins, each with a given capacity • There are boxes, and we have to put them into those bins • Two variations: • What is the minimum number of required bins (same size)? • Given a fixed number of bins, minimize the maximum bin length

  50. Bin-packing implications • Several algorithms can be used, such as: first-fit (FF), best-fit (BF), first-fit decreasing (FFD), best-fit decreasing (BFD) • Theoretical worst-case bounds exist: • FF and BF: (17/10) L* (L* = optimal) • FFD and BFD: (11/9) L* • In RT systems we have many more constraints than this analogy takes into account, but the implications may still be useful in off-line analysis • A sketch of FF and FFD follows below
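As referenced above, a compact sketch of FF and FFD; boxes are integer sizes here, though in an RT setting they might be task utilizations scaled to integers:

```python
def first_fit(items, capacity):
    # Place each item into the first bin with enough room,
    # opening a new bin when none fits.
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

def first_fit_decreasing(items, capacity):
    # FFD: first-fit on items sorted in decreasing size, which tightens
    # the worst-case bound from (17/10) L* to roughly (11/9) L*.
    return first_fit(sorted(items, reverse=True), capacity)

items = [7, 5, 4, 3, 1]
print(first_fit(items, 10))             # [[7, 3], [5, 4, 1]] -> 2 bins
print(first_fit_decreasing(items, 10))  # same here; differs in general
```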
