Is It Time Yet? Wing On Chan
Distributed Systems – Chapter 18 - Scheduling Hermann Kopetz
Scheduling Algorithm Classifications Real-Time Scheduling • Soft • Hard • Dynamic (preemptive or non-preemptive) • Static (preemptive or non-preemptive)
Scheduling Problem • Distributed hard real-time systems • Execute a set of concurrent RT transactions such that all time-critical transactions meet their deadlines • Transactions need resources (Computational, communication, and data) • Decomposition • Within a node • Communication resources
Hard RT Vs Soft RT Scheduling • Hard RT Systems • Deadlines must be guaranteed before execution starts • A high probability that a transaction finishes before its deadline is not enough • Off-line schedulability tests • Feasible static schedules
Hard RT Vs Soft RT Scheduling • Soft RT Systems • Violation of timing not critical • Cheaper resource-inadequate solutions can be used • Under adverse conditions, it is tolerable that transactions not meet their timing constraints
Dynamic Vs Static Scheduling • Dynamic (On-line) Scheduling • Only considers • Actual requests • Execution time parameters • Costly to find a schedule • Static Scheduling (Off-line) • Complete knowledge • Maximum execution time • Precedence constraints • Mutual exclusion constraints • Deadlines
Preemptive Vs Non-Preemptive • Preemptive • A running task can be interrupted by more urgent tasks • Safety assertions • Non-Preemptive • No interruptions once a task starts • Worst-case response time of the shortest task = execution time of the longest task + its own • Reasonable for scenarios with many short tasks
Central Vs Distributed • Dynamic distributed RT systems • Central Scheduling • Distributed Algorithms • Require up-to-date information in all nodes • Significant communication costs
Schedulability Test • Determine if a schedule exists • Exact • Necessary • Sufficient • Optimal scheduler • A scheduler is optimal if it finds a schedule whenever the exact schedulability test says one exists • Exact schedulability test • Belongs to the class of NP-complete problems
Schedulability Test • Sufficient schedulability test • Sufficient but not necessary condition • Necessary schedulability test • Necessary but not sufficient condition • Example of a necessary condition: the laxity (difference between deadline di and computation time ci) must be non-negative
Periodic Tasks • Periodic Tasks • After the initial task request • All future requests known • Adding multiples of known period to initial request time
Periodic Tasks • Task set {Ti} of periodic tasks • Periods - pi • Deadlines - di • Processing requirements – ci • Sufficient to examine schedules with length equal to the least common multiple of the periods in {Ti}
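A minimal sketch of this observation (Python here and below is my choice; the chapter gives no code): the request pattern repeats after one hyperperiod, the least common multiple of the periods.

```python
from math import lcm  # Python 3.9+

def hyperperiod(periods):
    """Length of the schedule that must be examined: lcm of all task periods."""
    return lcm(*periods)

# Hypothetical task set with periods 4, 5, and 10:
print(hyperperiod([4, 5, 10]))  # 20 -- the request pattern repeats after this
```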
Periodic Tasks • Necessary schedulability test • The sum of the utilization factors μi must be less than or equal to n, where n is the number of processors • μ = Σ (ci / pi) <= n • μi = ci / pi = percentage of time task Ti requires the service of a CPU
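As a one-line sketch of the utilization-sum test above:

```python
def necessary_test(tasks, n_processors=1):
    """Necessary (but not sufficient) test: mu = sum(ci/pi) <= n.
    tasks: list of (ci, pi) pairs."""
    mu = sum(c / p for c, p in tasks)
    return mu <= n_processors

# Illustrative values: mu = 2/4 + 1/4 = 0.75 <= 1
print(necessary_test([(2, 4), (1, 4)]))  # True
```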
Sporadic Tasks • Request times are not known beforehand • There must be a minimum interval pi between any two request times of a sporadic task • If no such pi exists, the necessary schedulability test cannot succeed • Aperiodic tasks = no constraints on request times
Optimal Dynamic Scheduling • Consider a dynamic scheduler with knowledge of the past only • An exact schedulability test is impossible • New definition of an optimal dynamic scheduler • Optimal if it can find a schedule whenever a clairvoyant scheduler (one that knows the future) can find one
Adversary Argument • If there are mutual exclusion constraints between periodic and sporadic tasks, then in general, it is impossible to find an optimal totally online dynamic scheduler.
Adversary Argument • Example: periodic T1 (c1 = 2, p1 = 4) and sporadic T2 (c2 = 1, d2 = 1, p2 = 4), mutually exclusive • The necessary schedulability test holds: μ = Σ (ci / pi) = (2/4) + (1/4) = 3/4 <= 1 • Suppose that whenever T1 starts, T2 requests service • T2 has a laxity of d2 – c2 = 1 – 1 = 0, so it cannot wait for T1 • T2 will miss its deadline
Adversary Argument • Clairvoyant scheduler • Schedule periodic task between sporadic tasks • Laxity of periodic task > execution time of sporadic task, so scheduler will always find a schedule
Adversary Argument • If the on-line scheduler has no future knowledge about sporadic tasks, scheduling becomes unsolvable. • Predictable hard RT systems are only feasible if there are regularity assumptions
Dynamic Scheduling • Dynamic scheduling algorithm • Determines task after occurrence of a significant event • Based on current task requests
Rate Monotonic Algorithm • Classic algorithm for hard RT systems with a single CPU • Dynamic preemptive algorithm • Static task priorities
Rate Monotonic Algorithm • Assumptions • All requests in the set {Ti} are periodic • All tasks are independent; no precedence or mutual exclusion constraints • di = pi • The maximum ci is known and constant • Context switching time is ignored • Sufficient condition: μ = Σ (ci / pi) <= n (2^(1/n) – 1), where n is the number of tasks [the bound approaches ln 2 ≈ 0.69 as n grows]
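A sketch of that sufficient test (n is the number of tasks, as above):

```python
def rm_sufficient_test(tasks):
    """Sufficient test for rate monotonic scheduling on one CPU:
    mu <= n * (2**(1/n) - 1).  tasks: list of (ci, pi) pairs."""
    n = len(tasks)
    mu = sum(c / p for c, p in tasks)
    return mu <= n * (2 ** (1 / n) - 1)

# Two illustrative tasks: mu = 0.5 + 0.2 = 0.7 <= 2*(sqrt(2)-1) ~ 0.828
print(rm_sufficient_test([(1, 2), (1, 5)]))  # True
```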
Rate Monotonic Algorithm • The algorithm assigns static task priorities • Tasks with short pi get higher priority • Tasks with longer pi get lower priority • During run-time, always run the highest-priority ready task • If all assumptions are met, all Ti meet their deadlines • Optimal for single processor systems
Earliest-Deadline-First Algorithm • Optimal dynamic preemptive algorithm • Uses dynamic priorities • Assumptions 1-5 of the rate monotonic algorithm must also hold • μ can go up to 1, even when the task periods are not multiples of the shortest period • After a significant event • The task with the earliest deadline di gets the highest dynamic priority
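A discrete-time sketch of the rule (the tick granularity, the task set, and di = pi are illustrative assumptions):

```python
def edf(tasks, horizon):
    """Run EDF tick by tick.  tasks: {name: (c, p)} periodic, with d = p.
    Returns which task ran in each time unit (None = idle)."""
    remaining, deadline, timeline = {}, {}, []
    for t in range(horizon):
        for name, (c, p) in tasks.items():
            if t % p == 0:                                # new periodic request
                remaining[name], deadline[name] = c, t + p
        ready = [n for n in remaining if remaining[n] > 0]
        if ready:
            run = min(ready, key=lambda n: deadline[n])   # earliest deadline wins
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append(None)
    return timeline

print(edf({"T1": (2, 5), "T2": (2, 10)}, 10))
```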
Least-Laxity Algorithm • Also optimal in single-processor systems • Same assumptions as the earliest-deadline-first algorithm • At each scheduling decision point • The task with the smallest laxity (di – ci) is given the highest dynamic priority • In multiprocessor systems • Neither earliest-deadline-first nor least-laxity is optimal • The least-laxity algorithm can handle task scenarios that earliest-deadline-first cannot
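The only change from EDF is the selection key; a sketch with hypothetical job objects:

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deadline: int    # absolute deadline
    remaining: int   # computation time still needed

def least_laxity_pick(ready, now):
    """Pick the job with the smallest laxity = (deadline - now) - remaining."""
    return min(ready, key=lambda j: (j.deadline - now) - j.remaining)

jobs = [Job("T1", deadline=8, remaining=3), Job("T2", deadline=6, remaining=2)]
print(least_laxity_pick(jobs, now=0).name)  # T2 (laxity 4 beats T1's 5)
```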
Scheduling Dependent Tasks • Analysis of tasks with precedence and mutual exclusion constraints is more useful in practice • Tasks compete for CPU and other resources • Possible solutions • Provide extra resources, allowing simpler sufficient schedulability tests and algorithms • Divide the problem into 2 parts • One solved at compile time • One solved during run-time (the simpler of the two) • Add restricting regularity assumptions
Kernelized Monitor • For a set of short critical sections, with the longest critical section shorter than a given duration q • Allocates processor time in uninterruptible quanta of length q • Assumes every critical section can be started and completed within one such uninterruptible quantum • A process may only be interrupted at times x·q, where x is an integer
Kernelized Monitor • Example: • Assume there are two periodic tasks • T1: c1 = 2, d1 = 2, p1 = 5 • T2: c2,1 = 2, c2,2 = 2, d2 = 10, p2 = 10 • T2 has two scheduling blocks • Block c2,2 of T2 is mutually exclusive with T1 • q = 2
Kernelized Monitor • At t = 5, the earliest-deadline-first algorithm needs to schedule T1 again, but it cannot: T2's second block c2,2 holds the critical section it shares with T1
Kernelized Monitor • The interval just before the second activation of T1, in which the mutually exclusive block must not be started, is called a forbidden region • All forbidden regions must be determined at compile time and made known to the dispatcher
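A sketch of the quantum mechanism, under the same illustrative assumptions as the EDF code above: scheduling decisions happen only at quantum boundaries t = x·q, so a block started within a quantum runs uninterrupted.

```python
def kernelized_edf(tasks, q, horizon):
    """EDF, but the dispatcher allocates the CPU in uninterruptible quanta
    of length q.  tasks: {name: (c, p)} with d = p; a task that finishes
    mid-quantum leaves the rest of the quantum idle (a simplification)."""
    remaining, deadline, timeline, run = {}, {}, [], None
    for t in range(horizon):
        for name, (c, p) in tasks.items():
            if t % p == 0:
                remaining[name], deadline[name] = c, t + p
        if t % q == 0:                          # the only legal preemption points
            ready = [n for n in remaining if remaining[n] > 0]
            run = min(ready, key=lambda n: deadline[n]) if ready else None
        if run is not None and remaining[run] > 0:
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append(None)
    return timeline
```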
Priority Inversion • Consider three tasks T1, T2, and T3 with T1 having the highest priority • Scheduled with rate-monotonic algorithm • T1 and T3 require exclusive access to a resource protected by a semaphore S
Priority Inversion • T3 starts and gains exclusive access to the resource • T1 requests service but is blocked by T3 • T2 requests service and, having higher priority than T3, is granted service • T2 finishes • T3 finishes and releases S • T1 starts and finishes • Actual execution order is T2, T3, then T1: the medium-priority T2 has delayed the highest-priority T1 • Solution: Priority Ceiling Protocol
Priority Ceiling Protocol • Priority ceiling (PC) of S = priority of the highest-priority task that may lock S • T may only enter a new critical section if its priority is higher than the PCs of all semaphores locked by tasks other than T • T runs at its assigned priority unless it is in a critical region and blocks higher-priority tasks • It then inherits the highest priority of the tasks it blocks while in the critical region • It returns to its assigned priority when exiting
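A sketch of just the entry test (the names and data layout are mine); the example reproduces the blocking in step 7 of the walkthrough that follows:

```python
def pcp_may_lock(task_name, task_prio, ceilings, held_by):
    """A task may enter a new critical section only if its priority is
    higher than the ceilings of all semaphores locked by other tasks.
    ceilings: {sem: ceiling}; held_by: {sem: holder} for locked semaphores."""
    return all(task_prio > ceilings[s]
               for s, holder in held_by.items() if holder != task_name)

# T1 (priority 3) tries to lock the free S1 while T3 holds S2 and S3;
# the ceiling of S2 is 3, so T1's priority is not strictly higher: blocked.
ceilings = {"S1": 3, "S2": 3, "S3": 2}
print(pcp_may_lock("T1", 3, ceilings, {"S2": "T3", "S3": "T3"}))  # False
```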
Priority Ceiling Protocol 1. T3 starts 2. T3 locks S3 3. T2 starts and preempts T3 4. T2 is blocked when locking S3; T3 resumes, inheriting T2's priority
Priority Ceiling Protocol 5. T3 enters a nested critical region and locks S2 6. T1 starts and preempts T3 7. T1 is blocked when locking S1, because its priority is not higher than the ceiling of S2, which T3 still holds; T3 resumes 8. T3 unlocks S2. T1 awakens and preempts T3. T1 locks S1
Priority Ceiling Protocol 9. T1 unlocks S1 10. T1 locks S2 11. T1 unlocks S2 12. T1 completes. T3 resumes at priority of T2
Priority Ceiling Protocol 13. T3 unlocks S3. T2 preempts T3 and locks S3 14. T2 unlocks S3 15. T2 completes. T3 resumes 16. T3 completes
Priority Ceiling Protocol • One sufficient schedulability test • Set of n periodic tasks {Ti}, periods pi, computation times ci • Worst-case blocking time by lower-priority tasks = Bi • ∀i, 1 <= i <= n : (c1/p1) + (c2/p2) + … + (ci/pi) + (Bi/pi) <= i (2^(1/i) – 1) • Not the only test; there are more complex ones • The priority ceiling protocol is a predictable but non-deterministic scheduling protocol
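A sketch of that test, with tasks listed from highest to lowest priority:

```python
def pcp_sufficient_test(tasks):
    """For every i: c1/p1 + ... + ci/pi + Bi/pi <= i * (2**(1/i) - 1).
    tasks: [(c, p, B), ...] sorted by decreasing priority; B is the
    worst-case blocking time by lower-priority tasks."""
    for i in range(1, len(tasks) + 1):
        util = sum(c / p for c, p, _ in tasks[:i])
        _, p_i, b_i = tasks[i - 1]
        if util + b_i / p_i > i * (2 ** (1 / i) - 1):
            return False
    return True
```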
Dynamic Scheduling In Distributed Systems • It is hard to guarantee deadlines in single processor systems • It is even harder in distributed systems or multiprocessor systems because of communication • Applications are required to tolerate transient faults, such as message losses, as well as detect permanent faults
Dynamic Scheduling In Distributed Systems • Positive Acknowledgment or Retransmission (PAR) • Large temporal uncertainty between the shortest and longest execution time • Worst case – must assume the longest time • Poor responsiveness of the system • Masking Protocols • Send each message k + 1 times if k losses must be tolerated • No temporal problem, but permanent faults cannot be detected because communication is unidirectional
Dynamic Scheduling In Distributed Systems • Solutions? • None established yet • Providing good temporal performance remains a “fashionable research topic”
Static Scheduling • A static schedule that guarantees all deadlines, based on known resource, precedence, and synchronization requirements, is calculated off-line • Strong regularity assumptions • The times at which external events will be serviced are known
Static Scheduling • System design • Maximum delay until a request is recognized + maximum transaction response time < service deadline • Time • Generally a periodic time-triggered schedule • The time line is divided into a sequence of granules (cycle time) • Only one interrupt: a periodic clock interrupt signaling the start of a new granule • In distributed systems, the granules are synchronized to a precision better than their duration
Static Scheduling • Tasks are periodic, with each pi a multiple of the basic granule • Schedule period = least common multiple of all pi • All scheduling decisions are made at compile time and merely executed at run-time • Finding an optimal schedule in a distributed system is NP-complete
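The run-time part of such a scheduler can be as small as a table lookup; a minimal sketch (the table contents are made up):

```python
# One entry per granule over the schedule period (the lcm of all pi),
# computed off-line, e.g. by the search described in the next slides.
SCHEDULE = ["T1", "T1", "T2", "T3", None]   # None = idle granule

def on_granule_interrupt(granule_counter):
    """The single periodic clock interrupt: dispatch the pre-planned task."""
    return SCHEDULE[granule_counter % len(SCHEDULE)]
```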
Search Tree • Precedence Graph • Tasks = Nodes, Edges = dependencies • Search Tree • Level = unit of time, Depth = period • Path to a leaf node = complete schedule • Goal: Find a complete schedule that observes all precedence and mutual exclusion constraints before the deadline
Heuristic Function • Two terms: actual cost of the path so far, estimated cost to the goal • Example • Estimate the time needed to complete the precedence graph (time until response, TUR) • A necessary (optimistic) estimate of the TUR = maximum execution times + communication delays • If even this necessary estimate exceeds the deadline, prune the branches of the node and backtrack to the parent
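A toy depth-first search over task orderings that prunes exactly when the optimistic TUR estimate already exceeds a deadline (per-task absolute deadlines and no communication delays are my simplifications):

```python
def search(tasks, order=()):
    """tasks: {name: (c, d)} with absolute deadlines.  Returns a feasible
    ordering or None; each level of the implicit search tree adds one task."""
    t = sum(tasks[x][0] for x in order)          # time consumed on this path
    rest = [x for x in tasks if x not in order]
    # necessary TUR estimate: a remaining task needs at least t + its own c
    if any(t + tasks[x][0] > tasks[x][1] for x in rest):
        return None                              # estimate > deadline: backtrack
    if not rest:
        return order                             # leaf node = complete schedule
    for x in rest:
        found = search(tasks, order + (x,))
        if found:
            return found
    return None

print(search({"A": (2, 4), "B": (1, 2), "C": (3, 7)}))  # ('B', 'A', 'C')
```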
Increasing Adaptability • Weakness: Assumption of strictly periodic tasks • Proposed solutions for flexibility • Transformation of sporadic requests into periodic requests • Sporadic server task • Mode changes
Transformation Of Sporadic Requests To Periodic Requests • Possible only if the sporadic task T has positive laxity • One solution: replace T with a pseudo-periodic task T’ • c’ = c • d’ = c • p’ = min(d – c + 1, p) • A sporadic task with a short laxity demands a lot of resources (a short p’), even though it requests service infrequently
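A sketch of the transformation as stated above:

```python
def to_pseudo_periodic(c, d, p):
    """Serve a sporadic task (c, d, p) with a pseudo-periodic task:
    c' = c, d' = c, p' = min(d - c + 1, p).  Needs positive laxity d - c."""
    assert d > c, "zero laxity: no polling period short enough exists"
    return c, c, min(d - c + 1, p)

# A short laxity forces a short period p' and hence high utilization c/p',
# even though the real requests may be rare (here p = 100):
print(to_pseudo_periodic(c=1, d=3, p=100))  # (1, 1, 3): utilization 1/3
```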