Task Allocation and Scheduling • Problem: How to assign tasks to processors and schedule them so that deadlines are met • Our initial focus: uniprocessor task scheduling • Extensions to multiprocessors
Uniprocessor Task Scheduling • Initial Assumptions: • Each task is periodic • Periods of different tasks may be different • Worst-case task execution times are known • Relative deadline of a task is equal to its period • No dependencies between tasks: they are independent • Only resource constraint considered is execution time • No critical sections • Preemption costs are negligible • Tasks must be completed for output to have any value
Standard Scheduling Algorithms • Rate-Monotonic (RM) Algorithm: • Static priority • Higher-frequency tasks have higher priority • Earliest-Deadline First (EDF) Algorithm: • Dynamic priority • Task with the earliest absolute deadline has highest priority
RMA • Task priority is inversely proportional to the task period (directly proportional to task frequency) • At any moment, the processor is either • idle if there are no tasks to run, or • running the highest-priority task available • A lower-priority task can suffer many preemptions • To a task, lower-priority tasks are effectively invisible
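The RM rules above can be sketched as a small unit-step simulation. This is an illustrative sketch, not code from the slides; the task set (C=1, T=4) and (C=2, T=5) is a made-up example.

```python
# A minimal sketch of rate-monotonic scheduling, simulated in unit time steps.
# Each task is an (execution_time, period) pair; shorter period = higher priority.

def rm_timeline(tasks, horizon):
    """Return which task index runs at each time step (None = idle)."""
    remaining = [0] * len(tasks)          # work left in each task's current job
    timeline = []
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:                # a new job is released every period
                remaining[i] = c
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            # RM rule: the ready task with the shortest period wins
            run = min(ready, key=lambda i: tasks[i][1])
            remaining[run] -= 1
            timeline.append(run)
        else:
            timeline.append(None)         # processor idles when nothing is ready
    return timeline

# Task 0 = (C=1, T=4), task 1 = (C=2, T=5):
tl = rm_timeline([(1, 4), (2, 5)], 20)
print(tl[:4])   # [0, 1, 1, None] -- task 0 runs first, being higher priority
print(tl[16])   # 0 -- task 0's release at t=16 preempts task 1's pending job
```

The step at t=16 shows the "many preemptions" point: task 1's job released at t=15 is interrupted mid-execution by the higher-priority release and resumes only afterwards.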
RMA • Example • Schedulability criteria: • Sufficiency condition (Liu & Layland, 1973) • Necessary & sufficient conditions (Joseph & Pandya, 1986; Lehoczky, Sha, Ding 1989)
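The Liu & Layland sufficiency condition is that total utilization U = Σ C_i/T_i not exceed n(2^(1/n) − 1), which tends to ln 2 ≈ 0.693 as n grows. A sketch of the check (the function names are illustrative):

```python
# Sketch of the Liu & Layland (1973) sufficiency test for RM.
# Passing the bound guarantees schedulability; exceeding it is
# inconclusive, not a failure (the test is sufficient, not necessary).

def ll_bound(n):
    return n * (2 ** (1.0 / n) - 1)

def rm_sufficient(tasks):
    """tasks: list of (execution_time, period) pairs."""
    u = sum(c / p for c, p in tasks)
    return u <= ll_bound(len(tasks))

print(round(ll_bound(2), 3))            # 0.828
print(rm_sufficient([(1, 4), (2, 5)]))  # True: U = 0.65 <= 0.828
```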
RMA • Critical Instant of a Task: An instant at which a request for that task will have the largest response time • Critical Time-zone of a Task: Interval between a critical instant of that task and the completion time of that task • Critical Instant Theorem: Critical instant of a task T_i occurs whenever T_i arrives simultaneously with all higher-priority tasks
RMA: Schedulability Check • The Critical Instant Theorem leads to a schedulability check: • If a task is released at the same time as all tasks of higher priority and it meets its deadline, then it will meet its deadline under all circumstances
RMA: Schedulability Test • If a task is released simultaneously with all higher-priority tasks, determine when it will be done • If this completion time is no later than this task’s deadline, we have succeeded with this task • Find a systematic procedure to turn this process into a necessary-and-sufficient schedulability check
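The systematic procedure alluded to above is response-time analysis (the Joseph & Pandya test): iterate W = C_i + Σ_{j∈hp(i)} ⌈W/T_j⌉·C_j to a fixed point, starting from a simultaneous release at the critical instant. A sketch, assuming tasks are given highest-priority first:

```python
import math

# Sketch of response-time analysis, the necessary-and-sufficient RM test.
# tasks must be sorted highest-priority first (increasing period under RM);
# each entry is (execution_time, period), with deadline = period.

def worst_case_response(tasks, i):
    c_i, t_i = tasks[i]
    w = c_i
    while True:
        # interference from each higher-priority task j released in [0, w)
        w_next = c_i + sum(math.ceil(w / t_j) * c_j for c_j, t_j in tasks[:i])
        if w_next == w:
            return w                # fixed point: worst-case response time
        if w_next > t_i:
            return None             # exceeds the deadline: unschedulable
        w = w_next

ts = [(1, 4), (2, 5), (2, 20)]
print([worst_case_response(ts, i) for i in range(3)])   # [1, 3, 8]
```

The set is schedulable because each task's fixed point is no later than its period, even though that would not follow from the utilization bound alone.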
RMA: Schedulability • Start with a single-task set and obtain its schedulability conditions • Extend this to a two-task set • Exploit any intuition gained to generalize this
Earliest Deadline First (EDF) • Same assumptions as before • This is a dynamic-priority algorithm: the relative priorities of tasks can change with time • The task with the earliest absolute deadline has the processor • Schedulability Test: Total utilization of the task set must not exceed 1.
EDF • Lemma 3.8: If a deadline is missed for the first time at some time t_miss, the processor must have been continuously busy over [0,t_miss]. • Theorem 3.11: A task set is schedulable iff its total utilization is no greater than 1.
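The utilization test of Theorem 3.11 can be illustrated with a small unit-step simulation; the code structure here is a sketch, not from the text. The example set (C=2, T=4) and (C=3, T=6) has U = 1.0 exactly, so EDF schedules it, while under RM the longer-period task would have worst-case response 7 > 6 and miss.

```python
# Sketch of EDF in unit time steps. Each released job carries its absolute
# deadline; the pending job with the earliest deadline runs.

def edf_run(tasks, horizon):
    """Simulate; raise AssertionError if any job misses its deadline."""
    jobs = []                                    # [abs_deadline, remaining_work]
    for t in range(horizon):
        for c, p in tasks:
            if t % p == 0:
                jobs.append([t + p, c])          # relative deadline = period
        ready = [j for j in jobs if j[1] > 0]
        if ready:
            min(ready, key=lambda j: j[0])[1] -= 1   # earliest deadline runs
        for j in jobs:
            assert j[1] == 0 or j[0] > t + 1, "deadline missed"
    return True

print(2/4 + 3/6)                       # 1.0 -- at the EDF utilization limit
print(edf_run([(2, 4), (3, 6)], 12))   # True: no misses over the hyperperiod
```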
Critical Sections • Remove the assumption that tasks can be preempted at any time • If a task is within a critical section of code • It may be preempted • However, until that task finishes executing that critical section, no other task can enter it (irrespective of its priority) • Obvious effect: Some higher-priority tasks which also need to enter a critical section will have to wait • Less obvious effect: Priority-inversion can occur
Example (from J. W. Liu, Real-Time Systems, Prentice-Hall, 2000)
Critical Sections (contd.) (from J. W. Liu, op. cit.)
Priority Inheritance Protocol • Key feature is the priority inheritance rule: • When a higher-priority task A gets blocked due to resource R by a lower-priority task B, B inherits the priority of A. • When B releases R, the priority of B reverts to the value it held before it inherited the priority of A. • Priority inheritance is transitive.
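The inheritance and revert rules can be sketched in a few lines. This is a toy model of the rule only (no real locking; task names and the larger-number-is-higher-priority convention are illustrative), and it handles the single-level case; nested critical sections would need a stack of saved priorities, per the "reverts to the value it held before" rule.

```python
# Toy sketch of the priority-inheritance rule (larger number = higher priority).

class Task:
    def __init__(self, name, prio):
        self.name = name
        self.base_prio = prio      # assigned (e.g., rate-monotonic) priority
        self.prio = prio           # current effective priority

def block_on(blocked, holder):
    """'blocked' waits on a resource 'holder' owns: holder inherits its priority."""
    holder.prio = max(holder.prio, blocked.prio)

def release(holder):
    """On releasing the resource, the holder's priority reverts."""
    holder.prio = holder.base_prio

low, high = Task("B", 1), Task("A", 3)
block_on(high, low)
print(low.prio)    # 3 -- B now runs at A's priority, bounding the inversion
release(low)
print(low.prio)    # 1
```

While B holds A's priority, no medium-priority task can preempt B, which is exactly what prevents unbounded priority inversion.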
Priority Ceiling Protocol • The priority ceiling of any resource is the highest priority of all the tasks requiring that resource. • The current priority ceiling of the system is the highest priority ceiling of the resources currently locked. • A task that requires no critical section resources proceeds according to the traditional approach
When task A requests resource R, • If R is held by another task, it is blocked. • If R is free, • If A’s priority is greater than the current system priority ceiling, A is granted access to R • If A’s priority is not greater than the current system priority ceiling, then it is blocked unless A holds resources whose priority ceiling equals the system priority ceiling. • Blocking tasks inherit the priority of the tasks they block (as in the priority inheritance protocol)
The priority ceiling protocol: • Prevents deadlocks from ever occurring • Ensures that no task can be blocked for more than the duration of one critical section
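The access rule can be sketched as a predicate over the currently locked resources. Resource and task names here are made up for illustration, and the formulation (compare against the ceilings of resources held by *other* tasks) is an equivalent restatement of "blocked unless the task itself holds the resource whose ceiling equals the system ceiling":

```python
# Sketch of the priority-ceiling access rule (larger number = higher priority).
# ceiling[r] = highest priority of any task that ever uses resource r.

def pcp_grant(task, prio, resource, locked, ceiling):
    """locked maps each currently held resource to its holder task."""
    if resource in locked:
        return False                       # resource busy: requester blocks
    others = [r for r in locked if locked[r] != task]
    if not others:
        return True                        # task itself holds every ceiling resource
    system_ceiling = max(ceiling[r] for r in others)
    return prio > system_ceiling           # must exceed ceilings held by others

# Both R1 and R2 are used by the priority-3 task, so both ceilings are 3.
ceiling = {"R1": 3, "R2": 3}
print(pcp_grant("T3", 1, "R2", {}, ceiling))            # True: nothing locked
print(pcp_grant("T1", 3, "R1", {"R2": "T3"}, ceiling))  # False: 3 is not > 3
```

The second call is the deadlock-prevention case: R1 is free, yet T1 is denied because T3 already holds a resource T1 may also need; T1 blocks once, before acquiring anything.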
Example (from J. W. Liu, op. cit.)
Properties of the Ceiling Protocol • Deadlock is not possible • Transitive blocking does not occur, i.e., a task which blocks another task cannot itself be blocked. • Each task can be blocked for the duration of at most one critical section. • The longest critical section provides a bound on the blocking time.
IRIS Tasks • IRIS = Increased Reward for Increased Service • Also called “imprecise” tasks • Consist of: • Mandatory portion, which has to be executed • Optional portion • Reward function linking the execution time to resulting quality of output • Examples: Search and numerical algorithms
Identical Linear Reward Functions • If the mandatory portion of every task is zero, EDF is optimal, i.e., it yields a maximal reward. • If the mandatory portion of at least one task is non-zero, the problem becomes more complex • See Algorithm IRIS1 on page 99
Non-identical Linear Rewards • Basic Idea: • Check if the mandatory portions can be scheduled. If not, then give up • Otherwise, keep augmenting the task set with optional portions of tasks in descending order of weights, and running IRIS1 on them
Identical Concave Rewards • Captures the property of diminishing returns seen in many iterative algorithms • Consider here tasks with zero mandatory portions • Tactic: Ensure that the optional time given to each task is as equal as possible • Example: Aperiodic tasks. • Start from the end of the schedule & work backwards
Sporadic Tasks • Under EDF, simply use the sporadic task's deadline to determine its priority • Under RM, create a "sporadic server": a periodic task that acts as a placeholder for sporadic tasks • There are several obvious ways to manage the sporadic server
Task Assignment • Scheduling tasks on a multiprocessor is generally an NP-complete problem • Traditional heuristics do it in two steps: • Assign or allocate tasks to processors • Use a uniprocessor scheduling algorithm to schedule tasks assigned to each processor • Do this iteratively, if necessary
Assignment Algorithms • Bin packing: • First fit • Best fit
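Both heuristics can be sketched by packing task utilizations (C/T) into processors treated as bins. This sketch uses bin capacity 1.0 (the EDF utilization bound) for illustration; an RM-based variant would substitute a tighter per-bin schedulability test, and the example utilizations are made up.

```python
# Sketch of first-fit and best-fit task assignment by utilization.

def first_fit(utils, capacity=1.0):
    procs = []
    for u in utils:
        for p in procs:                      # scan processors in fixed order
            if sum(p) + u <= capacity:
                p.append(u)
                break
        else:
            procs.append([u])                # open a new processor
    return procs

def best_fit(utils, capacity=1.0):
    procs = []
    for u in utils:
        feasible = [p for p in procs if sum(p) + u <= capacity]
        if feasible:                          # tightest remaining capacity wins
            min(feasible, key=lambda p: capacity - sum(p)).append(u)
        else:
            procs.append([u])
    return procs

tasks = [0.7, 0.5, 0.3, 0.2, 0.1]
print(first_fit(tasks))     # [[0.7, 0.3], [0.5, 0.2, 0.1]] -- two processors
print(len(best_fit(tasks))) # 2
```

Each resulting bin is then handed to the uniprocessor scheduler, per the two-step scheme above.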
Fault-Tolerant Scheduling • Fault Tolerance: The ability of a system to suffer component failures and still function adequately • Fault-Tolerant Scheduling: Reserve enough spare time in a schedule that the system can still function despite a certain number of processor failures
FT-Scheduling: Model • System Model • Multiprocessor system • Each processor has its own memory • Tasks are preloaded into assigned processors • Task Model • Tasks are independent of one another • Schedules are created ahead of time
Basic Idea • Preassign backup copies, called ghosts. • Assign ghosts to the processors along with the primary copies • A ghost and a primary copy of the same task can’t be assigned to the same processor • For each processor, all the primaries and a particular subset of the ghost copies assigned to it should be feasibly schedulable on that processor
Requirements • Two main variations: • Current and future iterations of the task have to be saved if a processor fails • Only future iterations need to be saved; the current iteration can be discarded