1. Real-Time Scheduling
2. Contents Introduction
Cyclic scheduling
Rate monotonic scheduling of periodic independent tasks
Rate monotonic scheduling of other tasks
3. Why Schedule? The purpose of a real-time scheduling algorithm is to ensure that critical timing constraints, such as deadlines and response time, are met.
When necessary, decisions are made that favor the most critical timing constraints, even at the cost of violating others.
Real-time scheduling is also used to allocate processor time between tasks in soft real-time embedded systems.
4. Contents Problem definition
Models
Schedules and schedulers
Task execution strategies
5. Problem definition Each computation we want to execute needs resources
Resources: processor, memory segments, communication links, I/O devices, etc.
Computations must be executed in a particular order (relative to each other and/or relative to time)
The possible ordering is either completely or statistically known a priori (described)
Scheduling: assignment of processor to computations;
Allocation: assignment of other resources to computations;
6. Example 1 Resources: 1 processor
A priori known ordering of computation
C1: every 15 milliseconds, starting at time 0, until computation B1 happens; C1 takes 5 milliseconds
C2: every 5 seconds, starting at time 0, forever; C2 takes 2 seconds
C3: after each C2, forever; C3 takes 1 second
C4: between two consecutive C1s; C4 takes 10 msec
Computations can be interrupted at any time;
Each computation must be completed (deadline) before the same computation is repeated
7. Questions: Can this be scheduled?
If so, how? (Give an example of a schedule for the above computations.)
Is there a feasible schedule at all?
Answer: no, there is no feasible schedule on a single processor (see the utilization check below)
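The answer follows from a utilization argument: summing Ci/Pi over the four computations gives 1.6, more than one processor can supply. The C sketch below is illustrative only; it assumes C3 effectively repeats with C2's 5 s period and C4 with C1's 15 ms period, as the ordering constraints imply.

```c
#include <stdio.h>

/* Utilization check for Example 1 (all times in milliseconds).
 * Assumption: C3 inherits C2's 5 s period and C4 repeats with
 * C1's 15 ms period, as implied by the ordering constraints. */
int main(void) {
    const double c[] = {5.0, 2000.0, 1000.0, 10.0};   /* execution times */
    const double p[] = {15.0, 5000.0, 5000.0, 15.0};  /* periods         */
    double u = 0.0;
    for (int i = 0; i < 4; i++)
        u += c[i] / p[i];
    printf("Total utilization U = %.2f\n", u);        /* 1.60 */
    printf(u > 1.0 ? "U > 1: no feasible schedule on one processor\n"
                   : "U <= 1: a schedule may exist\n");
    return 0;
}
```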
8. Example 2 Resources: 2 processors, each processor can execute any computation;
Ordering: a priori known ordering of computations (see example 1)
Schedule:
9. Scheduling model Model consists of (R, C, S)
R: Resource model (which resources are available)
C: Computation model (what computations (order, duration etc.) have to be performed)
S: Schedule (How will the computations be performed: assignment (in time) of computations to resources)
10. Resource models single processor
multiple-processor
multi-processor
11. Single Processor Resource Model One processor
All computation and inter-task communication take place on that processor
12. Multiple-processor Resource Model The system consists of a number of equivalent and identical processors,
i.e. a task can be executed on any processor and the computation time on each processor is identical
The communication costs (time to send a message) are equal for each communication path
13. Multi-processor Resource Model The system consists of a number of non-equivalent processors
i.e. a task can be executed only on certain processor(s) and the computation time on each processor can be different
The communication costs (time to send a message) are different for each communication path
14. Computation Models Independent periodic tasks
Independent periodic or asynchronous tasks
Dependent periodic or asynchronous tasks
15. Computation Models Computation model: (V, O)
Unit of computation is a task
V: set of computations; here V is a set of tasks
E: set of events that cause tasks to execute; if an event e causes a task t to be executed (or scheduled for execution), this is denoted as e → t
O is ordering of tasks and events: the ordering-precedence relation
16. Ordering Ordering with respect to
other tasks,
time (events);
Ordering can be specified using different specification techniques, e.g. dataflow graph models, PERT charts, or RTL (Real-Time Logic)
17. Independent Periodic tasks Computation model: (Vt, Ot)
Vt: set of tasks = {Ti | 0 ≤ i ≤ m}
Ot: {<Ti, (bi, ci, Pi, di)>}
tasks Ti are independent of each other
tasks are released for execution (ready to be executed) regularly at a given, a priori known frequency fi (i.e. with period Pi = 1/fi), starting after some initial delay bi
a task Ti needs ci time units to execute
each task must be completed before the new incarnation of the same task is started (deadline di=Pi)
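As one concrete reading of this model, the C struct below is a possible encoding of the tuple <Ti, (bi, ci, Pi, di)>; the field names and the sample instance are illustrative only.

```c
/* One possible encoding of the tuple <Ti, (bi, ci, Pi, di)> for an
 * independent periodic task; field names are illustrative only. */
typedef struct {
    unsigned int b;   /* initial delay before the first release        */
    unsigned int c;   /* worst-case execution time                     */
    unsigned int P;   /* period: a new incarnation every P time units  */
    unsigned int d;   /* relative deadline; in this model d == P       */
} periodic_task;

/* Example: Task 1 of the later RM example (C1 = 25 ms, T1 = 50 ms). */
static const periodic_task t1 = { .b = 0, .c = 25, .P = 50, .d = 50 };
```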
18. Independent Periodic or Asynchronous tasks Vt: set of tasks = Tperiodic ∪ Tasynchronous
Ot: {<Ti, (bi, ci, fi, di)>}
tasks are independent of each other (no synchronisation, sharing of devices etc.)
the set of tasks can be partitioned into two disjoint sets:
periodic: periodic tasks (see previous model)
asynchronous: asynchronous tasks, each of which has to be executed irregularly but with a bounded frequency; fi is then the maximum bound
each task must be completed before the new incarnation of the same task becomes ready for execution
19. Dependent tasks
Vt set of tasks = {Ti}
Ot
tasks may share data, devices, etc.; they contain critical regions
20. Schedules Schedule: V → (Lp, Time), i.e. for each task in V, the time(s) at which it is to be executed and the processor lp on which it runs
Execution model: pre-emptive, non-pre-emptive execution
Feasible schedule: a schedule that satisfies the prescribed ordering of tasks
Often there is no feasible schedule
Sometimes it can be a priori decided whether there is a feasible schedule (given a resource and computation model)
21. Schedulers Scheduler: an algorithm that computes a schedule;
off-line schedulers
on-line schedulers
Schedulers are often specific to a computation and resource model (e.g. single-processor, independent periodic task schedulers, etc.)
The quality of a scheduler can be characterised by:
does it generate a feasible schedule whenever one exists? Such a scheduler is called optimal
how complex is it to generate a schedule (computation and/or memory requirements)?
22. Schedulers Meta-scheduler questions: for a given resource and computation model is there an optimal scheduler, etc?
Schedulers have to realise the required temporal behaviour of the tasks
There are different basic strategies of task execution (task execution strategy): run to completion, pre-emption etc.
23. Task Execution Strategies Task execution is triggered by
Internal events, such as timers → regular, periodic task execution. This leads to cyclic scheduling (regular, periodic tasks): synchronous task execution
External events → irregularly scheduled tasks, that is, multitasking: asynchronous task execution
24. Typical Problem for Cyclic Scheduler Periodic sampling
Many periodic sampling actions, with different periods
Many periodic sampling actions, with different periods and different time offsets relative to each other
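A cyclic scheduler is commonly implemented as a cyclic executive: a timer fires every minor cycle and the loop decides which sampling actions run in each slot. The skeleton below is a minimal sketch under assumed periods (10 ms and 30 ms); the stub functions and the tick primitive stand in for whatever the target system provides.

```c
#include <stdio.h>

/* Minimal cyclic-executive sketch. The periods, slot layout and the
 * stub functions below are assumptions for illustration, not any
 * particular RTOS API. */
#define MINOR_MS    10             /* base (minor) cycle: 10 ms        */
#define MAJOR_SLOTS 3              /* major cycle: 3 * 10 ms = 30 ms   */

static void sample_fast(void) { puts("fast sample"); }  /* every 10 ms */
static void sample_slow(void) { puts("slow sample"); }  /* every 30 ms */
static void wait_for_tick(void) { /* block until the next timer tick  */ }

int main(void) {
    unsigned int slot = 0;
    for (;;) {
        wait_for_tick();           /* released every MINOR_MS          */
        sample_fast();             /* period = minor cycle             */
        if (slot == 1)             /* slot offsets handle phase shifts */
            sample_slow();         /* period = major cycle             */
        slot = (slot + 1) % MAJOR_SLOTS;
    }
}
```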
25. Multitasking Concurrent execution
on multiple processors, or
time-sharing of one processor
Task priorities: urgency of a task
Task deadlines
26. Task States State changes are triggered by events
External events, such as hardware interrupts
Timing events: delays, period;
Internal events: errors, exceptions
Program events: calls to operating system
27. Multi-tasking Operating System Executive implements:
task state transitions
memory management, etc.;
System tasks implement
I/O operations;
file system, etc.
29. Scheduling Algorithms Static table-driven approaches;
Off-line analysis of feasible schedules --> table that determines when a task must begin execution
Multitasking-oriented
Static priority-driven pre-emptive approaches;
off-line analysis --> priorities are assigned; a priority-based pre-emptive scheduler is used
Dynamic planning-based approaches;
feasibility is determined at run-time. On arrival a task is accepted for execution only if the time constraint can be met
30. Scheduling Algorithms Dynamic best-effort approaches;
No feasibility analysis is performed. The system tries to meet all deadlines and aborts any process whose deadline has been missed.
31. Deadline Scheduling Deadline scheduling
Process must complete by specific time
Used when results would be useless if not delivered on time
Difficult to implement
Must plan resource requirements in advance
Incurs significant overhead
Service provided to other processes can degrade
32. Real-Time Scheduling Real-time scheduling
Related to deadline scheduling
Processes have timing constraints
Also encompasses tasks that execute periodically
Two categories
Soft real-time scheduling
Does not guarantee that timing constraints will be met
For example, multimedia playback
Hard real-time scheduling
Timing constraints will always be met
Failure to meet deadline might have catastrophic results
For example, air traffic control
33. Real-Time Scheduling Static real-time scheduling
Does not adjust priorities over time
Low overhead
Suitable for systems where conditions rarely change
Often used by hard real-time schedulers
34. Common Real-Time Scheduling Rate-monotonic (RM) scheduling
Process priority increases monotonically with the frequency with which it must execute
Deadline RM scheduling
Useful for a process that has a deadline that is not equal to its period
35. Dynamic Real-Time Scheduling Dynamic real-time scheduling
Adjusts priorities in response to changing conditions
Can incur significant overhead, but must ensure that the overhead does not result in increased missed deadlines
36. Dynamic Real-time Scheduling Priorities are usually based on processes’ deadlines
Earliest-deadline-first (EDF)
Preemptive, always dispatch the process with the earliest deadline
Minimum-laxity-first
Similar to EDF, but bases priority on laxity, which is based on the process’s deadline and its remaining run-time-to-completion
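For illustration, laxity at time t can be computed as absolute deadline minus t minus remaining execution time; minimum-laxity-first then dispatches the ready process with the smallest value. A small sketch with hypothetical types and names:

```c
/* Laxity = absolute deadline - current time - remaining work.
 * The struct and names are illustrative, not a specific API. */
typedef struct {
    long deadline;    /* absolute deadline                 */
    long remaining;   /* remaining run-time to completion  */
} process;

static long laxity(const process *p, long now) {
    return p->deadline - now - p->remaining;
}

/* Minimum-laxity-first: pick the ready process with least laxity. */
int pick_mlf(const process ready[], int n, long now) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (laxity(&ready[i], now) < laxity(&ready[best], now))
            best = i;
    return best;
}
```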
37. Rate Monotonic Scheduling The rate monotonic algorithm (RMA) is a procedure for assigning fixed priorities to tasks to maximize their "schedulability."
A task set is considered schedulable if all tasks meet all deadlines all the time.
38. Rate Monotonic Scheduling Assign the priority of each task according to its period, so that the shorter the period the higher the priority
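In code, this rule amounts to sorting tasks by period and handing out priorities in that order. The helper below is a sketch (priority 0 is taken as the highest; the struct and names are illustrative).

```c
#include <stdlib.h>

/* Illustrative rate-monotonic priority assignment:
 * shorter period -> higher priority (0 = highest here). */
typedef struct {
    unsigned int period;
    unsigned int priority;
} rm_task;

static int by_period(const void *a, const void *b) {
    const rm_task *x = a, *y = b;
    return (x->period > y->period) - (x->period < y->period);
}

void assign_rm_priorities(rm_task tasks[], size_t n) {
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (size_t i = 0; i < n; i++)
        tasks[i].priority = (unsigned int)i;  /* 0 = shortest period */
}
```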
39. Example Consider a system with only two tasks, which we'll call Task 1 and Task 2.
Assume these are both periodic tasks with periods T1 and T2, and each has a deadline that is the beginning of its next cycle.
Task 1 has T1 = 50 ms, and a worst-case execution time of C1 = 25 ms.
Task 2 has T2 = 100 ms and C2 = 40 ms.
40. Example (2) Note that the utilization, Ui, of task i is Ci/Ti. Thus U1 = 50% and U2 = 40%.
This means total requested utilization
U = U1 + U2 = 90%.
It seems logical that if utilization is less than 100%, there should be enough available CPU time to execute both tasks.
41. Example (3) Let's consider a static priority scheduling algorithm. With two tasks, there are only two possibilities:
Case 1: Priority(t1) > Priority(t2)
Case 2: Priority(t1) < Priority(t2)
In Case 1, both tasks meet their respective deadlines
In Case 2, however, Task 1 misses a deadline, despite 10% idle time. This illustrates the importance of priority assignment.
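The two cases can be replayed with a small millisecond-stepped simulation of fixed-priority pre-emptive scheduling (a sketch, not any particular RTOS): with T1 = 50 ms, C1 = 25 ms and T2 = 100 ms, C2 = 40 ms it reports all deadlines met in Case 1 and a missed Task 1 deadline at t = 50 in Case 2.

```c
#include <stdio.h>

/* Millisecond-stepped simulation of two periodic tasks under
 * fixed-priority pre-emptive scheduling (illustrative sketch).
 * Deadlines equal periods, as in the example. */
static void simulate(int hi_is_task1) {
    const int T[2] = {50, 100}, C[2] = {25, 40};
    int rem[2] = {C[0], C[1]};      /* remaining work of the current job */
    int misses = 0;

    for (int t = 0; t < 200; t++) {
        /* job releases and deadline checks at period boundaries */
        for (int i = 0; i < 2; i++) {
            if (t > 0 && t % T[i] == 0) {
                if (rem[i] > 0) {
                    misses++;
                    printf("Task %d missed its deadline at t=%d\n", i + 1, t);
                }
                rem[i] = C[i];      /* release the next job */
            }
        }
        /* run the highest-priority ready task for 1 ms */
        int first  = hi_is_task1 ? 0 : 1;
        int second = hi_is_task1 ? 1 : 0;
        if (rem[first] > 0)       rem[first]--;
        else if (rem[second] > 0) rem[second]--;
    }
    if (misses == 0) printf("All deadlines met\n");
}

int main(void) {
    printf("Case 1 (Task 1 higher priority):\n"); simulate(1);
    printf("Case 2 (Task 2 higher priority):\n"); simulate(0);
    return 0;
}
```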
42. Example (4)
43. RM Example In the example, the period of Task 1 is shorter than the period of Task 2.
Following RM's rule, we assign the higher priority to Task 1. This corresponds to Case 1 in Figure 1, which is the priority assignment that succeeded in meeting all deadlines.
RM is the optimal static-priority algorithm
If a task set cannot be scheduled using the RM algorithm, it cannot be scheduled using any static-priority algorithm.
44. Limitation of RM Scheduling One major limitation of fixed-priority scheduling is that it is not always possible to fully utilize the CPU.
Even though RM is the optimal fixed-priority scheme, it has a worst-case schedule bound of:
Wn = n * (2^(1/n) - 1)
where n is the number of tasks in a system
As you would expect, the worst-case schedulable bound for one task is 100%.
But, as the number of tasks increases, the schedulable bound decreases, eventually approaching its limit of about 69.3% (ln 2).
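The bound is easy to evaluate; the sketch below compares total utilization against n * (2^(1/n) - 1) for the example's two tasks. Note that this Liu and Layland test is sufficient only: the example set sits at U = 90%, above the two-task bound of about 82.8%, yet was shown above to be schedulable, so a result above the bound is inconclusive rather than a failure.

```c
#include <stdio.h>
#include <math.h>

/* Sufficient rate-monotonic test: schedulable if U <= n * (2^(1/n) - 1). */
static int rm_bound_test(const double c[], const double t[], int n) {
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += c[i] / t[i];
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound(%d) = %.3f\n", u, n, bound);
    return u <= bound;   /* 1 = schedulable by RM, 0 = test inconclusive */
}

int main(void) {
    const double c[] = {25.0, 40.0}, t[] = {50.0, 100.0};
    if (!rm_bound_test(c, t, 2))
        printf("Above the bound: the sufficient test is inconclusive\n");
    return 0;
}
```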
45. Limitation of RM Scheduling RM scheduling is optimal with respect to maximum utilization over all static-priority schedulers.
However, this scheduling policy only supports tasks that fit the periodic task model, since priorities depend upon request periods.
Because the request times of aperiodic tasks are not always predictable, these tasks are not supported by the RM algorithm
These are instead typically scheduled using a dynamic priority scheduler such as EDF.
46. EDF Scheduling The EDF scheduling algorithm is a preemptive and dynamic priority scheduler
At each invocation of the scheduler, the remaining time before its next deadline is calculated for each waiting task, and the task with the least remaining time is dispatched.
If a task set is schedulable, the EDF algorithm results in a schedule that achieves optimal resource utilization.
EDF is useful for scheduling aperiodic tasks, since the dynamic priorities of tasks do not depend on the determinism of request periods.
47. EDF Scheduling Tasks with the least time remaining before their deadline are executed before tasks with more remaining time.
However, EDF is shown to be unpredictable if the required utilization exceeds 100%, known as an overload condition.
48. EDF Example Two tasks:
T1= (3, 4, 4)
T2 = (2, 8, 8)
Initially T1 starts because it has the earliest deadline (d=4)
At t=4, T1 interrupts T2 (why?)
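A time-stepped sketch of the dispatcher for this task set reproduces the timeline: T1 runs at t = 0, T2 at t = 3, and the newly released T1 job takes over at t = 4. It assumes the triples are (execution time, period, relative deadline) and that deadline ties are broken in favour of T1.

```c
#include <stdio.h>

/* EDF dispatch trace for T1 = (3, 4, 4) and T2 = (2, 8, 8).
 * Triples read as (execution time, period, relative deadline);
 * deadline ties are broken in favour of the lower task index. */
int main(void) {
    const int C[2] = {3, 2}, P[2] = {4, 8};
    int rem[2] = {0, 0}, dl[2] = {0, 0};
    int last = -1;

    for (int t = 0; t < 16; t++) {
        for (int i = 0; i < 2; i++)
            if (t % P[i] == 0) {            /* new job released      */
                rem[i] = C[i];
                dl[i]  = t + P[i];          /* absolute deadline     */
            }
        int pick = -1;                      /* earliest deadline wins */
        for (int i = 0; i < 2; i++)
            if (rem[i] > 0 && (pick < 0 || dl[i] < dl[pick]))
                pick = i;
        if (pick >= 0) {
            if (pick != last)               /* report context switches */
                printf("t=%2d: run T%d (deadline %d)\n", t, pick + 1, dl[pick]);
            rem[pick]--;
            last = pick;
        }
    }
    return 0;
}
```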
49. EDF Example
50. EDF in RTOS Earliest deadline first (EDF) scheduling is a dynamic scheduling principle used in real-time operating systems (RTOS)
It places processes in a priority queue
Whenever a scheduling event occurs (task finishes, new task released, etc.) the queue will be searched for the task closest to its deadline.
This task will then be scheduled for execution next.
51. EDF Schedule Guarantee With scheduling periodic processes that have deadlines equal to their periods, EDF has a utilization bound of 100%
That is, EDF can guarantee that all deadlines are met provided that the total CPU utilization is not more than 100%.
So, compared to fixed priority scheduling techniques like rate-monotonic scheduling, EDF can guarantee all the deadlines in the system at higher loading.
52. EDF Example Consider 3 periodic processes scheduled using EDF, the following acceptance test shows that all deadlines will be met.
Process Ci Pi
P1 1 8
P2 2 5
P3 4 10
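For deadlines equal to periods, the acceptance test reduces to checking that the sum of Ci/Pi does not exceed 1. For the table above: 1/8 + 2/5 + 4/10 = 0.925 ≤ 1, so the set is accepted. A minimal sketch:

```c
#include <stdio.h>

/* EDF acceptance test for deadline-equals-period tasks:
 * accept iff the total utilization does not exceed 1. */
int main(void) {
    const double C[] = {1.0, 2.0, 4.0};   /* execution times */
    const double P[] = {8.0, 5.0, 10.0};  /* periods         */
    double u = 0.0;
    for (int i = 0; i < 3; i++)
        u += C[i] / P[i];
    printf("U = %.3f -> %s\n", u, u <= 1.0 ? "accepted" : "rejected");
    return 0;
}
```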
53. RTOS Most commercial real-time operating systems (RTOSes) employ a priority-based preemptive scheduler
These systems assign each task a unique priority level. The scheduler ensures that of those tasks that are ready to run, the one with the highest priority is always the task that is actually running
To meet this goal, the scheduler may preempt a lower-priority task in mid-execution.
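With a Real-Time POSIX 1003.1 style API (referenced later in the deck), creating a task at a fixed priority under the pre-emptive SCHED_FIFO policy looks roughly like the sketch below; task_body is a hypothetical thread function and error handling is omitted for brevity.

```c
#include <pthread.h>
#include <sched.h>

/* Sketch: create a thread under the pre-emptive, fixed-priority
 * SCHED_FIFO policy (POSIX 1003.1 real-time extensions).
 * task_body and the chosen priority are illustrative. */
static void *task_body(void *arg) { (void)arg; return NULL; }  /* stub */

pthread_t create_rt_task(int priority) {
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = priority };

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);

    pthread_create(&tid, &attr, task_body, NULL);
    pthread_attr_destroy(&attr);
    return tid;
}
```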
54. Priority and Resource Sharing Tasks need to share resources to communicate and process data
This aspect of multi-threaded programming is not specific to real-time or embedded systems.
Any time two tasks share a resource, such as a memory buffer, in a system that employs a priority-based scheduler, one of them will usually have a higher priority.
55. Priority and Resource Sharing The higher-priority task expects to be run as soon as it is ready
However, if the lower-priority task is using their shared resource when the higher-priority task becomes ready to run, the higher-priority task must wait for the lower-priority task to finish with it
We say that the higher-priority task is pending on the resource.
56. Priority and Resource Sharing If the higher-priority task has a critical deadline that it must meet, the worst-case "lockout time" for all of its shared resources must be calculated and taken into account in the design
If the cumulative lockout times are too long, the resource-sharing scheme must be redesigned.
57. Priority and Resource Sharing Since worst-case delays resulting from the sharing of resources can be calculated at design time, the only way they can affect the performance of the system is if no design accounts for them.
58. Priority Inversion The real problem arises at run-time, when a medium-priority task preempts a lower-priority task using a shared resource on which the higher-priority task is pending
If the higher-priority task is otherwise ready to run, but a medium-priority task is currently running instead, a priority inversion is said to occur.
59. Priority inversion timeline
60. Priority Inversion This dangerous sequence of events is illustrated in the figure
Low-priority Task L and high-priority Task H share a resource
Shortly after Task L takes the resource, Task H becomes ready to run
However, Task H must wait for Task L to finish with the resource, so it pends
Before Task L finishes with the resource, Task M becomes ready to run, preempting Task L
While Task M (and perhaps additional intermediate-priority tasks) runs, Task H, the highest-priority task in the system, remains in a pending state.
61. How Harmful? Many priority inversions are innocuous or, at most, briefly delay a task that should run right away
But from time to time a system-critical priority inversion takes place
Such an event occurred on the Mars Pathfinder mission in July 1997
The Pathfinder mission is best known for the little rover that took high-resolution color pictures of the Martian surface and relayed them back to Earth.
62. Solutions Research on priority inversion has yielded two solutions
The first is called priority inheritance
This technique mandates that a lower-priority task inherit the priority of any higher-priority task pending on a resource they share
This priority change should take place as soon as the high-priority task begins to pend; it should end when the resource is released
This requires help from the operating system.
63. Second Solution The second solution, priority ceilings, associates a priority with each resource; the scheduler then transfers that priority to any task that accesses the resource
The priority assigned to the resource is the priority of its highest-priority user, plus one. Once a task finishes with the resource, its priority returns to normal.
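Real-Time POSIX exposes both solutions as mutex protocols: PTHREAD_PRIO_INHERIT for priority inheritance and PTHREAD_PRIO_PROTECT (with a ceiling value) for priority ceilings. Note that the POSIX ceiling protocol raises the holder to the configured ceiling while it holds the lock, a close relative of the "highest-priority user plus one" scheme described above. A minimal sketch, with error handling omitted:

```c
#include <pthread.h>

/* Sketch: configuring a mutex with each of the two protocols
 * (POSIX 1003.1); error handling is omitted for brevity. */
void init_with_inheritance(pthread_mutex_t *m) {
    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    /* the lower-priority holder inherits the priority of any
     * higher-priority task blocked on the mutex */
    pthread_mutexattr_setprotocol(&a, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(m, &a);
    pthread_mutexattr_destroy(&a);
}

void init_with_ceiling(pthread_mutex_t *m, int ceiling_priority) {
    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    /* any task holding the mutex runs at the ceiling priority */
    pthread_mutexattr_setprotocol(&a, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&a, ceiling_priority);
    pthread_mutex_init(m, &a);
    pthread_mutexattr_destroy(&a);
}
```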
64. Operating Systems
65. Real-Time OSs Real-Time OS: VxWorks, QNX, LynxOS, eCos, DeltaOS, PSX, embOS, ...
GPOS: no support for real-time applications, focus on ‘fairness’.
Users like GPOSs, e.g. Linux, so several real-time variants of Linux exist:
RTLinux (FSMLabs)
KURT (Kansas U.)
Linux/RT (TimeSys)
66. RT OSs Why?
Determinism / Predictability
Ability to meet deadlines
Traditional operating systems are non-deterministic
Standards?
Real-Time POSIX 1003.1
Pre-emptive fixed-priority scheduling
Synchronization methods
Task scheduling options
67. Examples Lynx OS
Microkernel Architecture
Provides scheduling, interrupt, and synchronization support
Real-Time POSIX support
Easy transition from Linux
68. Examples QNX Neutrino
Microkernel Architecture
Add / remove services without reboots
Primary method of communication is message passing between threads
Every process runs in its own protected address space
Protection of system against software failure
“Self-healing” ?
69. Examples VxWorks
Monolithic Kernel
Reduced run-time overhead, but increased kernel size compared to Microkernel designs
Supports Real-Time POSIX standards
Common in industry
Mars missions
Honda ASIMO robot
Switches
MRI scanners
Car engine control systems
70. Examples MARS (Maintainable Real-Time System)
Time driven
No interrupts other than clock
Support for fault-tolerant, redundant components
Static scheduling of hard real-time tasks at predetermined times
Offline scheduling
Primarily a research tool
71. Examples RTLinux
“Workaround” on top of a generic O/S
Generic O/S – optimized for the average-case scenario
RTOS – needs to consider WORST-CASE scenarios to ensure deadlines are met
Dual-kernel approach
Makes Linux a low-priority pre-emptable thread running on a separate RTLinux kernel
Tradeoff between determinism of pure real-time O/S and flexibility of conventional O/S
Periodic tasks only