
G572 Real-time Systems

This presentation discusses the use of Rate Monotonic Analysis (RMA) for analyzing and scheduling real-time systems. It covers topics such as task priority allocation, scheduling protocols, implementation schemes, and factors affecting deadline misses. It also provides examples and insights on achieving 100% utilization and minimizing preemption effects.


Presentation Transcript


  1. G572 Real-time Systems Rate Monotonic Analysis Xinrong Zhou (xzhou@abo.fi)

  2. RMA • RMA is a quantitative method that makes it possible to analyze whether a system can be scheduled • With the help of RMA it is possible to: • select the best task priority allocation • select the best scheduling protocol • select the best implementation scheme for aperiodic activities • If the RMA rules are followed mechanically, an optimal implementation (one where all hard deadlines are met) is reached with the highest probability

  3. RMA • System model: • the scheduler selects tasks by priority • if a task with a higher priority becomes available, the currently running task is interrupted and the higher-priority task runs (preemption) • Rate Monotonic: the priority is a monotonic function of the frequency (rate) of a periodic process

  4. Contents • Under what conditions can a system be scheduled when priorities are allocated by RMA? • Periodic tasks • Blocking • Random deadlines • Aperiodic activities

  5. Why are deadlines missed? • Two factors: • the amount of computation capacity that is globally available, and • how this capacity is used • Factor 1 is easy to quantify: • MIPS • maximum bandwidth of the LAN • Factor 2 includes: • the processing capacity used by the operating system • the distribution of capacity between the tasks • RMA goals: • optimize the distribution in such a way that deadlines will be met, • or ”provide for graceful degradation”

  6. Utilization • The completion of task Ti depends on: • preemption by tasks with higher priority • the utilization ci/Ti of each higher-priority task describes its relative load • its own running time ci • blocking by tasks with lower priority • lower-priority tasks may hold critical resources
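
The utilization formula on this slide is not reproduced in the transcript; the standard definition, with ci the worst-case execution time and Ti the period of task Ti, is:

    U = \sum_{i=1}^{n} \frac{c_i}{T_i}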

  7. Example • Total utilization U = 88.39% • [Timeline figure: one task misses its deadline; time marks at 10, 20 and 23]

  8. Harmonic periods help • An application is harmonic if every task's period exactly divides each longer period • Total utilization U = 88.32% • [Timeline figure: all deadlines are met; time marks at 10, 20, 30 and 36]
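
For a harmonic task set the rate-monotonic utilization bound rises to 100%, i.e. the schedulability condition becomes simply

    \sum_{i=1}^{n} \frac{c_i}{T_i} \le 1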

  9. Liu & Layland • Assumptions: • every task can be interrupted (preemption) • tasks can be ordered by inverse period: pr(Ti) < pr(Tj) iff Pj < Pi • every task is independent and periodic • each task's relative deadline equals its period • only one iteration (job) of each task is active at a time!

  10. Liu & Layland • When can a group of tasks be scheduled by RMA? • RMA 1: if the total utilization of the tasks does not exceed the bound U(n), the tasks can be scheduled so that every deadline is met • e.g. as the number of tasks increases, the bound approaches ln 2, so the processor may be idle about 31% of the time!
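
The RMA 1 bound referred to here is the Liu and Layland utilization bound:

    \sum_{i=1}^{n} \frac{c_i}{T_i} \;\le\; U(n) = n\left(2^{1/n} - 1\right), \qquad \lim_{n \to \infty} U(n) = \ln 2 \approx 0.693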

  11. Variation of U(n) with n • [Graph of the bound U(n) as a function of the number of tasks n]
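
The graph itself is not reproduced in the transcript; the plotted bound U(n) = n(2^(1/n) − 1) takes the following values:

    n      1      2      3      4      5     ...    ∞
    U(n)   1.000  0.828  0.780  0.757  0.743  ...    0.693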

  12. Example • In the example below we get utilization 6/15 + 4/20 + 5/30 = 0.767 • because U(3) = 0.780, the application can be scheduled
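
Written out, the RMA 1 test for this task set is:

    \frac{6}{15} + \frac{4}{20} + \frac{5}{30} = 0.400 + 0.200 + 0.167 = 0.767 \;\le\; U(3) = 3\left(2^{1/3} - 1\right) \approx 0.780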

  13. Why are shorter periods prioritized? • [Timeline figure, 0 to 10: with non-RM priorities T1 misses its deadline at 5; with RM priorities both T1 and T2 meet their deadlines] • In a realistic application c is small compared with T • Shortest T first ensures that the negative preemption effects are minimized

  14. Why are shorter periods prioritized? • Slack: S = T − c • In the example c2 > T1 − c1 • In practice c << T, i.e. the slack is roughly proportional to the period • By selecting the shortest period first, the preemption effect is minimized • NOTE: we are primarily interested in handling the shortest deadlines by means of priorities!

  15. How can 100% utilization be achieved? • Critical zone theorem: if a set of independent periodic tasks starts synchronously and every task meets its first deadline, then every future deadline will be met • Worst case: a task is released at the same time as all higher-priority tasks • Scheduling points for task Ti: Ti's first deadline, and the ends of the periods of every higher-priority task that fall within Ti's first deadline • If we can show that, at some scheduling point, Ti and all higher-priority work have had time to run to completion once, then Ti can be scheduled

  16. Lehoczky, Sha and Ding • Idea: test every scheduling point (RMA 2):
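
The RMA 2 test itself is not reproduced in the transcript; in its standard form, with tasks indexed by decreasing priority (increasing period), task Ti is schedulable iff its cumulative demand fits before some scheduling point:

    W_i(t) = \sum_{j=1}^{i} c_j \left\lceil \frac{t}{T_j} \right\rceil, \qquad T_i \text{ schedulable} \iff \exists\, t \in S_i : W_i(t) \le t,
    \qquad S_i = \{\, l\,T_j \mid j = 1, \dots, i,\; l = 1, \dots, \lfloor T_i / T_j \rfloor \,\}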

  17. Lehoczky, Sha and Ding

  18. Example • Utilization is 0.953 • U(3) = 0.779 • by RMA 1 the tasks are not schedulable

  19. Analysis using RMA 2 • [Timeline figure: execution of the tasks up to the scheduling points 100, 150, 200 and 300, with run-time slices of 40, 20 and 60 time units]
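
A minimal C sketch of the RMA 2 scheduling-point test follows. The task set (periods 100, 150, 300 and execution times 40, 40, 86) is hypothetical, chosen only so that its total utilization matches the 0.953 of the example; the transcript does not give the actual parameters.

    #include <math.h>
    #include <stdio.h>

    #define N 3

    /* Hypothetical task set in rate-monotonic order (shortest period first =
       highest priority). Utilization: 40/100 + 40/150 + 86/300 = 0.953. */
    static const double T[N] = {100.0, 150.0, 300.0};
    static const double c[N] = { 40.0,  40.0,  86.0};

    /* Cumulative demand of tasks 0..i up to time t: W_i(t) = sum c_j*ceil(t/T_j). */
    static double demand(int i, double t)
    {
        double w = 0.0;
        for (int j = 0; j <= i; j++)
            w += c[j] * ceil(t / T[j]);
        return w;
    }

    /* RMA 2: task i is schedulable iff W_i(t) <= t at some scheduling point
       t = l*T_j, with j <= i and l = 1 .. floor(T_i / T_j). */
    static int schedulable(int i)
    {
        for (int j = 0; j <= i; j++)
            for (long l = 1; l <= (long)(T[i] / T[j]); l++) {
                double t = l * T[j];
                if (demand(i, t) <= t)
                    return 1;
            }
        return 0;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            printf("task %d: %s\n", i + 1,
                   schedulable(i) ? "schedulable" : "not schedulable");
        return 0;
    }

For this hypothetical set RMA 1 fails (0.953 > U(3) ≈ 0.780), but the scheduling-point test succeeds for all three tasks, which is exactly the point the example makes.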

  20. Overhead • Periodic overhead: • the overhead of making tasks periodic • the overhead of switching between tasks • System overhead: • the overhead of the operating system • UNIX, Windows NT: it is difficult to know what happens in the background

  21. Periodic Overhead • To execute a task periodically, the clock must be read and the next execution time must be calculated:

    Next_Time := Clock;
    loop
       Next_Time := Next_Time + Period;
       -- ... periodic task code here ...
       delay Next_Time - Clock;
    end loop;

  • task switch: • saving and restoring the tasks' ”context” • each period adds two task switches to the running queue
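
A C sketch of the same periodic loop is shown below, assuming a POSIX system; do_periodic_work() and the 10 ms period are placeholders. Sleeping until an absolute time avoids the cumulative drift of a relative delay.

    #include <time.h>

    #define PERIOD_NS 10000000L        /* hypothetical 10 ms period */

    static void do_periodic_work(void) { /* periodic task code here */ }

    void periodic_task(void)
    {
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);          /* Next_Time := Clock */

        for (;;) {
            /* Next_Time := Next_Time + Period */
            next.tv_nsec += PERIOD_NS;
            while (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }

            do_periodic_work();

            /* delay until Next_Time (absolute-time sleep) */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
    }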

  22. Periodic Overhead • Data can be gathered by benchmarking • for the eCos operating system with: • board: ARM AEB-1 Evaluation Board • CPU: Sharp LH77790A, 24 MHz

  23. Overhead • The whole overhead can also be measured with benchmarks • a program schedules tasks and measures the processing time • overhead = 100% − the processing capacity available to the application • Example: • PC Pentium 166, Windows 98, Alsys Object Ada: 65% / kHz • CETIA VMTR2a, PowerPC 604, 100 MHz, LynxOS 2.4, Alsys Object Ada: 19% / kHz • RMA 1 with Q = the overhead of the application
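
The modified RMA 1 test is not reproduced in the transcript; as the example on the next slide applies it, the overhead fraction Q is simply subtracted from the bound:

    \sum_{i=1}^{n} \frac{c_i}{T_i} \;\le\; U(n) - Q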

  24. Example • the total task rate is 150 jobs/s, so the overhead is 2.85% (19% / kHz) • U(3) − Q = 0.780 − 0.0285 = 0.752 • utilization = 0.767 • the system is not schedulable using RMA 1

  25. Overhead • It is also possible to add the overhead directly to every task • 19% / kHz is equivalent to 190 µs per job • RMA 1 • RMA 2
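
The per-task forms of RMA 1 and RMA 2 are not reproduced in the transcript; a plausible reading is that each execution time is inflated by the per-job overhead o (here o ≈ 190 µs), e.g. for RMA 1:

    \sum_{i=1}^{n} \frac{c_i + o}{T_i} \;\le\; U(n)

and correspondingly c_j + o replaces c_j in the RMA 2 demand W_i(t).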

  26. Transient overloading (busy intervals) • Consider the following example: Uwcet = 1.03, Uaverage = 0.51, U{T1,T2,T3} = 0.78 • The tasks are not schedulable by RM using the WCET (worst-case execution time), but they are schedulable using the average running times • Suppose that T1, T2 and T4 are critical tasks and T3 is not a critical task

  27. Transient overloading • Solution 1: increase T4's priority by decreasing its period • T4' with parameters c'4 = c4/2, a'4 = a4/2, T'4 = T4/2 • now T4' has a higher priority than T3 • T4' is then scheduled to run instead of T4 • if {T1, T2, T'4} is schedulable, there is enough time to finish T4 • Solution 2: raise T4's priority above T3's by increasing T3's period • possible only if T3's relative deadline may be larger than its original period • T3 becomes two tasks T'3 and T''3 with period 2 × 210 = 420 and c'3 = c''3 = 80, a'3 = a''3 = 40 • T'3 and T''3 must be scheduled so that their releases are offset by 210, i.e. one of them starts every 210 time units • if {T1, T2, T'3, T''3, T4} is schedulable, the problem is solved

  28. Transient overloading • In the general case: • C = the set of critical tasks • NC = the set of non-critical tasks • Pc,max ≤ Pnc,min • C is schedulable under WCET • Procedure: • move those tasks in NC for which P ≤ Pc,max to C • if C is schedulable under WCET, the system can be scheduled • otherwise make the periods of the non-critical tasks in C larger so that C becomes schedulable • or make the periods of the critical tasks smaller and move non-critical tasks with larger periods back to NC • continue until points 1-4 are satisfied

  29. Weak points of ”classic” RMA • it requires a preemptive scheduler • blocking can stop the system • aperiodic activities must be ”normalized” • tasks with jitter must be handled specially • RMA cannot analyze multiprocessor systems

  30. Blocking • If two tasks share the same resource, those tasks are dependent • Resources are used serially and under mutual exclusion • Shared resources are often protected with semaphores (mutexes): • two operations: • get • release • Blocking causes two problems: • priority inversion • deadlocks

  31. Priority inversion • [Timeline figure: a low-priority task allocates R; a high-priority task preempts it, tries to allocate R and blocks; a medium-priority task then preempts the low-priority task and prolongs the blocking time; when R is finally released, the high-priority task allocates R and terminates] • Priority inversion happens when a task with a lower priority blocks a task with a higher priority • e.g. the Mars Pathfinder mission suffered repeated system resets because of priority inversion

  32. Deadlock • Deadlock means that there is a circular resource allocation: • T1 allocates R1 • T2 interrupts T1 • T2 allocates R2 • T2 tries to allocate R1 but blocks, so T1 runs again • T1 tries to allocate R2 but blocks • both tasks are now blocked • deadlock
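
A minimal C sketch of the circular allocation described above: two threads take the same two locks in opposite order, so each can end up waiting for the other (the names R1 and R2 follow the slide).

    #include <pthread.h>

    pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

    static void *task_T1(void *arg)
    {
        pthread_mutex_lock(&R1);      /* T1 allocates R1 */
        pthread_mutex_lock(&R2);      /* ... and then waits for R2 (held by T2) */
        pthread_mutex_unlock(&R2);
        pthread_mutex_unlock(&R1);
        return arg;
    }

    static void *task_T2(void *arg)
    {
        pthread_mutex_lock(&R2);      /* T2 allocates R2 */
        pthread_mutex_lock(&R1);      /* ... and then waits for R1 (held by T1) */
        pthread_mutex_unlock(&R1);
        pthread_mutex_unlock(&R2);
        return arg;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, task_T1, NULL);
        pthread_create(&b, NULL, task_T2, NULL);
        pthread_join(a, NULL);        /* with unlucky timing this never returns */
        pthread_join(b, NULL);
        return 0;
    }

The standard fix is to impose a global lock order, e.g. always take R1 before R2 in both tasks.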

  33. Deadlock • Deadlocks are always design faults • Deadlocks can be avoided by: • special resource-allocation algorithms • deadlock-detection algorithms • static analysis of the system • resource allocation graphs • the matrix method • Deadlocks are discussed more thoroughly in connection with parallel programming

  34. Priority inversion: controlling the blocking time • It is difficult to avoid priority inversion when blocking happens • Scheduling with fixed priorities alone can lead to unbounded blocking times • Three algorithms for minimizing the blocking time: • PIP: priority inheritance • PCP: priority ceiling • HL: highest locker

  35. Priority Inheritance • 3 rules: • A task that blocks a task with a higher priority temporarily inherits that higher priority (priority-inheritance rule) • A task can lock a resource only if no other task has locked it; otherwise it blocks until the resource is released. When the owner releases the resource, it continues running at its original priority (allocation rule) • Inheritance is transitive: • if T1 blocks T2, and T2 blocks T3 • T1 inherits T3's priority
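
A minimal sketch of requesting priority inheritance on a POSIX mutex follows; whether the underlying RTOS actually supports PTHREAD_PRIO_INHERIT is platform dependent, so this only shows the attribute set-up.

    #include <pthread.h>

    pthread_mutex_t resource_lock;

    int init_pip_mutex(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* PTHREAD_PRIO_INHERIT: the owner inherits the priority of the
           highest-priority thread it blocks (priority-inheritance rule). */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);

        int rc = pthread_mutex_init(&resource_lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }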

  36. Priority inheritance • [Timeline figure: in the example A has the lowest priority and C the highest]

  37. Priority inheritance • The blocking time is now shorter • Maximum blocking time for a task: the length of min(m, n) critical sections, where • m = the number of critical sections that can block the task • n = the number of lower-priority tasks that can block it • ”chain blocking” is still possible • PIP cannot prevent deadlocks in all cases • the priority ceiling algorithm can prevent deadlocks and further reduce the blocking time

  38. Priority Inheritance

  39. Priority Ceiling • Every resource has a priority ceiling: • the highest priority of any task that can lock (or request) the resource • the highest ceiling among the currently locked resources is stored in a system variable called the system ceiling • A task may lock a resource only if • it already holds the resource whose ceiling equals the system ceiling, or • its priority is higher than the system ceiling • Otherwise the task blocks until the resource becomes available and the system ceiling variable drops • The task causing the blocking inherits the highest priority of the tasks it blocks • Inheritance is transitive

  40. Priority Ceiling • [Timeline figure: ceiling of R1 = priority of B, ceiling of R2 = priority of C, system ceiling = R1's ceiling]

  41. Priority Ceiling • the blocking time for C is now 0 • no ”chain blocking” is possible • deadlocks are not possible in any case • but the implementation is complicated

  42. Highest locker • Every resource has a priority ceiling: the highest priority of any task that can lock the resource • When a task locks a resource it inherits the resource's ceiling + 1 as its new priority level • Simpler than PCP to implement • In practice it has the same properties as PCP • the ”best” alternative
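
POSIX ”priority protect” mutexes implement this immediate-ceiling (highest-locker) style, except that the owner is raised to the ceiling itself rather than ceiling + 1. A minimal sketch follows; the ceiling value 10 is hypothetical and support is platform dependent.

    #include <pthread.h>

    pthread_mutex_t resource_lock;

    int init_ceiling_mutex(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        /* Ceiling = highest priority of any task that may lock the resource. */
        pthread_mutexattr_setprioceiling(&attr, 10);

        int rc = pthread_mutex_init(&resource_lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }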

  43. Highest Locker

  44. Cost of implementation • PIP protocol: • a task causes a context switch when it blocks • a task causes a context switch when it becomes executable again • the scheduler must keep, for each semaphore, a queue of waiting tasks ordered by priority • every time a task requests a semaphore that is held, the scheduler must compare the task's priority with the resource owner's priority • the owner may then have to inherit the priority • every time a resource is released, the disinheritance procedure is executed

  45. Costs of implementation • PIWG (Performance Issues Working Group) has developed benchmarks for comparing scheduling algorithms

  46. Cost of implementation • Note: • if the protocol is not provided by the operating system, do not implement it yourself at the application level (huge overhead) • ”No silver bullet”: • a scheduling protocol can only reduce the impact of blocking • blocking must be prevented by improved design

  47. Similarities of scheduling protocols

  48. Algorithms in commercial RTOSs • Priority inheritance: • Wind River Tornado/VxWorks • LynxOS • OSE • eCos • Priority ceiling: • OS-9 • pSOS+

  49. Push-through blocking • A task that does not use a resource can still be blocked by a task with a lower priority • the lower-priority task has temporarily inherited a higher priority • T1 can temporarily run at T3's priority • T2 is blocked even though T2 itself does not use the resource R
