

  1. FYS 4220 / 9220 – 2012 / #8: Real Time and Embedded Data Systems and Computing. Scheduling of Real-Time processes, part 2 of 2

  2. Could there be more scheduling algorithms than those presented in the previous lecture? Yes indeed! FYS4220 / 9220 2012 - lecture 8, part 2 of 2

  3. Pick your own scheduling strategy …
  The following is a list of common scheduling practices and disciplines (ref. Wikipedia):
  • Borrowed-Virtual-Time Scheduling (BVT)
  • Completely Fair Scheduler (CFS)
  • Critical Path Method of Scheduling
  • Deadline-monotonic scheduling (DMS)
  • Deficit round robin (DRR)
  • Earliest deadline first scheduling (EDF)
  • Elastic Round Robin
  • Fair-share scheduling
  • First In, First Out (FIFO), also known as First Come First Served (FCFS)
  • Gang scheduling
  • Genetic Anticipatory
  • Highest response ratio next (HRRN)
  • Interval scheduling
  • Last In, First Out (LIFO)
  • Job Shop Scheduling
  • Least-connection scheduling
  • Least slack time scheduling (LST)
  • List scheduling
  • Lottery Scheduling
  • Multilevel queue
  • Multilevel Feedback Queue
  • Never queue scheduling
  • O(1) scheduler
  • Proportional Share Scheduling
  • Rate-monotonic scheduling (RMS)
  • Round-robin scheduling (RR)
  • Shortest expected delay scheduling
  • Shortest job next (SJN)
  • Shortest remaining time (SRT)
  • Staircase Deadline scheduler (SD)
  • "Take" Scheduling
  • Two-level scheduling
  • Weighted fair queuing (WFQ)
  • Weighted least-connection scheduling
  • Weighted round robin (WRR)
  • Group Ratio Round-Robin: O(1)
  The computer scientists must have had great fun in coming up with names!

  4. Scheduling of real-time activities
  • To repeat: an absolute requirement in hard real-time systems is that deadlines are met! Missing a deadline may have serious to catastrophic consequences. This requirement is the baseline for true Real-Time Operating Systems!
  • Note that an RTOS should have guaranteed worst-case reaction times. The actual values are, however, obviously processor dependent.
  • Finding these numbers is another issue; you may have to dig them out yourself!
  • In soft real-time systems, deadlines may occasionally be missed without leading to catastrophe. For such applications, standard OSs can be used (Windows, Linux, etc.).
  • Since Linux is popular, and a good choice for many embedded applications, a quick introduction will be given in a later lecture.

  5. Multitasking under VxWorks
  • Multitasking provides the fundamental mechanism for an application to control and react to multiple, discrete real-world events. The VxWorks real-time kernel, wind, provides the basic multitasking environment. Multitasking creates the appearance of many threads of execution running concurrently when, in fact, the kernel interleaves their execution on the basis of a scheduling algorithm.
  • A concurrent activity is called a task in VxWorks parlance. Each task has its own context, which is the CPU environment and system resources that the task sees each time it is scheduled to run by the kernel. On a context switch, a task's context is saved in the task control block (TCB).
  • A TCB can be accessed through the taskTcb(task_ID) routine.

  6. VxWorks POSIX and Wind scheduling
  • POSIX scheduling is based on processes, while Wind scheduling is based on tasks. Tasks and processes differ in several ways. Most notably, tasks can address memory directly while processes cannot; and processes inherit only some specific attributes from their parent process, while tasks operate in exactly the same environment as the parent task.
  • Tasks and processes are alike in that they can be scheduled independently.
  • VxWorks documentation uses the term preemptive priority scheduling, while the POSIX standard uses the term FIFO. This difference is purely one of nomenclature: both describe the same priority-based policy.
  • The POSIX scheduling algorithms are applied on a process-by-process basis. The Wind methodology, on the other hand, applies scheduling algorithms on a system-wide basis: either all tasks use a round-robin scheme, or all use a preemptive priority scheme.
  • The POSIX priority numbering scheme is the inverse of the Wind scheme. In POSIX, the higher the number, the higher the priority; in the Wind scheme, the lower the number, the higher the priority, where 0 is the highest priority. Accordingly, the priority numbers used with the POSIX scheduling library (schedPxLib) do not match those used and reported by all other components of VxWorks. You can override this default by setting the global variable posixPriorityNumbering to FALSE. If you do this, the Wind numbering scheme (smaller number = higher priority) is used by schedPxLib, and its priority numbers match those used by the other components of VxWorks.

  7. VxWorks Real-Time Processes (RTPs) – Wind River White Paper, 2005
  • VxWorks has historically focused on supporting a lightweight, kernel-based threading model. It distinguished itself as an operating system that is highly scalable and robust, yet lightweight and fast. The focus was on real-time support (for example, keeping the maximum time to respond to an interrupt to an absolute minimum), low task-switching costs, and easy access to hardware. Essentially, VxWorks tried to stay out of the developer’s way, and placed few restrictions on what he or she could do.
  • Times have changed. The preponderance of CPUs with a memory management unit (MMU) means that many developers in the device software space wish to take advantage of these MMUs to provide partitioning of their systems and protect themselves from crashing a system if they make a programming error. The most common and familiar model for this is a process model, where fully linked applications run in a distinct section of memory and are largely walled off from other applications and services in the system.
  • Kernel mode, by comparison, trades off abstraction and protection of kernel components for the requirement of having direct access to hardware, tightly bound interaction with the kernel, and potentially higher responsiveness and performance. This trade-off requires that kernel development operate at a more sophisticated programming level, where the developer is more cognizant of subtle interactions with the hardware and the risk of hitting unrecoverable fault conditions. More fun!!

  8. VxWorks Real-Time Processes (RTPs) – Wind River White Paper, 2005
  • An RTP itself is not a schedulable entity. The execution unit within an RTP is a VxWorks task, and there may be multiple tasks executing within an RTP. Tasks in the same RTP share its address space and memory context, and cannot exist beyond the lifespan of the RTP.
  • Can RTPs be implemented under the vxsim simulator? Well, there is the option of building a Real Time Process project, but how it is done I have not understood!

  9. VxWorks Kernel programming – Task Control
  The use of the taskSpawn() routine is demonstrated in the RTlab programs. The arguments are the new task’s name (ASCII string), the task’s priority, an options word, the stack size, the main routine (start) address, and 10 arguments to be passed to the main routine as startup parameters.
  id = taskSpawn (name, priority, options, stacksize, main, arg1, …, arg10);

  10. VxWorks Kernel programming – Task Scheduling
  Tasks are assigned a priority when created. One can also change a task’s priority while it is executing by calling taskPrioritySet(). The kernel has 256 priority levels, numbered 0 through 255. Priority 0 is the highest and 255 is the lowest. All application tasks should be in the priority range from 100 to 255.
  kernelTimeSlice (int ticks) /* time slice in ticks, or 0 to disable round-robin */
  Task Scheduler Control Routines

  11. VxWorks Kernel programming – Tasking Extensions
  To allow additional task-related facilities to be added to the system, VxWorks provides hook routines that allow additional routines to be invoked whenever a task is created, a task context switch occurs, or a task is deleted. User-installed switch hook routines are called within the kernel context and therefore do not have access to all VxWorks facilities; for the allowed routines, see the documentation.
  Task Create, Switch, and Delete Hooks

  12. VxWorks Task Control Block
  • Each task has its own context, which is the CPU environment and system resources that the task sees each time it is scheduled to run by the kernel. On a context switch, a task's context is saved in the task control block (TCB). A task's context includes:
  • a thread of execution; that is, the task's program counter
  • the task's virtual memory context (if process support is included)
  • the CPU registers and (optionally) coprocessor registers
  • stacks for dynamic variables and function calls
  • I/O assignments for standard input, output, and error
  • a delay timer
  • a time-slice timer
  • kernel control structures
  • signal handlers
  • task variables
  • error status (errno)
  • debugging and performance monitoring values
  • Note that in conformance with the POSIX standard, all tasks in a process share the same environment variables (unlike kernel tasks, which each have their own set of environment variables).
  • A VxWorks task will be in one of the states listed on the next page.
  The shell routine ti (taskNameOrId) displays complete information from a task’s TCB.

  13. VxWorks Task state transitions

  14. Wind task state diagram and task transitions
  See taskLib for the task---( ) routines. Any system call resulting in a transition may affect scheduling!

  15. VxWorks task scheduling
  • The default algorithm is priority-based preemptive scheduling.
  • With a preemptive priority-based scheduler, each task has a priority and the kernel ensures that the CPU is allocated to the highest-priority task that is ready to run. This scheduling method is preemptive in that if a task with higher priority than the current task becomes ready to run, the kernel immediately saves the current task's context and switches to the context of the higher-priority task.
  • A round-robin scheduling algorithm attempts to share the CPU fairly among all ready tasks of the same priority.
  • Note: VxWorks provides the following kernel scheduler facilities:
  • The VxWorks native scheduler, which provides options for preemptive priority-based scheduling or round-robin scheduling.
  • A POSIX thread scheduler that provides POSIX thread scheduling support in user space (processes) while keeping the VxWorks task scheduling.
  • A kernel scheduler replacement framework that allows users to implement customized schedulers. See the documentation for more information.

  16. VxWorks task scheduling (cont.)
  • Priority-based preemptive (Figure 3-2: Priority Preemption). VxWorks supplies routines for preemption locks, which prevent context switches.
  • Round-robin (Figure 3-3: Round-Robin Scheduling).

  17. Priority inversion - I
  Consider a task L with low priority that requires resource R. L is running and acquires resource R. Now there is another task H, with high priority, which also requires resource R. Suppose H starts after L has acquired R. H then has to wait until L relinquishes resource R.
  In some cases, priority inversion can occur without causing immediate harm: the delayed execution of the high-priority task goes unnoticed, and eventually the low-priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high-priority task is left starved of the resource, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars lander "Mars Pathfinder" is a classic example of problems caused by priority inversion in real-time systems.
  The existence of this problem has been known since the 1970s, but there is no fool-proof method to predict the situation. There are, however, many existing solutions, of which the most common ones are:

  18. Priority inversion - II
  Some solutions to the problem:
  • Disabling all interrupts to protect critical sections. Brute force!
  • A priority ceiling. With priority ceilings, the shared mutex (run by the operating system code) has a characteristic (high) priority of its own, which is assigned to the task locking the mutex. This works well, provided the other high-priority task(s) that try to access the mutex do not have a priority higher than the ceiling.
  • Priority inheritance. Under the policy of priority inheritance, whenever a high-priority task has to wait for some resource shared with an executing low-priority task, the low-priority task is temporarily assigned the priority of the highest waiting task for the duration of its own use of the shared resource. This keeps medium-priority tasks from preempting the (originally) low-priority task, and thereby from affecting the waiting high-priority task as well. Once the resource is released, the low-priority task continues at its original priority level.

  19. Priority Inversion
  To illustrate an extreme example of priority inversion, consider the executions of four periodic processes a, b, c and d, and two resources Q and V (each E is one tick of execution; Q and V denote a tick executed with that resource locked):

  Process   Priority   Execution Sequence   Release Time
  a         1          EQQQQE               0
  b         2          EE                   2
  c         3          EVVE                 2
  d         4          EEQVE                4

  20. Example of Priority Inversion
  [Gantt chart: execution of processes a–d over ticks 0–18. Legend: Preempted; Executing; Blocked; Executing with Q locked; Executing with V locked.]
  Process d has the highest priority.

  21. Priority Inheritance
  If process p is blocking process q, then p runs with q's priority. In the diagram below, q corresponds to d, while both a and c can be p.
  [Gantt chart: execution of processes a–d over ticks 0–18 with priority inheritance enabled.]

  22. VxWorks Mutual-Exclusion semaphores and Priority inversion
  • The mutual-exclusion semaphore is a specialized binary semaphore designed to address issues inherent in mutual exclusion, including priority inversion, deletion safety, and recursive access to resources.
  • The fundamental behavior of the mutual-exclusion semaphore is identical to the binary semaphore, with the following exceptions:
  • It can be used only for mutual exclusion.
  • It can be given only by the task that took it.
  • The semFlush( ) operation is illegal.
  • Priority inversion problem:

  23. VxWorks: Priority inheritance
  • In Figure 3-11 (Priority Inversion), priority inheritance solves the problem of priority inversion by elevating the priority of t3 to the priority of t1 during the time t1 is blocked on the semaphore. This protects t3, and indirectly t1, from preemption by t2.
  • The following example creates a mutual-exclusion semaphore that uses the priority inheritance algorithm:
  semId = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE);
  • Other VxWorks facilities which implement priority inheritance: next page.

  24. VxWorks POSIX Mutexes and Condition Variables
  • Thread mutexes (mutual exclusion variables) and condition variables provide compatibility with the POSIX 1003.1c standard. They perform essentially the same role as VxWorks mutual-exclusion and binary semaphores (and are in fact implemented using them). They are available with pthreadLib. Like POSIX threads, mutexes and condition variables have attributes associated with them. Mutex attributes are held in a data type called pthread_mutexattr_t, which contains two attributes, protocol and prioceiling.
  • The protocol mutex attribute describes how the mutex variable deals with the priority inversion problem described in the section on VxWorks mutual-exclusion semaphores.
  • Attribute name: protocol
  • Possible values: PTHREAD_PRIO_INHERIT (default) and PTHREAD_PRIO_PROTECT
  • Access routines: pthread_mutexattr_getprotocol( ) and pthread_mutexattr_setprotocol( )
  • Because it might not be desirable to elevate a lower-priority thread to a too-high priority, POSIX defines the notion of a priority ceiling. Mutual-exclusion variables created with priority protection use the PTHREAD_PRIO_PROTECT value.

  25. POSIX
  POSIX supports priority-based scheduling, and has options to support priority inheritance and ceiling protocols.
  Priorities may be set dynamically.
  Within the priority-based facilities, there are four policies:
  • FIFO: a process/thread runs until it completes or is blocked
  • Round-Robin: a process/thread runs until it completes, is blocked, or its time quantum has expired
  • Sporadic Server: a process/thread runs as a sporadic server
  • OTHER: an implementation-defined policy
  For each policy there is a minimum range of priorities that must be supported: 32 for FIFO and round-robin.
  The scheduling policy can be set on a per-process and a per-thread basis.

  26. POSIX
  Threads may be created with a system contention option, in which case they compete with other system threads according to their policy and priority.
  Alternatively, threads can be created with a process contention option, where they must compete with other threads (created with process contention) in the parent process.
  • It is unspecified how such threads are scheduled relative to threads in other processes or to threads with global contention.
  A specific implementation must decide which to support.

  27. Other POSIX Facilities
  POSIX allows:
  • priority inheritance and the priority protect protocol (ICPP) to be associated with mutexes
  • message queues to be priority ordered
  • functions for dynamically getting and setting a thread's priority
  • threads to indicate whether their attributes should be inherited by any child thread they create
