Real-Time Kernels and Operating Systems • Basic Issue - Purchase a commercial “off-the-shelf” system or custom-build one • Basic Functions • Task scheduling • Task dispatching • Intertask communication • For this discussion, a “task” is a unit of execution, a schedulable entity (also called a “process”)
Some Terminology • Kernel, executive, or nucleus - the smallest part of an operating system that provides for task scheduling, dispatching, and intertask communication • New variants for the term “kernel” • Nano-kernel - provides for task dispatching • Micro-kernel - adds task scheduling • Kernel - adds intertask synchronization and communication • Executive - adds support for I/O • Operating System - adds user interface/command processor, security, and file management
Strategies Employed in the Design of Real-Time Kernels • Polled-Loop Systems • Phase/State-Driven Code • Interrupt-Driven Systems • Round-Robin Systems • Foreground/Background Systems • Full-Featured Real-Time Operating Systems (RTOS)
Polled-Loop Systems • A flag indicating whether some event has occurred is tested repeatedly. If the event has occurred, it is processed; in either case, polling then continues. • Basic requirement for polled-loop systems: • If the software is structured as a single, sequential loop on a single processor interfacing with a single device operating at a fixed rate, then polling is feasible so long as the cumulative execution time of the loop body is less than the time between event occurrences (i.e., 1/Rate).
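A minimal sketch of a polled loop in C. The flag name packet_ready and the handler process_packet are illustrative, assuming the flag is set by hardware or an interrupt service routine:

    #include <stdbool.h>

    volatile bool packet_ready = false;   /* set by hardware or an ISR (assumed) */

    void process_packet(void);            /* application-specific event handler */

    int main(void)
    {
        for (;;) {                        /* single sequential loop */
            if (packet_ready) {           /* poll the event flag */
                packet_ready = false;     /* clear the flag before handling */
                process_packet();         /* handle the event */
            }
            /* all other per-iteration work must keep total loop time < 1/Rate */
        }
    }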
Phase/State-Driven Code • Based on the “Finite-State Machine” model • Some applications are conducive to the use of FSM models (compilers, network software, physical device control, etc.); others are not. • May be implemented using “if-then-else” structures or using “state-transition” tables.
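As an illustrative sketch of the state-transition-table approach, the table can be a two-dimensional array indexed by current state and input event; the states and events below are invented for the example:

    typedef enum { IDLE, RUNNING, DONE, NUM_STATES } state_t;
    typedef enum { EV_START, EV_FINISH, EV_RESET, NUM_EVENTS } event_t;

    /* next_state[current][event]: transition table for an illustrative machine */
    static const state_t next_state[NUM_STATES][NUM_EVENTS] = {
        /*            EV_START  EV_FINISH  EV_RESET */
        /* IDLE    */ { RUNNING, IDLE,      IDLE },
        /* RUNNING */ { RUNNING, DONE,      IDLE },
        /* DONE    */ { DONE,    DONE,      IDLE }
    };

    state_t step(state_t current, event_t ev)
    {
        return next_state[current][ev];   /* a table lookup replaces if-then-else */
    }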
Interrupt-Driven Systems • An interrupt is a mechanism whereby one entity (usually an I/O device) is able to gain the immediate attention of another unit (usually the CPU) for the purpose of dealing with an asynchronous event. • In purely interrupt-driven systems, the main program is simply a “jump to self”. • Tasks are scheduled by way of either hardware dispatching or software dispatching.
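A sketch of the “jump to self” structure, assuming a hypothetical ISR that the hardware invokes on a timer interrupt:

    void timer_isr(void)      /* invoked by the interrupt hardware, not by main() */
    {
        /* in a purely interrupt-driven system, all useful work happens here */
    }

    int main(void)
    {
        /* initialization and interrupt enabling would go here (assumed) */
        for (;;) {
            ;                 /* "jump to self": idle loop, waiting for interrupts */
        }
    }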
Interrupt Dispatching • Hardware Dispatching • Multiple interrupt signals are processed by a special interrupt controller which prioritizes interrupts and provides for “vectoring” to a specific interrupt handler for a given interrupt • Software Dispatching • A single interrupt level • When any interrupt occurs, a software handler must determine which interrupt it is and then transfer control to the appropriate interrupt handler
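With software dispatching, the single common handler must identify the source itself. A hedged sketch, assuming a hypothetical memory-mapped status register INT_STATUS and a table of per-source handlers:

    #include <stdint.h>

    #define NUM_SOURCES 8

    typedef void (*handler_t)(void);

    extern volatile uint32_t INT_STATUS;        /* hypothetical status register */
    static handler_t handlers[NUM_SOURCES];     /* one handler per interrupt source */

    void common_isr(void)                       /* entered on every interrupt */
    {
        /* context of the interrupted program is saved by hardware or prologue code */
        for (int i = 0; i < NUM_SOURCES; i++) {
            if ((INT_STATUS & (1u << i)) && handlers[i] != 0) {
                handlers[i]();                  /* transfer to the matching handler */
            }
        }
    }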
Interrupt Handling • In either case, the “context” of the interrupted program must be saved • Context typically includes the program counter contents, register contents, and other pertinent data • If multiple interrupts can occur, the context of the interrupted program is usually saved on a stack.
Summary Attributes of Interrupt-Driven Systems • Easy to write • Fast response times with hardware dispatching • CPU cycles are wasted in the “jump to self” loop • May be vulnerable to timing variations and hardware failures
Round-Robin Systems • Several tasks are executed one after another, usually with each task being assigned a fixed “time slice”. If a task does not complete within its time slice, it must wait for its next time slice to continue. • Following each time-slice interrupt, the context of the current task must be saved, and the context of the next task must be restored. • Often used with a cyclic executive.
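A hedged sketch of a time-sliced round-robin dispatcher driven by a timer interrupt; the context switch itself is platform-specific and is shown only as assumed external stubs:

    #define NUM_TASKS 3

    typedef struct {
        void *sp;                 /* saved stack pointer (context), platform-specific */
    } task_t;

    static task_t tasks[NUM_TASKS];
    static int    current = 0;

    extern void save_context(task_t *t);      /* platform-specific stubs (assumed) */
    extern void restore_context(task_t *t);

    void timeslice_isr(void)                  /* fires once per time slice */
    {
        save_context(&tasks[current]);        /* save the outgoing task's context */
        current = (current + 1) % NUM_TASKS;  /* rotate to the next task */
        restore_context(&tasks[current]);     /* resume it where it left off */
    }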
Foreground/Background Systems • The “jump to self” loop of interrupt-driven systems is replaced by code that performs useful processing. • Interrupt-driven processes comprise the “foreground”, and non-interrupt-driven processes comprise the “background”. • The background process is preempted by any foreground process needing to execute.
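A minimal sketch of the structure, assuming a hypothetical ADC interrupt as the foreground and a background loop that does the non-time-critical processing:

    #include <stdbool.h>

    volatile bool sample_ready = false;

    void adc_isr(void)               /* foreground: runs whenever the device interrupts */
    {
        sample_ready = true;         /* do minimal work; signal the background */
    }

    int main(void)
    {
        for (;;) {                   /* background: replaces the "jump to self" loop */
            if (sample_ready) {
                sample_ready = false;
                /* non-time-critical processing of the sample (illustrative) */
            }
            /* other low-priority work: logging, self-test, display refresh, ... */
        }
    }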
Background Processing • Background processes are non-critical. • So long as the foreground processes do not saturate the system, the background process will eventually complete. • The time T for a background process to complete may be very long, depending on the foreground time loading factor. • Specifically, T = E / (1 - P), where E is the background process’s non-contended execution time, and P is the foreground time loading factor.
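As a worked example with illustrative numbers: if a background process needs E = 100 ms of CPU time when running alone and the foreground load is P = 0.75, then it takes roughly T = 100 / (1 - 0.75) = 400 ms of elapsed time to complete.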
Summary Attributes of Foreground/Background Systems • Good response times • Work best when the set of foreground processes is fixed and known in advance; adding foreground processes dynamically complicates analysis • May be vulnerable to timing variations and hardware failures
Full-Featured Real-Time Operating Systems • Available as commercial, off-the-shelf products • Examples: VxWorks, QNX, VRTX • Rely on the “task control block” model • Usually provide flexible task scheduling methods and extensive APIs for resource management
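A hedged sketch of a task control block (TCB); the field names are illustrative and do not correspond to any particular RTOS:

    typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;

    typedef struct tcb {
        void         *sp;          /* saved stack pointer / register context */
        task_state_t  state;       /* ready, running, or blocked */
        int           priority;    /* scheduling priority */
        struct tcb   *next;        /* link in the ready or blocked queue */
    } tcb_t;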
Typical Task State Transitions (state diagram) • Ready → Running via Dispatch • Running → Ready via Preempt • Running → Blocked via Block • Blocked → Ready via Event
Priority Preemptive Scheduling • Most common scheduling approach for real-time systems. • Tasks are scheduled in order of priority. • Higher-priority tasks can preempt lower-priority tasks. • A task’s priority is assigned based on its importance or urgency. • Task priorities may be fixed or dynamic.
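A minimal sketch of the priority-preemptive dispatch decision, assuming the illustrative tcb_t structure above and a simple linear scan of the ready list:

    /* Pick the highest-priority runnable task; preempt the running one if needed. */
    tcb_t *schedule(tcb_t *ready_list, tcb_t *running)
    {
        tcb_t *best = running;
        for (tcb_t *t = ready_list; t != 0; t = t->next) {
            if (best == 0 || t->priority > best->priority) {
                best = t;                        /* higher priority wins */
            }
        }
        return best;   /* if best != running, a preemptive context switch occurs */
    }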
Rate-Monotonic Systems • Special class of fixed-rate, priority preemptive systems • Task priorities are assigned such that higher execution rates correspond to higher priorities. • Rate-monotonic priority assignment is optimal among fixed-priority schemes: if any fixed-priority assignment can schedule a set of periodic tasks, the rate-monotonic assignment can. • Does not take into consideration resource contention or context switch times. • Vulnerable to “priority inversion”
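A sketch of rate-monotonic priority assignment over an illustrative task table: the shorter a task's period (the higher its rate), the higher the priority it receives.

    typedef struct {
        int period_ms;    /* task period = 1 / execution rate */
        int priority;     /* assigned priority: larger value = higher priority */
    } rm_task_t;

    /* Give shorter-period (higher-rate) tasks higher priorities. */
    void assign_rm_priorities(rm_task_t *tasks, int n)
    {
        for (int i = 0; i < n; i++) {
            int rank = 0;
            for (int j = 0; j < n; j++) {
                if (tasks[j].period_ms > tasks[i].period_ms) {
                    rank++;                       /* count tasks with longer periods */
                }
            }
            tasks[i].priority = rank;             /* faster task => higher priority */
        }
    }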
Priority Inversion • Priority inversion occurs when the priority assigned to a task does not reflect its criticality • Several kinds of priority inversion: • A non-critical task with a high frequency of execution is assigned a higher priority than a critical task with a low frequency of execution. Solution - exchange execution rates where possible. • A low-priority task holds a resource needed by a high-priority task. Solution - “priority inheritance”, in which the low-priority task temporarily inherits the priority of the high-priority task needing the resource (a sketch follows)
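A hedged sketch of basic priority inheritance on a mutex, reusing the illustrative tcb_t above; real RTOS implementations handle nesting, queuing of waiters, and the lock/unlock bookkeeping that are omitted here:

    typedef struct {
        tcb_t *owner;               /* task currently holding the resource */
        int    owner_base_priority; /* owner's priority before any inheritance */
    } pi_mutex_t;

    /* Called when 'requester' finds the mutex held by a lower-priority task. */
    void inherit_priority(pi_mutex_t *m, tcb_t *requester)
    {
        if (m->owner != 0 && m->owner->priority < requester->priority) {
            m->owner->priority = requester->priority;  /* temporary priority boost */
        }
    }

    /* Called when the owner releases the mutex. */
    void restore_priority(pi_mutex_t *m)
    {
        m->owner->priority = m->owner_base_priority;   /* drop back to base priority */
    }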