EE5900 Cyber-Physical Systems Deterministic and Stochastic Scheduling
What Is an Embedded System? • Embedded Systems • An information processing system embedded into a larger product • The end user sees the product, not the computer inside it • Other Definitions • A specialized computing device not deployed as a general-purpose computer • A specialized computer system which is dedicated to a specific task • A device not independently programmable by the user, preprogrammed to perform a narrow range of functions with minimal end-user or operator intervention
Application Areas • Essentially any product line being built today • Trains and automobiles • Telecommunication • Manufacturing • Smart Buildings • Robotics
Embedded Systems From Real Life • Typical system could integrate several technologies: • Microprocessor • Sensor technologies • Actuator technologies (e.g. mechatronics) • Power scavenging (e.g. magnetic inductance) • Wireless transceivers • Impossible without the computer • Meaningless without the electronics
Efficiency of Embedded Systems • Efficiency • Energy efficient • Code-size efficient (especially for systems on a chip) • Run-time efficient • Weight efficient • Cost efficient
Priority Scheduling • RMS Scheduling • EDF Scheduling
Priority-driven Preemptive Scheduling: Assumptions & Definitions • Tasks are periodic • No aperiodic or sporadic tasks • Job (instance) deadline = end of period • No resource constraints • Tasks are preemptable • Slack (Laxity) of a Task: Li = di – (t + ci′), where di: deadline; t: current time; ci′: remaining computation time.
Rate Monotonic Scheduling (RMS) • Schedulability check (utilization test). A set of n tasks is schedulable on a uniprocessor by the RMS algorithm if the processor utilization satisfies U = c1/p1 + c2/p2 + … + cn/pn ≤ n(2^(1/n) – 1), where ci is the execution time and pi is the period. This condition is sufficient, but not necessary. • The task with the smallest period is assigned the highest priority (static priority). At any time, the highest-priority ready task is executed.
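As a sketch, the utilization test can be checked directly; the following Python function (name is illustrative, not from the slides) takes tasks as (ci, pi) pairs:

```python
def rms_utilization_test(tasks):
    """Sufficient (not necessary) RMS schedulability test.

    tasks: list of (ci, pi) pairs -- execution time and period.
    Returns (passes, utilization, bound).
    """
    n = len(tasks)
    u = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)  # Liu-Layland bound n(2^(1/n) - 1)
    return u <= bound, u, bound

# Example 1 from the slides: T1 = (2, 4), T2 = (1, 8)
ok, u, bound = rms_utilization_test([(2, 4), (1, 8)])
print(ok, round(u, 3), round(bound, 3))   # True 0.625 0.828
```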
RMS Scheduler - Example 1 Task set: Ti = (ci, pi) [computation time, period] T1 = (2,4) and T2 = (1,8) Schedulability check: 2/4 + 1/8 = 0.5 + 0.125 = 0.625 ≤ 2(√2 – 1) ≈ 0.828, so the set is schedulable. Schedule: T1 runs in [0,2], T2 in [2,3], the processor idles in [3,4], T1 runs again in [4,6], and the processor idles in [6,8].
RMS scheduler – Example 2 Task set: Ti = (ci, pi) T1 = (2,4) and T2 = (4,8) Schedulability check: 2/4 + 4/8 = 0.5 + 0.5 = 1.0 > 2(√2 – 1) ≈ 0.828, so the test fails. The schedule nevertheless meets every deadline: T1 runs in [0,2], T2 in [2,4], T1 again in [4,6], and T2 resumes in [6,8]. Some task sets that FAIL the utilization-based schedulability test are still schedulable under RMS.
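A small simulation illustrates why Example 2 still meets every deadline even though it fails the sufficient test. This is a sketch assuming integer execution times, deadlines equal to periods, and unit time slots; the function name is made up:

```python
def rms_schedule(tasks, horizon):
    """Simulate preemptive RMS over integer time slots.

    tasks: list of (ci, pi); priority = shorter period.
    Returns a list of task indices (None = idle), or raises on a deadline miss.
    """
    remaining = [0] * len(tasks)
    timeline = []
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:                 # new job released
                if remaining[i] > 0:       # previous job not finished
                    raise RuntimeError(f"task {i} missed deadline at t={t}")
                remaining[i] = c
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda i: tasks[i][1])  # smallest period wins
            remaining[i] -= 1
            timeline.append(i)
        else:
            timeline.append(None)
    return timeline

# Example 2: fails the utilization test (U = 1.0 > 0.828) but is schedulable
print(rms_schedule([(2, 4), (4, 8)], 8))  # [0, 0, 1, 1, 0, 0, 1, 1]
```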
Earliest Deadline First (EDF) • Schedulability check (off-line). A set of n tasks is schedulable on a uniprocessor by the EDF algorithm if the processor utilization satisfies U = Σ ci/pi ≤ 1. • This condition is both necessary and sufficient. • The Least Laxity First (LLF) algorithm has the same schedulability check.
EDF/LLF (cont.) • Schedule construction (online) • EDF/LLF: Task with the smallest deadline/laxity is assigned the highest priority (dynamic priority). • At any time, the highest priority task is executed. • It is optimal (i.e., whenever there is a feasible schedule, EDF can always compute it) when preemption is allowed and no resource constraint is considered. • Given any two tasks in a feasible schedule, if they are not scheduled in the order of earliest deadline, you can always swap them and still generate a feasible schedule.
EDF scheduler - Example Task set: Ti = (ci, pi, di) T1 = (1,3,3) and T2 = (4,6,6) Schedulability check: 1/3 + 4/6 = 0.33 + 0.67 = 1.0 ≤ 1, so the set is schedulable. Schedule: T1 runs in [0,1], T2 in [1,5] (at t = 3 the second job of T1 is released with deadline 6, tying T2, so T2 may continue), and T1's second job runs in [5,6]. Unlike the RMS utilization test, the EDF test is exact: a task set is schedulable under EDF if and only if it passes the test.
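The same kind of sketch works for EDF, picking the ready job with the earliest absolute deadline (assumptions as before: integer times, deadline = period, illustrative function name):

```python
def edf_schedule(tasks, horizon):
    """Simulate preemptive EDF over integer time slots.

    tasks: list of (ci, pi) with relative deadline equal to the period.
    Returns a list of task indices (None = idle), or raises on a deadline miss.
    """
    remaining = [0] * len(tasks)
    deadline = [0] * len(tasks)
    timeline = []
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:
                if remaining[i] > 0:
                    raise RuntimeError(f"task {i} missed deadline at t={t}")
                remaining[i] = c
                deadline[i] = t + p        # absolute deadline of this job
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda i: deadline[i])  # earliest deadline first
            remaining[i] -= 1
            timeline.append(i)
        else:
            timeline.append(None)
    return timeline

# T1 = (1,3), T2 = (4,6): U = 1/3 + 4/6 = 1.0, schedulable under EDF
print(edf_schedule([(1, 3), (4, 6)], 6))  # [0, 1, 1, 0, 1, 1]
```

Note the tie at t = 3 (both deadlines are 6): this sketch breaks ties by task index, while the slide's schedule lets T2 continue; both are valid EDF schedules.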
Comparison of RMS and EDF Two processes: T1 with period 5 and WCET 2, T2 with period 7 and WCET 4 (U = 2/5 + 4/7 ≈ 0.97). Over the hyperperiod [0,35], EDF meets every deadline of both tasks. Under RMS, T1 (shorter period) always preempts T2, and T2 misses its deadline at t = 7.
Resource sharing • Periodic tasks • Task can have resource access • Semaphore is used for mutual exclusion • RMS scheduling
Background – Task State diagram • Ready State: waiting in ready queue • Running State: CPU executing the task • Blocked: waiting in the semaphore queue until the shared resource is free • Semaphore types – mutex (binary semaphore), counting semaphore
Task State Diagram Activation enters READY; scheduling moves a task from READY to RUN; preemption moves it back to READY; termination leaves RUN. A RUN task that waits on a busy resource moves to WAITING; when the resource is signaled free, it returns to READY. (Process/task state diagram with resource constraints.)
Priority Inversion Problem Priority inversion is an undesirable situation in which a higher-priority task is kept waiting for the CPU by lower-priority tasks for longer than it should be. Example: • Let T1, T2, and T3 be three periodic tasks in decreasing order of priority. • Let T1 and T3 share a resource S.
Priority Inversion - Example • T3 obtains a lock on the semaphore S and enters its critical section to use a shared resource. • T1 becomes ready to run and preempts T3. Then, T1 tries to enter its critical section by first trying to lock S. But S is already locked by T3, so T1 is blocked. • T2 becomes ready to run. Since only T2 and T3 are ready to run, T2 preempts T3 while T3 is in its critical section. Ideally, one would prefer that the highest-priority task (T1) be blocked no longer than the time for T3 to complete its critical section. However, the duration of blocking is, in fact, unpredictable because task T2 executed in between.
Priority Inversion example: a higher-priority task waits for a lower-priority task. T3 (lowest priority) is initially the only active task; it acquires the shared resource S. T1 (highest priority) preempts T3, requests S, and is blocked. T2 (medium priority) then preempts T3 and runs to completion (taking time L1); only afterwards does T3 finish its remaining critical-section segments (K2 + K3), release S, and allow T1 to run. Total blocking time for task T1 = (K2 + K3) + L1.
Priority Inheritance Protocol Priority inheritance protocol solves the problem of priority inversion. Under this protocol, if a higher-priority task TH is blocked by a lower-priority task TL, because TL is currently executing a critical section needed by TH, TL temporarily inherits the priority of TH. When blocking ceases (i.e., TL exits the critical section), TL resumes its original priority.
Priority Inheritance Protocol – Deadlock Priority inheritance does not prevent deadlock. Assume T2 has higher priority than T1: if T1 locks semaphore S1 and T2 locks S2, and each task then requests the semaphore held by the other, both block forever, regardless of inheritance.
Scheduling tasks with precedence relations A conventional task set {T1, T2} is handed directly to the scheduler. For a task set with precedence constraints (e.g. T1 → T2), the task parameters are first modified so that the scheduler respects the precedence constraints.
Modifying task parameters for EDF • When using the EDF scheduler, the task parameters must be modified to respect the precedence constraints. For each edge Ti → Tj: • Rj* = max(Rj, Ri* + Ci) — a successor cannot start before its predecessors can finish • Di* = min(Di, Dj* – Cj) — a predecessor must leave its successor enough time before the successor's deadline
Modifying the Ready times for EDF Initial task parameters: R1 = 0, R2 = 5, R3 = 0, R4 = 0, R5 = 0, with execution times C1 = 1, C2 = 2, C3 = 2, C4 = 1, C5 = 3 and precedence edges T1 → T3, T1 → T4, T2 → T4, T3 → T5, T4 → T5. R3′ = max(R1 + C1, R3) = 1 R4′ = max(R1 + C1, R2 + C2, R4) = 7 R5′ = max(R3′ + C3, R4′ + C4, R5) = 8
Modifying the Ready times for EDF Modified task parameters: R1 = 0, R2 = 5, R3′ = 1, R4′ = 7, R5′ = 8.
Modifying the Deadlines for EDF Initial deadlines: D1 = 5, D2 = 7, D3 = 5, D4 = 10, D5 = 12. D4′ = min(D5 – C5, D4) = min(9, 10) = 9 D3′ = min(D5 – C5, D3) = min(9, 5) = 5 D2′ = min(D4′ – C4, D2) = min(8, 7) = 7 D1′ = min(D3′ – C3, D4′ – C4, D1) = min(3, 8, 5) = 3
Modifying the Deadlines for EDF Modified task parameters: D1′ = 3, D2′ = 7, D3′ = 5, D4′ = 9, D5 = 12.
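The forward pass over ready times and the backward pass over deadlines can be sketched as a simple fixpoint iteration over the precedence edges. This is a Python sketch; the edge list for the five-task example is inferred from the slide's computations:

```python
def modify_for_precedence(C, R, D, edges):
    """Adjust release times and deadlines so EDF respects precedence.

    C, R, D: dicts task -> execution time, release time, deadline.
    edges: list of (i, j) meaning Ti must finish before Tj starts (a DAG).
    Applies Rj* = max(Rj, Ri* + Ci) and Di* = min(Di, Dj* - Cj)
    repeatedly until nothing changes (fine for small DAGs).
    """
    Rm, Dm = dict(R), dict(D)
    changed = True
    while changed:
        changed = False
        for i, j in edges:
            r = max(Rm[j], Rm[i] + C[i])      # forward pass on ready times
            if r != Rm[j]:
                Rm[j] = r; changed = True
            d = min(Dm[i], Dm[j] - C[j])      # backward pass on deadlines
            if d != Dm[i]:
                Dm[i] = d; changed = True
    return Rm, Dm

# Five-task example: edges inferred as T1->T3, T1->T4, T2->T4, T3->T5, T4->T5
C = {1: 1, 2: 2, 3: 2, 4: 1, 5: 3}
R = {1: 0, 2: 5, 3: 0, 4: 0, 5: 0}
D = {1: 5, 2: 7, 3: 5, 4: 10, 5: 12}
edges = [(1, 3), (1, 4), (2, 4), (3, 5), (4, 5)]
Rm, Dm = modify_for_precedence(C, R, D, edges)
print(Rm)  # {1: 0, 2: 5, 3: 1, 4: 7, 5: 8}
print(Dm)  # {1: 3, 2: 7, 3: 5, 4: 9, 5: 12}
```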
Summary • What are RMS and EDF? • Do they use fixed or dynamic priorities? • How is resource sharing handled? • What are precedence constraints? • How are task parameters updated for EDF?
Static Network Flow Scheduling
Time Frame • Given a set of tasks, let H denote the hyperperiod: the least common multiple of all task periods. • T1=(1,4), T2=(1.8,5), T3=(1,20), T4=(2,20) • H = 20 • Divide time into frames; the frame size f must divide H. • f could be 2, 4, 5, 10, or 20 • Choose a small frame size, since this makes the scheduling solution more flexible and hence more useful.
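A quick way to compute H and the candidate frame sizes (Python sketch; assumes integer periods, and the function names are illustrative):

```python
from math import gcd
from functools import reduce

def hyperperiod(periods):
    """Least common multiple of all task periods."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def frame_sizes(H):
    """Candidate frame sizes: every f that divides the hyperperiod H."""
    return [f for f in range(1, H + 1) if H % f == 0]

H = hyperperiod([4, 5, 20, 20])
print(H, frame_sizes(H))  # 20 [1, 2, 4, 5, 10, 20]
```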
Network flow formulation • Denote the jobs as J1, J2, …, Jn • Vertices • n job vertices • H/f time-frame vertices • A source s • A sink t • Edges • Source to each job vertex, with capacity set to the execution time ei • Job vertex to time-frame vertex, with capacity f, if the job can run in that time frame • Each time frame to the sink, with capacity f
Computing the schedule • If the obtained maximum flow equals the sum of the execution times of all jobs, then the task set is schedulable; the flow on each job-to-frame edge gives the amount of time that job executes in that frame.
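The construction can be sketched as follows (Python; the node names and the `can_run` predicate are illustrative assumptions, and any max-flow routine can then be run on the returned capacity map):

```python
def build_flow_network(tasks, H, f, can_run):
    """Build the scheduling flow network as a capacity dict.

    tasks: list of execution times e_i (integral multiples of the time slot).
    H, f: hyperperiod and frame size, with H % f == 0.
    can_run(i, k): True if job i may execute in frame k.
    Nodes: 's', 't', ('job', i), ('frame', k).
    """
    cap = {}
    frames = H // f
    for i, e in enumerate(tasks):
        cap[('s', ('job', i))] = e                 # source -> job: demand e_i
        for k in range(frames):
            if can_run(i, k):
                cap[(('job', i), ('frame', k))] = f  # job may use frame k
    for k in range(frames):
        cap[(('frame', k), 't')] = f               # frame -> sink: f slots
    return cap

# Toy instance: two jobs, H = 4, f = 2; job 0 may only run in frame 0
cap = build_flow_network([2, 1], 4, 2, lambda i, k: i == 1 or k == 0)
print(len(cap))  # 7 edges: 2 source, 3 job->frame, 2 frame->sink
```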
Flow network • Given a directed graph G with edge capacities • A source node s • A sink node t Goal: send as much flow as possible from s to t
Flows An s-t flow is a function f on the edges which satisfies: • Capacity constraint: 0 ≤ f(e) ≤ c(e) for every edge e • Conservation of flow (at intermediate vertices): for every vertex v other than s and t, the total flow into v equals the total flow out of v
Value of the flow The value of a flow is the total flow leaving the source, v(f) = Σ f(e) over edges e out of s. Maximum flow problem: maximize this value. (In the example graph G used in the following slides, the flow shown has value 19.)
Cuts • An s-t cut is a set of edges whose removal disconnects s and t • The capacity of a cut is defined as the sum of the capacities of the edges in the cut Minimum s-t cut problem: minimize the capacity of an s-t cut
Flows ≤ cuts • Let C be a cut and S be the connected component of G – C containing s. Every unit of flow from s to t must cross some edge of C, so the value of any s-t flow is at most the capacity of any s-t cut.
Main result • Value of max s-t flow ≤ capacity of min s-t cut • (Ford–Fulkerson 1956) Max flow = Min cut • The proof yields an augmenting-path algorithm; with suitable path choices (e.g. Edmonds–Karp), it runs in polynomial time.
Residual graph • Key idea: allow flow to be pushed back. If an edge has capacity c(e) = 10 and current flow f(e) = 2, the residual graph contains a forward edge with residual capacity 8 and a backward edge with capacity 2: we can send 8 more units forward or push 2 units back. • The advantage of this representation is that sending forward and pushing back are handled uniformly.
Ford-Fulkerson Algorithm • Start from the empty flow f • While there is an s-t path P in the residual graph, augment f along P by the bottleneck residual capacity and update the residual graph • Return f
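The loop above can be implemented with BFS path selection (the Edmonds–Karp variant). This is a sketch; the example edge capacities are reconstructed from the figures on the following slides, where the maximum flow is 19:

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).

    cap: dict {(u, v): capacity}. Residual capacities are derived on the fly.
    """
    flow = defaultdict(int)
    adj = defaultdict(set)
    for u, v in cap:
        adj[u].add(v); adj[v].add(u)          # include reverse edges (push-back)

    def residual(u, v):                        # c(u,v) - f(u,v) + f(v,u)
        return cap.get((u, v), 0) - flow[(u, v)] + flow[(v, u)]

    total = 0
    while True:
        parent = {s: None}                     # BFS for an s-t residual path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u; q.append(v)
        if t not in parent:
            return total                       # no augmenting path: flow is max
        path, v = [], t                        # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        aug = min(residual(u, v) for u, v in path)  # bottleneck capacity
        for u, v in path:
            flow[(u, v)] += aug                # residual() nets out push-backs
        total += aug

# Example graph from the slides (max flow = min cut = 19)
cap = {('s', 2): 10, ('s', 3): 10, (2, 3): 2, (2, 4): 4, (2, 5): 8,
       (3, 5): 9, (5, 4): 6, (4, 't'): 10, (5, 't'): 10}
print(max_flow(cap, 's', 't'))  # 19
```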
Ford-Fulkerson Algorithm – worked example Starting from the zero flow on the example graph, successive augmenting paths in the residual graph raise the flow value from 0 to 8, then 10, 16, 18, and finally 19. At that point no augmenting path remains, and an s-t cut of capacity 19 certifies optimality: flow value = cut capacity = 19.