Anytime Control Algorithms for Embedded Real-Time Systems
L. Greco, D. Fontanelli, A. Bicchi
Interdepartmental Research Center “E. Piaggio”, University of Pisa
UC Berkeley EECS, Berkeley, CA
Introduction
• General tendency in embedded systems: implementation of many concurrent real-time tasks on the same platform, which reduces overall HW cost and development time
• Highly time-critical control tasks are traditionally scheduled with very conservative approaches, yielding a rigid, hardly reconfigurable, underperforming architecture
• Modern multitasking RTOSs (e.g. in automotive ECUs) schedule their tasks dynamically, adapting to varying load conditions and QoS requirements
Introduction
• Real-time preemptive algorithms (e.g., RM or EDF) can suspend task execution on higher-priority interrupts
• Guarantees of schedulability – based on estimates of the Worst-Case Execution Time (WCET) – are obtained at the cost of HW underexploitation: e.g., RM can only guarantee schedulability if less than about 70% of the CPU is utilized
• In other terms: in most CPU cycles, more time is available than the worst-case estimate assumes
• The problem of Anytime Control is to make good use of that extra time
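The ~70% figure comes from the classical Liu–Layland utilization bound for Rate Monotonic scheduling, U ≤ n(2^(1/n) − 1), which tends to ln 2 ≈ 0.693 as the number of tasks grows. A minimal sketch of the test (the task set is made up for illustration):

```python
# Liu-Layland bound: a set of n periodic tasks is RM-schedulable if the
# total CPU utilization does not exceed n * (2**(1/n) - 1).
def rm_bound(n: int) -> float:
    return n * (2 ** (1.0 / n) - 1.0)

def rm_schedulable(tasks) -> bool:
    """tasks: list of (wcet, period) pairs; sufficient (not necessary) test."""
    utilization = sum(c / t for c, t in tasks)
    return utilization <= rm_bound(len(tasks))

if __name__ == "__main__":
    # Hypothetical task set: (WCET, period) in milliseconds.
    tasks = [(1.0, 5.0), (2.0, 10.0), (3.0, 20.0)]
    print(rm_bound(len(tasks)))    # ~0.78 for n = 3
    print(rm_schedulable(tasks))   # utilization 0.55 -> True
    # As n grows the bound approaches ln(2) ~ 0.693, i.e. roughly 70% CPU.
```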
Anytime Paradigm
• Anytime algorithms and filters:
• the execution can be interrupted at any time, always producing a valid output;
• increasing the computational time increases the accuracy of the output (imprecise computation)
• Can we apply this to controllers?
Example (I)
[Block-diagram figure]
Example (II) – Regulation Problem – RMS comparison
[Plot] Not feasible; Conservative: stable but poor performance
Example (III) – Regulation Problem – RMS comparison
[Plot] Greedy: maximum allowed Gi – Unstable!
Issues in Anytime Control
• Hierarchical Design: controllers must be ordered in a hierarchy of increasing performance;
• Switched System Performance: stability and performance of the switched system must be addressed;
• Practicality: implementation of both control and scheduling algorithms must be simple (limited resources);
• Composability: computation of higher controllers should exploit computations of lower controllers (recommended).
Problem Formulation
Consider a linear, discrete-time, time-invariant plant and a family of stabilizing feedback controllers. The closed-loop system is obtained by closing the loop with the currently active controller.
Controller i provides better performance than controller j if i > j (but WCET_i > WCET_j).
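The formulas on this slide did not survive extraction. A plausible reconstruction, assuming for simplicity static state feedback (the talk's controllers are dynamic, which is what motivates the bumpless-transfer discussion later):

```latex
% Plant (linear, discrete-time, time-invariant)
x(k+1) = A\,x(k) + B\,u(k)
% Family of stabilizing feedback controllers, ordered by performance and WCET
u(k) = K_i\,x(k), \qquad i = 1,\dots,n
% Closed-loop dynamics when controller i is active
x(k+1) = (A + B K_i)\,x(k) \;=\; \bar{A}_i\,x(k)
```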
Scheduler Description
• Sampling instants
• Time allotted to the control task
• Worst-Case Execution Times
• Time map
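The symbols attached to these bullets were lost with the slide graphics; one consistent way to write them (the notation is an assumption, not taken from the talk):

```latex
% Sampling instants (period T_s)
t_k = k\,T_s, \qquad k \in \mathbb{N}
% Time allotted to the control task in the k-th period
\delta_k \in (0, T_s]
% Worst-case execution times of the controllers (ordered)
\delta^{wcet}_1 < \delta^{wcet}_2 < \dots < \delta^{wcet}_n
% Time map: index of the most complex controller that fits in the allotted slot
\tau(k) = \max\{\, i : \delta^{wcet}_i \le \delta_k \,\}
```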
Scheduler Description – Stochastic Scheduler as an I.I.D. Process
A simple stochastic description of the random sequence of allotted slots is an i.i.d. process.
At time t, the time slot is such that all controllers up to the drawn index, but no controller beyond it, can be executed.
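The dangling "Pr" is the remnant of a probability formula; under the i.i.d. assumption it presumably reads (reconstruction, notation as above):

```latex
\Pr\{\tau(k) = i\} = p_i, \qquad p_i \ge 0, \quad \sum_{i=1}^{n} p_i = 1,
\quad \text{independently of } \tau(j),\ j \ne k
```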
Scheduler Description – Stochastic Scheduler as a Markov Chain
A more general description uses a finite-state, discrete-time, homogeneous, irreducible, aperiodic Markov chain.
• Transition probability matrix
• Steady-state probabilities
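The two quantities, written out (again a reconstruction of the missing formulas):

```latex
% Transition probability matrix
P = [p_{ij}], \qquad p_{ij} = \Pr\{\tau(k+1) = j \mid \tau(k) = i\}
% Steady-state (invariant) probabilities
\pi^{\top} P = \pi^{\top}, \qquad \pi_i \ge 0, \quad \sum_{i=1}^{n} \pi_i = 1
```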
Almost Sure Stability
Definition: the MJLS is exponentially AS-stable if there exist constants such that, for any initial condition and any initial distribution π0, the state decays exponentially with probability one.
Sufficient conditions: m-step (lifted system) and 1-step (average contractivity).
Theorem: the MJLS is exponentially AS-stable if and only if there exists an m such that the m-step condition holds [P. Bolzern, P. Colaneri, G. De Nicolao – CDC ’04]
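The conditions themselves were on the slide as images. In the spirit of Bolzern, Colaneri and De Nicolao, they can be sketched as follows (the exact constants and norms are my assumption):

```latex
% Exponential AS-stability (definition, reconstructed): there exist c > 0 and
% 0 < \lambda < 1 such that, with probability one,
\|x(k)\| \le c\,\lambda^{k}\,\|x(0)\| \qquad \text{for all } k \ge 0
% 1-step sufficient condition (average contractivity), \pi the invariant law:
\sum_{i=1}^{n} \pi_i \,\ln \|\bar{A}_i\| \;<\; 0
% m-step sufficient condition (lifted system): expectation over m-strings
% drawn from the stationary chain of controller indices
\mathbb{E}\!\left[\,\ln \big\| \bar{A}_{\sigma(m-1)} \cdots \bar{A}_{\sigma(1)}\,\bar{A}_{\sigma(0)} \big\|\,\right] \;<\; 0
```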
Switching Policy – Preliminaries and Analysis
Switching policy map:
• upper bound on the index of the executable controller;
• the controller selected by the policy is computed, unless a preemption event forces a lower-index one.
Examples:
• Conservative Policy (non-switching, always available)
• Greedy Policy (always the maximum allowed controller; adequate only if the resulting system is already AS-stable)
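A minimal sketch of the two example policies, assuming tau_k is the largest controller index the scheduler's slot allows (the function names are mine):

```python
def conservative_policy(tau_k: int) -> int:
    """Non-switching: always run the cheapest controller (index 1),
    which fits in every time slot by assumption."""
    return 1

def greedy_policy(tau_k: int) -> int:
    """Run the best controller that fits in the allotted slot.
    Safe only if the resulting switched system is already AS-stable."""
    return tau_k

def executed_controller(policy_bound: int, tau_k: int) -> int:
    """The controller actually run: the policy's choice, capped by preemption."""
    return min(policy_bound, tau_k)
```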
Switching Policy – Synthesis Problem Formulation
Problem: given the plant, the controller family and the invariant scheduler distribution, find a switching policy such that the resulting system is an MJLS with a prescribed invariant probability distribution.
• The computational time allotted by the scheduler cannot be increased;
• the probability of the i-th controller can be increased only by reducing the probabilities of more complex controllers.
How can we build a switching policy ensuring the desired distribution? (A sketch of the constraints follows below.)
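One way to write the two constraints (my reconstruction; π is the scheduler's invariant distribution and π̄ the distribution produced by the policy):

```latex
% The policy can only give up computation time, never add it: for every i,
% the probability of actually running controller i or better cannot exceed
% the probability that the scheduler allots time for it.
\sum_{j \ge i} \bar{\pi}_j \;\le\; \sum_{j \ge i} \pi_j, \qquad i = 1,\dots,n
% Desired property: the resulting MJLS is exponentially AS-stable, e.g.
\sum_{i=1}^{n} \bar{\pi}_i \,\ln\|\bar{A}_i\| \;<\; 0
```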
Stochastic Policy
Use an independent, conditioning Markov chain s:
• same structure (number of states) as the scheduler chain;
• when s is in state i: in the next sampling interval at most the i-th controller is computed (if no preemption occurs).
How does the conditioning chain interact with the scheduler’s chain?
Merging Markov Chains – Mixing
Theorem: consider two independent finite-state homogeneous irreducible aperiodic Markov chains s and t, with their respective state spaces. The joint stochastic process (t, s) is a finite-state homogeneous irreducible aperiodic Markov chain whose transition matrix and stationary distribution are the Kronecker products of those of the two chains.
Note: the extended chain ts has n² states.
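A quick numerical check of the mixing property (the two transition matrices are made up; only the Kronecker-product structure is the point):

```python
import numpy as np

def stationary(P: np.ndarray) -> np.ndarray:
    """Left eigenvector of P for eigenvalue 1, normalized to a distribution."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

# Hypothetical scheduler chain (t) and conditioning chain (s), n = 2 states each.
P_t = np.array([[0.7, 0.3],
                [0.4, 0.6]])
P_s = np.array([[0.5, 0.5],
                [0.2, 0.8]])

P_ts = np.kron(P_t, P_s)                          # joint chain: n^2 = 4 states
pi_ts = np.kron(stationary(P_t), stationary(P_s)) # joint stationary distribution

# The Kronecker product of the stationary distributions is stationary
# for the Kronecker product of the transition matrices:
assert np.allclose(pi_ts @ P_ts, pi_ts)
```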
Merging Markov Chains – Aggregating
The goal is to produce a process with a desired stationary probability distribution over the n controllers.
After mixing, use an aggregation function derived from the schedulability constraints (aggregated process).
The i-th controller is executed if and only if:
• the conditioning chain bets i and the scheduler allots at least that much time (limiting controller), or
• the scheduler allots exactly the time for i while the conditioning chain bets more (preemption).
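The aggregation rule, as I read the two bullets (a reconstruction): the executed controller is the minimum of the conditioning chain's bet s(k) and the scheduler's allotment τ(k):

```latex
\theta(k) \;=\; \min\{\, s(k),\ \tau(k) \,\} \;=\;
\begin{cases}
s(k), & s(k) \le \tau(k) \quad \text{(limiting controller: the bet fits in the slot)}\\
\tau(k), & \tau(k) < s(k) \quad \text{(preemption: the scheduler cuts the bet short)}
\end{cases}
```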
Merging Markov Chains – Aggregating (II)
Remark: the aggregated process is obtained from the two chains, and its stationary probabilities are a linear combination of those of the joint chain.
Remark: the state evolution of the JLS driven by the aggregated process is the same as the one produced by an equivalent MJLS driven by the joint Markov chain, constructed by associating to each joint state the aggregated controller index and hence the corresponding closed-loop dynamics.
Therefore the AS-stability analysis for MJLSs applies to the aggregated system.
Markov Policy – 1-step contractive formulation
Anytime Problem (Linear Programming): find a probability vector for the conditioning chain such that the aggregated process satisfies the 1-step average-contractivity condition together with the scheduling constraints.
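A sketch of how the linear program might look under the 1-step condition (the decision variable and constraints are my reconstruction, not necessarily the authors' exact formulation): choose the stationary distribution x of the aggregated process so that it is average-contractive and compatible with the scheduler.

```latex
\text{find } x \in \mathbb{R}^{n} \text{ such that}\quad
\begin{aligned}
& x_i \ge 0, \qquad \textstyle\sum_{i} x_i = 1 \\
& \textstyle\sum_{j \ge i} x_j \;\le\; \sum_{j \ge i} \pi_j, \qquad i = 1,\dots,n
  && \text{(no extra CPU time)}\\
& \textstyle\sum_{i} x_i \,\ln\|\bar{A}_i\| \;\le\; -\varepsilon
  && \text{(1-step average contractivity)}
\end{aligned}
```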
Example (Reprise)
[Block-diagram figure]
Example – Furuta Pendulum – Regulation Problem – RMS comparison
[Plot] Markov policy – Improvement: > 55%
Markov Policy – m-step contractive formulation (I)
A 1-step contractive solution may not exist, but an m-step solution always exists for some m, since the minimal controller is always executable.
Look for a solution to the Anytime Problem for increasing m.
Key idea: the switching policy supervises the controller choice so that some control patterns are preferred over others.
Markov Policy – m-step contractive formulation (II)
• Lifted scheduler chain (n^m states)
• Conditioning chain not lifted; its states are strings of m symbols (n^m states)
• Mixing and aggregating: same as in the 1-step problem
• Switching policy: every m steps, a bet in advance on an m-string of controllers (elementwise minimum with the scheduler’s allotments)
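The elementwise-minimum aggregation over a block of m steps can be written as follows (a sketch in the notation assumed earlier, with b(k) the bet string for the k-th block):

```latex
\theta_j(k) \;=\; \min\{\, b_j(k),\ \tau(km + j) \,\}, \qquad j = 0,\dots,m-1
```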
Example (TORA) (I)
[Block-diagram figure]
Example (TORA) (II) – Regulation Problem – RMS comparison
[Plot] Not feasible; Conservative: stable but poor performance; Greedy
Example (TORA) (III) – Regulation Problem – RMS comparison
[Plot] 4-step solution, Markov policy. Most likely control pattern: G2-G2-G2-G3
Tracking and Bumpless
• In tracking tasks, performance can be severely impaired by switching between different controllers
• The activation of a higher-level controller abruptly introduces the dynamics of its re-activated (sleeping) states (low-to-high level switching)
• Bumpless-like techniques can help make the transitions smoother (a sketch follows below)
• Practicality considerations must be taken into account in developing a bumpless transfer method
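A minimal sketch of one common bumpless-transfer idea (a generic technique, not necessarily the method used in the talk): when a higher-level dynamic controller is re-activated, re-initialize its dormant state so that its first output matches the control value that was being applied just before the switch.

```python
import numpy as np

def bumpless_init(H: np.ndarray, u_prev: np.ndarray) -> np.ndarray:
    """Choose an initial state x_c for the newly activated dynamic controller
    so that its first output H @ x_c equals the last applied control u_prev,
    instead of waking up with stale (dormant) state.  Least-squares is used
    because H is typically wide (fewer outputs than controller states)."""
    x_c, *_ = np.linalg.lstsq(H, np.atleast_1d(u_prev), rcond=None)
    return x_c

# Hypothetical 2-state controller with a scalar output map.
H = np.array([[1.0, 0.5]])
x_c0 = bumpless_init(H, np.array([0.8]))
print(H @ x_c0)   # ~[0.8]: no bump in the control signal at the switch
```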
Example (F.P.) (V) – Tracking Problem – RMS comparison
[Plot] Not feasible; Conservative: stable but poor performance
Example (F.P.) (VI) – Tracking Problem – Reference & output comparison
[Plot] Markov policy; Markov Bumpless policy
Example (F.P.) (VII) – Tracking Problem – Greedy Policy
[Plot] Greedy: maximum allowed Gi – Unstable!
Example (F.P.) (VIII) – Tracking Problem – RMS comparison
[Plot] Markov policy; Markov Bumpless policy
Example (TORA) (IV) – Tracking Problem – RMS comparison
[Plot] Not feasible; Conservative: stable but poor performance; Greedy
Example (TORA) (V) – Tracking Problem – Reference & output comparison
[Plot] Greedy; Markov policy; Markov Bumpless policy
Example (TORA) (VI) – Tracking Problem – RMS comparison
[Plot] Greedy; Markov policy; Markov Bumpless policy
Conclusions
• Performance (not just stability) under switching must be considered for tracking
• Ongoing work is addressing:
• hierarchical design of (composable) controllers for anytime control
• numerical aspects of the m-step solution
• implementation on real systems