Distributed Stochastic Model Predictive Control
Outline
• 1. Background
• 2. DSMPC ── Additive Uncertainty
• 3. DSMPC ── Parameter Uncertainty
• 4. DSMPC ── Additive Uncertainty & Parameter Uncertainty
1. Background
Application domains: Chemical Process, Smart Grid, Internet of Vehicles, Intelligent Agriculture, Intelligent Transportation, Intelligent Community
1. Background
The control problem of a group of subsystems subject to uncertainties and coupled constraints motivates Distributed Stochastic MPC.
• System Characteristics:
• Multiple Subsystems
• Wide Distribution
• Multiple Constraints
• Various Uncertainties
1. Background
Cloud control background:
Xia Yuanqing. From networked control systems to cloud control systems [C]. In: Proceedings of the 31st Chinese Control Conference (CCC), Hefei, China, 2012: 5878-5883.
Xia Yuanqing. Cloud control systems [J]. IEEE/CAA Journal of Automatica Sinica, 2015, 2(2): 134-142.
Xia Yuanqing. Cloud control systems and their challenges [J]. Acta Automatica Sinica, 2016, 42(1): 1-12.
Xia Yuanqing, Qin Y, Zhai D H, et al. Further results on cloud control systems [J]. Science China Information Sciences, 2016, 59(7): 1-5.
1. Background
Robust vs. Stochastic MPC
Sources of uncertainty: measurement error, model uncertainty, disturbances.
Robust MPC:
• Worst-case uncertainties
• Hard constraints
Stochastic MPC:
• Some known statistical description of the uncertainty (probability distribution, or the first and second moments)
• Probabilistic constraints
1. Background
Centralized vs. Distributed MPC
• Centralized MPC: the complete system is modeled, and all control inputs are computed in one optimization problem.
• Computational complexity
• Communication bandwidth limitations
• Reliance on a single processor
• Distributed MPC: local control decisions are computed using local measurements and reduced-order models of the local dynamics.
• Its dynamic performance is generally worse than that of the centralized framework.
1. Background
Distributed Stochastic MPC
Research on DSMPC is still at an early stage, and many difficult but important problems remain to be solved:
• How to coordinate the subsystems so that the distributed decisions satisfy the coupled constraints.
• How to exploit the probabilistic information to design controllers that ensure recursive feasibility and closed-loop stability.
• How to exploit the relationship between the global objective and the independent decision-making of the subsystems to achieve a coordinated response of the entire system.
Outline
• 1. Background
• 2. DSMPC ── Additive Uncertainty
• 3. DSMPC ── Parameter Uncertainty
• 4. DSMPC ── Additive Uncertainty & Parameter Uncertainty
2.1. Problem statement
Consider a system of Np linear subsystems with additive uncertainty:
x_p(k+1) = A_p x_p(k) + B_p u_p(k) + w_p(k),  p = 1, ..., Np
Remark
• The disturbance sequence {w_p(k)} is assumed i.i.d. for each subsystem p.
• w_p(k): zero-mean, independent across subsystems, with known distribution and support on a known orthotope.
2.1. Problem statement
• Local probabilistic constraints:
• Nc coupling constraints across multiple subsystems:
• Open-loop prediction strategy:
2.1. Problem statement
• State decomposition: each state is split into a nominal part and an uncertain part, x_p = z_p + e_p, where z_p is the nominal state and e_p the uncertain (error) state.
Remark
The predicted states belong to a tube centred on the nominal trajectory.
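The following sketch illustrates the nominal/uncertain decomposition and the resulting tube of predictions. The matrices A_p, B_p and the feedback gain K_p below are illustrative placeholders, not the values used in the slides.

```python
import numpy as np

# Hypothetical subsystem data (for illustration only)
A_p = np.array([[1.0, 0.1], [0.0, 1.0]])
B_p = np.array([[0.005], [0.1]])
K_p = np.array([[-0.6, -1.0]])          # assumed stabilizing local feedback gain
Phi_p = A_p + B_p @ K_p                  # closed-loop prediction dynamics

def predict_nominal(z0, c_seq):
    """Nominal (disturbance-free) prediction z(i+1) = Phi z(i) + B c(i)."""
    z = [z0]
    for c in c_seq:
        z.append(Phi_p @ z[-1] + B_p @ c)
    return np.array(z)

def propagate_error(e0, w_seq):
    """Uncertain part of the tube: e(i+1) = Phi e(i) + w(i)."""
    e = [e0]
    for w in w_seq:
        e.append(Phi_p @ e[-1] + w)
    return np.array(e)

# x = z + e: any realized trajectory stays in a tube centred on the nominal one
z0 = np.array([1.0, 0.0])
c_seq = [np.zeros(1) for _ in range(5)]
w_seq = [0.01 * np.random.uniform(-1, 1, 2) for _ in range(5)]
x_traj = predict_nominal(z0, c_seq) + propagate_error(np.zeros(2), w_seq)
```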
2.2. Centralized SMPC 【Cost function】
The system-wide cost function is the sum of the local expected costs over the prediction horizon.
Remark
• Quadratic in the free control variables: minimized numerically online.
• The remaining finite cost terms are computed offline.
2.2. Centralized SMPC 【Constraint handling strategy】
A first step towards guaranteeing that the constraints are met in closed-loop operation is to ensure that they are satisfied by the predicted state and input sequences for each subsystem at every time k.
2.2. Centralized SMPC
• Theorem 1
Remark
• Tightened linear constraints are imposed on the nominal input/state predictions.
• The tightening uses the distribution of the accumulated disturbance, which is a sum of independent random variables.
2.2. Centralized SMPC Although the conditions of Theorem 1 ensure that constraints are satisfied over the entire prediction horizon for each subsystem p and each constraint c∈ Cat time k, the existence of cp(k) satisfying constraints does not ensure the existence of cp(k+1) generating predictions at time k+1 satisfying constraints. Consequently Theorem 1 does not guarantee the future feasibility of an online optimization problem incorporating the conditions in Theorem 1 as constraints. Consider the i-step-ahead prediction at time k: at time k+1, this term has already been realized where
2.2. Centralized SMPC
Consider the i-step-ahead prediction at time k: the disturbance terms that will already have been realized are handled with a worst-case bound, while the remaining future terms are handled with a probabilistic bound.
2.2. Centralized SMPC
• Predictions at time k must ensure feasibility at k+1, k+2, ...
• Hence the local constraints on the nominal i-step-ahead prediction are tightened by a margin given by the maximum element of the ith column of the corresponding bound matrix.
• Theorem 2 (a)
Remark
Satisfaction of the local probabilistic constraints and recursive feasibility are ensured if the tightened conditions hold.
2.2. Centralized SMPC
• The coupling constraints on the nominal i-step-ahead prediction are tightened analogously, by the maximum element of the ith column of the corresponding bound matrix.
• Theorem 2 (b)
Remark
Satisfaction of the coupling probabilistic constraints and recursive feasibility are ensured if the tightened conditions hold.
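A minimal sketch of the two kinds of tightening, assuming a scalar constraint f^T x ≤ h, a disturbance supported on a small orthotope, and a Monte Carlo approximation of the accumulated-disturbance distribution. All numbers are placeholders, not the slides' data.

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = np.array([[0.9, 0.1], [0.0, 0.8]])   # closed-loop prediction matrix (assumed)
f = np.array([1.0, 0.5])                    # constraint direction f^T x <= h
eps = 0.2                                   # allowed violation probability
N = 8                                       # prediction horizon
w_bound = 0.05                              # orthotope half-width of each component

def accumulated_effect(i, n_samples=20000):
    """Samples of f^T (sum_{j=0}^{i-1} Phi^j w), w uniform on the orthotope."""
    s = np.zeros(n_samples)
    for j in range(i):
        w = rng.uniform(-w_bound, w_bound, size=(n_samples, 2))
        s += (np.linalg.matrix_power(Phi, j) @ w.T).T @ f
    return s

# Probabilistic tightening: (1 - eps)-quantile of the accumulated effect
prob_tight = [np.quantile(accumulated_effect(i), 1 - eps) for i in range(1, N + 1)]

# Worst-case tightening of one realized step: maximum over the orthotope vertices
verts = np.array([[sx * w_bound, sy * w_bound] for sx in (-1, 1) for sy in (-1, 1)])
worst_one_step = max(f @ v for v in verts)

print(prob_tight, worst_one_step)
```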
2.2. Centralized SMPC 【CMPC optimization】 Remark
2.2. Centralized SMPC Largest element of each column lies on the diagonal:
2.3. Distributed SMPC 【Distributed strategy】
Remark
With the whole system at a state x(k), only one subsystem p is permitted to update its plan at this time step; pk denotes the subsystem optimizing at time k.
• If p = pk, the new plan is obtained as the solution of the local optimization problem.
• Otherwise, the subsystem keeps its previously computed (shifted) plan.
2.3. Distributed SMPC 【Terminal constraint】
Constraints in the MPC optimization at time k include a terminal constraint on the nominal state.
• The terminal set can be computed using, e.g., the maximal admissible set algorithm of [Gilbert & Tan, 1991].
• The terminal set is convex.
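A sketch of a maximal constraint-admissible invariant set computation in the spirit of [Gilbert & Tan, 1991]; the dynamics and constraint data below are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def maximal_admissible_set(Phi, F, g, max_iter=50, tol=1e-9):
    """Gilbert-Tan style iteration (sketch): returns (H, h) describing the
    maximal admissible invariant set {x : H x <= h} for x(k+1) = Phi x(k)
    subject to F x <= g."""
    n = Phi.shape[0]
    H, h = F.copy(), np.asarray(g, dtype=float).copy()
    Phi_pow = Phi.copy()                      # Phi^k, starting at k = 1
    for _ in range(max_iter):
        F_new = F @ Phi_pow                   # candidate rows F Phi^k x <= g
        redundant = True
        for row, bound in zip(F_new, g):
            # maximize row @ x over the current set (linprog minimizes, so negate)
            res = linprog(-row, A_ub=H, b_ub=h, bounds=[(None, None)] * n)
            if res.status != 0 or -res.fun > bound + tol:
                redundant = False
                break
        if redundant:                         # new rows are implied: set is invariant
            return H, h
        H = np.vstack([H, F_new])
        h = np.concatenate([h, g])
        Phi_pow = Phi_pow @ Phi
    return H, h                               # may not have converged within max_iter

# Example with a stable 2x2 system and box constraints |x_i| <= 1 (placeholders)
Phi = np.array([[0.9, 0.2], [0.0, 0.8]])
F = np.vstack([np.eye(2), -np.eye(2)])
g = np.ones(4)
H, h = maximal_admissible_set(Phi, F, g)
print(H.shape)
```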
2.3. Distributed SMPC 【DMPC optimization】
At time step k, the local optimization problem for subsystem pk:
Remark
• The local problem is a Quadratic Program (QP).
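A minimal sketch of such a local QP for the updating subsystem, assuming a generic cost weight W and stacked tightened constraints G c ≤ b; all data below are placeholders, not the slides' problem.

```python
import numpy as np
import cvxpy as cp

n_c = 8                                      # number of free control moves (horizon)
W = np.eye(n_c)                              # cost weight (assumed positive definite)
G = np.vstack([np.eye(n_c), -np.eye(n_c)])   # stacked tightened linear constraints
b = 0.5 * np.ones(2 * n_c)                   # tightened bounds (margins already subtracted)

c = cp.Variable(n_c)                         # free control moves of subsystem pk
prob = cp.Problem(cp.Minimize(cp.quad_form(c, W)), [G @ c <= b])
prob.solve()
print(prob.status, c.value)
```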
2.3. Distributed SMPC 【Main results】 • Theorem 3 Remark
2.3. Distributed SMPC Proof: Theorem 2
2.3. Distributed SMPC
• Define the global cost as a Lyapunov function.
• Summing the expected cost decrease over r time steps and letting r grow yields the closed-loop stability bound.
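A hedged sketch of the standard telescoping argument, with notation assumed rather than taken from the slides (J* is the optimal global cost and L_ss a constant steady-state bound).

```latex
% Assumed one-step decrease condition:
\mathbb{E}\!\left[J^*(k+1)\mid x(k)\right] \;\le\; J^*(k)
  - \mathbb{E}\!\left[\|x(k)\|_Q^2 + \|u(k)\|_R^2\right] + L_{ss}
% Taking total expectation, summing over k = 0,...,r-1 and using J^* >= 0:
\frac{1}{r}\sum_{k=0}^{r-1}\mathbb{E}\!\left[\|x(k)\|_Q^2 + \|u(k)\|_R^2\right]
  \;\le\; \frac{J^*(0)}{r} + L_{ss}
% so the time-average expected stage cost is bounded by L_ss as r -> infinity.
```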
2.4. Numerical example
Model parameters:
• Four probabilistic constraints, including local constraints and state coupling constraints:
• Disturbances drawn from truncated normal distributions:
2.4. Numerical example
• The LQ-optimal gain:
Prediction parameters:
• Prediction horizon:
• Terminal sets:
Simulation parameters:
• 1000 realizations of disturbance sequences
Initial conditions:
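A minimal sketch of the Monte Carlo set-up only, assuming placeholder dynamics, feedback gain, truncation bounds, and an example constraint rather than the slides' actual parameters.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-0.6, -1.0]])            # assumed stabilizing feedback gain
sigma, w_max = 0.02, 0.05               # truncated-normal std and truncation bound
n_real, T = 1000, 30                    # 1000 disturbance realizations

def sample_w(size):
    # truncnorm takes its bounds in standard-deviation units
    a, b = -w_max / sigma, w_max / sigma
    return truncnorm.rvs(a, b, loc=0.0, scale=sigma, size=size, random_state=rng)

x0 = np.array([1.0, 0.0])
violations = 0
for _ in range(n_real):
    x = x0.copy()
    for _ in range(T):
        u = K @ x                        # here: unconstrained feedback only
        x = A @ x + B @ u + sample_w(2)
        violations += x[1] > 0.3         # example probabilistic constraint x2 <= 0.3
print("empirical violation rate:", violations / (n_real * T))
```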
2.4. Numerical example
[Figure: local and coupled constraint trajectories under DSMPC vs. unconstrained optimal control]
2.5. Conclusion: Additive Uncertainty
• Coupling probabilistic constraints are handled in a distributed way.
• Recursive feasibility is guaranteed with respect to both local and coupling probabilistic constraints.
• The stability of the large-scale system is analyzed in the presence of additive stochastic disturbances.
Li Dai, Yuanqing Xia, Yulong Gao, Mark Cannon. Distributed stochastic MPC of linear systems with additive uncertainty and coupled probabilistic constraints [J]. IEEE Transactions on Automatic Control, 2017, 62(7): 3474-3481.
Outline
• 1. Background
• 2. DSMPC ── Additive Uncertainty
• 3. DSMPC ── Parameter Uncertainty
• 4. DSMPC ── Additive Uncertainty & Parameter Uncertainty
3.1. Theoretical background on gPCEs
• Generalized polynomial chaos expansions (gPCEs) provide a means of uniformly approximating any second-order (finite-variance) process, which covers most physical processes.
• The expansion is taken in n i.i.d. random variables and truncated at the highest order of the polynomials.
Remark
3.1. Theoretical background on gPCEs
• Once the coefficients are computed, the orthogonality of the multivariate polynomials allows the desired moments of the approximated random variable to be computed directly from the coefficients of its gPCE.
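For an orthonormal basis this reduces to the familiar formulas mean = c_0 and variance = sum of the squared higher-order coefficients. The sketch below checks them against sampling, assuming a normalized Hermite basis and a standard normal germ (illustrative choices, not the slides' setting).

```python
import numpy as np

# For y = sum_i c_i * phi_i(xi) with an ORTHONORMAL basis {phi_0 = 1, phi_1, ...}:
#   E[y]   = c_0                       (since E[phi_i] = 0 for i >= 1)
#   Var[y] = sum_{i>=1} c_i**2         (by orthonormality)
def gpce_mean_var(coeffs):
    c = np.asarray(coeffs, dtype=float)
    return c[0], float(np.sum(c[1:] ** 2))

rng = np.random.default_rng(0)
xi = rng.standard_normal(200_000)
phi = [np.ones_like(xi), xi, (xi**2 - 1) / np.sqrt(2)]   # normalized Hermite polynomials
c = [1.0, 0.5, 0.2]
y = sum(ci * pi for ci, pi in zip(c, phi))
print(gpce_mean_var(c))               # analytic: (1.0, 0.29)
print(y.mean(), y.var())              # Monte Carlo estimate, close to the above
```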
3.2. Problem statement Consider a system of Np stochastic discrete-time linear subsystems: Remark
3.2. Problem statement
• Local probabilistic constraints:
• Nc coupling constraints across multiple subsystems:
• Local and coupled hard constraints on the inputs:
3.3. DSMPC algorithm with gPCEs
To render the online solution computationally feasible, gPCEs are used in the following to obtain a tractable formulation of the optimization problem.
The local problem for subsystem p:
Remark
3.3. DSMPC algorithm with gPCEs
• First, consider the approximation of the stochastic dynamics using the gPCE method.
3.3. DSMPC algorithm with gPCEs
Galerkin projection
Remark
The Galerkin projection yields deterministic linear difference equations for the gPCE coefficients:
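A minimal sketch of a Galerkin projection for a scalar uncertain system x(k+1) = a(xi) x(k) + b u(k) with a uniform germ and Legendre basis; all numbers are illustrative assumptions, not the slides' model.

```python
import numpy as np
from numpy.polynomial.legendre import legval

L = 3                                   # truncation order of the gPCE
a0, a1, b = 0.9, 0.05, 0.1              # a(xi) = a0 + a1*xi, xi ~ U[-1, 1]
nodes, weights = np.polynomial.legendre.leggauss(20)
weights = weights / 2.0                 # quadrature weights for the U[-1,1] density

def P(i, x):
    return legval(x, np.eye(L + 1)[i])  # Legendre polynomial P_i evaluated at x

# Galerkin matrices: A_g[j, i] = <a(xi) P_i, P_j> / <P_j, P_j>
norm = np.array([np.sum(weights * P(j, nodes) ** 2) for j in range(L + 1)])
A_g = np.array([[np.sum(weights * (a0 + a1 * nodes) * P(i, nodes) * P(j, nodes)) / norm[j]
                 for i in range(L + 1)] for j in range(L + 1)])
b_g = np.array([np.sum(weights * b * P(j, nodes)) / norm[j] for j in range(L + 1)])

# Deterministic difference equation for the gPCE coefficients of x:
#   X(k+1) = A_g @ X(k) + b_g * u(k),  with E[x(k)] = X_0(k)
X = np.zeros(L + 1); X[0] = 1.0         # x(0) = 1 deterministically
for k in range(10):
    X = A_g @ X + b_g * 1.0             # constant input u = 1 for illustration
print("mean of x(10) ~", X[0])
```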
3.3. DSMPC algorithm with gPCEs
• We next consider the cost function in the gPCE framework. The control input is parameterized as follows:
3.3. DSMPC algorithm with gPCEs
Remark: The cost function can be written as a quadratic form of the gPCE coefficients.
3.3. DSMPC algorithm with gPCEs
• A more challenging issue that arises in practice is the need to approximate the probabilistic constraints.
Remark
3.3. DSMPC algorithm with gPCEs
Remark
Proof. Lemma 4.1
3.3. DSMPC algorithm with gPCEs
Remark
We consider probabilistic constraints on both the states and the inputs of the following form:
Convex second-order cone constraints:
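One common way to obtain such a cone constraint is a Cantelli-type bound expressed in the gPCE coefficients; the sketch below assumes an orthonormal basis and placeholder data, and is not necessarily the exact reformulation used in the slides.

```python
import numpy as np
import cvxpy as cp

# For a scalar y = sum_i c_i * phi_i(xi) with an orthonormal basis,
# E[y] = c_0 and Std[y] = ||c_1:||_2. A Cantelli-type bound then gives
#   Pr{ y <= b } >= 1 - eps   if   c_0 + kappa * ||c_1:||_2 <= b,
#   kappa = sqrt((1 - eps) / eps).
eps, b = 0.1, 1.0
kappa = np.sqrt((1 - eps) / eps)
L = 4                                     # number of gPCE coefficients (assumed)

c = cp.Variable(L)                        # decision variable: gPCE coefficients
target = np.array([0.8, 0.3, 0.2, 0.1])   # reference coefficients (placeholder)
soc_constraint = [c[0] + kappa * cp.norm(c[1:], 2) <= b]
prob = cp.Problem(cp.Minimize(cp.sum_squares(c - target)), soc_constraint)
prob.solve()
print(prob.status, c.value)
```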
3.3. DSMPC algorithm with gPCEs
Remark
Problem 2 is a convex optimization problem that can be solved very efficiently: a quadratic cost function, convex second-order cone constraints, and convex sets.
3.4. Properties of gPCEs-based DSMPC
Invariant set:
3.4. Properties of gPCEs-based DSMPC The next result shows that the resulting closed-loop system has the properties of probabilistic constraint satisfaction and recursive feasibility.
3.4. Properties of gPCEs-based DSMPC The stability of such stochastic systems can be determined by examining the stability of the moments of the solution, which are deterministic functions.
3.5. Numerical example
Model parameters:
• Four probabilistic constraints, including local constraints and state coupling constraints:
Initial conditions: