Self-optimizing control: Applications to the process industry, biology and marathon running. Sigurd Skogestad, Department of Chemical Engineering, NTNU, Trondheim. HC, 31 January 2012
Self-optimizing control: From key performance indicators to control of biological systems. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. PSE 2003, Kunming, 05-10 Jan. 2004
Outline • Optimal operation • Implementation of optimal operation: Self-optimizing control • What should we control? • Applications • Marathon runner • KPIs • Biology • ... • Optimal measurement combination • Optimal blending example. Focus: Not optimization (optimal decision making), but rather how to implement the decision in an uncertain world.
Optimal operation of systems • Theory: • Model of overall system • Estimate present state • Optimize all degrees of freedom • Problems: • Model not available and optimization complex • Not robust (difficult to handle uncertainty) • Practice: • Hierarchical system • Each level: Follow orders ("setpoints") given by the level above • Goal: Self-optimizing
Process operation: Hierarchical structure (RTO → MPC → PID)
Engineering systems • Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers • Large-scale chemical plant (refinery) • Commercial aircraft • 1000’s of loops • Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward Same in biological systems
What should we control? • y1 = c? (economics) • y2 = ? (stabilization)
Self-optimizing Control • Self-optimizing control is when acceptable operation can be achieved using constant setpoints (cs) for the controlled variables c (without re-optimizing when disturbances occur): c = cs
Optimal operation (economics) • Define scalar cost function J(u0, d) • u0: degrees of freedom • d: disturbances • Optimal operation for given d: min_u0 J(u0, d) subject to: f(u0, d) = 0, g(u0, d) ≤ 0
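As a minimal sketch of this formulation, the following Python snippet solves a hypothetical one-degree-of-freedom instance with SciPy; the cost J and the constraint g are made-up stand-ins, not a model from the talk.

```python
# Minimal sketch of min_u0 J(u0, d) s.t. g(u0, d) <= 0.
# Cost and constraint are hypothetical stand-ins.
from scipy.optimize import minimize

d = 1.5  # a fixed (known) disturbance

def J(u, d):
    # made-up scalar cost in one degree of freedom u[0]
    return (u[0] - d) ** 2 + 0.1 * u[0]

# SciPy 'ineq' constraints require fun(u) >= 0, so g(u) <= 0
# is written as -g(u) >= 0. Here g(u) = u[0] - 1 (i.e. u <= 1).
cons = [{"type": "ineq", "fun": lambda u: 1.0 - u[0]}]

res = minimize(J, x0=[0.0], args=(d,), constraints=cons)
print("u_opt =", res.x[0], "J_opt =", res.fun)
# The constraint is active at the optimum here (u_opt = 1),
# echoing the next slide's point: control active constraints.
```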
Implementation of optimal operation • Idea: Replace optimization by setpoint control • Optimal solution is usually at constraints, that is, most of the degrees of freedom u0 are used to satisfy “active constraints”, g(u0,d) = 0 • CONTROL ACTIVE CONSTRAINTS! • Implementation of active constraints is usually simple. • WHAT MORE SHOULD WE CONTROL? • Find variables c for remaining unconstrained degrees of freedom u.
Unconstrained variables [Figure: cost J as a function of the selected controlled variable c (remaining unconstrained); the minimum Jopt is attained at copt]
Implementation of unconstrained variables is not trivial: How do we deal with uncertainty? • 1. Disturbances d • 2. Implementation error n: c = cs + n, where cs = copt(d*) is found from the nominal optimization [Figure: cost J vs. c, showing Jopt(d) and the effects of d and n]
Problem no. 1: Disturbance d ≠ d* [Figure: cost J vs. the controlled variable; keeping c constant at copt(d*) gives a loss relative to Jopt when d ≠ d*] ⇒ Want copt independent of d
Problem no. 2: Implementation error n [Figure: cost J vs. c at d = d*; with c = cs + n and cs = copt(d*), the implementation error n gives a loss relative to Jopt] ⇒ Want n small and a "flat" optimum
[Block diagram: an optimizer computes the setpoint cs = copt from J; a controller adjusts u to keep the measurement cm = c + n at cs; the plant maps u and the disturbance d to c] ⇒ Want c sensitive to u ("large gain")
Which variable c to control? • Define optimal operation: Minimize cost function J • Each candidate variable c: With constant setpoints cs compute loss L for expected disturbances d and implementation errors n • Select variable c with smallest loss
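A brute-force version of this procedure on a hypothetical toy problem (my own illustration, not from the slides): J(u, d) = (u − d)², so uopt(d) = d and Jopt = 0. Two candidate CVs are compared, c1 = u and c2 = u − d.

```python
# Brute-force loss evaluation for two hypothetical candidate CVs.
# Toy problem: J(u, d) = (u - d)^2, u_opt(d) = d, J_opt(d) = 0.
import numpy as np

def J(u, d):
    return (u - d) ** 2

# For each candidate, the u implied by holding c = cs + n:
#   c1 = u      =>  u = cs + n
#   c2 = u - d  =>  u = cs + n + d   (c2_opt = 0, independent of d)
implied_u = {
    "c1 = u":     lambda cs, n, d: cs + n,
    "c2 = u - d": lambda cs, n, d: cs + n + d,
}
setpoints = {"c1 = u": 1.0, "c2 = u - d": 0.0}  # c_opt at nominal d* = 1

disturbances = np.linspace(0.5, 1.5, 11)  # expected range of d
noises = [-0.1, 0.0, 0.1]                 # implementation error n

for name, u_of in implied_u.items():
    worst = max(J(u_of(setpoints[name], n, d), d)  # loss; J_opt = 0
                for d in disturbances for n in noises)
    print(f"{name}: worst-case loss = {worst:.3f}")
# c2 has a much smaller loss: its optimal value is independent of d.
```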
Constant setpoint policy: Loss for disturbances ("problem 1"). Acceptable loss ⇒ self-optimizing control
Good candidate controlled variables c (for self-optimizing control). Requirements: • The optimal value of c should be insensitive to disturbances (avoids problem 1) • c should be easy to measure and control (the remaining requirements address problem 2) • The value of c should be sensitive to changes in the degrees of freedom (equivalently, J as a function of c should be flat) • For cases with more than one unconstrained degree of freedom, the selected controlled variables should be independent. Singular value rule (Skogestad and Postlethwaite, 1996): Look for variables that maximize the minimum singular value of the appropriately scaled steady-state gain matrix G from u to c
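A numeric sketch of the singular value rule with made-up gain matrices (assuming the variables have already been appropriately scaled):

```python
# Sketch of the singular value rule: prefer the candidate CV set
# whose scaled steady-state gain matrix G (from u to c) has the
# largest minimum singular value. The matrices are made up.
import numpy as np

candidates = {
    "CV set A": np.array([[10.0, 0.1],
                          [0.2,  8.0]]),
    "CV set B": np.array([[1.0, 0.9],
                          [1.1, 1.0]]),  # nearly dependent CVs
}

for name, G in candidates.items():
    sigma_min = np.linalg.svd(G, compute_uv=False).min()
    print(f"{name}: sigma_min(G) = {sigma_min:.3f}")
# Set B has a small sigma_min: its CVs are nearly dependent, so a
# small implementation error in c maps to a large error in u.
```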
Examples of self-optimizing control • Marathon runner • Central bank • Cake baking • Business systems (KPIs) • Investment portfolio • Biology • Chemical process plants: Optimal blending of gasoline. Define optimal operation (J) and look for a "magic" variable (c) which, when kept constant, gives acceptable loss (self-optimizing control)
Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)?
Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles
Further examples • Central bank. J = welfare, c = inflation rate (2.5%) • Cake baking. J = nice taste, c = oven temperature (200 °C) • Business. J = profit, c = key performance indicator (KPI), e.g. • Response time to order • Energy consumption per kg or unit • Number of employees • Research spending. Optimal values obtained by "benchmarking" • Investment (portfolio management). J = profit, c = fraction of investment in shares (50%) • Biological systems: • "Self-optimizing" controlled variables c have been found by natural selection • Need to do "reverse engineering": • Find the controlled variables used in nature • From this, identify what overall objective J the biological system has been attempting to optimize
Looking for "magic" variables to keep at constant setpoints. How can we find them? • Consider the available measurements y, and evaluate the loss when they are kept constant ("brute force") • More general: Find the optimal linear combination c = Hy (matrix H)
Optimal measurement combination (Alstad) • Basis: Want the optimal value of c to be independent of disturbances ⇒ Δcopt = 0 · Δd • Find the optimal solution as a function of d: uopt(d), yopt(d) • Linearize this relationship: Δyopt = F Δd • F – sensitivity matrix • Want: HF = 0 (then Δcopt = H Δyopt = HF Δd = 0) • To achieve this for all values of Δd: select H in the left null space of F • Always possible if the number of (independent) measurements is at least the number of u's plus the number of d's
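A minimal numeric sketch of this null-space construction (F is a made-up sensitivity matrix with ny = 3 measurements, nu = 1, nd = 2, so the measurement-count condition holds):

```python
# Null-space method sketch: choose H with H F = 0 so that
# delta c_opt = H F delta d = 0. F below is made up.
import numpy as np
from scipy.linalg import null_space

F = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])     # delta y_opt = F delta d

H = null_space(F.T).T          # rows span the left null space of F
print("H =", H)
print("H F =", H @ F)          # ~ [[0, 0]]: c = H y is locally
                               # insensitive to the disturbances
```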
Outline • Control structure design (plantwide control) • A procedure for control structure design. I Top Down • Step 1: Define operational objective (cost) and constraints • Step 2: Identify degrees of freedom and optimize for disturbances • Step 3: What to control? (primary CVs) (self-optimizing control) • Step 4: Where to set the production rate? (Inventory control). II Bottom Up • Step 5: Regulatory control: What more to control (secondary CVs)? • Step 6: Supervisory control • Step 7: Real-time optimization • Case studies
Main message • 1. Control for economics (Top-down steady-state arguments) • Primary controlled variables c = y1 : • Control active constraints • For remaining unconstrained degrees of freedom: Look for “self-optimizing” variables • 2. Control for stabilization (Bottom-up; regulatory PID control) • Secondary controlled variables y2 (“inner cascade loops”) • Control variables which otherwise may “drift” • Both cases: Control “sensitive” variables (with a large gain)!
Process control:“Plantwide control” = “Control structure design for complete chemical plant” • Large systems • Each plant usually different – modeling expensive • Slow processes – no problem with computation time • Structural issues important • What to control? Extra measurements, Pairing of loops • Previous work on plantwide control: • Page Buckley (1964) - Chapter on “Overall process control” (still industrial practice) • Greg Shinskey (1967) – process control systems • Alan Foss (1973) - control system structure • Bill Luyben et al. (1975- ) – case studies ; “snowball effect” • George Stephanopoulos and Manfred Morari (1980) – synthesis of control structures for chemical processes • Ruel Shinnar (1981- ) - “dominant variables” • Jim Downs (1991) - Tennessee Eastman challenge problem • Larsson and Skogestad (2000): Review of plantwide control
Control structure selection issues are identified as important also in other industries. Professor Gary Balas (Minnesota) at ECC’03 about flight control at Boeing: The most important control issue has always been to select the right controlled variables --- no systematic tools used!
Main objectives of the control system • Stabilization • Implementation of acceptable (near-optimal) operation. ARE THESE OBJECTIVES CONFLICTING? • Usually NOT • Different time scales • Stabilization acts on the fast time scale • Stabilization doesn't "use up" any degrees of freedom • The reference value (setpoint) remains available for the layer above • But it "uses up" part of the time window (frequency range)
Dealing with complexity. Main simplification: Hierarchical decomposition; the controlled variables (CVs) interconnect the layers. Process control: • RTO: OBJECTIVE: Min J (economics); MV = y1s • MPC (given cs = y1s): Follow path (+ look after other variables); CV = y1 (+ u); MV = y2s • PID (given y2s): Stabilize + avoid drift; CV = y2; MV = u (valves)
Hierarchical decomposition Example: Bicycle riding Note: design starts from the bottom • Regulatory control: • First need to learn to stabilize the bicycle • CV = y2 = tilt of bike • MV = body position • Supervisory control: • Then need to follow the road. • CV = y1 = distance from right hand side • MV=y2s • Usually a constant setpoint policy is OK, e.g. y1s=0.5 m • Optimization: • Which road should you follow? • Temporary (discrete) changes in y1s
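A toy simulation of this two-layer idea (all dynamics and gains are invented for illustration, not a real bicycle model): an inner P-loop stabilizes the unstable tilt y2, and an outer loop moves the tilt setpoint y2s to keep the road position y1 at y1s = 0.5 m.

```python
# Hypothetical cascade sketch of the bicycle example. The inner
# (regulatory) loop stabilizes the tilt; the outer (supervisory)
# loop adjusts the tilt setpoint to hold the road position.
dt, steps = 0.01, 2000
a, b = 2.0, 1.0      # unstable tilt dynamics: y2' = a*y2 + b*u
k2 = 10.0            # inner P gain (stabilizing since b*k2 > a)
k1 = 0.5             # outer P gain
y1s = 0.5            # constant setpoint: 0.5 m from road edge

y1, y2 = 0.0, 0.1    # initial position and tilt
for _ in range(steps):
    y2s = -k1 * (y1 - y1s)       # supervisory layer: MV = y2s
    u = -k2 * (y2 - y2s)         # regulatory layer:  MV = u
    y2 += dt * (a * y2 + b * u)  # tilt dynamics (made up)
    y1 += dt * y2                # position drifts with tilt
print(f"y1 = {y1:.3f} (target {y1s}), tilt y2 = {y2:.5f}")
```

Note the design order the slide emphasizes: without the inner loop, the tilt dynamics above are unstable and the outer loop alone could not hold y1.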
Summary: The three layers • Optimization layer (RTO; steady-state nonlinear model): • Identifies active constraints and computes optimal setpoints for primary controlled variables (y1). • Supervisory control (MPC; linear model with constraints): • Follow setpoints for y1 (usually constant) by adjusting setpoints for secondary variables (MV = y2s) • Look after other variables (e.g., avoid saturation for MVs used in the regulatory layer) • Regulatory control (PID): • Stabilizes the plant and avoids drift, in addition to following setpoints for y2. MV = valves (u). Problem definition and overall control objectives (y1, y2) start from the top; design starts from the bottom. A good example is bicycle riding: • Regulatory control: • First you need to learn how to stabilize the bicycle (y2) • Supervisory control: • Then you need to follow the road. Usually a constant setpoint policy is OK, for example, stay y1s = 0.5 m from the right hand side of the road (in this case the "magic" self-optimizing variable is y1 = distance to right hand side of road) • Optimization: • Which road (route) should you follow?
Control structure design procedure. I Top Down • Step 1: Define operational objectives (optimal operation) • Cost function J (to be minimized) • Operational constraints • Step 2: Identify degrees of freedom (MVs) and optimize for expected disturbances • Identify regions of active constraints • Step 3: Select primary controlled variables c = y1 (CVs) • Step 4: Where to set the production rate? (Inventory control). II Bottom Up • Step 5: Regulatory / stabilizing control (PID layer) • What more to control (y2; local CVs)? • Pairing of inputs and outputs • Step 6: Supervisory control (MPC layer) • Step 7: Real-time optimization (Do we need it?). Understanding and using this procedure is the most important part of this course!
Step 3. What should we control (c = Hy)? (primary controlled variables y1 = c) • CONTROL ACTIVE CONSTRAINTS, c = cconstraint • REMAINING UNCONSTRAINED, c = ?
Step 1. Define optimal operation (economics) • What are we going to use our degrees of freedom u (MVs) for? • Define scalar cost function J(u, x, d) • u: degrees of freedom (usually steady-state) • d: disturbances • x: states (internal variables) • Typical cost function: J = cost feed + cost energy − value products • Optimize operation with respect to u for given d (usually steady-state): min_u J(u, x, d) subject to: Model equations: f(u, x, d) = 0; Operational constraints: g(u, x, d) ≤ 0
Optimal operation of a distillation column • Distillation at steady state with given p and F: N = 2 DOFs, e.g. L and V • Cost to be minimized (economics): J = −P, where P = pD·D + pB·B − pF·F − pV·V (value of products − cost of feed − cost of energy for heating and cooling) • Constraints: Purity D: for example xD,impurity ≤ max; Purity B: for example xB,impurity ≤ max; Flow constraints: min ≤ D, B, L etc. ≤ max; Column capacity (flooding): V ≤ Vmax, etc.; Pressure: 1) p given, 2) p free: pmin ≤ p ≤ pmax; Feed: 1) F given, 2) F free: F ≤ Fmax • Optimal operation: Minimize J with respect to the steady-state DOFs
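To illustrate how active purity constraints pin down the solution (hypothetical numbers for a binary mixture, not from the talk): when both product purities sit at their specs, the product split follows from the steady-state mass balances alone.

```python
# Sketch: with both purity constraints active, D and B follow from
# the overall and light-component balances. Numbers are made up.
F, zF = 1.0, 0.5      # feed rate and light-component feed fraction
xD, xB = 0.99, 0.01   # light fractions at the active purity specs

# D + B = F  and  xD*D + xB*B = zF*F  =>  solve for D, then B
D = F * (zF - xB) / (xD - xB)
B = F - D
print(f"D = {D:.3f}, B = {B:.3f}")  # -> D = 0.500, B = 0.500
```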
Optimal operation: minimize J = cost feed + cost energy − value products. Two main cases (modes) depending on market conditions: • Given feed: The amount of products is then usually indirectly given and J = cost energy. Optimal operation is then usually unconstrained: "maximize efficiency (energy)". Control: Operate at the optimal trade-off (not obvious what to control to achieve this) • Feed free: Products are usually much more valuable than the feed, and energy costs are small. Optimal operation is then usually constrained: "maximize production". Control: Operate at the bottleneck ("obvious what to control")
Solution I (“obvious”): Optimal feedforward • Problem: UNREALISTIC! • Lack of measurements of d • Sensitive to model error
Solution II ("obvious"): Optimizing control. Estimate d from measurements y and recompute uopt(d). Problem: COMPLICATED! Requires a detailed model and a description of the uncertainty
Solution III (in practice): FEEDBACK with hierarchical decomposition. CVs link the optimization and control layers. When a disturbance d occurs, the degrees of freedom (u) are updated indirectly to keep the CVs at their setpoints
How does self-optimizing control (solution III) work? • When disturbances d occur, the controlled variable c deviates from its setpoint cs • The feedback controller changes the degree of freedom u to uFB(d) to keep c at cs • Near-optimal operation / acceptable loss (self-optimizing control) is achieved if uFB(d) ≈ uopt(d), or more generally, J(uFB(d)) ≈ J(uopt(d)) • Of course, the variation of uFB(d) is different for different CVs c. • We need to look for variables for which the loss L = J(uFB(d)) − J(uopt(d)) is small
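A toy illustration (my own, not from the talk): J(u, d) = (u − d)² + 0.1u, so uopt(d) = d − 0.05, and the CV c = u − d has copt = −0.05 independent of d. Simple integral feedback on c then recovers uopt(d) for each disturbance without any re-optimization.

```python
# Feedback keeps c = u - d at cs = -0.05; compare u_FB with u_opt.
# The cost, CV and controller gain are hypothetical.
def J(u, d):
    return (u - d) ** 2 + 0.1 * u

def u_opt(d):
    return d - 0.05           # from dJ/du = 2*(u - d) + 0.1 = 0

cs = -0.05                    # c_opt = u_opt(d) - d, same for all d
for d in [0.5, 1.0, 2.0]:
    u = 0.0
    for _ in range(200):      # simple integral action on c - cs
        u -= 0.1 * ((u - d) - cs)
    loss = J(u, d) - J(u_opt(d), d)
    print(f"d={d}: u_FB={u:.4f}, u_opt={u_opt(d):.4f}, loss={loss:.1e}")
# The loss is negligible for every d: this c is self-optimizing.
```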
Remarks on "self-optimizing control" 1. Old idea (Morari et al., 1980): "We want to find a function c of the process variables which when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions." 2. "Self-optimizing control" = acceptable steady-state behavior (loss) with constant CVs. This is similar to "self-regulation" = acceptable dynamic behavior with constant MVs. 3. The ideal self-optimizing variable c is the gradient (c = ∂J/∂u = Ju) • Keep the gradient at zero for all disturbances (c = Ju = 0) • Problem: no measurement of the gradient
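Continuing the same hypothetical toy cost, remark 3 can be checked directly: controlling the gradient at zero is exactly optimal for every d; the catch noted above is that Ju is not measured.

```python
# Remark 3 sketch: the ideal CV is c = Ju = dJ/du with setpoint 0.
# Toy cost (hypothetical): J(u, d) = (u - d)^2 + 0.1*u.
def Ju(u, d):
    return 2.0 * (u - d) + 0.1

for d in [0.5, 1.0, 2.0]:
    u = d - 0.05                     # unique root of Ju(u, d) = 0
    assert abs(Ju(u, d)) < 1e-12
    print(f"d={d}: Ju=0 at u={u}, which equals u_opt(d)")
# In practice the gradient is not measured, which is why one looks
# for measurable variables c (or combinations c = H y) instead.
```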
Step 3. What should we control (c)? Simple examples
Optimal operation - Runner Optimal operation of runner • Cost to be minimized, J=T • One degree of freedom (u=power) • What should we control?
Optimal operation - Runner Self-optimizing control: Sprinter (100m) • 1. Optimal operation of Sprinter, J=T • Active constraint control: • Maximum speed (”no thinking required”)
Optimal operation - Runner Self-optimizing control: Marathon (40 km) • 2. Optimal operation of Marathon runner, J=T
Optimal operation - Runner Solution 1 Marathon: Optimizing control • Even getting a reasonable model requires > 10 PhD’s … and the model has to be fitted to each individual…. • Clearly impractical!
Optimal operation - Runner Solution 2 Marathon – Feedback(Self-optimizing control) • What should we control?
Optimal operation - Runner Self-optimizing control: Marathon (40 km) • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles