Feedback: The simple and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. December 2004
Outline • About myself • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion
Abstract • Feedback: The simple and best solution. Most chemical engineers are (indirectly) trained to be "feedforward thinkers" and immediately think of "model inversion" when it comes to doing control. Thus, they prefer to rely on models instead of data, although simple feedback solutions are in many cases much simpler and certainly more robust. In the seminar, two nice applications of feedback are considered:
- Implementation of optimal operation by "self-optimizing control". The idea is to turn optimization into a setpoint control problem, and the trick is to find the right variable to control. Applications include process control, pizza baking, marathon running, biology and the central bank of a country.
- Stabilization of desired operating regimes. Here feedback control can lead to completely new and simple solutions. One example would be stabilization of laminar flow at conditions where we normally have turbulent flow. In the seminar, a nice application to anti-slug control in multiphase pipeline flow is discussed.
(Map: Trondheim, Norway, shown relative to Oslo, the North Sea and the Arctic circle, with Sweden, Denmark, Germany and the UK as neighbouring countries)
NTNU, Trondheim
Sigurd Skogestad • Born in 1955 • 1978: Siv.ing. Degree (MS) in Chemical Engineering from NTNU (NTH) • 1980-83: Process modeling group at the Norsk Hydro Research Center in Porsgrunn • 1983-87: Ph.D. student in Chemical Engineering at Caltech, Pasadena, USA. Thesis on “Robust distillation control”. Supervisor: Manfred Morari • 1987 - : Professor in Chemical Engineering at NTNU • Since 1994: Head of process systems engineering center in Trondheim (PROST) • Since 1999: Head of Department of Chemical Engineering • 1996: Book “Multivariable feedback control” (Wiley) • 2000,2003: Book “Prosessteknikk” (Norwegian) • Group of about 10 Ph.D. students in the process control area
Research: Develop simple yet rigorous methods to solve problems of engineering significance. • Use of feedback as a tool to: reduce uncertainty (including robust control), change the system dynamics (including stabilization; anti-slug control), and generally make the system more well-behaved (including self-optimizing control) • Limitations on performance in linear systems ("controllability") • Control structure design and plantwide control • Interactions between process design and control • Distillation column design, control and dynamics • Natural gas processes
Outline • About myself • I. Why feedback (and not feedforward) ? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion
Model-based control = Feedforward (FF) control (block diagram: disturbance d acts through Gd, input u acts through G, on output y). "Perfect" feedforward control: u = -G⁻¹Gd d. Our case: G = k (= 10 nominal), Gd = 10 → use u = -d
Feedforward with perfect model (k = 10), applied to a plant with smaller process gain: feedforward is sensitive to gain error
Measurement-based correction = Feedback (FB) control (block diagram: controller C acts on the error e = ys - y and adjusts u; the disturbance d enters through Gd)
Feedback PI control: nominal case (plots of output y, input u and resulting output). Feedback generates the inverse!
Feedback PI control: insensitive to gain error
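The effect shown on the last few slides can be reproduced with a short simulation. The sketch below is a minimal illustration, assuming first-order dynamics dy/dt = (-y + k u + Gd d)/tau with tau = 1 and an arbitrary PI tuning; these choices are assumptions for illustration only and are not taken from the slides.

```python
# Minimal sketch (assumed first-order dynamics and PI tuning): compare
# feedforward u = -d with PI feedback when the process gain k differs from
# its nominal value of 10 (disturbance gain Gd = 10 throughout).

def simulate(k, mode, T=20.0, dt=0.01):
    """Integrate dy/dt = (-y + k*u + 10*d)/tau for a unit step disturbance d."""
    tau = 1.0                  # assumed process time constant
    Kc, tauI = 0.5, 1.0        # assumed PI tuning (feedback case)
    y, integ, d = 0.0, 0.0, 1.0
    for _ in range(int(T / dt)):
        if mode == "feedforward":
            u = -d                         # perfect inverse for the nominal k = 10
        else:                              # PI feedback on e = ys - y with ys = 0
            e = -y
            integ += e * dt
            u = Kc * (e + integ / tauI)
        y += dt * (-y + k * u + 10.0 * d) / tau
    return y

for k in (10.0, 5.0):          # nominal gain, then the "smaller process gain"
    print(f"k={k:4.1f}: feedforward y(end)={simulate(k, 'feedforward'):6.2f}, "
          f"feedback y(end)={simulate(k, 'feedback'):6.2f}")
```

With the nominal gain both schemes reject the disturbance, but with k = 5 the feedforward output settles at y = 5, while the integral action in the feedback loop still drives y back to zero.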
Conclusion: Why feedback? (and not feedforward control) • Simple: high-gain feedback! • Counteract unmeasured disturbances • Reduce effect of changes / uncertainty (robustness) • Change system dynamics (including stabilization) • Linearize the behavior • No explicit model required • MAIN PROBLEM: potential instability (may occur suddenly) with time delay / RHP-zero
Outline • About myself • Why feedback (and not feedforward) ? • Distillation control • II. Self-optimizing feedback control: What should we control? • Stabilizing feedback control: Anti-slug control • Conclusion
Optimal operation (economics) • Define scalar cost function J(u0, d) • u0: degrees of freedom • d: disturbances • Optimal operation for given d: minimize J(u0, x, d) over u0, subject to f(u0, x, d) = 0 (model equations) and g(u0, x, d) ≤ 0 (operating constraints)
"Obvious" solution: Optimizing control = "Feedforward". Estimate d and compute the new uopt(d). Problem: complicated and sensitive to uncertainty
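As a toy illustration of this "estimate d, then re-optimize" approach, the hedged sketch below solves a made-up instance of the problem on the previous slide; the cost J, the constraint and all numbers are assumptions, and the model equations f = 0 are taken as already eliminated.

```python
# Toy sketch of "optimizing control": re-solve min_u0 J(u0, d) s.t. g(u0, d) <= 0
# whenever the disturbance estimate d changes. Cost and constraint are made up.
from scipy.optimize import minimize

def J(u0, d):
    return (u0[0] - d) ** 2 + 0.1 * u0[1] ** 2     # hypothetical scalar cost

def uopt(d):
    # g(u0, d) = u0[0] - 5 <= 0, written in scipy's "fun >= 0" convention
    cons = [{"type": "ineq", "fun": lambda u0: 5.0 - u0[0]}]
    res = minimize(J, x0=[0.0, 0.0], args=(d,), constraints=cons)
    return res.x, res.fun

for d in (2.0, 7.0):                               # second case hits the constraint
    u_star, J_star = uopt(d)
    print(f"d={d}: uopt={u_star.round(3)}, Jopt={J_star:.3f}")
```

The point of the slide stands: this requires a model and a good estimate of d, which is exactly what the feedback implementations discussed next try to avoid.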
Engineering systems • Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers • Examples: commercial aircraft; large-scale chemical plant (refinery) with 1000's of loops • Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward • The same is true in biological systems
Process control hierarchy (diagram): RTO (economics) → MPC → PID. y1 = c ?
Hierarchical decomposition with separate layers What should we control?
Implementation of optimal operation • Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u0,d) = 0 • CONTROL ACTIVE CONSTRAINTS! • Implementation of active constraints is usually simple.
Self-optimizing Control – Sprinter • Optimal operation of Sprinter (100 m), J=T • Active constraint control: • Maximum speed (”no thinking required”)
Implementation of optimal operation • Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u0,d) = 0 • CONTROL ACTIVE CONSTRAINTS! • Implementation of active constraints is usually simple. • WHAT MORE SHOULD WE CONTROL? • We here concentrate on the remaining unconstrained degrees of freedom u.
Feedback implementation (block diagram): an optimizer computes the setpoint cs = copt from J; a controller adjusts u to keep the measurement cm = c + n (n: measurement noise) at cs; the plant maps u and d to c. Issue: What should we control?
Self-optimizing Control • Define the loss: L = J(u, d) - Jopt(d), where u is the input that results from holding the controlled variables at constant setpoints • Self-optimizing control is when an acceptable loss can be achieved using constant setpoints (cs) for the controlled variables c (without re-optimizing when disturbances occur).
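A hedged numeric sketch of this loss definition, using a made-up cost J(u, d) = (u - d)² (so uopt(d) = d and Jopt(d) = 0), compares two hypothetical candidate controlled variables.

```python
# Toy sketch (hypothetical cost, not from the talk) of the loss
# L = J(u, d) - Jopt(d) incurred by a constant-setpoint policy.

def J(u, d):
    return (u - d) ** 2                 # toy cost: uopt(d) = d, Jopt(d) = 0

def loss(u_from_policy, d):
    return J(u_from_policy, d) - J(d, d)

for d in (0.5, 1.0, 2.0):
    # candidate c1 = u: hold the input at its nominal optimum u = uopt(d=0) = 0
    loss_c1 = loss(0.0, d)
    # candidate c2 = u - d: holding c2 at its nominal optimum 0 implies u = d
    loss_c2 = loss(d, d)
    print(f"d={d}: loss with c1=u constant: {loss_c1:.2f}, "
          f"loss with c2=u-d constant: {loss_c2:.2f}")
# c1 gives loss d**2, c2 gives zero loss: the choice of controlled variable matters
```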
Effect of implementation error (sketches of J as a function of the controlled variable c): a flat optimum gives small loss and is good; a steep optimum gives large loss and is BAD
Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)?
Self-optimizing Control – Marathon • Optimal operation of Marathon runner, J=T • Any self-optimizing variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles
Further examples • Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%) • Cake baking. J = nice taste, u = heat input. c = temperature (200 °C) • Business. J = profit. c = "key performance indicator" (KPI), e.g. response time to order, energy consumption per kg or unit, number of employees, research spending. Optimal values obtained by "benchmarking" • Investment (portfolio management). J = profit. c = fraction of investment in shares (50%) • Biological systems: "self-optimizing" controlled variables c have been found by natural selection. Need to do "reverse engineering": find the controlled variables used in nature, and from this possibly identify what overall objective J the biological system has been attempting to optimize
Good candidate controlled variables c (for self-optimizing control). Requirements: • The optimal value of c should be insensitive to disturbances • c should be easy to measure and control • The value of c should be sensitive to changes in the degrees of freedom (equivalently, J as a function of c should be flat) • For cases with more than one unconstrained degree of freedom, the selected controlled variables should be independent. Singular value rule (Skogestad and Postlethwaite, 1996): look for variables that maximize the minimum singular value of the appropriately scaled steady-state gain matrix G from u to c
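A minimal numeric sketch of the singular value rule follows; the candidate gain matrices and output scalings are invented for illustration, and only output scaling by the allowed span is shown (the full rule also scales the inputs).

```python
# Hedged sketch of the singular value rule: rank candidate sets of controlled
# variables c by the minimum singular value of the scaled steady-state gain
# matrix from the unconstrained inputs u to c. All numbers are made up.
import numpy as np

def min_singular_value(G, span_c):
    """Scale each output by its allowed variation (span) and return sigma_min."""
    G_scaled = np.diag(1.0 / np.asarray(span_c, dtype=float)) @ np.asarray(G, dtype=float)
    return np.linalg.svd(G_scaled, compute_uv=False).min()

candidates = {
    "candidate set A": (np.array([[10.0, 0.1], [0.2, 5.0]]), [1.0, 2.0]),
    "candidate set B": (np.array([[10.0, 9.9], [10.0, 10.1]]), [1.0, 1.0]),  # nearly dependent outputs
}
for name, (G, span) in candidates.items():
    print(f"{name}: sigma_min = {min_singular_value(G, span):.3f}")
# a larger sigma_min indicates a better candidate set (sensitive and independent)
```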
Outline • About myself • Why feedback (and not feedforward) ? • Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion
Application of stabilizing feedback control: Anti-slug control. Two-phase pipe flow (liquid and vapor): slug (liquid) buildup
Slug cycle (stable limit cycle) Experiments performed by the Multiphase Laboratory, NTNU
Experimental mini-loop (pressures p1, p2; valve opening z): valve opening z = 100%
Experimental mini-loop: valve opening z = 25%
Experimental mini-loop: valve opening z = 15%
Experimental mini-loop: bifurcation diagram (pressure vs. valve opening z [%]): no slug at small valve openings, slugging at larger openings
Avoid slugging? • Design changes • Feedforward control? • Feedback control?
Design change to avoid slugging: 1. Close the valve (but this increases the pressure). No slugging when the valve is closed
Design change to avoid slugging: 2. Other design changes
Design change to minimize the effect of slugging: 3. Build a large slug-catcher • Most common strategy in practice
Avoid slugging: 4. Feedback control? Comparison with a simplified 3-state model (Storkaas, 2003): the predicted smooth flow is desirable but open-loop unstable
Avoid slugging: 4. "Active" feedback control (diagram: pressure transmitter PT measures p1; pressure controller PC with setpoint "ref" adjusts valve opening z). Simple PI-controller
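A sketch of the kind of simple PI controller referred to above is given below: the measured inlet pressure p1 is controlled to a setpoint by adjusting the valve opening z (0 to 100 percent). The tuning values, controller sign, bias and anti-windup scheme are assumptions for illustration and are not taken from the experiments or from the Storkaas model.

```python
# Hedged sketch of a discrete PI pressure controller for anti-slug control:
# adjust valve opening z [%] to hold the measured inlet pressure p1 at its
# setpoint. Tuning, sign and bias are illustrative assumptions only.

def make_pi_controller(Kc, tauI, dt, z_min=0.0, z_max=100.0, z_bias=20.0):
    integ = 0.0
    def step(p1_setpoint, p1_measured):
        nonlocal integ
        e = p1_setpoint - p1_measured
        z_unsat = z_bias + Kc * (e + integ / tauI)
        z = min(max(z_unsat, z_min), z_max)       # valve saturation
        if z == z_unsat:                          # simple anti-windup:
            integ += e * dt                       # integrate only when not saturated
        return z
    return step

# usage (hypothetical numbers): reverse-acting controller (Kc < 0), so the valve
# opens further when p1 rises above its setpoint
pc = make_pi_controller(Kc=-30.0, tauI=60.0, dt=1.0)
z = pc(p1_setpoint=2.0, p1_measured=2.1)
print(f"valve opening z = {z:.1f} %")
```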
Anti-slug control: mini-loop experiments (time traces of p1 [bar] and valve opening z [%], controller ON vs. controller OFF)
Anti-slug control: full-scale offshore experiments at the Hod-Vallhall field (Havre, 1999)