
Feedback: Still the simplest and best solution

Explores feedback applications in robustness, economics (self-optimizing control), and stabilization of new operating regimes. Covers the advantages of feedback over feedforward control, the principles of self-optimizing feedback control, and the role of active constraints and self-optimizing controlled variables in optimal operation.


Presentation Transcript


  1. Feedback: Still the simplest and best solution. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. Applications to 1) robustness, 2) economics (self-optimizing control), and 3) stabilization of new operating regimes.

  2. NTNU, Trondheim

  3. Outline • 1. Why feedback (and not feedforward)? • The feedback amplifier • 2. Self-optimizing control: • How do we link optimization and feedback? • What should we control? • 3. Stabilizing feedback control: • Anti-slug control • Conclusion

  4. Example: AMPLIFIER. [Block diagram: input r, amplifier G, output y.] • Want: y(t) = α r(t) • Solution 1 (feedforward): • G = α (adjust amplifier gain) • Very difficult in practice: • Cannot get exact value of α • Cannot easily adjust α online • Do not get the same amplification at all frequencies • Problems with distortion and nonlinearity

  5. Black’s feedback amplifier (1927). [Block diagram: input r, amplifier G, output y; measured y fed back through gain K.] Want: y(t) = α r(t). Solution 2 (feedback): K = 1/α (adjustable). Closed-loop response: y = G/(1 + GK) r ≈ (1/K) r = α r. MAGIC! Independent of G, provided GK >> 1.
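
The closed-loop arithmetic is easy to check numerically. Below is a minimal sketch (not from the talk; the numbers are made up) showing that the closed-loop gain is pinned near α = 2 by the feedback gain K = 1/α, even when the forward gain G varies over two orders of magnitude:

```python
# Minimal sketch of Black's feedback amplifier idea (illustrative numbers).
# Closed-loop gain: y/r = G/(1 + G*K). With K = 1/alpha and G*K >> 1,
# this approaches alpha almost independently of G.
alpha = 2.0            # desired amplification
K = 1.0 / alpha        # feedback gain

for G in [1e3, 1e4, 1e5]:             # uncertain, drifting forward gain
    gain = G / (1.0 + G * K)          # closed-loop gain y/r
    print(f"G = {G:8.0f} -> closed-loop gain = {gain:.5f} (target {alpha})")
```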

  6. Example: disturbance rejection. [Block diagram: disturbance d enters through Gd, input u through G; both sum to output y. Plot: open-loop response y to a disturbance, k = 10, time axis to 25.] Plant (uncontrolled system).

  7. 1. Feedforward control (measure d). [Block diagram: measured d is fed through the feedforward controller to u.] “Perfect” feedforward control: u = -G⁻¹ Gd d. Our case: G = Gd → use u = -d.

  8. 1. Feedforward control: Nominal case (perfect model). [Block diagram as above; plot: output y stays at zero.]

  9. 2. Feedback control. [Block diagram: setpoint ys and measured y give error e; controller C computes u; plant G, with disturbance entering through Gd, gives output y.]

  10. 2. Feedback PI-control: Nominal case. [Plots: output y, input u, and resulting output.] Feedback generates the inverse!

  11. Robustness comparison • Gain error, k = 5, 10 (nominal), 20 • Time constant error, τ = 5, 10 (nominal), 20 • Time delay error, θ = 0 (nominal), 1, 2, 3

  12. Robustness: Gain error, k = 5, 10 (nominal), 20. [Response plots: 1. Feedforward; 2. Feedback.]

  13. Robustness: Time constant error, τ = 5, 10 (nominal), 20. [Response plots: 1. Feedforward; 2. Feedback.]

  14. Robustness: Time delay error, θ = 0 (nominal), 1, 2, 3. [Response plots: 1. Feedforward; 2. Feedback.]
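
The gain-error comparison can be reproduced with a small simulation. The sketch below assumes a first-order plant G(s) = Gd(s) = k/(τs + 1) with the nominal values k = 10, τ = 10 listed on slide 11, applies the gain error to the manipulated input only (so the nominal feedforward u = -d is no longer exact), and uses an ad hoc SIMC-style PI tuning; none of this is the talk's exact setup.

```python
# Sketch: feedforward vs feedback under plant-gain error (illustrative).
# Assumed model: dy/dt = (-y + k*u + k_nom*d)/tau, i.e. the gain error
# hits the manipulated input only, so the nominal feedforward u = -d
# no longer cancels the disturbance exactly.
k_nom, tau = 10.0, 10.0       # nominal gain and time constant (slide 11)
Kc, tauI = 1.0, 4.0           # assumed PI tuning (SIMC-style guess)
dt, T, d = 0.01, 60.0, 1.0    # Euler step, horizon, unit step disturbance

for k in [5.0, 10.0, 20.0]:   # gain error: k = 5, 10 (nominal), 20
    y_ff = y_fb = ie = 0.0
    for _ in range(int(T / dt)):
        # 1. Feedforward: u = -d, designed for the nominal model
        y_ff += dt / tau * (-y_ff + k * (-d) + k_nom * d)
        # 2. Feedback: PI controller on e = ys - y, with ys = 0
        e = -y_fb
        ie += e * dt
        u_fb = Kc * (e + ie / tauI)
        y_fb += dt / tau * (-y_fb + k * u_fb + k_nom * d)
    print(f"k = {k:4.1f}: y_ff -> {y_ff:+.2f}, y_fb -> {y_fb:+.2f}, "
          f"u_fb -> {u_fb:+.2f}")
```

Feedforward leaves a steady-state error of k_nom - k, while the integral action drives y to zero and settles at u = -(k_nom/k) d, which is exactly the inverse-based input: feedback generates the inverse, as slide 10 says.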

  15. Conclusion: Why feedback? (and not feedforward control) • Simple: High gain feedback! • Counteract unmeasured disturbances • Reduce effect of changes / uncertainty (robustness) • Change system dynamics (including stabilization) • Linearize the behavior • No explicit model required • MAIN PROBLEM: Potential instability (may occur “suddenly”) with time delay / unstable (RHP) zero. An unstable (RHP) zero is a fundamental problem with feedback! Even a detailed model + state estimator (Kalman filter) does not help…

  16. Outline • I. Why feedback (and not feedforward)? • II. Self-optimizing feedback control: • How do we link optimization and feedback? • What should we control? • III. Stabilizing feedback control: Anti-slug control • Conclusion

  17. Optimal operation (economics) • Define scalar cost function J(u0, x, d) • u0: degrees of freedom • d: disturbances • x: states (internal variables) • Optimal operation for given d. Dynamic optimization problem: min_u0 J(u0, x, d), subject to: Model: f(u0, x, d) = 0; Constraints: g(u0, x, d) ≤ 0. Here: How do we implement optimal operation?
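
As a concrete, made-up instance of this problem, the sketch below solves the steady-state version min_u0 J(u0, d) with scipy for a toy quadratic cost whose optimum moves with the disturbance; the model equality f = 0 and constraints g ≤ 0 of a real dynamic problem are omitted.

```python
from scipy.optimize import minimize

# Toy steady-state instance of min_{u0} J(u0, d) (illustrative cost only).
def J(u, d):
    # made-up quadratic cost whose optimum moves with d: u_opt(d) = 1 + 0.5*d
    return (u[0] - (1.0 + 0.5 * d)) ** 2 + 2.0

for d in [0.0, 1.0, 2.0]:
    res = minimize(J, x0=[0.0], args=(d,))
    print(f"d = {d}: u_opt = {res.x[0]:.3f}, J_opt = {res.fun:.3f}")
```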

  18. 1. ”Obvious” solution: Optimizing control = ”feedforward”. Estimate d and compute new uopt(d). Problem: Complicated and sensitive to uncertainty.

  19. 2. In Practice: Feedback implementation Issue: What should we control?

  20. Process control hierarchy. [Diagram: RTO (economics) at the top computes setpoints y1 = c? for MPC, which sends setpoints to the PID layer controlling the process.]

  21. What should we control? • CONTROL ACTIVE CONSTRAINTS! • Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u0,d) = 0 • Implementation of active constraints is usually simple. • WHAT MORE SHOULD WE CONTROL? • But what about the remaining unconstrained degrees of freedom? • Look for “self-optimizing” controlled variables!

  22. Self-optimizing Control • Definition: Self-optimizing control is when acceptable operation (= acceptable loss) can be achieved using constant setpoints (cs) for the controlled variables c (without the need for re-optimizing when disturbances occur): c = cs.
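
Reusing the toy quadratic cost from the sketch above, the loss L(d) = J(cs, d) - J(copt(d), d) of a constant-setpoint policy can be checked directly; if the worst-case loss is acceptable, c qualifies as self-optimizing (illustrative numbers again):

```python
# Sketch: loss of a constant-setpoint policy (same made-up cost as above).
def J(c, d):
    return (c - (1.0 + 0.5 * d)) ** 2 + 2.0   # optimum at c_opt(d) = 1 + 0.5*d

cs = 1.5                                       # constant setpoint, tuned for mid-range d
for d in [0.0, 1.0, 2.0]:
    c_opt = 1.0 + 0.5 * d
    loss = J(cs, d) - J(c_opt, d)
    print(f"d = {d}: loss with constant c = cs is {loss:.3f}")
```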

  23. Optimal operation – Runner • Cost: J=T • One degree of freedom (u=power) • Optimal operation?

  24. Optimal operation - Runner. Solution 1: Optimizing control • Even getting a reasonable model requires > 10 PhDs … and the model has to be fitted to each individual… • Clearly impractical!

  25. Optimal operation - Runner. Solution 2: Feedback (Self-optimizing control) • What should we control?

  26. Optimal operation - Runner. Self-optimizing control: Sprinter (100 m) • 1. Optimal operation of Sprinter, J = T • Active constraint control: • Maximum speed (“no thinking required”)

  27. Optimal operation - Runner. Self-optimizing control: Marathon (42 km) • Optimal operation of Marathon runner, J = T • Any self-optimizing variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles

  28. Optimal operation - Runner. Conclusion: The marathon runner selects one measurement, c = heart rate • Simple and robust implementation • Disturbances are indirectly handled by keeping a constant heart rate • May have infrequent adjustment of setpoint (heart rate)

  29. Optimal operation: Unconstrained optimum. [Plot: cost J versus controlled variable c, with minimum Jopt at c = copt.]

  30. Optimal operation: Unconstrained optimum. [Plot: cost J versus controlled variable c, showing the shift of the optimum with disturbance d and the implementation error n.] Two problems: • 1. The optimum moves because of disturbances d: copt(d) • 2. Implementation error, c = copt + n

  31. Candidate controlled variables c for self-optimizing control (unconstrained optimum). [Plots: flat optima marked “Good”, a sharp optimum marked “BAD”.] Intuitive: • 1. The optimal value of c should be insensitive to disturbances (avoid problem 1). The ideal self-optimizing variable is the gradient, c = Ju. • 2. The optimum should be flat (avoid problem 2, implementation error). Equivalently: want a large gain |G| from u to c.

  32. Quantitative steady-state analysis (unconstrained optimum): Maximum gain rule (Skogestad and Postlethwaite, 1996): Look for variables that maximize the scaled gain Gs, i.e. the minimum singular value of the appropriately scaled steady-state gain matrix Gs from u to c.
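
A numerical sketch of the rule, with invented gain matrices and spans: scale each candidate's steady-state gain matrix by the allowed variation (span) of its controlled variables and compare minimum singular values, preferring the largest. (The full rule also involves scaling the inputs, e.g. by the Hessian Juu; this sketch keeps only the output-span scaling.)

```python
import numpy as np

# Sketch: maximum gain rule with made-up numbers. Gs = S1 @ G, where
# S1 = diag(1/span(c_i)) scales each controlled variable by its allowed
# variation; prefer the candidate c with the largest sigma_min(Gs).
candidates = {
    "c = (y1, y2)": (np.array([[10.0, 0.1], [5.0, 2.0]]), np.array([1.0, 1.0])),
    "c = (y3, y4)": (np.array([[0.5, 0.4], [0.4, 0.5]]), np.array([2.0, 2.0])),
}
for name, (G, span) in candidates.items():
    Gs = np.diag(1.0 / span) @ G                          # scaled gain matrix
    sigma_min = np.linalg.svd(Gs, compute_uv=False)[-1]   # smallest singular value
    print(f"{name}: sigma_min(Gs) = {sigma_min:.3f}")
```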

  33. Optimal measurement combinations (unconstrained optimum). Exact solutions for quadratic optimization problems: • 1. Nullspace method: No loss for disturbances (d) • 2. Generalized (with noise n): Exact local method • c = Hy can be considered as linear invariants for the quadratic optimization problem, which can be used for feedback implementation of the optimal solution! • Example: Explicit MPC. * V. Alstad, S. Skogestad and E.S. Hori, “Optimal measurement combinations as controlled variables”, Journal of Process Control, 19, 138-148 (2009)
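
A minimal sketch of the nullspace method with invented sensitivities: given the optimal sensitivity matrix F = dyopt/dd and ny ≥ nu + nd measurements, any H with HF = 0 gives a combination c = Hy that is invariant to the disturbances (measurement noise ignored).

```python
import numpy as np
from scipy.linalg import null_space

# Sketch: nullspace method with made-up numbers. F = dy_opt/dd is the
# optimal sensitivity of ny = 3 measurements to nd = 2 disturbances;
# with nu = 1 input, ny >= nu + nd holds and H can be chosen with H F = 0.
F = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.2, 0.3]])
H = null_space(F.T).T        # rows of H span the left nullspace of F
print("H =", H)
print("H @ F =", H @ F)      # ~ 0: c = H y is insensitive to d at the optimum
```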

  34. Example: CO2 refrigeration cycle. [Cycle diagram; high-side pressure ph.] • J = Ws (work supplied) • DOF = u (valve opening, z) • Main disturbances: • d1 = TH • d2 = TCs (setpoint) • d3 = UAloss • What should we control?

  35. CO2 refrigeration cycle Step 1. One (remaining) degree of freedom (u=z) Step 2. Objective function. J = Ws (compressor work) Step 3. Optimize operation for disturbances (d1=TC, d2=TH, d3=UA) • Optimum always unconstrained Step 4. Implementation of optimal operation • No good single measurements (all give large losses): • ph, Th, z, … • Nullspace method: Need to combine nu+nd=1+3=4 measurements to have zero disturbance loss • Simpler: Try combining two measurements. Exact local method: • c = h1 ph + h2 Th = ph + k Th; k = -8.53 bar/K • Nonlinear evaluation of loss: OK!

  36. Conclusion CO2 refrigeration cycle Self-optimizing c= “temperature-corrected high pressure”

  37. Outline • I. Why feedback (and not feedforward)? • II. Self-optimizing feedback control: What should we control? • III. Stabilizing feedback control: Anti-slug control • IV. Conclusion

  38. Application of stabilizing feedback control: Anti-slug control. Two-phase pipe flow (liquid and vapor): slug (liquid) buildup.

  39. Slug cycle (stable limit cycle) Experiments performed by the Multiphase Laboratory, NTNU

  40. Experimental mini-loop

  41. Experimental mini-loop: Valve opening (z) = 100%. [Loop diagram with valve z and pressure measurements p1, p2; recorded pressure traces.]

  42. Experimental mini-loop: Valve opening (z) = 25%. [Same loop; pressure traces.]

  43. Experimental mini-loop: Valve opening (z) = 15%. [Same loop; pressure traces.]

  44. Experimental mini-loop: Bifurcation diagram. [Plot: pressure versus valve opening z (%); no slug at small openings, slugging at large openings.]

  45. Avoid slugging? • Operate away from optimal point • Design changes • Feedforward control? • Feedback control?

  46. Avoid slugging: 1. Close valve (but increases pressure). Design change. [Bifurcation plot versus valve opening z %: no slugging when the valve is closed.]

  47. Avoid slugging: 2a. Design change to avoid slugging. [Diagram of the modified loop.]

  48. Minimize effect of slugging: 2b. Build large slug-catcher. Design change. • Most common strategy in practice.

  49. Avoid slugging: 4. Feedback control? Comparison with a simple 3-state model. [Bifurcation plot versus valve opening z %.] Predicted smooth flow: desirable but open-loop unstable.
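
The point of the smooth-flow branch being open-loop unstable is that feedback can still turn it into an operating point. The sketch below is a generic illustration with made-up numbers, not the talk's 3-state slug model: a locally linearized unstable mode dx/dt = a·x + b·u with a > 0 is stabilized by proportional feedback u = -Kc·x (in anti-slug control, x would be a pressure deviation).

```python
# Sketch: stabilizing an open-loop unstable operating point by feedback
# (made-up linearization, not the talk's 3-state slug model).
a, b = 0.5, 1.0        # unstable pole at s = a > 0
Kc = 2.0               # proportional feedback u = -Kc*x; pole moves to a - b*Kc < 0
dt = 0.01
x_ol = x_cl = 0.1      # same small initial deviation from the operating point

for step in range(1, 1001):
    x_ol += dt * a * x_ol                 # open loop: deviation grows (slugging)
    x_cl += dt * (a - b * Kc) * x_cl      # closed loop: deviation decays (smooth flow)
    if step % 250 == 0:
        print(f"t = {step*dt:4.1f}: open-loop x = {x_ol:9.3f}, closed-loop x = {x_cl:.5f}")
```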
