
CONTROLLED VARIABLE AND MEASUREMENT SELECTION


Presentation Transcript


  1. CONTROLLED VARIABLE AND MEASUREMENT SELECTION. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. Bratislava, Jan. 2011

  2. Outline

  3. Plantwide control: the controlled variables (CVs) interconnect the layers
  • RTO layer — objective: min J (economics); MV = y1s
  • Supervisory control (MPC) — switch + follow path (+ look after other variables); CV = y1 (+ u); MV = y2s
  • Regulatory control (PID) — stabilize + avoid drift; CV = y2; MV = u (valves)
  MV = manipulated variable, CV = controlled variable

  4. Need to find new “self-optimizing” CVs (c = Hy) in each region of active constraints. Example: control the 3 active constraints; with 3 unconstrained degrees of freedom remaining -> find 3 CVs

  5. Unconstrained degrees of freedom: ideal “self-optimizing” variables • Operational objective: minimize the cost function J(u,d) • The ideal “self-optimizing” variable is the gradient (first-order optimality condition) (Halvorsen and Skogestad, 1997; Bonvin et al., 2001; Cao, 2003) • Optimal setpoint = 0 • BUT: the gradient cannot be measured in practice • Possible approach: estimate the gradient Ju from the measurements y • Approach here: look directly for c as a function of the measurements y (c = Hy), without going via the gradient. References: I.J. Halvorsen and S. Skogestad, “Optimizing control of Petlyuk distillation: Understanding the steady-state behavior”, Comp. Chem. Engng., 21, S249-S254 (1997); I.J. Halvorsen and S. Skogestad, “Indirect on-line optimization through setpoint control”, AIChE Annual Meeting, 16-21 Nov. 1997, Los Angeles, Paper 194h.
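In symbols, the gradient condition stated on this slide (Ju is the gradient of the cost with respect to the unconstrained inputs; nothing beyond what the slide says) reads:

```latex
% Ideal self-optimizing variable: the cost gradient, held at setpoint zero
c \;=\; J_u(u,d) \;=\; \frac{\partial J(u,d)}{\partial u}\,,
\qquad c_s \;=\; 0
\quad\text{(first-order optimality condition for the unconstrained degrees of freedom)}.
```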

  6.–7. (Figure-only slides illustrating the measurement combination matrix H)

  8. Optimal measurement combination c = Hy • Candidate measurements (y): include also the inputs u

  9. Loss method: problem definition

  10. Nullspace method (no measurement noise, ny = 0)

  11. Proof of the nullspace method (no measurement noise, ny = 0). Basis: we want the optimal value of c to be independent of the disturbances • Find the optimal solution as a function of d: uopt(d), yopt(d) • Linearize this relationship: yopt = F d • Want: copt = H yopt = H F d = 0 • To achieve this for all values of d we need HF = 0 • To find an H that satisfies HF = 0 we must have at least as many (independent) measurements as inputs plus disturbances (ny ≥ nu + nd) • Optimal when we disregard the implementation error (n). Amazingly simple! (Photo caption: Sigurd is told how easy it is to find H.) Ref: V. Alstad and S. Skogestad, “Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables”, Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
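A minimal numerical sketch of the nullspace method as stated on this slide, assuming the optimal sensitivity matrix F = dyopt/dd is already available (in practice from re-optimizing a steady-state model for small disturbance perturbations); the matrix values are illustrative only:

```python
import numpy as np
from scipy.linalg import null_space

# Optimal sensitivity matrix F = dy_opt/dd (ny x nd).
# Illustrative numbers; in practice F is obtained from the model.
F = np.array([[ 1.0,  0.2],
              [ 0.5, -1.0],
              [-0.3,  0.8]])          # ny = 3 measurements, nd = 2 disturbances

# Nullspace method: choose H such that H F = 0, i.e. the rows of H span
# the left null space of F (possible when ny >= nu + nd).
H = null_space(F.T).T                  # here 1 x 3: one unconstrained degree of freedom

print("H  =", H)
print("HF =", H @ F)                   # ~0: c = Hy is insensitive to the disturbances d
```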

  12. Controlled variables: generalization (with measurement noise). (Block diagram: constant setpoint cs, feedback controller K generating the input u, disturbance d, outputs y, CV c = H ym.) A loss occurs even with c = H ym = 0, due to (i) disturbances d and (ii) measurement noise ny. Refs: Halvorsen et al., I&ECR, 2003; Kariwala et al., I&ECR, 2008.

  13. Non-convex optimization problem (Halvorsen et al., 2003): minimize the loss over the full matrix H • Extra degrees of freedom: H may be replaced by D H for any non-singular matrix D • Improvement 1 (Alstad et al., 2009): reformulated as a convex optimization problem with a global solution, subject to a linear equality constraint on H Gy • Improvement 2 (Yelchuru et al., 2010): the constraint becomes H Gy = Q • Full H (not a selection of measurements): do not need Juu • Q can be used as a degree of freedom for a faster solution • Analytical solution when Y YT has full rank (with measurement noise) — see below.
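For reference, a common statement of the convex reformulation and of the analytical full-H solution alluded to above, in the form given in the Alstad et al. (2009) paper cited on slide 30 (notation assumed here: Y = [F Wd  Wny] collects the scaled disturbance and noise directions); this is a sketch of the standard result, not text from the slide:

```latex
% Convex reformulation (Improvement 1): for a full H, minimize the loss
\min_{H} \ \left\| H Y \right\|_{F}
\quad \text{subject to} \quad H G^{y} = J_{uu}^{1/2},
\qquad Y = \begin{bmatrix} F W_{d} & W_{n^{y}} \end{bmatrix}.

% Analytical solution when Y Y^T has full rank
% (up to the free non-singular factor D):
H^{T} \;=\; \left( Y Y^{T} \right)^{-1} G^{y}.
```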

  14. Special cases • No noise (ny = 0, Wny = 0): optimal is HF = 0 (nullspace method); but if there are many measurements the solution is not unique • No disturbances (d = 0, Wd = 0) and the same noise on all measurements (Wny = I): optimal is HT = Gy (“control the sensitive measurements”) • Proof: use the analytic expression.

  15. Relationship to the maximum gain rule • The “= 0” requirement corresponds to the nullspace method (no noise) • The “minimize” step corresponds to the maximum gain rule (maximize the gain of S1 G Juu-1/2, with G = H Gy) • The “scaling” S1-1 is that of the maximum gain rule.
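Spelled out, the maximum gain rule referred to here (in the form given in the cited Halvorsen et al. 2003 work; the span-based scaling S1 is the usual assumption in that formulation) selects CVs that maximize the minimum singular value of the scaled gain:

```latex
% Maximum (scaled) gain rule: prefer CVs c = Hy that maximize
\underline{\sigma}\!\left( S_1 \, G \, J_{uu}^{-1/2} \right),
\qquad G = H G^{y},
\qquad S_1 = \operatorname{diag}\!\left\{ \frac{1}{\operatorname{span}(c_i)} \right\},
```

where span(ci) combines the optimal variation and the implementation error of each candidate CV.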

  16. Special case: indirect control = estimation using a “soft sensor” • Indirect control: control the measurement combination c = Hy such that the primary output y1 stays constant • Optimal sensitivity: F = dyopt/dd = (dy/dd) with y1 kept constant • Distillation example: y = temperature measurements, y1 = product composition • Estimation: estimate the primary variables, y1 = c = Hy • Models: y1 = G1 u, y = Gy u • Same as indirect control if we use the extra degrees of freedom (H1 = D H) such that H1 Gy = G1.
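A minimal sketch of the rescaling step in the last bullet, using made-up gain matrices; it simply solves for the non-singular factor D that makes H1 Gy match G1, so that c = H1 y becomes a soft-sensor estimate of y1:

```python
import numpy as np

# Illustrative (made-up) steady-state gains: 2 inputs, 4 measurements, 2 primary outputs.
Gy = np.array([[1.0, 0.1],
               [0.4, 0.9],
               [0.7, 0.3],
               [0.2, 0.8]])             # y  = Gy u   (4 x 2)
G1 = np.eye(2)                          # y1 = G1 u   (2 x 2)

H = np.array([[ 0.5, -0.2, 0.3, 0.1],
              [-0.1,  0.6, 0.2, 0.4]])  # some H from the loss/nullspace method (2 x 4)

# Use the free non-singular factor D (H1 = D H) so that H1 Gy = G1.
D = G1 @ np.linalg.inv(H @ Gy)
H1 = D @ H

print(np.allclose(H1 @ Gy, G1))         # True: c = H1 y estimates the primary output y1
```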

  17. Current research: the “loss approach” can also be used when Y = data • Alternative to least squares and extensions such as PCR and PLS (chemometrics) • Why is least squares not optimal? It fits the data by minimizing ||Y1 - HY||, but it does not consider how the estimate y1 = Hy is going to be used in the future • Why is the loss approach better? Step 1: use the data to obtain Y, Gy and G1. Step 2: obtain the estimate y1 = Hy that works best for the future expected disturbances and measurement noise (as indirectly given by the data in Y).

  18. Loss approach as a Data Regression Method • Y1 – output to be estimated • X – measurements (called y before)
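For the regression tests that follow, a minimal sketch of the ordinary least-squares baseline that slide 17 contrasts with the loss approach (X and Y1 follow the naming on this slide; the data are random placeholders, not the gluten or wheat sets):

```python
import numpy as np

rng = np.random.default_rng(0)
X  = rng.normal(size=(50, 7))            # 50 calibration samples, 7 measurements (x)
Y1 = rng.normal(size=(50, 2))            # 2 primary outputs to be estimated (y1)

# Ordinary least squares: fit B minimizing ||Y1 - X B|| over the calibration data.
B, *_ = np.linalg.lstsq(X, Y1, rcond=None)   # (7 x 2)
H_ls = B.T                                   # estimate for a new sample x: y1_hat = H_ls @ x

y1_hat = X @ B
print("training RMSE:", np.sqrt(np.mean((Y1 - y1_hat) ** 2)))
```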

  19. Test 1. Gluten data • 1 y1 (gluten), 100 x (NIR), 100 data sets (50+50).

  20. but....

  21. Test 2. Wheat data • 2 y1, 701 x , 100 data sets (50+50).

  22. Test 3. Our own data • 2 y1, 7 x (generated by 2 u + 2 d) • 8 or 32 data sets + noise-free data as validation

  23. Conclusion: “loss regression method” • Works well, but not always the winner • Not surprising: PLS and PCR are well-proven methods • Needs sufficient data to get an estimate of the noise • If we have a model: very promising as a “soft sensor” (estimate y1)

  24. The end in Bratislava....

  25. Self-optimizing control vs. NCO-tracking
  Self-optimizing control: • Find controlled variables c = Hy (the inputs u are generated by PI feedback to keep c = 0) • c = Hy may be viewed as an estimate of the gradient Ju • Use the model offline to find H (HF = 0) • Optimal for the modelled (expected) disturbances • Fast time scale
  NCO-tracking: • Find the input values directly, Δu = -β Juu-1 Ju • Measure the gradient Ju (if possible) or estimate it by perturbing u • No explicit model needed • Optimal also for “new” disturbances • Slow time scale
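A minimal sketch of the NCO-tracking update in the second column above, with the gradient Ju estimated by perturbing u (forward differences); the quadratic cost, the Hessian Juu and the step factor β are made-up illustrative choices:

```python
import numpy as np

def cost(u, d):
    # Illustrative quadratic cost J(u, d); in practice this is the measured plant cost.
    return (u[0] - d) ** 2 + 2.0 * (u[1] + 0.5 * d) ** 2

def gradient_fd(u, d, eps=1e-4):
    # Estimate Ju by perturbing each input (forward differences).
    J0 = cost(u, d)
    return np.array([(cost(u + eps * e, d) - J0) / eps for e in np.eye(len(u))])

u = np.array([0.0, 0.0])
d = 1.0                                        # current disturbance (unknown to the controller)
Juu_inv = np.linalg.inv(np.diag([2.0, 4.0]))   # assumed Hessian of the illustrative cost
beta = 0.5                                     # step-size factor

for _ in range(20):                            # slow time scale: iterate until Ju ~ 0
    Ju = gradient_fd(u, d)
    u = u - beta * Juu_inv @ Ju                # NCO-tracking step: du = -beta * Juu^-1 * Ju

print("u =", u, " Ju =", gradient_fd(u, d))    # u -> [1, -0.5], gradient -> 0
```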

  26. Self-optimizing control and NCO-tracking • Similar idea: Use measurements to drive gradient to zero • Not competing but complementary

  27. Toy Example Reference: I. J. Halvorsen, S. Skogestad, J. Morud and V. Alstad, “Optimal selection of controlled variables”, Industrial & Engineering Chemistry Research, 42 (14), 3273-3284 (2003).

  28. Toy example: single measurements • Constant input: c = y4 = u • Want loss < 0.1: consider variable combinations

  29. Toy Example

  30. Conclusion • Systematic approach for finding CVs, c = Hy • Can also be used for estimation. References: S. Skogestad, “Control structure design for complete chemical plants”, Computers and Chemical Engineering, 28 (1-2), 219-234 (2004); V. Alstad and S. Skogestad, “Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables”, Ind. Eng. Chem. Res., 46 (3), 846-853 (2007); V. Alstad, S. Skogestad and E.S. Hori, “Optimal measurement combinations as controlled variables”, Journal of Process Control, 19, 138-148 (2009); R. Yelchuru, S. Skogestad and H. Manum, “MIQP formulation for Controlled Variable Selection in Self Optimizing Control”, IFAC symposium DYCOPS-9, Leuven, Belgium, 5-7 July 2010. Download papers: Google “Skogestad”.

  31. Other issues • c = Hy: want to use as few measurements y as possible (RAMPRASAD) • The c’s need to be controlled by some MV; would like to use local measurements (RAMPRASAD) • Simplify switching: would like to use the same H in several regions? (RAMPRASAD) • Nonlinearity (JOHANNES)
