This lecture outlines the selection of controlled variables (CVs) for plantwide control, focusing on minimizing an economic cost function. It shows how to identify "self-optimizing" variables and optimal measurement combinations c = Hy, using the nullspace method and a local (linearized) loss analysis, and how to account for measurement noise and disturbances to make the resulting control structure robust. Applications to data regression and a toy example illustrate the methods.
CONTROLLED VARIABLE AND MEASUREMENT SELECTION
Sigurd Skogestad
Department of Chemical Engineering
Norwegian University of Science and Technology (NTNU), Trondheim, Norway
Bratislava, Jan. 2011
Plantwide control: the controlled variables (CVs) interconnect the layers of the control hierarchy (MV = manipulated variable, CV = controlled variable):
• RTO: objective is to minimize J (economics); MV = y1s; passes setpoints cs = y1s down
• Supervisory control (MPC): switch and follow path (and look after other variables); CV = y1 (+ u); MV = y2s
• Regulatory control (PID): stabilize and avoid drift; CV = y2; MV = u (valves)
Need to find new "self-optimizing" CVs (c = Hy) in each region of active constraints: control the active constraints (here 3), and for the remaining 3 unconstrained degrees of freedom find 3 self-optimizing CVs.
Unconstrained degrees of freedom: ideal "self-optimizing" variables
• Operational objective: minimize the cost function J(u, d)
• The ideal "self-optimizing" variable is the gradient, c = Ju = dJ/du (first-order optimality condition) (Halvorsen and Skogestad, 1997; Bonvin et al., 2001; Cao, 2003)
• Optimal setpoint: cs = 0
• BUT: the gradient cannot be measured in practice
• Possible approach: estimate the gradient Ju from measurements y
• Approach here: look directly for c as a function of the measurements, c = Hy, without going via the gradient
I.J. Halvorsen and S. Skogestad, "Optimizing control of Petlyuk distillation: Understanding the steady-state behavior", Comp. Chem. Engng., 21, S249-S254 (1997).
I.J. Halvorsen and S. Skogestad, "Indirect on-line optimization through setpoint control", AIChE Annual Meeting, 16-21 Nov. 1997, Los Angeles, Paper 194h.
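The idea that the gradient is the ideal CV can be sketched numerically. The quadratic cost below is a made-up illustration (not from the lecture): holding c = Ju at its setpoint 0 by simple feedback recovers the optimal input for any disturbance.

```python
# Sketch: the gradient Ju as the ideal self-optimizing variable.
# Assumed toy cost (illustrative only): J(u, d) = (u - d)**2.
def J(u, d):
    return (u - d) ** 2

def Ju(u, d, eps=1e-6):
    # Finite-difference estimate of the gradient dJ/du.
    return (J(u + eps, d) - J(u - eps, d)) / (2 * eps)

d = 3.7          # disturbance, unknown to the controller
u = 0.0
for _ in range(50):          # simple integral feedback on c = Ju, setpoint 0
    u -= 0.1 * Ju(u, d)

print(abs(u - d) < 1e-3)     # prints True: u has converged to uopt = d
```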
Optimal measurement combination c = Hy
• Candidate measurements y: also include the inputs u
Loss method: problem definition. Find the combination matrix H in c = Hy that minimizes the loss L = J(u, d) - Jopt(d) incurred by holding c constant, in the presence of disturbances d and measurement noise ny.
Nullspace method (no measurement noise, ny = 0)
Proof of the nullspace method (no measurement noise, ny = 0)
Basis: we want the optimal value of c to be independent of the disturbances.
• Find the optimal solution as a function of d: uopt(d), yopt(d)
• Linearize this relationship: Δyopt = F Δd, where F = dyopt/dd is the optimal sensitivity
• Want: Δcopt = H Δyopt = 0
• To achieve this for all values of d: HF = 0
• To find a nonzero H satisfying HF = 0 we must require ny ≥ nu + nd (at least as many measurements as inputs plus disturbances)
• Optimal when we disregard the implementation error (n)
V. Alstad and S. Skogestad, "Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables", Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
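The condition HF = 0 means each row of H lies in the left null space of F, so H is one call to a null-space routine. A minimal sketch with made-up numbers (nu = 2, nd = 2, ny = 4, so ny ≥ nu + nd holds):

```python
import numpy as np
from scipy.linalg import null_space

# Sketch of the nullspace method: rows of H must satisfy h F = 0,
# i.e. lie in the null space of F^T, so that c = Hy is unaffected
# (to first order) by the disturbances d.
# Illustrative optimal sensitivity F = dyopt/dd, size ny x nd = 4 x 2:
F = np.array([[ 1.0,  0.5],
              [ 0.2, -1.0],
              [ 0.8,  0.3],
              [-0.4,  0.9]])

H = null_space(F.T).T            # (ny - nd) x ny = 2 x 4; take nu rows

print(np.allclose(H @ F, 0.0))   # prints True: HF = 0, disturbance-free c
```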
Generalization of the loss method (with measurement noise)
Controlled variables c = H ym, where ym = y + ny. The loss with c = H ym = 0 arises due to (i) disturbances d and (ii) measurement noise ny.
Refs: Halvorsen et al., I&ECR, 2003; Kariwala et al., I&ECR, 2008.
Non-convex optimization problem (Halvorsen et al., 2003): minimize over H the loss norm ||Juu^(1/2) (H Gy)^(-1) H Y||, with Y = [F Wd, Wny]. There are extra degrees of freedom: H may be replaced by DH for any non-singular matrix D.
Improvement 1 (Alstad et al., 2009): a convex optimization problem with a global solution, min_H ||H Y|| subject to H Gy = Juu^(1/2).
Improvement 2 (Yelchuru et al., 2010): min_H ||H Y|| subject to H Gy = Q.
• Full H (not a selection): do not need Juu
• Q can be used as degrees of freedom for a faster solution
• Analytical solution when Y Y^T has full rank (with measurement noise): H^T = (Y Y^T)^(-1) Gy
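A minimal sketch of the analytical full-H solution, with all matrices made up for illustration (ny = 4, nu = 1, nd = 2). The average-loss expression used for checking follows the Frobenius-norm form from the slides:

```python
import numpy as np

# Sketch of the analytical loss-method solution for a full H:
# with Y = [F Wd, Wny], one optimal H (up to a nonsingular D) is
#   H^T = (Y Y^T)^(-1) Gy.
# All numbers below are illustrative assumptions, not lecture data.
rng = np.random.default_rng(0)
Gy  = rng.standard_normal((4, 1))   # measurement gain from u
F   = rng.standard_normal((4, 2))   # optimal sensitivity dyopt/dd
Wd  = np.diag([1.0, 0.5])           # disturbance magnitudes
Wny = 0.1 * np.eye(4)               # measurement-noise magnitudes
Juu = np.array([[2.0]])             # Hessian of the cost wrt u

Y = np.hstack([F @ Wd, Wny])        # ny x (nd + ny)
H = np.linalg.solve(Y @ Y.T, Gy).T  # analytic solution, 1 x ny

def avg_loss(H):
    # Average loss = 0.5 * ||Juu^(1/2) (H Gy)^(-1) H Y||_F^2
    M = np.sqrt(Juu) @ np.linalg.inv(H @ Gy) @ H @ Y
    return 0.5 * np.linalg.norm(M, "fro") ** 2

# The analytic H should not lose to an arbitrary combination:
H_rand = rng.standard_normal((1, 4))
print(avg_loss(H) <= avg_loss(H_rand) + 1e-9)
```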
Special cases
• No noise (ny = 0, Wny = 0): optimal is HF = 0 (nullspace method). But if there are many measurements, the solution is not unique.
• No disturbances (d = 0, Wd = 0) and the same noise for all measurements (Wny = I): optimal is H^T = Gy ("control sensitive measurements"). Proof: use the analytic expression.
Relationship to the maximum gain rule
• "= 0" in the nullspace method (no noise) corresponds to "minimize" in the maximum gain rule: maximize the minimum singular value of S1 G Juu^(-1/2), where G = H Gy
• The scaling S1^(-1) provides the scaling for the maximum gain rule
Special case: indirect control = estimation using a "soft sensor"
• Indirect control: control measurements c = Hy such that the primary output y1 is constant
• Optimal sensitivity: F = dyopt/dd = (dy/dd) at constant y1
• Distillation example: y = temperature measurements, y1 = product composition
• Estimation: estimate the primary variables, y1 = c = Hy, with y1 = G1 u and y = Gy u
• Same as indirect control if we use the extra degrees of freedom (H1 = DH) such that H1 Gy = G1
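The optimal sensitivity F = dyopt/dd is easy to obtain numerically: re-solve the optimization for a perturbed disturbance and difference the optimal measurements. The quadratic cost and linear measurement model below are assumptions for illustration only:

```python
import numpy as np

# Sketch: computing one column of F = dyopt/dd by finite differences.
# Assumed cost (illustrative): J(u, d) = (u - 2d)^2 + 0.1 u^2.
def uopt(d):
    # Stationarity: dJ/du = 2(u - 2d) + 0.2 u = 0  =>  u = 20 d / 11
    return 20.0 * d / 11.0

Gy  = np.array([1.0, 0.5])          # y = Gy*u + Gyd*d (2 measurements)
Gyd = np.array([0.2, 1.0])

def yopt(d):
    return Gy * uopt(d) + Gyd * d

d0, eps = 1.0, 1e-5
F = (yopt(d0 + eps) - yopt(d0 - eps)) / (2 * eps)

print(F)   # [20/11 + 0.2, 10/11 + 1.0], i.e. approx. [2.018, 1.909]
```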
Current research: the "loss approach" can also be used when Y = data
• Alternative to least squares and extensions such as PCR and PLS (chemometrics)
• Why is least squares not optimal? It fits the data by minimizing ||Y1 - HY||, but does not consider how the estimate y1 = Hy is going to be used in the future.
• Why is the loss approach better? Step 1: use the data to obtain Y, Gy and G1. Step 2: obtain the estimate y1 = Hy that works best for the future expected disturbances and measurement noise (as indirectly given by the data in Y).
Loss approach as a data regression method
• Y1: outputs to be estimated
• X: measurements (called y above)
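For reference, the least-squares baseline that the loss approach is compared against has the closed form H = Y1 X^T (X X^T)^(-1). A sketch on synthetic data (dimensions are arbitrary assumptions):

```python
import numpy as np

# Sketch: the ordinary least-squares regression baseline, which fits H
# by minimizing ||Y1 - H X||_F. Data below are synthetic placeholders.
rng = np.random.default_rng(1)
X  = rng.standard_normal((7, 30))   # 7 measurements, 30 samples
Y1 = rng.standard_normal((2, 30))   # 2 primary outputs to estimate

H_ls = Y1 @ X.T @ np.linalg.inv(X @ X.T)   # closed-form solution

# Same answer from the standard solver (lstsq solves X^T H^T = Y1^T):
H_ref = np.linalg.lstsq(X.T, Y1.T, rcond=None)[0].T

print(np.allclose(H_ls, H_ref))     # prints True
```

The loss approach differs in step 2: it keeps the same data but picks H for the lowest loss under future expected disturbances and noise, rather than for the best fit to the calibration set.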
Test 1. Gluten data • 1 y1 (gluten), 100 x (NIR), 100 data sets (50+50).
Test 2. Wheat data • 2 y1, 701 x , 100 data sets (50+50).
Test 3. Our own • 2 y1, 7 x (generated from 2 inputs u and 2 disturbances d), • 8 or 32 data sets + noise-free data as validation
Conclusion: "loss regression method"
• Works well, but not always the winner
• Not surprising: PLS and PCR are well-proven methods
• Needs sufficient data to get an estimate of the noise
• If we have a model: very promising as a "soft sensor" (to estimate y1)
Self-optimizing control vs. NCO-tracking
Self-optimizing control:
• Find controlled variables c = Hy (the inputs u are generated by PI feedback to keep c = 0)
• c = Hy may be viewed as an estimate of the gradient Ju
• Use a model offline to find H (HF = 0)
• Optimal for the modelled (expected) disturbances
• Fast time scale
NCO-tracking:
• Find the input values directly: Δu = -β Juu^(-1) Ju
• Measure the gradient Ju (if possible) or estimate it by perturbing u
• No explicit model needed
• Optimal also for "new" disturbances
• Slow time scale
Self-optimizing control and NCO-tracking • Similar idea: Use measurements to drive gradient to zero • Not competing but complementary
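The NCO-tracking update Δu = -β Juu^(-1) Ju can be sketched in a few lines. The quadratic cost below is an assumed stand-in for the real plant; the gradient is estimated by perturbing u, so no explicit model is needed:

```python
# Sketch of an NCO-tracking iteration: delta_u = -beta * Ju / Juu,
# with Ju estimated by perturbing u. Cost is an illustrative assumption.
def J(u, d=1.5):
    return (u - d) ** 2 + 0.5 * u    # optimum at u = d - 0.25 = 1.25

def Ju_est(u, eps=1e-4):
    # Central-difference gradient estimate from plant "experiments".
    return (J(u + eps) - J(u - eps)) / (2 * eps)

Juu = 2.0            # (estimated) Hessian; exact for this quadratic
beta, u = 1.0, 0.0
for _ in range(10):                  # slow, iteration-by-iteration updates
    u -= beta * Ju_est(u) / Juu

print(abs(Ju_est(u)) < 1e-6)         # prints True: gradient driven to zero
```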
Toy Example Reference: I. J. Halvorsen, S. Skogestad, J. Morud and V. Alstad, “Optimal selection of controlled variables”, Industrial & Engineering Chemistry Research, 42 (14), 3273-3284 (2003).
Toy example: single measurements. The constant-input policy corresponds to c = y4 = u. We want loss < 0.1, so we consider variable combinations.
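Why a combination can beat any single measurement can be shown on a minimal made-up model (these are not the actual toy-example numbers from Halvorsen et al., 2003): a single measurement keeps a nonzero disturbance sensitivity, while a nullspace combination removes it entirely.

```python
import numpy as np

# Illustrative setting: nu = 1, nd = 1, two candidate measurements.
# Assumed optimal sensitivities dyopt/dd (made-up numbers):
F = np.array([[0.1],
              [-1.0]])

# Single measurement c = y1: copt still moves with d, since F[0] != 0.
print(F[0, 0] != 0.0)               # prints True

# Combination c = h1*y1 + h2*y2 with h in the left null space of F
# removes the first-order disturbance sensitivity entirely:
h = np.array([[1.0, 0.1]])          # h @ F = 1.0*0.1 + 0.1*(-1.0) = 0
print(np.allclose(h @ F, 0.0))      # prints True
```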
Conclusion
• Systematic approach for finding CVs, c = Hy
• Can also be used for estimation
S. Skogestad, "Control structure design for complete chemical plants", Computers and Chemical Engineering, 28 (1-2), 219-234 (2004).
V. Alstad and S. Skogestad, "Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables", Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
V. Alstad, S. Skogestad and E.S. Hori, "Optimal measurement combinations as controlled variables", Journal of Process Control, 19, 138-148 (2009).
R. Yelchuru, S. Skogestad and H. Manum, "MIQP formulation for Controlled Variable Selection in Self-Optimizing Control", IFAC symposium DYCOPS-9, Leuven, Belgium, 5-7 July 2010.
Download papers: Google "Skogestad".
Other issues
• c = Hy: want to use as few measurements y as possible (RAMPRASAD)
• The c's need to be controlled by some MV; would like to use local measurements (RAMPRASAD)
• Simplify switching: would like to use the same H in several regions (RAMPRASAD)
• Nonlinearity (JOHANNES)