
ECONOMIC PLANTWIDE CONTROL: Control structure design for complete processing plants

This article explores the design of control systems for complete chemical plants, focusing on the economic aspects and the selection of primary controlled variables. The hierarchical control structure, including regulatory control and supervisory control, is discussed. The article also presents a step-by-step procedure for control structure design.

Presentation Transcript


  1. ECONOMIC PLANTWIDE CONTROL: Control structure design for complete processing plants. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway. 13 Jan. 2017

  2. [Map: Trondheim, Norway (NTNU), shown relative to the Arctic circle, the North Sea, Oslo, Sweden, Denmark, Germany and the UK]

  3. Outline
  • Paradigm: based on time scale separation
  • Plantwide control procedure: based on economics
  • Example: Runner
  • Selection of primary controlled variables (CV1 = Hy)
  • The ideal CV is the gradient: CV1 = Ju with setpoint = 0
  • General case CV1 = Hy: nullspace and exact local methods
  • Throughput manipulator (TPM) location
  • Examples
  • Conclusion

  4. How do we design a control system for a complete chemical plant? Where do we start? What should we control, and why?

  5. In theory: Optimal control and operation
  Approach: a centralized optimizer acting directly on the physical degrees of freedom, using a model of the overall system and an estimate of the present state:
  • Model of the overall system
  • Estimate the present state
  • Optimize all degrees of freedom
  Process control looks like an excellent candidate for such centralized control.
  Problems:
  • Model not available
  • Objectives unclear
  • Optimization is complex
  • Not robust (difficult to handle uncertainty)
  • Slow response time

  6. In practice: Engineering systems
  • Most (perhaps all) large-scale engineering systems are controlled using hierarchies of quite simple controllers
  • Examples: a large-scale chemical plant (refinery), a commercial aircraft
  • Hundreds of loops
  • Simple components: PI control + selectors + cascade + nonlinear fixes + some feedforward
  • The same holds in biological systems
  • But: this is not well understood

  7. Alan Foss ("Critique of chemical process control theory", AIChE Journal, 1973):
  "The central issue to be resolved ... is the determination of control system structure. Which variables should be measured, which inputs should be manipulated and which links should be made between the two sets? There is more than a suspicion that the work of a genius is needed here, for without it the control configuration problem will likely remain in a primitive, hazily stated and wholly unmanageable form. The gap is present indeed, but contrary to the views of many, it is the theoretician who must close it."
  Previous work on plantwide control:
  • Page Buckley (1964): chapter on "Overall process control" (still industrial practice)
  • Greg Shinskey (1967): process control systems
  • Alan Foss (1973): control system structure
  • Bill Luyben et al. (1975-): case studies; the "snowball effect"
  • George Stephanopoulos and Manfred Morari (1980): synthesis of control structures for chemical processes
  • Ruel Shinnar (1981-): "dominant variables"
  • Jim Downs (1991): the Tennessee Eastman challenge problem
  • Larsson and Skogestad (2000): review of plantwide control

  8. Main objectives of the control system
  • Economics: implementation of acceptable (near-optimal) operation
  • Regulation: stable operation
  Are these objectives conflicting? Usually not, because they act on different time scales:
  • Stabilization acts on the fast time scale
  • Stabilization does not "use up" any degrees of freedom, since the reference value (setpoint) remains available to the layer above
  • But it does "use up" part of the time window (frequency range)

  9. Our paradigm. Practical operation: hierarchical structure
  • Planning / RTO: objective is to minimize J (economics); passes down setpoints CV1s
  • Supervisory control (MPC): follow the path (and look after other variables); passes down setpoints CV2s
  • Regulatory control (PID): stabilize and avoid drift; manipulates u (valves)
  The controlled variables (CVs, variables with setpoints) interconnect the layers.

  10. [Block diagram: optimizer (RTO) sends CV1s to the supervisory controller (MPC), which sends CV2s to the regulatory controller (PID), which moves the physical inputs (valves) of the process; the measurement selections H and H2 form CV1 and CV2 from the outputs y; optimally constant valves and always-active constraints are handled directly]
  • Degrees of freedom for optimization (usually steady-state DOFs): MVopt = CV1s
  • Degrees of freedom for supervisory control: MV1 = CV2s + unused valves
  • Physical degrees of freedom for stabilizing control: MV2 = valves (dynamic process inputs)

  11. Control structure design procedure
  I. Top-down (mainly steady-state economics, y1)
  • Step 1: Define operational objectives (optimal operation): cost function J (to be minimized) and operational constraints
  • Step 2: Identify degrees of freedom (MVs) and optimize for expected disturbances; identify the active constraints
  • Step 3: Select primary "economic" controlled variables c = y1 (CV1s): self-optimizing variables (find H)
  • Step 4: Where to locate the throughput manipulator (TPM)?
  II. Bottom-up (dynamics, y2)
  • Step 5: Regulatory/stabilizing control (PID layer): what more to control (y2; local CV2s)? Find H2; pair inputs and outputs
  • Step 6: Supervisory control (MPC layer)
  • Step 7: Real-time optimization (do we need it?)
  Reference: S. Skogestad, "Control structure design for complete chemical plants", Computers and Chemical Engineering, 28 (1-2), 219-234 (2004).

  12. Step 1. Define optimal operation (economics)
  • What are we going to use our degrees of freedom u (MVs) for?
  • Define a scalar cost function J(u, x, d), where u are the degrees of freedom (usually steady-state), d the disturbances and x the states (internal variables)
  • Typical cost function: J = cost of feed + cost of energy - value of products
  • Optimize operation with respect to u for given d (usually at steady state): min_u J(u, x, d), subject to the model equations f(u, x, d) = 0 and the operational constraints g(u, x, d) <= 0
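As an illustration of how Step 1 can be posed numerically, here is a minimal sketch using scipy. The model, cost terms, constraint limits and all numbers are hypothetical placeholders for illustration only, not a model from the slides.

```python
# Minimal sketch of Step 1: pose the steady-state economic optimization
# min_u J(u, x, d)  s.t.  f(u, x, d) = 0,  g(u, x, d) <= 0
# All model functions and numbers below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

d = np.array([1.0])               # disturbance (e.g. feed quality), fixed here

def model(u, d):
    """Stand-in for solving the model f(u, x, d) = 0 for the states x."""
    return 0.8 * u + 0.1 * d

def cost(u, d):
    """J = cost of feed + cost of energy - value of products (placeholder)."""
    x = model(u, d)
    feed_cost, energy_cost, product_value = 1.0 * d[0], 0.5 * u[0]**2, 2.0 * x[0]
    return feed_cost + energy_cost - product_value

def constraints(u, d):
    """g(u, x, d) <= 0, written as values >= 0 for scipy's 'ineq' convention."""
    x = model(u, d)
    return np.array([2.0 - x[0],   # e.g. maximum capacity
                     u[0]])        # e.g. input must be non-negative

res = minimize(cost, x0=np.array([0.5]), args=(d,), method="SLSQP",
               constraints=[{"type": "ineq", "fun": constraints, "args": (d,)}])
print("u_opt =", res.x, " J_opt =", res.fun)
```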

  13. Step 2. (a) Identify degrees of freedom; (b) optimize for expected disturbances
  • Need a good model, usually steady-state
  • Optimization is time consuming, but it is done offline
  • Main goal: identify the ACTIVE CONSTRAINTS
  • A good engineer can often guess the active constraints
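Continuing the hypothetical sketch above (this assumes the placeholder cost and constraints functions from the previous block are defined), Step 2 can be mimicked by re-optimizing over a grid of expected disturbances and recording which constraints end up active. The disturbance range and tolerance are assumptions.

```python
# Sketch of Step 2: re-optimize over expected disturbances and record which
# constraints are active at each optimum (reuses cost/constraints from above).
import numpy as np
from scipy.optimize import minimize

def find_active_constraints(d, tol=1e-6):
    res = minimize(cost, x0=np.array([0.5]), args=(d,), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": constraints, "args": (d,)}])
    g = constraints(res.x, d)
    return res.x, res.fun, [i for i, gi in enumerate(g) if abs(gi) < tol]

for d_val in np.linspace(0.8, 1.2, 5):          # assumed disturbance range
    u_opt, J_opt, active = find_active_constraints(np.array([d_val]))
    print(f"d = {d_val:4.2f}  u_opt = {u_opt[0]:6.3f}  "
          f"J_opt = {J_opt:7.3f}  active constraints = {active}")
```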

  14. Step 3: Implementation of optimal operation
  • We have found the optimal way of operating. How should it be implemented?
  • What should we control (CV1)?
  • Active constraints
  • Self-optimizing variables (for the unconstrained degrees of freedom)

  15. Optimal operation - Runner
  • Cost to be minimized: J = T (time)
  • One degree of freedom (u = power)
  • What should we control?

  16. Optimal operation - Runner: 1. Sprinter
  • 100 m, J = T
  • Active constraint control: maximum speed ("no thinking required")
  • CV = power (at its maximum)

  17. Optimal operation - Runner: 2. Marathon runner
  • 40 km, J = T
  • What should we control? CV = ?
  • Unconstrained optimum: the cost J = T has an interior minimum in u = power at u = uopt

  18. Optimal operation - Runner: Self-optimizing control for the marathon (40 km)
  • Is there a self-optimizing variable (to control at a constant setpoint)?
  • c1 = distance to the leader of the race
  • c2 = speed
  • c3 = heart rate
  • c4 = level of lactate in the muscles

  19. Optimal operation - Runner: Conclusion for the marathon runner
  Select one measurement: CV1 = heart rate
  • CV = heart rate is a good "self-optimizing" variable
  • Simple and robust implementation
  • Disturbances are indirectly handled by keeping a constant heart rate
  • The setpoint (cs) may need infrequent adjustment

  20. Summary of Step 3. What should we control (CV1)?
  Selection of primary controlled variables c = CV1:
  • Control the active constraints!
  • For the unconstrained degrees of freedom: control self-optimizing variables!
  • Old idea (Morari et al., 1980): "We want to find a function c of the process variables which, when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions."

  21. Unconstrained degrees of freedom
  • The ideal "self-optimizing" variable is the gradient: c = ∂J/∂u = Ju
  • Keep the gradient at zero for all disturbances (c = Ju = 0)
  • Problem: usually there is no measurement of the gradient
  [Sketch: cost J versus u, with Ju < 0 to the left of uopt, Ju > 0 to the right, and Ju = 0 at uopt]
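To make the idea concrete, here is a minimal sketch (not from the slides) in which the gradient Ju is "measured" via a finite difference on a placeholder cost J(u, d) = (u - d)^2 and driven to zero by a simple integral controller. The cost, gain and disturbance values are assumptions for illustration only.

```python
# Sketch: if the gradient Ju = dJ/du were measurable, controlling it to zero
# would keep operation optimal for any disturbance.
import numpy as np

def J(u, d):
    return (u - d) ** 2            # placeholder cost with optimum at u = d

def Ju(u, d, h=1e-4):
    return (J(u + h, d) - J(u - h, d)) / (2 * h)   # finite-difference gradient

u, KI = 0.0, 0.2                   # initial input and integral gain (assumed)
for k in range(50):
    d = 1.0 if k < 25 else 1.5     # step disturbance halfway through
    u -= KI * Ju(u, d)             # integral action on the "measured" gradient
print("final u =", round(u, 3), "(tracks u_opt = d = 1.5)")
```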

  22. Ideal: c = Ju. In practice, use the available measurements: c = Hy. Task: determine H!

  23. Combinations of measurements, c = Hy: Nullspace method for H (Alstad)
  • Choosing H such that HF = 0, where F = dyopt/dd is the optimal sensitivity matrix, gives Ju = 0
  • Proof: Appendix B in Jäschke and Skogestad, "NCO tracking and self-optimizing control in the context of real-time optimization", Journal of Process Control, 1407-1416 (2011)
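A minimal numerical sketch of the nullspace method using scipy's null_space routine; the sensitivity matrix F below (three measurements, one disturbance) is a made-up example, not taken from the slides.

```python
# Sketch of the nullspace method: choose H such that H F = 0, where
# F = dy_opt/dd is the optimal sensitivity of the measurements to the
# disturbances.
import numpy as np
from scipy.linalg import null_space

F = np.array([[0.25],
              [-0.20],
              [0.10]])             # dy_opt/dd, assumed numbers

H = null_space(F.T).T              # rows of H span the left nullspace of F
print("H =\n", H)
print("H @ F =\n", H @ F)          # ~0: c = Hy is locally insensitive to d
```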

  24. With measurement noise: the "exact local method"
  • No measurement error: HF = 0 (nullspace method)
  • With measurement error: minimize the local loss instead ("exact local method")
  • Related: the maximum gain rule, i.e. maximize the scaled gain S1 G Juu^{-1/2}, where G = HGy and S1 is the scaling matrix
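A small sketch of maximum-gain-rule screening for single-measurement candidates: rank each candidate by its scaled gain |Gy|/span, further scaled by Juu^{-1/2}, and prefer the largest. The candidate names, gains, spans and the Juu value are hypothetical numbers, not data from the slides.

```python
# Sketch of the maximum gain rule for ranking single-measurement CV candidates.
import numpy as np

candidates = {                     # name: (steady-state gain dy/du, span of y)
    "y1 (temperature)": (2.0, 5.0),
    "y2 (pressure)":    (0.3, 0.5),
    "y3 (composition)": (1.5, 0.02),
}
Juu = 4.0                          # Hessian of J w.r.t. u (assumed scalar)

scaled = {name: abs(gy) / span / np.sqrt(Juu)
          for name, (gy, span) in candidates.items()}
for name, g in sorted(scaled.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} scaled gain = {g:7.2f}")   # largest is preferred
```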

  25. Example: Nullspace method for the marathon runner
  • u = power, d = slope [degrees]
  • y1 = hr [beats/min], y2 = v [m/s]
  • F = dyopt/dd = [0.25, -0.2]'
  • H = [h1 h2]; HF = 0 gives h1 f1 + h2 f2 = 0.25 h1 - 0.2 h2 = 0
  • Choose h1 = 1, which gives h2 = 0.25/0.2 = 1.25
  • Conclusion: c = hr + 1.25 v
  • Controlling c = constant means hr increases when v decreases (OK uphill!)
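A few lines verifying the arithmetic on this slide: with F = [0.25, -0.2]' and h1 fixed to 1, HF = 0 gives h2 = 1.25, so c = hr + 1.25 v.

```python
# Check of the marathon-runner nullspace calculation.
F = [0.25, -0.20]                  # [d(hr)_opt/dd, d(v)_opt/dd]
h1 = 1.0
h2 = -h1 * F[0] / F[1]             # from h1*f1 + h2*f2 = 0
print("h2 =", h2)                  # 1.25
print("c = hr + %.2f v" % h2)
```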

  26. In practice: What variable c = Hy should we control? (for self-optimizing control)
  • The optimal value of c should be insensitive to disturbances: small HF = dcopt/dd
  • c should be easy to measure and control
  • The value of c should be sensitive to the inputs ("maximum gain rule"): large G = HGy = dc/du
  • Equivalently: we want a flat optimum
  • Note: we must also find the optimal setpoint for c = CV1

  27. Example: CO2 refrigeration cycle
  • J = Ws (work supplied)
  • DOF: u (valve opening, z)
  • Main disturbances: d1 = TH, d2 = TCs (setpoint), d3 = UAloss
  • What should we control?

  28. CO2 refrigeration cycle
  • Step 1: One (remaining) degree of freedom (u = z)
  • Step 2: Objective function J = Ws (compressor work)
  • Step 3: Optimize operation for the disturbances (d1 = TC, d2 = TH, d3 = UA); the optimum is always unconstrained
  • Step 4: Implementation of optimal operation
  • No single measurement is good (all give large losses): ph, Th, z, ...
  • Nullspace method: would need to combine nu + nd = 1 + 3 = 4 measurements to get zero disturbance loss
  • Simpler: try combining two measurements with the exact local method: c = h1 ph + h2 Th = ph + k Th, with k = -8.53 bar/K
  • Nonlinear evaluation of the loss: OK (a sketch of such a loss evaluation follows below)
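For the "nonlinear evaluation of loss" step, the usual check is to hold the candidate CV at its nominal setpoint, recompute the cost for each disturbance, and compare with the reoptimized cost. The sketch below uses a generic placeholder cost and measurement model (not the CO2-cycle model from the slides) purely to show the workflow.

```python
# Sketch of a nonlinear loss evaluation for a candidate CV c = Hy:
# loss(d) = J(u that keeps c at its nominal setpoint, d) - J_opt(d).
from scipy.optimize import minimize_scalar, brentq

def J(u, d):                       # placeholder cost
    return (u - d) ** 2 + 0.1 * u

def c(u, d):                       # placeholder candidate CV
    return u + 0.5 * d

d_nom = 1.0
u_nom = minimize_scalar(J, args=(d_nom,)).x
c_sp = c(u_nom, d_nom)             # nominal setpoint for the CV

for d in [0.8, 1.0, 1.2]:          # assumed disturbance values
    J_opt = minimize_scalar(J, args=(d,)).fun
    u_c = brentq(lambda u: c(u, d) - c_sp, -10, 10)   # input keeping c = c_sp
    loss = J(u_c, d) - J_opt
    print(f"d = {d:3.1f}  loss = {loss:8.5f}")
```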

  29. CO2 cycle: Maximum gain rule

  30. Refrigeration cycle: proposed control structure (CV = measurement combination)
  • CV1 = room temperature
  • CV2 = "temperature-corrected high CO2 pressure" (a loop sketch follows below)
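One way to picture closing a loop on the combined CV2 = ph + k*Th (k = -8.53 bar/K from the slides) is the sketch below, where a simple integral controller manipulates the valve opening z. The static "plant", setpoint and tuning are placeholders, not the CO2-cycle model.

```python
# Sketch: integral-only loop on the combined CV c = ph + k*Th, manipulating z.
k = -8.53                                   # bar/K, combination gain from the slides

def measure(z):
    """Placeholder plant: high pressure ph [bar] and temperature Th [K] vs. z."""
    return 70.0 + 40.0 * z, 305.0 - 10.0 * z

def cv(z):
    ph, Th = measure(z)
    return ph + k * Th                      # CV2 = temperature-corrected pressure

c_sp = cv(0.6)        # setpoint taken at an assumed nominal valve opening
z, KI = 0.4, 0.005    # initial valve opening and integral gain (assumed)

for _ in range(30):
    z += KI * (c_sp - cv(z))                # integral-only controller
print("z =", round(z, 3), "(converges back to the nominal 0.6)")
```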

  31. Step 4. Where to set the production rate?
  • Where to locate the TPM (throughput manipulator)? It is the "gas pedal" of the process
  • Very important: it determines the structure of the remaining inventory (level) control system
  • Set the production rate at the (dynamic) bottleneck
  • Link between the top-down and bottom-up parts
  • NOTE: TPM location is a dynamic issue. The link to economics is to improve control of the active constraints (reduce backoff)

  32. Production rate set at the inlet (TPM at the feed): inventory control in the direction of flow*
  (* Required to get "local-consistent" inventory control)

  33. Production rate set at the outlet (TPM at the product): inventory control opposite to the direction of flow

  34. Production rate set inside the process. General rule: "Need radiating inventory control around the TPM" (Georgakis)

  35. QUIZ 1: Is this inventory control structure consistent?

  36. [Cartoon: the "optimal centralized solution" (EMPC) as the academic process control community's fish pond]

  37. Conclusion: Systematic procedure for plantwide control
  Start "top-down" with economics:
  • Step 1: Define operational objectives and identify degrees of freedom
  • Step 2: Optimize steady-state operation
  • Step 3A: Identify active constraints = primary CVs c
  • Step 3B: For the remaining unconstrained DOFs, select self-optimizing CVs c
  • Step 4: Decide where to set the throughput (usually: the feed)
  Regulatory control I: decide how to move mass through the plant:
  • Step 5A: Propose a "local-consistent" inventory (level) control structure
  Regulatory control II: "bottom-up" stabilization of the plant:
  • Step 5B: Control variables to stop "drift" (sensitive temperatures, pressures, ...); pair variables to avoid interaction and saturation
  Finally, make the link between "top-down" and "bottom-up":
  • Step 6: "Advanced/supervisory control" system (MPC). CVs: active constraints and self-optimizing economic variables, plus looking after variables in the layer below (e.g., avoiding saturation). MVs: setpoints to the regulatory control layer. Coordinates within units and possibly between units.
  http://www.nt.ntnu.no/users/skoge/plantwide

  38. Summary and references
  • The following paper summarizes the procedure: S. Skogestad, "Control structure design for complete chemical plants", Computers and Chemical Engineering, 28 (1-2), 219-234 (2004).
  • There are many approaches to plantwide control, as discussed in the following review paper: T. Larsson and S. Skogestad, "Plantwide control: A review and a new design procedure", Modeling, Identification and Control, 21, 209-240 (2000).
  • The following book chapter updates the procedure: S. Skogestad, "Economic plantwide control", in V. Kariwala and V.P. Rangaiah (Eds.), Plant-Wide Control: Recent Developments and Applications, Wiley (2012).
  • Another relevant paper: S. Skogestad, "Plantwide control: the search for the self-optimizing control structure", J. Proc. Control, 10, 487-507 (2000).
  • More information: http://www.nt.ntnu.no/users/skoge/plantwide
