Introduction to Model Order Reduction II.1 – Reducing Linear Time Invariant Systems Luca Daniel Thanks to Jacob White, Peter Feldmann
Model Order Reduction Linear Time Invariant Systems • II.1.a via Modal Analysis • II.1.b via Rational Function Fitting (point matching) • II.1.c via Quasi Convex Optimization • II.1.d via Padé approximation and AWE
Introduction to Model Order Reduction II.1.a – Reduction using Modal Analysis Luca Daniel Thanks to Jacob White, Peter Feldmann
State-Space Description Dynamic Linear Case • Original Dynamical System - Single Input/Output • Reduced Dynamical System • q << N, but input/output behavior preserved
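The equations on this slide were images and are missing from the extracted text. As a reference, a standard single-input/single-output LTI state-space form consistent with the slide's description (the notation is an assumption; the lecture's own symbols may differ):

\[
\text{Original:}\quad \dot{x}(t) = A\,x(t) + b\,u(t),\qquad y(t) = c^{T}x(t),\qquad x(t)\in\mathbb{R}^{N}
\]
\[
\text{Reduced:}\quad \dot{\hat{x}}(t) = \hat{A}\,\hat{x}(t) + \hat{b}\,u(t),\qquad \hat{y}(t) = \hat{c}^{T}\hat{x}(t),\qquad \hat{x}(t)\in\mathbb{R}^{q},\ \ q \ll N
\]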
Defining Accuracy • Time-domain response should be “close” • For which possible inputs? • Frequency response should match • At what frequencies?
Matching Frequency Response • Ensure accuracy for only some inputs? • Examples: low-frequency inputs, some frequency band, or some points of the frequency response (i.e., match only part of the original frequency response).
Reminder about Eigenanalysis Cont. Decoupled Equations Output Equation
Reminder about Eigenanalysis Cont. Solving Decoupled Equations Assuming Zero Initial Conditions Output Equation
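For reference, the eigenanalysis steps summarized on these two slides, written out under the assumption that A is diagonalizable (notation is illustrative, not necessarily the lecture's):

\[
A = V\Lambda V^{-1},\qquad z(t) = V^{-1}x(t)\ \ \Rightarrow\ \ \dot{z}_i(t) = \lambda_i z_i(t) + (V^{-1}b)_i\,u(t),\quad i = 1,\dots,N
\]
\[
\text{Zero initial conditions:}\quad z_i(t) = \int_0^{t} e^{\lambda_i(t-\tau)}\,(V^{-1}b)_i\,u(\tau)\,d\tau,
\qquad y(t) = c^{T}V z(t).
\]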
Reduced models via mode truncation Dynamic Linear Case Output Equation
Reduced models via mode truncation Dynamic Linear Case Why? • Certain modes are not affected by the input • Certain modes do not affect the output • Keep least negative eigenvalues (slowest modes) • Look at response to a constant input
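A minimal numerical sketch of mode truncation, assuming a diagonalizable system matrix A. Everything below (the function, the heat-bar-like test matrix, and all variable names) is illustrative, not taken from the slides:

```python
import numpy as np

def modal_truncation(A, b, c, q):
    """Reduce (A, b, c) to order q by keeping the q slowest (least negative) modes."""
    lam, V = np.linalg.eig(A)          # eigenvalues and right eigenvectors
    order = np.argsort(-lam.real)      # slowest modes first (largest real part)
    keep = order[:q]
    Vq = V[:, keep]                    # kept eigenvectors
    Wq = np.linalg.inv(V)[keep, :]     # corresponding rows of V^{-1}
    Aq = np.diag(lam[keep])            # reduced (diagonal) system matrix
    bq = Wq @ b                        # reduced input vector
    cq = Vq.T @ c                      # reduced output vector (y = c^T x = cq^T z)
    return Aq, bq, cq

# Illustrative stand-in for a discretized heat-conducting bar of order N = 100
N = 100
A = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)   # 1-D Laplacian-like matrix
b = np.zeros(N); b[0] = 1.0
c = np.zeros(N); c[-1] = 1.0
Aq, bq, cq = modal_truncation(A, b, c, q=3)
```

The reduced triple (Aq, bq, cq) reproduces the contribution of the q retained modes to the input-output response.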
Reduced models via mode truncation Dynamic Linear Case: heat-conducting bar results, N = 100. [Plot: exact response compared with reduced models of order q = 1, q = 3, and q = 10, obtained by keeping the q slowest modes.]
Another way to look at Reduction by Modal Analysis: Transfer Function. Apply the eigendecomposition and eliminate each mode for which its term in the transfer function is small.
Model Order Reduction via Eigenmode Analysis Pole-Residue Form Pole-Zero Form (SISO) • Ideas for reducing order: • Drop terms with small residues • Drop terms with large negative poles (“fast” modes) • Remove pole/zero near-cancellations • Cluster poles that are close together
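For reference, generic pole-residue and pole-zero forms of a SISO transfer function (the symbols are assumptions; the slide's own formulas are not in the extracted text):

\[
H(s) \;=\; \sum_{i=1}^{N} \frac{r_i}{s - p_i}
\qquad\text{and}\qquad
H(s) \;=\; k\,\frac{\prod_{j}(s - z_j)}{\prod_{i=1}^{N}(s - p_i)} .
\]

Dropping terms with small residues r_i or with very negative poles p_i, and removing near pole/zero cancellations, directly reduces the order of the rational function.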
Eigenmode Analysis Based Reduction Summary • Advantages • Conceptually familiar • Simple physical interpretation: retains dominant system modes/poles • Drawbacks • Relatively expensive: have to find the eigenvalues first, an O(n³) computation • Relatively inefficient: for a given model size, many other approaches can provide better accuracy for the same computational cost • e.g. Hankel Model Order Reduction • e.g. Truncated Balanced Realization
Model Order Reduction Linear Time Invariant Systems • II.1.a via Modal Analysis • II.1.b via Rational Function Fitting (point matching) • II.1.c via Quasi Convex Optimization • II.1.d via Padé approximation and AWE
Introduction to Model Order Reduction II.1.b – Reduction using Fitting Luca Daniel Thanks to Jacob White
A canonical form for model order reduction Assuming A is non-singular, we can cast the dynamical linear system into one canonical form for model order reduction. Note: not necessarily always the best, but the simplest for educational purposes.
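The slide's equations are not in the extracted text. One canonical form consistent with the description, given as an assumption rather than the lecture's exact derivation (premultiply a descriptor system by the inverse of the non-singular matrix A):

\[
E\,\dot{x} = A\,x + b\,u,\quad y = c^{T}x
\qquad\Longrightarrow\qquad
\tilde{A}\,\dot{x} = x + \tilde{b}\,u,\quad y = c^{T}x,
\qquad \tilde{A} = A^{-1}E,\ \ \tilde{b} = A^{-1}b,
\]

so that the transfer function can be written as \(H(s) = c^{T}(s\tilde{A} - I)^{-1}\tilde{b}\).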
Model Order Reduction via Rational Transfer Function Fitting The original system transfer function is a rational function. Model reduction = find a low-order (q << N) rational function (the reduced-order transfer function) that matches it.
Rational Transfer Function Fitting: Degrees of Freedom Compare the number of coefficients in the reduced-model dynamical system with the number of coefficients in the reduced-model transfer function.
Rational Transfer Function Fitting: Degrees of Freedom (cont.) Reduced Model Transfer Function Applying any invertible change of variables to the state leaves the transfer function unchanged: many dynamical systems have the same transfer function!
Rational Transfer Function Fitting: via Point Matching • Can match 2q points • Cross-multiplying generates a linear system, one equation for each sample point i = 1 to 2q (written out below)
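The linear system produced by cross-multiplication is missing from the extracted text; here is a generic reconstruction with coefficient names assumed for illustration. Write the reduced model as p̂(s)/q̂(s), with numerator of degree q-1 (coefficients a_k) and monic denominator of degree q (coefficients b_k). Matching the sampled transfer function H(s_i) at 2q points and cross-multiplying gives, for i = 1, ..., 2q,

\[
\sum_{k=0}^{q-1} a_k\,s_i^{k} \;-\; H(s_i)\sum_{k=0}^{q-1} b_k\,s_i^{k} \;=\; H(s_i)\,s_i^{q},
\]

a 2q-by-2q linear system in the unknown coefficients a_k and b_k.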
Rational Transfer Function Fitting: Point Matching. The matrix can be ill-conditioned • Columns contain progressively higher powers of the test frequencies: the problem is numerically ill-conditioned • Also, missing data can cause severe accuracy problems
Fitting Example (Hard to Solve Systems): Polynomial Interpolation. [Table of data: sample points t0, t1, ..., tN with values f(t0), f(t1), ..., f(tN).] Problem: fit the data with an Nth-order polynomial.
Example Problem Hard to Solve Systems Matrix Form
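The matrix form shown on this slide was an image. Reconstructed in standard notation: with the interpolating polynomial written as p(t) = α0 + α1 t + ... + αN t^N, the interpolation conditions are

\[
\begin{bmatrix}
1 & t_0 & t_0^2 & \cdots & t_0^N\\
1 & t_1 & t_1^2 & \cdots & t_1^N\\
\vdots & & & & \vdots\\
1 & t_N & t_N^2 & \cdots & t_N^N
\end{bmatrix}
\begin{bmatrix}\alpha_0\\ \alpha_1\\ \vdots\\ \alpha_N\end{bmatrix}
=
\begin{bmatrix}f(t_0)\\ f(t_1)\\ \vdots\\ f(t_N)\end{bmatrix}.
\]

This Vandermonde matrix is the M whose condition number is examined a few slides later.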
Fitting Example (Hard to Solve Systems). [Plot: fitting the data f(t) = t; the computed polynomial coefficients are shown as coefficient value versus coefficient number.]
Perturbation Analysis (Hard to Solve Systems): a geometric approach is clearer. Case 1: columns orthogonal. Case 2: columns nearly aligned. When the columns are nearly aligned, it is difficult to determine how much of one column versus how much of the other contributes to the solution.
Geometric Analysis (Hard to Solve Systems): Polynomial Interpolation. [Plot: log(cond(M)) versus the polynomial order n = 4, 8, 16, 32; the condition number grows from roughly 314 at n = 4, through about 10^6 and 10^13, to about 10^20 at n = 32.] The power-series (monomial) basis functions 1, t, t^2, ..., t^n are nearly linearly dependent.
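A quick numerical check of this claim, as a sketch only; the interpolation nodes used on the slide are not known, so equispaced points on [0, 1] are assumed here:

```python
import numpy as np

for n in (4, 8, 16, 32):
    t = np.linspace(0.0, 1.0, n)          # assumed equispaced nodes on [0, 1]
    M = np.vander(t, increasing=True)     # columns 1, t, t^2, ..., t^(n-1)
    print(n, np.linalg.cond(M))           # prints the condition number, which grows rapidly with n
```

The values reported on the slide show this kind of rapid (roughly exponential) growth with the polynomial order.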
Course Outline • Numerical Simulation: quick intro to PDE solvers; quick intro to ODE solvers • Model Order Reduction: linear systems (common engineering practice; optimal techniques in terms of model accuracy; efficient techniques in terms of time and memory); non-linear systems • Parameterized Model Order Reduction: linear systems; non-linear systems. [Slide schedule markers: Yesterday, Today, Tomorrow, Thursday, Friday]
Introduction to Model Order Reduction Luca Daniel Massachusetts Institute of Technology luca@mit.edu http://onigo.mit.edu/~dluca/2006PisaMOR www.rle.mit.edu/cpg
Course Outline • Numerical Simulation: quick intro to PDE solvers; quick intro to ODE solvers • Model Order Reduction: linear systems (common engineering practice; optimal techniques in terms of model accuracy; efficient techniques in terms of time and memory); non-linear systems • Parameterized Model Order Reduction: linear systems; non-linear systems. [Slide schedule markers: Monday, Yesterday, Today, Tomorrow, Friday]
Model Order Reduction Linear Time Invariant Systems • II.1.a via Modal Analysis • II.1.b via Rational Function Fitting (point matching) • II.1.c via Quasi Convex Optimization • II.1.d via Padé approximation and AWE
Introduction to Model Order Reduction II.1.c – Reduction using Optimization Luca Daniel Thanks to Kin C. Sou, Alexander Megretski
Overview • Optimization based reduction • Quasi-convex optimization MOR setup • Solving the MOR setup • Application examples • Conclusions
Recall Rational Transfer Function Fitting via Point Matching • Can match 2q points • Cross-multiplying generates a linear system, one equation for each sample point i = 1 to 2q
Optimization based rational fit: Model Order Reduction Setup. From a field solver or measurements, produce a small, stable, and passive reduced-order model. Least-squares method: • cast as a nonlinear least-squares problem (solved by Gauss-Newton) • does not consider stability or passivity while finding the poles (needs post-processing). Quasi-convex method: • cast as a quasi-convex program (solved by a convex optimization algorithm) • explicitly takes care of stability and passivity while finding the poles.
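A minimal sketch of the least-squares route, assuming frequency samples H(s_i) are available. The solver call and all names are illustrative (scipy's trust-region least-squares solver stands in for a hand-written Gauss-Newton iteration), and, as noted above, stability and passivity are not enforced:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_rational(s, H, q):
    """Fit samples H(s_i) with p(s)/q(s), deg p = q-1 and monic deg q = q,
    by nonlinear least squares; stability/passivity are NOT enforced (cf. the slide)."""
    def residuals(x):
        a, b = x[:q], x[q:]                                  # numerator / denominator coefficients
        num = np.polyval(a, s)                               # p(s_i), degree q-1
        den = np.polyval(np.concatenate(([1.0], b)), s)      # monic q(s_i), degree q
        r = num / den - H
        return np.concatenate((r.real, r.imag))              # split complex residuals into real ones
    x0 = np.full(2 * q, 1e-3)                                # crude initial guess
    sol = least_squares(residuals, x0, method="trf")         # Gauss-Newton-like trust-region solver
    return sol.x[:q], sol.x[q:]

# Illustrative use: samples of a known 2nd-order transfer function on the jw axis
s = 1j * np.linspace(0.1, 10.0, 50)
H = 1.0 / (s**2 + 0.5 * s + 1.0)
num_coeffs, den_coeffs = fit_rational(s, H, q=2)
```

As the slide notes, the resulting poles may be unstable or non-passive and need post-processing; the answer also depends on the initial guess.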
Change of variables • To make our program tractable, we introduce a change of frequency variables (a bilinear transform) mapping the Laplace frequency variable s to a discrete frequency variable z.
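The transform itself was an equation image and is missing here. A standard bilinear (Cayley-type) map with the required property, given purely as an illustrative assumption (the constant ω0 > 0 and the exact form used in the lecture may differ):

\[
z \;=\; \frac{\omega_0 + s}{\omega_0 - s},
\qquad
s \;=\; \omega_0\,\frac{z - 1}{z + 1},
\]

which sends the imaginary axis (physical frequencies) to the unit circle and the open left half of the s-plane to the interior of the unit disk, so continuous-time stability becomes the Schur condition on q(z) used on the next slide.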
Modified optimal H-inf norm MOR setup. Stability: q(z) is a Schur polynomial (roots inside the unit circle); passivity, and possibly other constraints. • This is the desirable MOR setup to solve • The feasible set is not convex if m ≥ 3: for example, the set of Schur polynomials of degree m ≥ 3 is not convex, so a convex combination of two feasible denominators need not be feasible • On the other hand, the problem has not been proved to be NP-complete either
Overview • Optimization based reduction • Quasi-convex optimization MOR setup • Solving the MOR setup • Application examples • Conclusions
Relaxation: General idea • The original problem is difficult • It becomes easier if some constraints are dropped (relaxed) • Solve the relaxed problem • Construct a solution to the original problem from the relaxed one, e.g. by rounding the optimal relaxed solution to the nearest feasible point • Example: LP relaxation (polynomial time) of integer programming problems (exponential time). [Figures: the feasible set and its optimal solution along direction -c; the relaxed feasible set with its optimal relaxed solution and the nearest feasible rounding.]
Relaxation of the H-inf norm MOR setup (the relaxed error expression contains an additional anti-stable term). Stability: q(z) is a Schur polynomial (roots inside the unit circle); passivity, and possibly other constraints. Benefit: the relaxation is equivalent to a quasi-convex program. Drawback: it may yield suboptimal solutions.
How bad is this relaxation? THEOREM (the slide's formulas are not reproduced here): for q(z) a Schur polynomial with deg(q) = m, the quality of the relaxation is bounded in terms of the (m+1)-st Hankel singular value of the system, which is itself a lower bound on the error achievable by any order-m reduced model.
Change of variables: the relaxed problem is rewritten in terms of trigonometric polynomials a(z), b(z), and c(z) (the defining relations are not reproduced in the extracted text). Proposition: stability corresponds to positivity of the associated trigonometric polynomial on the unit circle.
Passivity • For SISO systems, passivity means: • H(z) is analytic for |z| >= 1 • H(z)* = H(z*) • Re(H(z)) > 0 for |z| = 1 (for an impedance representation), i.e. at all physical frequencies! Conclusion: stability and passivity = positivity of trigonometric polynomials
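To make the conclusion concrete, here is the generic form such a positivity constraint takes (the notation is illustrative, not the lecture's):

\[
a(\omega) \;=\; a_0 + 2\sum_{k=1}^{m} a_k \cos(k\omega) \;\ge\; 0
\qquad \text{for all } \omega \in [0, 2\pi),
\]

an infinite family of constraints that are linear in the coefficients a_k, hence a convex constraint set; this is what makes the relaxed formulation on the next slide quasi-convex.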
Equivalent quasi-convex setup. This is a quasi-convex program: the constraint set is an intersection of half-spaces (a convex set), and each sub-level set of the objective is again an intersection of half-spaces, parameterized by the level. [Figure: a convex set and the nested sub-level sets, levels 0 through 3, of a quasi-convex function.]
Additional constraints • Can model additional constraints such as: • Bounded-real passivity (for scattering parameters) • Explicit minimization of the quality-factor error (for inductors) • Weighting of frequency responses • Point-wise matching of the transfer function (and/or its derivatives)
Overview • Optimization based reduction • Quasi-convex optimization MOR setup • Algorithm Summary • Application examples • Conclusions
Summary of the QCO algorithm Step 1: compute the optimal solution a(z), b(z), c(z) of the relaxation subject to stability, passivity, … (solved, for example, by the ellipsoid algorithm). Step 2: compute the coefficients of q(z) from a(z), using the relation between the two (see the note below) and the requirement that q(z) be a Schur polynomial. Step 3: compute the coefficients of p(z) by solving the remaining fitting problem subject to stability, passivity, … (again solved, for example, by the ellipsoid algorithm).
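The relation used in Step 2 was an equation image and is missing from the extracted text. The standard spectral-factorization relation is stated below as an assumption about what that formula contained:

\[
a\!\left(e^{j\omega}\right) \;=\; \bigl|\,q\!\left(e^{j\omega}\right)\bigr|^{2}
\qquad\text{for all } \omega,
\]

so that q(z) is recovered as the Schur (minimum-phase) spectral factor of the nonnegative trigonometric polynomial a.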
Solving quasi-convex programs: maintain a localization set (e.g. an ellipsoid) that contains the target set, with the current iterate (a, b, c, and the current objective level) at its center. At each step, query the objective oracle, the stability oracle, the passivity oracle, and so on: if an oracle fails, it generates a cut and the localization set is updated to the minimum-volume ellipsoid covering the remaining half; if all oracles pass, decrease the objective level. Terminate when the localization set is small enough. [Figure: an ellipsoid, its center, a cut, the new center, the new cut, and the minimum-volume covering ellipsoid around the target set.]
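A compact sketch of the ellipsoid iteration described above, with a toy oracle standing in for the stability, passivity, and objective oracles of the actual method; everything here (function names, the example problem, the termination rule) is illustrative:

```python
import numpy as np

def ellipsoid_method(oracle, x0, P0, max_iter=500, tol=1e-8):
    """Central-cut ellipsoid method.

    `oracle(x)` returns a cut direction g such that every better/feasible point y
    satisfies g @ (y - x) <= 0; feasibility cuts and objective cuts are treated alike.
    """
    x = np.array(x0, dtype=float)
    P = np.array(P0, dtype=float)        # localization ellipsoid {y: (y-x)^T P^{-1} (y-x) <= 1}
    n = x.size
    for _ in range(max_iter):
        g = oracle(x)                    # query the oracles at the current center
        width = np.sqrt(g @ P @ g)
        if width < tol:                  # localization set is small enough: terminate
            break
        gt = g / width
        Pg = P @ gt
        # minimum-volume ellipsoid covering the half-ellipsoid kept by the cut
        x = x - Pg / (n + 1)
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return x                             # a full implementation would track the best feasible iterate

# Toy example: minimize ||x - x_target||^2 subject to x >= 0
x_target = np.array([1.0, 2.0, 0.5])

def oracle(x):
    if np.any(x < 0.0):                  # "constraint oracle": cut away the violated half-space
        g = np.zeros_like(x)
        g[np.argmin(x)] = -1.0
        return g
    return 2.0 * (x - x_target)          # "objective oracle": gradient cut

x_opt = ellipsoid_method(oracle, x0=np.zeros(3), P0=25.0 * np.eye(3))
```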
Overview • Optimization based reduction • Quasi-convex optimization MOR setup • Algorithm summary • Application examples • Conclusions