This paper presents a novel reduction algorithm for extracting behavioral models, focusing on automatic reduction that preserves input-output properties. The proposed algorithm aims to improve efficiency and accuracy over existing methods, offering a more reliable approach for large-scale models. Key topics include the projection framework, the balanced truncation algorithm, the AISIAD algorithm, and modifications for enhanced performance. The text provides insight into reducing differential equation models and applications in various domains.
A more reliable reduction algorithm for behavioral model extraction
Dmitry Vasilyev, Jacob White
Massachusetts Institute of Technology
Outline
• Background
• Projection framework for model reduction
• Balanced Truncation algorithm and approximations
• AISIAD algorithm
• Description of the proposed algorithm
• Modified AISIAD and a low-rank square root algorithm
• Efficiency and accuracy
• Conclusions
Model reduction problem
Original model: many (> 10^4) internal states between inputs and outputs.
Reduced model: few (< 100) internal states.
• Reduction should be automatic
• Must preserve input-output properties
Differential Equation Model
E dx/dt = A x + B u,  y = C x + D u
• x - state vector; u - vector of inputs; y - vector of outputs
• A - stable, n×n (large); E - SPD, n×n
• Model can represent:
• Finite-difference spatial discretization of PDEs
• Circuits with linear elements
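As an illustration, a model of this form for a finite-difference RC line can be assembled in a few lines of numpy. The function name and the parameter values below are hypothetical, chosen only for the sketch:

```python
import numpy as np

def rc_line(n, r=1.0, c=1.0):
    """Illustrative RC-line model E dx/dt = A x + B u, y = C x.

    Hypothetical per-segment resistance r and capacitance c; A is the
    (stable) 1D finite-difference Laplacian scaled by 1/(r*c)."""
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / (r * c)
    E = np.eye(n)                                    # SPD (here simply the identity)
    B = np.zeros((n, 1)); B[0, 0] = 1.0 / (r * c)    # input drives the first node
    C = np.zeros((1, n)); C[0, 0] = 1.0              # output: first node voltage
    return A, E, B, C

A, E, B, C = rc_line(100)
```

The eigenvalues of this A are all strictly negative, so the model is stable as the slide requires.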
Model reduction problem
n - large (thousands), q - small (tens). The reduction must be automatic and must preserve input-output properties (the transfer function).
Approximation error
• Wide-band applications: the model should have a small worst-case error, i.e. the maximal difference between the original and reduced transfer functions over all frequencies ω: ‖H − H_r‖∞ = sup_ω σ_max(H(jω) − H_r(jω))
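A minimal sketch of estimating this worst-case error by sampling the frequency axis. The toy full and reduced models here are made up, and a true H-infinity norm is a supremum over all ω, not a finite grid:

```python
import numpy as np

# Hypothetical full model (2 states) and reduced model (1 state), both SISO.
A  = np.array([[-1.0, 0.0], [0.0, -10.0]]); B  = np.array([[1.0], [1.0]])
C  = np.array([[1.0, 0.1]])
Ar = np.array([[-1.0]]); Br = np.array([[1.0]]); Cr = np.array([[1.0]])

def tf(A, B, C, w):
    """Transfer function H(jw) = C (jw I - A)^{-1} B of a SISO model."""
    return (C @ np.linalg.solve(1j * w * np.eye(A.shape[0]) - A, B))[0, 0]

freqs = np.logspace(-3, 3, 400)   # sampled frequency grid
worst_err = max(abs(tf(A, B, C, w) - tf(Ar, Br, Cr, w)) for w in freqs)
```

Here the dropped mode contributes 0.1/(jω + 10) to the transfer function, so the sampled worst-case error is about 0.01, attained at the low-frequency end of the grid.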
Projection framework for model reduction
• Pick biorthogonal projection matrices W and V (WᵀV = I)
• The columns of V and W form the projection bases; the length-n state is approximated as x ≈ V x_r, where x_r has length q
• The dynamics are projected: A x becomes WᵀAV x_r
• Most reduction methods are based on projection
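The projection step itself is mechanical. A sketch with random placeholder bases (a real method chooses V and W carefully, which is the subject of the rest of the deck):

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 200, 10
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # placeholder large model
B = rng.standard_normal((n, 2)); C = rng.standard_normal((3, n))

V, _  = np.linalg.qr(rng.standard_normal((n, q)))    # placeholder projection basis
W0, _ = np.linalg.qr(rng.standard_normal((n, q)))
W = W0 @ np.linalg.inv(W0.T @ V).T                   # enforce biorthogonality W^T V = I

Ar, Br, Cr = W.T @ A @ V, W.T @ B, C @ V             # q x q reduced model
```

The rescaling of W0 makes WᵀV exactly the identity, which is what "biorthogonal" means on the slide.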
Projection should preserve important modes
For an LTI system mapping input u(t) to output y(t), consider the state space X:
• P (controllability): which modes are easier to reach?
• Q (observability): which modes produce more output?
• The reduced model retains the most controllable and most observable modes
• A mode must be both very controllable and very observable
Balanced truncation reduction (TBR)
Compute the controllability and observability gramians P and Q (~n³):
A P + P Aᵀ + B Bᵀ = 0
Aᵀ Q + Q A + Cᵀ C = 0
The reduced model keeps the dominant eigenspaces of PQ (~n³):
PQ v_i = λ_i v_i,  w_iᵀ PQ = λ_i w_iᵀ
Reduced system: (WᵀAV, WᵀB, CV, D)
Very expensive: P and Q are dense even for sparse models.
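A runnable sketch of these two steps on a tiny made-up system. The Kronecker-vectorization Lyapunov solve is for illustration only; it costs O(n^6), whereas production TBR codes use dedicated O(n^3) solvers:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 by Kronecker vectorization (demo only)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)

# Tiny hypothetical system
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

P = lyap(A, B @ B.T)      # controllability gramian:  A P + P A^T + B B^T = 0
Q = lyap(A.T, C.T @ C)    # observability gramian:    A^T Q + Q A + C^T C = 0
lam, V = np.linalg.eig(P @ Q)   # dominant eigenspace of PQ drives the projection
```

The eigenvalues of PQ are the squared Hankel singular values, so they are real and nonnegative; truncation keeps the eigenvectors belonging to the largest of them.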
Most reduction algorithms effectively approximate the dominant eigenspaces of P and Q separately:
• Arnoldi [Grimme '97]: colsp(V) = {A⁻¹B, A⁻²B, …}, W = V; approximates P_dom only
• Padé via Lanczos [Feldmann and Freund '95]: colsp(V) = {A⁻¹B, A⁻²B, …} approximates P_dom; colsp(W) = {A⁻ᵀCᵀ, (A⁻ᵀ)²Cᵀ, …} approximates Q_dom
• Frequency-domain POD [Willcox '02], Poor Man's TBR [Phillips '04]: colsp(V) = {(jω₁I−A)⁻¹B, (jω₂I−A)⁻¹B, …} approximates P_dom; colsp(W) = {(jω₁I−A)⁻ᵀCᵀ, (jω₂I−A)⁻ᵀCᵀ, …} approximates Q_dom
However, what matters is the product PQ.
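For instance, the frequency-sampling bases above (Poor Man's TBR style) can be sketched as follows; the sample points are arbitrary assumptions, and real PMTBR also weights the samples by quadrature:

```python
import numpy as np

def pmtbr_basis(A, B, freqs):
    """Orthonormal basis for span{ (jw I - A)^{-1} B : w in freqs }.
    Sketch of the frequency-sampling idea; sample points are assumptions."""
    n = A.shape[0]
    cols = np.hstack([np.linalg.solve(1j * w * np.eye(n) - A, B) for w in freqs])
    X = np.hstack([cols.real, cols.imag])   # keep the projection basis real
    Qb, _ = np.linalg.qr(X)
    return Qb

A = np.array([[-1.0, 0.2], [0.0, -3.0]])
B = np.array([[1.0], [0.5]])
V = pmtbr_basis(A, B, [0.1, 1.0, 10.0])
```

The analogous basis for Q_dom would use (jωI − A)⁻ᵀCᵀ in place of (jωI − A)⁻¹B.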
RC line (symmetric circuit)
V(t) - input, i(t) - output
• Symmetric: P = Q, so all controllable states are observable and vice versa
RLC line (nonsymmetric circuit)
Vector of states: capacitor voltages and inductor currents.
• P and Q are no longer equal!
• By keeping only the most controllable and/or the most observable states, we may not find the dominant eigenvectors of PQ
Lightly damped RLC circuit
R = 0.008, L = 10^-5, C = 10^-6, N = 100
• Exact low-rank approximations of P and Q of order < 50 lead to PQ ≈ 0!
Lightly damped RLC circuit
Comparing the top 5 eigenvectors of Q with the top 5 eigenvectors of P: the union of the eigenspaces of P and Q does not necessarily approximate the dominant eigenspace of PQ.
AISIAD model reduction algorithm
Idea of the AISIAD approximation: approximate the eigenvectors using power iterations:
X_i = (PQ) V_i  =>  V_{i+1} = qr(X_i)   ("iterate")
V_i converges to the dominant eigenvectors of PQ.
Need to find the product (PQ)V_i. How?
Approximation of the product
The AISIAD algorithm splits V_{i+1} = qr(PQV_i) into two half-steps:
W_i ≈ qr(Q V_i)
V_{i+1} ≈ qr(P W_i)
Each product is approximated using the solution of a Sylvester equation.
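The alternating structure can be written as a short skeleton in which the products with P and Q are supplied as callbacks. In AISIAD those callbacks are approximate Sylvester solves; the toy check below passes exact diagonal gramians instead, just to see the iteration converge:

```python
import numpy as np

def aisiad_iterate(apply_P, apply_Q, V0, iters=20):
    """Approximate power iteration for the dominant eigenspace of P Q.
    apply_P / apply_Q return (approximations of) P @ W and Q @ V."""
    V = V0
    for _ in range(iters):
        W, _ = np.linalg.qr(apply_Q(V))   # W_i     ~ qr(Q V_i)
        V, _ = np.linalg.qr(apply_P(W))   # V_{i+1} ~ qr(P W_i)
    return V, W

# Toy check with exact diagonal gramians: PQ = diag(12, 2), so the
# iteration should converge to the first coordinate axis.
P = np.diag([4.0, 1.0]); Q = np.diag([3.0, 2.0])
V0 = np.ones((2, 1)) / np.sqrt(2.0)
V, W = aisiad_iterate(lambda Wm: P @ Wm, lambda Vm: Q @ Vm, V0)
```

Each full pass multiplies the iterate by PQ up to orthonormalization, so the subdominant component decays by the eigenvalue ratio per iteration.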
More detailed view of the AISIAD approximation
Right-multiplying the Lyapunov equation A P + P Aᵀ + B Bᵀ = 0 by W_i (original AISIAD) yields a Sylvester equation for X ≈ P W_i, with a small q×q coefficient H and an n×q right-hand side M.
Modified AISIAD approximation
Right-multiplying Aᵀ Q + Q A + Cᵀ C = 0 by V_i gives the dual equation for Y ≈ Q V_i. In the modified algorithm, the exact gramians appearing inside the right-hand sides are replaced by low-rank approximations P̂ and Q̂.
We can take advantage of numerous methods which approximate P and Q!
Specialized Sylvester equation
A X + X H + M = 0, where A is n×n, X and M are n×q, and H is q×q.
Only the column span of X is needed.
Solving the Sylvester equation
Take a Schur decomposition of the small matrix H: H = U T Uᴴ with T upper triangular. Substituting X̃ = X U and M̃ = M U gives A X̃ + X̃ T + M̃ = 0, which is solved for the columns of X̃ one at a time; each column requires one shifted linear solve with A.
• Applicable to any stable A
• Requires solving q linear systems
• The solution can be accelerated via fast matrix-vector products
• Another method exists, based on IRA; it needs A > 0 [Zhou '02]
• For SISO systems and P̂ = 0, this is equivalent to moment matching at the frequency points −Λ(WᵀAW)
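A runnable sketch of the column-by-column idea, using an eigendecomposition of the small H in place of the Schur form. This assumes H is diagonalizable; the Schur variant avoids that assumption and is what the slides describe:

```python
import numpy as np

def sylvester_columns(A, H, M):
    """Solve A X + X H + M = 0 (A: n x n, H: q x q, M: n x q).
    Diagonalize H = S diag(lam) S^{-1}; with Xt = X S, Mt = M S each
    column decouples into one shifted solve (A + lam_k I) xt_k = -mt_k."""
    lam, S = np.linalg.eig(H)
    Mt = M @ S
    n = A.shape[0]
    Xt = np.column_stack([np.linalg.solve(A + lam[k] * np.eye(n), -Mt[:, k])
                          for k in range(len(lam))])
    return (Xt @ np.linalg.inv(S)).real   # the solution is real for real data

rng = np.random.default_rng(1)
n, q = 30, 4
A = -np.diag(rng.uniform(1.0, 2.0, n))   # stable placeholder A
H = 0.1 * rng.standard_normal((q, q))    # small q x q coefficient
M = rng.standard_normal((n, q))
X = sylvester_columns(A, H, M)
```

Since A is only ever used inside shifted solves, a sparse factorization or a fast matrix-vector-product solver can be substituted, which is where the acceleration mentioned above comes from.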
Modified AISIAD algorithm
1. Obtain low-rank approximations P̂ and Q̂ of the gramians P and Q
2. Solve A X_i + X_i H + M = 0  =>  X_i ≈ P W_i, where H = W_iᵀ Aᵀ W_i, M = P̂(I − W_i W_iᵀ)Aᵀ W_i + B Bᵀ W_i
3. Perform a QR decomposition X_i = V_i R
4. Solve Aᵀ Y_i + Y_i F + N = 0  =>  Y_i ≈ Q V_i, where F = V_iᵀ A V_i, N = Q̂(I − V_i V_iᵀ)A V_i + Cᵀ C V_i
5. Perform a QR decomposition Y_i = W_{i+1} R to get the new iterate
6. Go to step 2 and iterate
7. Bi-orthogonalize W and V and construct the reduced model (WᵀAV, WᵀB, CV, D)
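One pass of steps 2-5 can be sketched as follows. The dense Kronecker-based Sylvester helper and the random placeholder gramian approximations are assumptions for the demo, not the paper's implementation (which would use the column-by-column Schur solver and genuine low-rank gramians):

```python
import numpy as np

def solve_sylv(A, H, M):
    """Dense demo helper: solve A X + X H + M = 0 by Kronecker vectorization."""
    n, q = M.shape
    K = np.kron(np.eye(q), A) + np.kron(H.T, np.eye(n))
    return np.linalg.solve(K, -M.flatten(order="F")).reshape((n, q), order="F")

def maisiad_step(A, B, C, Phat, Qhat, W):
    """One modified-AISIAD iteration (steps 2-5): returns new V and W."""
    n = A.shape[0]
    H = W.T @ A.T @ W
    M = Phat @ (np.eye(n) - W @ W.T) @ A.T @ W + B @ (B.T @ W)
    V, _ = np.linalg.qr(solve_sylv(A, H, M))      # X_i ~ P W_i
    F = V.T @ A @ V
    N = Qhat @ (np.eye(n) - V @ V.T) @ A @ V + C.T @ (C @ V)
    Wn, _ = np.linalg.qr(solve_sylv(A.T, F, N))   # Y_i ~ Q V_i
    return V, Wn

rng = np.random.default_rng(2)
n, q = 8, 2
A = -np.diag(rng.uniform(1.0, 2.0, n))               # stable placeholder model
B = rng.standard_normal((n, 2)); C = rng.standard_normal((2, n))
Gp = rng.standard_normal((n, n)); Phat = Gp @ Gp.T   # placeholder SPD "gramians"
Gq = rng.standard_normal((n, n)); Qhat = Gq @ Gq.T
W0, _ = np.linalg.qr(rng.standard_normal((n, q)))
V1, W1 = maisiad_step(A, B, C, Phat, Qhat, W0)
```

In use, this step would be repeated until the subspaces settle, then V and W bi-orthogonalized to build (WᵀAV, WᵀB, CV, D).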
For systems in the descriptor form
Generalized Lyapunov equations:
A P Eᵀ + E P Aᵀ + B Bᵀ = 0
Aᵀ Q E + Eᵀ Q A + Cᵀ C = 0
These lead to similar approximate power iterations.
mAISIAD and low-rank square root
Both methods start from low-rank gramian approximations (cost varies); the low-rank square root adds an inexpensive step, while the mAISIAD iteration is more expensive.
For the majority of non-symmetric cases, mAISIAD works better than the low-rank square root.
RLC line example results
H-infinity norm of the reduction error (worst-case discrepancy over all frequencies); N = 1000, 1 input, 2 outputs.
Steel rail cooling profile benchmark
Taken from the Oberwolfach benchmark collection; N = 1357, 7 inputs, 6 outputs.
mAISIAD is useless for symmetric models
For symmetric systems (A = Aᵀ, B = Cᵀ), P = Q; therefore mAISIAD is equivalent to LR-SQRT for P̂, Q̂ of order q.
RC line example.
Cost of the algorithm
• The cost of the algorithm is directly proportional to the cost of solving a shifted linear system: (A + s_jj I)x = b in the non-descriptor case, (A + s_jj E)x = b in the descriptor case, where s_jj is a complex number
• The cost does not depend on the number of inputs and outputs
Conclusions
• The algorithm has superior accuracy and extended applicability compared with the original AISIAD method
• A very promising low-cost approximation to TBR
• Applicable to any stable dynamical system; it will work (though usually worse) even without low-rank gramians
• Passivity and stability preservation are possible via post-processing
• Not beneficial if the model is symmetric