
CSE 245: Computer Aided Circuit Simulation and Verification

This outline provides an introduction to direct and iterative methods for matrix computations in computer-aided circuit simulation and verification, including formulations, projection methods, Krylov space methods, preconditioned iterations, multigrid methods, and domain decomposition methods.



Presentation Transcript


  1. CSE 245: Computer Aided Circuit Simulation and Verification Matrix Computations: Iterative Methods (II) Chung-Kuan Cheng

  2. Outline • Introduction • Direct Methods • Iterative Methods • Formulations • Projection Methods • Krylov Space Methods • Preconditioned Iterations • Multigrid Methods • Domain Decomposition Methods

  3. Introduction • Direct methods (LU decomposition): general and robust, but can be complicated if N >= 1M • Iterative methods (Jacobi, Gauss-Seidel, Conjugate Gradient, GMRES, Preconditioning, Multigrid, Domain Decomposition): excellent choice for SPD matrices; remain an art for arbitrary matrices

  4. Formulation • Error in A norm: matrix A is SPD (Symmetric and Positive Definite) • Minimal residual: matrix A can be arbitrary

  5. Formulation: Error in A norm Min (x - x*)^T A (x - x*), where Ax* = b and A is SPD. Equivalently, min E(x) = 1/2 x^T A x - b^T x. Search space: x = x0 + Vy, where x0 is an initial solution, the n x m matrix V holds m basis vectors of the subspace K, and the m-vector y contains the m variables.

  6. Solution: Error in A norm Min E(x) = 1/2 x^T A x - b^T x, search space: x = x0 + Vy, r = b - Ax • V^T A V is nonsingular if A is SPD and V is full rank • y = (V^T A V)^-1 V^T r0 • x = x0 + V (V^T A V)^-1 V^T r0 • V^T r = 0 • E(x) = E(x0) - 1/2 r0^T V (V^T A V)^-1 V^T r0

  7. Solution: Error in A norm For any V' = VW, where V is n x m and W is a nonsingular m x m matrix, the solution x remains the same. Search space: x = x0 + V'y, r = b - Ax • V'^T A V' is nonsingular if A is SPD and V is full rank • x = x0 + V' (V'^T A V')^-1 V'^T r0 = x0 + V (V^T A V)^-1 V^T r0 • V'^T r = W^T V^T r = 0 • E(x) = E(x0) - 1/2 r0^T V (V^T A V)^-1 V^T r0

  8. Steepest Descent: Error in A norm Min E(x) = 1/2 x^T A x - b^T x, gradient: Ax - b = -r. Set x = x0 + y r0 • r0^T A r0 is nonsingular if A is SPD • y = (r0^T A r0)^-1 r0^T r0 • x = x0 + r0 (r0^T A r0)^-1 r0^T r0 • r0^T r = 0 • E(x) = E(x0) - 1/2 (r0^T r0)^2 / (r0^T A r0)
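The one-step update on this slide extends directly to an iteration: at each step move along the current residual with the exact line-search step y = r^T r / (r^T A r). A minimal NumPy sketch (the 2x2 SPD system is an assumed example, not from the slides):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    """Minimize E(x) = 1/2 x^T A x - b^T x for SPD A by stepping along r = b - Ax."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x                  # residual = negative gradient of E
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        y = rr / (r @ (A @ r))         # exact line search: y = r^T r / r^T A r
        x = x + y * r
    return x

# small SPD example (assumed for illustration)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
```

Each step makes the new residual orthogonal to the old one (r0^T r = 0), which is exactly why the method can zig-zag, motivating the conjugate directions introduced later in the deck.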

  9. Lanczos: Error in A norm Min E(x) = 1/2 x^T A x - b^T x, set x = x0 + Vy • v1 = r0 • v_i is in K{r0, A, i} • V = [v1, v2, ..., vm] is orthogonal • A V = V H_m + v_{m+1} e_m^T • V^T A V = H_m Note: since A is SPD, H_m is the tridiagonal matrix T_m

  10. Conjugate Gradient: Error in A norm Min E(x) = 1/2 x^T A x - b^T x, set x = x0 + Vy • v1 = r0 • v_i is in K{r0, A, i} • V = [v1, v2, ..., vm] is orthogonal in the A norm, i.e. V^T A V = diag(v_i^T A v_i) • y_i = (v_i^T A v_i)^-1 v_i^T r0

  11. Formulation: Residual Min |r|^2 = |b - Ax|^2, for an arbitrary square matrix A. Min R(x) = (b - Ax)^T (b - Ax). Search space: x = x0 + Vy, where x0 is an initial solution, the n x m matrix V holds m basis vectors of the subspace K, and the m-vector y contains the m variables.

  12. Solution: Residual Min R(x) = (b - Ax)^T (b - Ax), search space: x = x0 + Vy • V^T A^T A V is nonsingular if A is nonsingular and V is full rank • y = (V^T A^T A V)^-1 V^T A^T r0 • x = x0 + V (V^T A^T A V)^-1 V^T A^T r0 • V^T A^T r = 0 • R(x) = R(x0) - r0^T A V (V^T A^T A V)^-1 V^T A^T r0

  13. Steepest Descent: Residual Min R(x) = (b - Ax)^T (b - Ax), gradient: -2 A^T (b - Ax) = -2 A^T r. Let x = x0 + y A^T r0, i.e. V = A^T r0 • V^T A^T A V is nonsingular if A is nonsingular • y = (V^T A^T A V)^-1 V^T A^T r0 • x = x0 + V (V^T A^T A V)^-1 V^T A^T r0 • V^T A^T r = 0 • R(x) = R(x0) - r0^T A V (V^T A^T A V)^-1 V^T A^T r0
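Iterating this residual-minimizing step gives steepest descent on R(x): the search direction is v = A^T r, and the one-dimensional formula above reduces to y = ||v||^2 / ||Av||^2. A hedged NumPy sketch (the 2x2 nonsymmetric system is an assumed example):

```python
import numpy as np

def residual_descent(A, b, x0, tol=1e-12, max_iter=5000):
    """Minimize R(x) = ||b - Ax||^2 by stepping along v = A^T r.
    Works for an arbitrary nonsingular A (no SPD assumption)."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = b - A @ x
        v = A.T @ r                    # steepest-descent direction for R(x)
        if np.linalg.norm(v) < tol:
            break
        Av = A @ v
        y = (v @ v) / (Av @ Av)        # minimizes ||r - y A v||^2 over y
        x = x + y * v
    return x

# nonsymmetric example (assumed for illustration)
A = np.array([[3.0, 1.0], [-1.0, 2.0]])
b = np.array([2.0, 1.0])
x = residual_descent(A, b, np.zeros(2))
```

Note this is equivalent to steepest descent on the normal equations A^T A x = A^T b, so its convergence depends on κ(A^T A) = κ(A)^2, which is why the Krylov methods on the following slides are preferred in practice.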

  14. GMRES: Residual Min R(x) = (b - Ax)^T (b - Ax), set x = x0 + Vy • v1 = r0 • v_i is in K{r0, A, i} • V = [v1, v2, ..., vm] is orthogonal • A V = V H_m + v_{m+1} e_m^T = V_{m+1} H̄_m • x = x0 + V (V^T A^T A V)^-1 V^T A^T r0 = x0 + V (H̄_m^T H̄_m)^-1 H̄_m^T e1 |r0|_2

  15. Conjugate Residual: Residual Min R(x) = (b - Ax)^T (b - Ax), set x = x0 + Vy • v1 = r0 • v_i is in K{r0, A, i} • (AV)^T AV = D, a diagonal matrix • x = x0 + V (V^T A^T A V)^-1 V^T A^T r0 = x0 + V D^-1 V^T A^T r0

  16. Conjugate Gradient Method • Steepest descent • Repeated search directions • Why take exactly one step for each direction? [Figure: search directions of the steepest descent method]

  17. Orthogonal Direction Pick orthogonal search directions: we would like to leave the error of x_{i+1} orthogonal to the search direction d_i, i.e. d_i^T e_{i+1} = 0 • But we don't know the error e_{i+1}!!!

  18. Orthogonal -> A-orthogonal • Instead of orthogonal search directions, we make the search directions A-orthogonal (conjugate): d_i^T A d_j = 0 for i != j

  19. Search Step Size Minimizing E(x_i + α d_i) over α gives the step size α_i = (d_i^T r_i) / (d_i^T A d_i)

  20. Iteration finishes in n steps Expand the initial error in the A-orthogonal search directions: e_0 = Σ_j δ_j d_j. The error component along direction d_j is eliminated at step j; after n steps, all components are eliminated.

  21. Conjugate Search Direction • How to construct A-orthogonal search directions, given a set of n linearly independent vectors? • Since the residual vectors of the steepest descent method are mutually orthogonal, they are a good candidate set to start with

  22. Construct Search Direction -1 • In the steepest descent method, the new residual is a linear combination of the previous residual and A d_i: r_{i+1} = r_i - α_i A d_i • Hence the residuals span a Krylov subspace K_i = span{r_0, A r_0, A^2 r_0, ..., A^{i-1} r_0}: repeatedly applying a matrix to a vector

  23. Construct Search Direction -2 Apply Gram-Schmidt conjugation to the residuals: let d_0 = r_0; for i > 0, d_i = r_i + Σ_{k<i} β_{ik} d_k, where β_{ik} = -(r_i^T A d_k) / (d_k^T A d_k) enforces d_i^T A d_k = 0

  24. Construct Search Direction -3 • We can get the next direction from the previous one alone, without saving them all: let β_i = (r_i^T r_i) / (r_{i-1}^T r_{i-1}); then d_i = r_i + β_i d_{i-1}

  25. Conjugate Gradient Algorithm Given x0, iterate until the residual is smaller than the error tolerance
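The pieces from slides 19-24 (step size α, residual update, short recurrence β, direction update) assemble into the standard CG loop. A compact NumPy sketch (the 8-point 1-D Laplacian test system is an assumed example):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """CG for SPD A: A-orthogonal directions built from residuals."""
    x = x0.astype(float).copy()
    r = b - A @ x
    d = r.copy()                       # d_0 = r_0
    rr = r @ r
    for _ in range(len(b)):            # at most n steps in exact arithmetic
        Ad = A @ d
        alpha = rr / (d @ Ad)          # step size alpha_i = r^T r / d^T A d
        x += alpha * d
        r -= alpha * Ad                # r_{i+1} = r_i - alpha_i A d_i
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        beta = rr_new / rr             # short recurrence for the next direction
        d = r + beta * d
        rr = rr_new
    return x

# SPD test problem: unscaled 1-D Laplacian (assumed for illustration)
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b, np.zeros(n))
```

Only one matrix-vector product (A @ d) is needed per iteration, and no previous directions are stored, which is the payoff of the short recurrence.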

  26. Conjugate gradient: Convergence • In exact arithmetic, CG converges in n steps (completely unrealistic!!) • Accuracy after k steps of CG is related to: • consider polynomials of degree k that are equal to 1 at step 0 • how small can such a polynomial be at all the eigenvalues of A? • Eigenvalues close together are good. • Condition number: κ(A) = ||A||_2 ||A^-1||_2 = λmax(A) / λmin(A) • Residual is reduced by a constant factor in O(κ^1/2(A)) iterations of CG.

  27. Other Krylov subspace methods • Nonsymmetric linear systems: • GMRES: for i = 1, 2, 3, ... find x_i ∈ K_i(A, b) such that r_i = (A x_i - b) ⊥ K_i(A, b). But no short recurrence => save old vectors => lots more space. (Usually "restarted" every k iterations to use less space.) • BiCGStab, QMR, etc.: two spaces K_i(A, b) and K_i(A^T, b) with mutually orthogonal bases. Short recurrences => O(n) space, but less robust • Convergence and preconditioning more delicate than CG • Active area of current research • Eigenvalues: Lanczos (symmetric), Arnoldi (nonsymmetric)
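Slide 14's GMRES formulas become concrete once the basis V and Hessenberg matrix H̄_m are built by the Arnoldi process. A dense, unrestarted NumPy sketch (the 4x4 nonsymmetric system is an assumed example; production codes use Givens rotations instead of a full least-squares solve):

```python
import numpy as np

def gmres(A, b, x0, m):
    """GMRES(m) via Arnoldi: build an orthonormal basis V of K_m(r0, A),
    then minimize ||b - Ax||_2 over x = x0 + V y."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))               # the (m+1) x m Hessenberg matrix
    V[:, 0] = r0 / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:            # guard against "happy breakdown"
            V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # min_y ||beta*e1 - H y||_2
    return x0 + V[:, :m] @ y

# small nonsymmetric example (assumed); with m = n the solve is exact
A = np.array([[2.0, 1, 0, 0], [0, 2, 1, 0], [0, 0, 2, 1], [1, 0, 0, 2]])
b = np.array([1.0, 2.0, 3.0, 4.0])
x = gmres(A, b, np.zeros(4), m=4)
```

The growing storage for V (one n-vector per iteration) is exactly the "no short recurrence" cost the slide mentions; restarted GMRES throws V away every m steps to cap it.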

  28. Preconditioners • Suppose you had a matrix B such that: (1) the condition number κ(B^-1 A) is small, and (2) By = z is easy to solve • Then you could solve (B^-1 A) x = B^-1 b instead of Ax = b • B = A is great for (1), not for (2) • B = I is great for (2), not for (1) • Domain-specific approximations sometimes work • B = diagonal of A sometimes works • Better: blend in some direct-methods ideas...

  29. Preconditioned conjugate gradient iteration • One matrix-vector multiplication per iteration • One solve with the preconditioner per iteration x0 = 0, r0 = b, d0 = B^-1 r0, y0 = B^-1 r0 for k = 1, 2, 3, ... α_k = (y_{k-1}^T r_{k-1}) / (d_{k-1}^T A d_{k-1}) step length x_k = x_{k-1} + α_k d_{k-1} approx solution r_k = r_{k-1} - α_k A d_{k-1} residual y_k = B^-1 r_k preconditioning solve β_k = (y_k^T r_k) / (y_{k-1}^T r_{k-1}) improvement d_k = y_k + β_k d_{k-1} search direction
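The loop on this slide translates line by line into code; the preconditioner only ever appears as a solve By = r. A NumPy sketch using B = diag(A) (Jacobi preconditioning) on an assumed SPD test matrix:

```python
import numpy as np

def pcg(A, b, B_solve, tol=1e-10, max_iter=100):
    """Preconditioned CG following the slide's loop; B_solve(z) applies B^-1."""
    x = np.zeros_like(b, dtype=float)      # x0 = 0
    r = b.astype(float).copy()             # r0 = b
    y = B_solve(r)                         # y0 = B^-1 r0
    d = y.copy()                           # d0 = B^-1 r0
    for _ in range(max_iter):
        Ad = A @ d
        alpha = (y @ r) / (d @ Ad)         # step length
        x += alpha * d                     # approx solution
        r_new = r - alpha * Ad             # residual
        if np.linalg.norm(r_new) < tol:
            break
        y_new = B_solve(r_new)             # preconditioning solve
        beta = (y_new @ r_new) / (y @ r)   # improvement
        d = y_new + beta * d               # search direction
        r, y = r_new, y_new
    return x

# SPD example with Jacobi preconditioner B = diag(A) (assumed for illustration)
n = 10
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, lambda z: z / np.diag(A))
```

Passing the preconditioner as a solve routine rather than a matrix is the usual design: an incomplete factorization or a multigrid cycle can be dropped in without changing the loop.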

  30. Outline • Iterative Methods • Stationary Iterative Methods (SOR, Gauss-Seidel, Jacobi) • Krylov Methods (CG, GMRES) • Multigrid Method

  31. What is multigrid? • A multilevel iterative method to solve Ax = b • Originated in PDEs on geometric grids • Extending the multigrid idea to unstructured problems gives Algebraic MG • We use geometric multigrid to present the basic ideas of the multigrid method.

  32. The model problem Ax = b [Figure: resistor grid with nodes v1-v8 and source vs]

  33. Simple iterative method • x(0) -> x(1) -> ... -> x(k) • Jacobi iteration • Matrix form: x(k) = R_J x(k-1) + C_J • General form: x(k) = R x(k-1) + C (1) • Stationary: x* = R x* + C (2)
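For Jacobi, splitting A = D + (A - D) with D = diag(A) gives R_J = I - D^-1 A and C_J = D^-1 b. A NumPy sketch of the matrix form (the 3x3 diagonally dominant system is an assumed example):

```python
import numpy as np

def jacobi(A, b, x0, num_iter):
    """Jacobi in matrix form: x(k) = R_J x(k-1) + C_J, where
    R_J = I - D^-1 A and C_J = D^-1 b, with D = diag(A)."""
    Dinv = np.diag(1.0 / np.diag(A))
    R = np.eye(len(b)) - Dinv @ A      # iteration matrix R_J
    C = Dinv @ b                       # constant vector C_J
    x = x0.astype(float).copy()
    for _ in range(num_iter):
        x = R @ x + C                  # one sweep of (1)
    return x

# diagonally dominant example (assumed), so rho(R_J) < 1 and the sweep converges
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b, np.zeros(3), 200)
```

In practice one never forms R explicitly; the elementwise update x_i <- (b_i - Σ_{j!=i} a_ij x_j) / a_ii is equivalent, but the matrix form makes the convergence analysis of the next slide transparent.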

  34. Error and Convergence Definitions: error e = x* - x (3), residual r = b - Ax (4) • e, r relation: Ae = r (5) ((3)+(4)) • e(1) = x* - x(1) = R x* + C - R x(0) - C = R e(0) • Error equation: e(k) = R^k e(0) (6) ((1)+(2)+(3)) • Convergence: e(k) -> 0 for any e(0) iff the spectral radius ρ(R) < 1

  35. Error of different frequency • Wavenumber k and frequency θ = kπ/n • High-frequency error is more oscillatory between points [Figure: error modes for k = 1, 2, 4]

  36. Iterations reduce low-frequency error inefficiently • Smoothing iterations reduce the high-frequency error efficiently, but not the low-frequency error [Figure: error vs. iterations for k = 1, 2, 4]
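This frequency-dependent damping is easy to observe numerically: apply a smoother to a pure error mode sin(jkπ/n) of the 1-D model problem and compare how much of it survives. A sketch using weighted Jacobi with ω = 2/3 on an assumed n = 32 grid:

```python
import numpy as np

n = 32
A = 2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)  # 1-D model problem
j = np.arange(1, n)

def damping(k, sweeps=10, w=2.0 / 3.0):
    """Apply weighted-Jacobi smoothing to the pure error mode sin(j*k*pi/n)
    (b = 0, so the iterate itself is the error); return final/initial norm."""
    e = np.sin(j * k * np.pi / n)
    R = np.eye(n - 1) - w * np.diag(1.0 / np.diag(A)) @ A         # iteration matrix
    e0 = np.linalg.norm(e)
    for _ in range(sweeps):
        e = R @ e                                                 # error obeys e <- R e
    return np.linalg.norm(e) / e0

low = damping(k=1)     # smooth (low-frequency) error: barely reduced
high = damping(k=16)   # oscillatory (high-frequency) error: damped fast
```

For this smoother the mode k is multiplied by 1 - 2ω sin²(kπ/2n) per sweep, so k = 16 shrinks by a factor 1/3 each sweep while k = 1 stays near 1, which is precisely the gap multigrid exploits.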

  37. Multigrid – a first glance • Two levels: coarse and fine grid • Fine grid (spacing h): A^h x^h = b^h • Coarse grid (spacing 2h): A^2h x^2h = b^2h [Figure: fine-grid nodes 1-8 and coarse-grid nodes 1-4]

  38. Idea 1: the V-cycle iteration • Also called the nested iteration • Start on the coarse grid 2h: iterate on A^2h x^2h = b^2h • Prolongation: carry the result up to the fine grid h • Iterate on A^h x^h = b^h to get x^h • Restriction: carry information back down to the coarse grid Question 1: Why do we need the coarse grid?

  39. Prolongation • Prolongation (interpolation) operator I_2h^h: x^h = I_2h^h x^2h [Figure: interpolation from coarse nodes 1-4 to fine nodes 1-8]

  40. Restriction • Restriction operator I_h^2h: x^2h = I_h^2h x^h [Figure: restriction from fine nodes 1-8 to coarse nodes 1-4]
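In 1-D the two operators can be written as small dense matrices: linear interpolation for I_2h^h and full weighting for I_h^2h, related by R = 1/2 P^T. A sketch (the grid sizes are assumed for illustration):

```python
import numpy as np

def prolongation(nc):
    """Linear-interpolation operator I_2h^h: nc coarse interior points
    to 2*nc + 1 fine interior points."""
    P = np.zeros((2 * nc + 1, nc))
    for i in range(nc):
        P[2 * i, i] = 0.5        # fine point to the left of coarse point i
        P[2 * i + 1, i] = 1.0    # fine point coinciding with coarse point i
        P[2 * i + 2, i] = 0.5    # fine point to the right of coarse point i
    return P

def restriction(nc):
    """Full-weighting operator I_h^2h; the usual choice is 1/2 * (I_2h^h)^T."""
    return 0.5 * prolongation(nc).T

P = prolongation(3)              # maps 3 coarse points to 7 fine points
R = restriction(3)
```

Full weighting averages each coarse value from its fine neighbor stencil (1/4, 1/2, 1/4), so a constant fine-grid vector restricts to a constant coarse-grid vector.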

  41. Smoothing • The basic iterations at each level: in level ph, x_ph^old -> x_ph^new • The iteration reduces the error and makes the error geometrically smooth, so the iteration is called smoothing.

  42. Why multilevel? • Coarse level iteration is cheap. • More than this... • Coarse level smoothing reduces the error more efficiently than the fine level in some way. • Why? (Question 2)

  43. Error restriction • Mapping the error to the coarse grid makes the error more oscillatory: a mode with k = 4, θ = π/2 on the fine grid becomes k = 4, θ = π on the coarse grid

  44. Idea 2: Residual correction • Given the current solution x, solving Ax = b is equivalent to solving the residual equation Ae = r • MG does NOT map x directly between levels; it maps the residual equation to the coarse level: • Calculate r^h • b^2h = I_h^2h r^h (Restriction) • e^h = I_2h^h x^2h (Prolongation) • x^h = x^h + e^h

  45. Why residual correction? • The error is smooth at the fine level, but the actual solution may not be. • Prolongation then produces a smooth correction at the fine level, which is supposed to be a good estimate of the fine-level error. • If the solution is not smooth at the fine level, prolongating it directly would introduce more high-frequency error.

  46. Revised V-cycle with idea 2 • Smoothing on x^h • Calculate r^h • b^2h = I_h^2h r^h (Restriction) • Smoothing on x^2h • e^h = I_2h^h x^2h (Prolongation) • Correct: x^h = x^h + e^h
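The six bullets above can be sketched as a two-level cycle in NumPy. This is a hedged illustration, not the slides' code: problem size, weighted-Jacobi smoother, and an exact coarse solve (standing in for the coarse-level smoothing) are all assumptions.

```python
import numpy as np

def laplacian(m):
    """Unscaled 1-D model problem: tridiag(-1, 2, -1) with m interior points."""
    return 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)

def interp(nc):
    """Linear interpolation I_2h^h from nc coarse points to 2*nc + 1 fine points."""
    P = np.zeros((2 * nc + 1, nc))
    for i in range(nc):
        P[2 * i, i] = P[2 * i + 2, i] = 0.5
        P[2 * i + 1, i] = 1.0
    return P

def two_grid(A, b, x, P, R, A2h, sweeps=3, w=2.0 / 3.0):
    """One two-level cycle: smooth, restrict the residual, solve the coarse
    residual equation, prolong the correction, correct, smooth again."""
    Dinv = 1.0 / np.diag(A)
    for _ in range(sweeps):              # smoothing on x^h (weighted Jacobi)
        x = x + w * Dinv * (b - A @ x)
    r = b - A @ x                        # calculate r^h
    b2h = R @ r                          # b^2h = I_h^2h r^h (restriction)
    e2h = np.linalg.solve(A2h, b2h)      # coarse-level solve of A^2h e^2h = b^2h
    x = x + P @ e2h                      # correct: x^h = x^h + I_2h^h e^2h
    for _ in range(sweeps):              # post-smoothing
        x = x + w * Dinv * (b - A @ x)
    return x

nc = 15
A = laplacian(2 * nc + 1)
P = interp(nc)
R = 0.5 * P.T                            # full-weighting restriction
A2h = R @ A @ P                          # Galerkin coarse-grid operator
b = np.ones(2 * nc + 1)
x = np.zeros(2 * nc + 1)
for _ in range(20):
    x = two_grid(A, b, x, P, R, A2h)
```

Recursing on the coarse solve (replacing np.linalg.solve with the same cycle on a yet-coarser grid) turns this two-grid method into the full V-cycle of the following slides.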

  47. What is A^2h? • Galerkin condition: A^2h = I_h^2h A^h I_2h^h
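The Galerkin condition can be checked numerically: for the 1-D Poisson problem with linear interpolation and full weighting, the Galerkin product R A^h P reproduces the operator obtained by re-discretizing on the coarse grid. A sketch (grid sizes assumed for illustration):

```python
import numpy as np

def laplacian(m, h):
    """Standard 3-point discretization (1/h^2) * tridiag(-1, 2, -1), m interior points."""
    return (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2

nc = 7                                   # coarse interior points
nf = 2 * nc + 1                          # fine interior points
h = 1.0 / (nf + 1)
P = np.zeros((nf, nc))                   # linear interpolation I_2h^h
for i in range(nc):
    P[2 * i, i] = P[2 * i + 2, i] = 0.5
    P[2 * i + 1, i] = 1.0
R = 0.5 * P.T                            # full weighting I_h^2h
A_h = laplacian(nf, h)
A_2h_galerkin = R @ A_h @ P              # Galerkin condition: A^2h = I_h^2h A^h I_2h^h
A_2h_direct = laplacian(nc, 2 * h)       # re-discretization on the coarse grid
```

This agreement is special to this transfer-operator pairing; in general (and in algebraic multigrid, where no coarse discretization exists) the Galerkin product is taken as the definition of A^2h.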

  48. Going to multilevels • V-cycle and W-cycle • Full Multigrid V-cycle [Figure: cycling among levels h, 2h, 4h, 8h]

  49. Performance of Multigrid • Complexity comparison

  50. Summary of MG ideas Important ideas of MG • Hierarchical iteration • Residual correction • Galerkin condition • Smoothing the error: high frequency on the fine grid, low frequency on the coarse grid
