This guide covers direct methods for solving systems of linear equations, such as Gauss Elimination, LU decompositions, and the Thomas Algorithm, with examples and detailed explanations. It also covers iterative methods, namely Jacobi and Gauss-Seidel, comparing their performance on practical examples.
Methods for Solution of the System of Equations: Ax = b • Direct Methods: one obtains the exact solution (ignoring round-off errors) in a finite number of steps. This group of methods is more efficient for dense and banded matrices. • Gauss Elimination; Gauss-Jordan Elimination • LU-Decomposition (Doolittle, Crout, Cholesky) • Thomas Algorithm (for tri-diagonal banded matrices)
LU Theorem: Let Ak be the sequence of matrices formed by the first k rows and k columns of an n × n square matrix A. If det(Ak) ≠ 0 for k = 1, 2, …, (n − 1), then there exist an upper triangular matrix U and a lower triangular matrix L such that A = LU. Furthermore, if the diagonal elements of either L or U are unity, i.e. lii = 1 or uii = 1 for i = 1, 2, …, n, then both L and U are unique. Note that A = An.
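As an illustration of the theorem, a Doolittle-style factorization (unit diagonal in L) can be sketched as follows. This is a minimal sketch without pivoting, assuming all leading principal minors are nonzero; the 2 × 2 matrix is a made-up example, not one from the slides:

```python
import numpy as np

def lu_doolittle(A):
    """Doolittle LU decomposition: A = L U with unit diagonal in L.
    Assumes det(Ak) != 0 for every leading principal submatrix Ak."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for i in range(n):
        # row i of U from the already-known rows of L and U
        for j in range(i, n):
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        # column i of L below the diagonal
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_doolittle(A)
# L @ U reproduces A; L is unit lower triangular, U upper triangular
```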
Diagonalization (LDU theorem): Let A be an n × n invertible matrix; then there exists a decomposition of the form A = LDU where L is an n × n lower triangular matrix with diagonal elements equal to 1, U is an n × n upper triangular matrix with diagonal elements equal to 1, and D is an n × n diagonal matrix. Example for a 3 × 3 matrix:
[a11 a12 a13]   [ 1   0   0 ] [d1  0   0 ] [ 1  u12 u13]
[a21 a22 a23] = [l21  1   0 ] [ 0  d2  0 ] [ 0   1  u23]
[a31 a32 a33]   [l31 l32  1 ] [ 0  0   d3] [ 0   0   1 ]
For a symmetric matrix: U = LT and A = LDLT. Note that the entries of the diagonal matrix D are the pivots!
For positive definite matrices, the pivots are positive! • Therefore, the diagonal matrix D containing the pivots can be factorized as D = D1/2 D1/2 • Example for a 3 × 3 matrix: D = diag(d1, d2, d3) = diag(√d1, √d2, √d3) · diag(√d1, √d2, √d3) • For positive definite matrices: A = LDLT = L D1/2 D1/2 LT • However, D1/2 LT = (L D1/2)T. Denote: L1 = L D1/2 • Therefore, A = L1 L1T. This is also an LU-decomposition in which one needs to evaluate only one triangular matrix, L1.
Cholesky Algorithm (for symmetric positive definite matrices): A = LLT, where U = LT. Only the elements of the matrix L need to be evaluated! • Diagonal elements of the L matrix: l_ii = √(a_ii − Σ_{k=1..i−1} l_ik²) • Off-diagonal elements of the L matrix (for j < i): l_ij = (a_ij − Σ_{k=1..j−1} l_ik l_jk) / l_jj
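The element formulas above translate directly into a short routine. This is a minimal sketch that assumes the input is symmetric positive definite (no checks are performed); the 2 × 2 matrix is a made-up example:

```python
import math

def cholesky(A):
    """Cholesky factorization A = L L^T for a symmetric positive
    definite matrix A, given as a list of row lists."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # diagonal element: l_ii = sqrt(a_ii - sum of squares)
                L[i][i] = math.sqrt(A[i][i] - s)
            else:
                # off-diagonal element (j < i)
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0],
     [2.0, 5.0]]
L = cholesky(A)
# L = [[2.0, 0.0], [1.0, 2.0]], and L L^T reproduces A
```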
Thomas Algorithm (for Tridiagonal Systems) • No need to store n² + n elements! • Store only 4n elements in the form of four vectors l, d, u and b • The ith equation is: l_i x_{i−1} + d_i x_i + u_i x_{i+1} = b_i • Notice: l1 = un = 0
Thomas Algorithm • Initialize two new vectors α and β as α1 = d1 and β1 = b1 • Take the first two equations, d1 x1 + u1 x2 = b1 and l2 x1 + d2 x2 + u2 x3 = b2, and eliminate x1 • The resulting equation is: α2 x2 + u2 x3 = β2 where α2 = d2 − (l2/α1) u1 and β2 = b2 − (l2/α1) β1 • Similarly, we can eliminate x2, x3, …
Thomas Algorithm • At the ith step, the two equations in hand are: α_{i−1} x_{i−1} + u_{i−1} x_i = β_{i−1} and l_i x_{i−1} + d_i x_i + u_i x_{i+1} = b_i • Eliminate x_{i−1} to obtain: α_i x_i + u_i x_{i+1} = β_i where α_i = d_i − (l_i/α_{i−1}) u_{i−1} and β_i = b_i − (l_i/α_{i−1}) β_{i−1}
Thomas Algorithm • The last two equations are: α_{n−1} x_{n−1} + u_{n−1} x_n = β_{n−1} and l_n x_{n−1} + d_n x_n = b_n • Eliminate x_{n−1} to obtain: α_n x_n = β_n, i.e. x_n = β_n/α_n, where α_n = d_n − (l_n/α_{n−1}) u_{n−1} and β_n = b_n − (l_n/α_{n−1}) β_{n−1}
Thomas Algorithm • Given: four vectors l, d, u and b • Generate: two vectors α and β as α1 = d1, β1 = b1 and, for i = 2, 3, …, n: α_i = d_i − (l_i/α_{i−1}) u_{i−1}, β_i = b_i − (l_i/α_{i−1}) β_{i−1} • Solution: x_n = β_n/α_n and, for i = n−1, …, 3, 2, 1: x_i = (β_i − u_i x_{i+1})/α_i • FP operations: 8(n−1) + 3(n−1) + 1 = 11n − 10
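The recurrences on this slide can be sketched as a complete solver. The vectors follow the slides' convention that l1 and un are zero (stored here as unused entries); the 3 × 3 test system is a made-up example:

```python
def thomas(l, d, u, b):
    """Thomas algorithm for a tridiagonal system stored as four vectors:
    l (sub-diagonal), d (diagonal), u (super-diagonal), b (right-hand side).
    Convention from the slides: l[0] and u[n-1] are unused (zero)."""
    n = len(d)
    alpha = [0.0] * n
    beta = [0.0] * n
    alpha[0], beta[0] = d[0], b[0]
    # forward elimination: build alpha and beta
    for i in range(1, n):
        m = l[i] / alpha[i - 1]
        alpha[i] = d[i] - m * u[i - 1]
        beta[i] = b[i] - m * beta[i - 1]
    # back substitution
    x = [0.0] * n
    x[-1] = beta[-1] / alpha[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (beta[i] - u[i] * x[i + 1]) / alpha[i]
    return x

# made-up system: 2x1 - x2 = 1; -x1 + 2x2 - x3 = 0; -x2 + 2x3 = 1
x = thomas([0.0, -1.0, -1.0],   # l
           [2.0,  2.0,  2.0],   # d
           [-1.0, -1.0, 0.0],   # u
           [1.0,  0.0,  1.0])   # b
# x = [1.0, 1.0, 1.0]
```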
ESO 208A: Computational Methods in Engineering
Iterative Methods
Abhas Singh
Department of Civil Engineering, IIT Kanpur
Acknowledgements: Profs. Saumyen Guha and Shivam Tripathi (CE)
Sparse Matrix: Origin • Laplace Equation: ∂²u/∂x² + ∂²u/∂y² = 0, with boundary conditions specified on the domain boundary; a finite-difference discretization turns it into a linear system Ax = b • Size of the matrix: 16 × 16 • Total number of elements = 256 • Number of non-zero elements = 56 • Also applicable for Network Analysis
Sparse Matrix • Total number of elements = n² • Number of non-zero elements ~ O(n) • Direct-method algorithms such as Gauss elimination, Gauss-Jordan, and LU decomposition are inefficient for banded and sparse matrices!
Iterative Methods
a11x1 + a12x2 + a13x3 + • • • + a1nxn = b1
a21x1 + a22x2 + a23x3 + • • • + a2nxn = b2
a31x1 + a32x2 + a33x3 + • • • + a3nxn = b3
• • •
ai1x1 + ai2x2 + ai3x3 + • • • + ainxn = bi
• • •
an1x1 + an2x2 + an3x3 + • • • + annxn = bn
• Assume (initialize) a solution vector x • Compute a new solution vector xnew • Iterate until ║x − xnew║∞ ≤ ε • We will learn two methods: Jacobi and Gauss-Seidel
Jacobi and Gauss-Seidel • Jacobi: for the iteration index k (k = 0 for the initial guess): x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii, i = 1, 2, …, n • Gauss-Seidel: for the iteration index k (k = 0 for the initial guess): x_i^(k+1) = (b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)) / a_ii, i = 1, 2, …, n
Stopping Criteria • Generate the error vector (e) at each iteration as the change in the solution, e_i = x_i^(k+1) − x_i^(k) (or the corresponding relative change) • Stop when: ║e║∞ ≤ ε • Let us see an example.
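The two update rules and the infinity-norm stopping criterion can be sketched together as follows. This is a minimal illustration; the diagonally dominant test system is a made-up example, not the one used later in the slides:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: every component of x^(k+1) uses only x^(k).
    Stops when the infinity norm of the change is <= tol."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_new = np.array([(b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
                          for i in range(n)])
        if np.max(np.abs(x_new - x)) <= tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Gauss-Seidel: components are updated in place, so equation i
    already uses the newest values of x_1 .. x_{i-1}."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        if np.max(np.abs(x - x_old)) <= tol:
            return x, k + 1
    return x, max_iter

# made-up diagonally dominant system with exact solution (1, 2, 3)
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
xj, kj = jacobi(A, b)
xg, kg = gauss_seidel(A, b)
```

For a system like this, Gauss-Seidel typically needs noticeably fewer iterations than Jacobi, which mirrors the iteration counts reported later in the slides.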
Iterative Methods (Example) Solve the following system of equations using the Jacobi and Gauss-Seidel methods, with an initial guess of zero for all the variables and a relative error of less than 0.01%. Compare the number of iterations required for the solution using the two methods: Jacobi Gauss-Seidel
Iterative Methods (Example) If you iterate, both methods will diverge. Is the problem ill-conditioned? The answer is NO!
Iterative Methods (Example) Original Problem Pivoting: columns 2 to 1, 3 to 2, 4 to 3, and 1 to 4. This is equivalent to a change of variables: x1 (new) = x2 (original) x2 (new) = x3 (original) x3 (new) = x4 (original) x4 (new) = x1 (original) After Pivoting
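The effect of such a reordering can be illustrated on a small made-up system (not the slides' 4 × 4 example): with the rows in the wrong order the diagonal is not dominant and Jacobi diverges, while swapping the rows makes the diagonal strictly dominant and Jacobi converges to the same solution:

```python
import numpy as np

def jacobi_steps(A, b, n_iter=100):
    """Run n_iter Jacobi sweeps starting from x = 0 and return the iterate."""
    x = np.zeros(len(b))
    D = np.diag(A)
    R = A - np.diag(D)          # off-diagonal part
    for _ in range(n_iter):
        x = (b - R @ x) / D
    return x

# made-up 2x2 system with solution (1, 2); as written, the
# diagonal is NOT dominant and Jacobi blows up
A_bad = np.array([[1.0, 3.0],
                  [4.0, 1.0]])
b_bad = np.array([7.0, 6.0])

# swapping the two rows (reordering the equations) makes the
# diagonal strictly dominant, and Jacobi now converges
A_good = A_bad[[1, 0]]
b_good = b_bad[[1, 0]]
x = jacobi_steps(A_good, b_good, n_iter=100)
# x ≈ [1.0, 2.0]
```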
Iterative Methods (Example) New iteration equations after pivoting (variable identifiers in the subscripts refer to the new, renamed variables): Jacobi Gauss-Seidel
Solution: Jacobi Number of iterations required to achieve a relative error of < 0.01% = 97
Solution: Gauss-Seidel Number of iterations required to achieve a relative error of < 0.01% = 14 So, what makes the methods diverge? When do we need pivoting, scaling, or equilibration for the iterative methods? Let's analyze the convergence criteria!
Questions? • What are the conditions of convergence for the iterative methods? • What is the rate of convergence? Can we make them converge faster?