
Chapter 4 LINEAR SYSTEMS OF EQUATIONS



  1. Chapter 4 LINEAR SYSTEMS OF EQUATIONS

  2. Linear systems of equations are associated with many problems in engineering and science, as well as with applications of mathematics to the social sciences and the quantitative study of business and economic problems • E.g., to calculate the net flow of current through each junction of an electrical circuit

  3. 4.1 Techniques for Solving Linear Systems of Equations • A system of n equations with n unknown variables: a11x1 + a12x2 + … + a1nxn = b1, a21x1 + a22x2 + … + a2nxn = b2, … , an1x1 + an2x2 + … + annxn = bn (4.1) • In matrix notation: Ax = b (4.2)

  4. where A = (aij) is the n×n coefficient matrix, x = (x1, …, xn)T the vector of unknowns, and b = (b1, …, bn)T the right-hand side • If the inverse of A exists (A-1), then x = A-1b (4.3) • The inverse matrix is A-1 = Adj(A)/|A| (4.4), where |A| is the determinant of A and Adj(A) is the adjoint of A • The adjoint of a square matrix A is the matrix whose (i,j)-th entry is the (j,i)-th cofactor of A • The (i,j)-th cofactor of A is defined by a'ij = (-1)i+j|A(i|j)|, where A(i|j) is the submatrix of A obtained by deleting the i-th row and j-th column.
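A minimal sketch of solving Ax = b via (4.3)-(4.4) with NumPy; the function name adjoint_inverse is illustrative, and in practice this cofactor route is far too expensive compared with elimination:

```python
import numpy as np

def adjoint_inverse(A):
    """Invert A via (4.4): A^-1 = Adj(A) / |A| (illustrative only)."""
    n = A.shape[0]
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("A is singular: |A| = 0")
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # the (i,j) entry of Adj(A) is the (j,i)-th cofactor of A:
            # delete row j and column i, then take the signed determinant
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj / detA

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([9.0, 13.0])
x = adjoint_inverse(A) @ b   # x = A^-1 b, per (4.3); here x = [1.4, 3.4]
```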

  5. Two basic classes of solution methods: direct and iterative • Direct methods : assume that an exact solution exists and find it precisely in a finite number of steps • Gauss-Jordan method (simple and accurate) • Gaussian elimination method (computationally efficient) • Cholesky method (for a nonsingular symmetric matrix) • Iterative methods : starting from some initial approximation, construct a sequence of approximate solutions that converges to the exact solution of the system • Jacobi method • Gauss-Seidel method • SOR (Successive Over-Relaxation) method

  6. Table 4.1 Methods for solving linear systems of equations

  7. 4.2 Gaussian Elimination 4.2.1 General Gaussian Elimination • Consider the system of n linear equations with n unknowns, Ax = b (4.5) • Start by transforming system (4.5) into an upper triangular system: Ax = b → Ux = b* (4.6-4.7), where b* is an n-dimensional vector and U is an n×n upper triangular matrix, uij = 0 for i > j (4.8)

  8. Forward elimination: Step 1 : • Eliminate x1 from all the equations of system (4.5) starting from the 2nd one: subtract mi1 = ai1/a11 times the 1st equation from the i-th equation (i = 2, …, n) • requires a11 ≠ 0 • as a result, x1 remains only in the 1st equation (4.9)

  9. Forward elimination: Step 2 : • Eliminate x2 from all the equations of system (4.9) starting from the 3rd one: subtract mi2 = ai2(1)/a22(1) times the 2nd equation from the i-th equation (i = 3, …, n) • requires a22(1) ≠ 0 • as a result, x2 remains only in the first two equations (4.10)

  10. Forward elimination: Step n-1 : • After step n-1 the result is the triangular system of linear equations Ux = b* (4.11) • akk(k-1) – the pivot elements

  11. Elimination with backward substitution: • In system (4.11), starting from the nth equation: xn = bn*/ann(n-1), with ann(n-1) ≠ 0 • Substituting it into the (n-1)th equation yields xn-1 • Continuing this process, we obtain for each i = n-2, …, 1: xi = (bi* - Σj=i+1..n uij xj)/uii (4.15)

  12. General Gaussian elimination procedure
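A minimal Python/NumPy sketch of the general procedure (no pivoting; names like gauss_eliminate are illustrative, not from the slides):

```python
import numpy as np

def gauss_eliminate(A, b):
    """Naive Gaussian elimination: Ax = b -> Ux = b*, then back-substitute.
    Assumes every pivot a_kk^(k-1) is nonzero; see the pivoting sketch below."""
    U = A.astype(float).copy()
    c = b.astype(float).copy()          # becomes b* during elimination
    n = len(c)
    # forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]       # multiplier m_ik
            U[i, k:] -= m * U[k, k:]
            c[i] -= m * c[k]
    # backward substitution, as in (4.15)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```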

  13. 4.2.2 Method of Pivot Selection (Pivoting) • Gaussian elimination assumes that the pivot element akk(k-1) ≠ 0 • However, that is not always so: a zero pivot makes the computation impossible, and a near-zero pivot inflates the calculation error → this must be avoided • During forward elimination: • When one of the pivots akk(k-1) is zero, interchange the kth row with the mth row, where m is the smallest integer greater than k for which amk(k-1) is nonzero.

  14. Pivoting procedure
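A sketch of the row interchange, assuming NumPy; note it uses the common partial-pivoting rule (pick the remaining row with the largest |amk|, which also limits error growth) rather than the smallest m with a nonzero entry described above:

```python
import numpy as np

def pivot(U, c, k):
    """Swap row k of the working matrix U (and of the right-hand side c)
    with the row m >= k whose pivot-column entry has the largest magnitude."""
    m = k + np.argmax(np.abs(U[k:, k]))
    if np.isclose(U[m, k], 0.0):
        raise ValueError(f"no nonzero pivot in column {k}: matrix is singular")
    if m != k:
        U[[k, m]] = U[[m, k]]   # interchange rows k and m in place
        c[[k, m]] = c[[m, k]]
```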

  15. Gaussian elimination procedure

  16. Gaussian elimination procedure

  17. 4.3 Gauss-Jordan Method • A modification of Gaussian elimination • Gaussian elimination: transforms the initial system into the upper triangular system Ax = b → Ux = b* • Gauss-Jordan elimination: the final system is Ix = b** (4.16), where I is the identity matrix of order n, with elements δij = 1 if i = j, δij = 0 otherwise • → the final system is simply x = b**

  18. Gauss-Jordan method procedure
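A minimal NumPy sketch of the method, assuming nonzero pivots (the name gauss_jordan is illustrative):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to [I | b**], so x = b** (4.16)."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = A.shape[0]
    for k in range(n):
        M[k] /= M[k, k]                  # scale row k so the pivot becomes 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]   # eliminate column k above and below
    return M[:, -1]                      # the last column is x = b**
```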

  19. 4.4 LU factorization • Gaussian elimination: the principal tool in the direct solution of linear systems of equations • The steps used to solve a system of the form Ax = b can also be used to factor a matrix into a product of matrices (factorization) • The factorization is particularly useful when it has the form A = LU, where L is lower triangular and U is upper triangular • → we can then solve for x more easily in a two-step process • Suppose that Gaussian elimination can be performed on the system Ax = b without row interchanges (i.e., the pivot elements akk(k-1) are nonzero for each k = 1, 2, …, n)

  20. Consider the system Ax = b (4.17) with A = LU (4.18), i.e. L(Ux) = b (4.19) • LU factorization (triangular factorization): • find y first from Ly = b (forward substitution) • then find the solution from Ux = y by backward substitution • Two kinds of factorization: • Doolittle method requires lkk = 1 (k = 1, …, n) • Crout method requires ukk = 1 (k = 1, …, n)
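A sketch of the two-step triangular solve, assuming NumPy and that L and U are already available (solve_lu is an illustrative name):

```python
import numpy as np

def solve_lu(L, U, b):
    """Solve Ax = b given A = LU: first Ly = b (forward substitution),
    then Ux = y (backward substitution), per (4.17)-(4.19)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                     # forward: L is lower triangular
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # backward: U is upper triangular
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```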

  21. Consider the Doolittle condition, lkk = 1. The elements of A satisfy aij = Σs lisusj • When k = 1, we can find u1j = a1j (j = 1, …, n) and li1 = ai1/u11 (i = 2, …, n) • When k = 2, we can find u2j = a2j - l21u1j (j = 2, …, n) and li2 = (ai2 - li1u12)/u22 (i = 3, …, n); denoting the accumulated sums this way, the pattern generalizes as follows

  22. On the kth step we can find the kth row of U and the kth column of L: ukj = akj - Σs=1..k-1 lksusj (j = k, …, n) (4.20-4.21), lik = (aik - Σs=1..k-1 lisusk)/ukk (i = k+1, …, n) (4.22-4.23) • Repeat the process up to n times • If akk(k) = 0, apply pivoting for ajk(k) (k ≤ j ≤ n)

  23. LU factorization procedure
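A minimal NumPy sketch of the Doolittle factorization per (4.20)-(4.23), assuming no row interchanges are needed (doolittle is an illustrative name):

```python
import numpy as np

def doolittle(A):
    """Factor A = LU with unit diagonal in L (l_kk = 1)."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        for j in range(k, n):                      # kth row of U, (4.20)-(4.21)
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        for i in range(k + 1, n):                  # kth column of L, (4.22)-(4.23)
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    return L, U

# combined with the solve_lu sketch above:
# L_, U_ = doolittle(A); x = solve_lu(L_, U_, b)
```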

  24. 4.5 Modified Cholesky Method • n×n system of linear equations: Ax = b, A – a symmetric matrix (this is required) • Doolittle method: A = LU; L – lower triangular with lii = 1, U – upper triangular • Further: U = DR (4.24), D – n×n diagonal matrix, R – upper triangular matrix with rii = 1 • → A = LDR = (LDR)T (because A is symmetric) • → L = RT → A = RTDR (4.25) • Solve RTy = b, then DRx = y (4.26) – we can find x using Gaussian elimination with backward substitution

  25. Modified Cholesky method procedure
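A minimal NumPy sketch of the RTDR factorization (4.25); ldlt is an illustrative name, and the code assumes all dk ≠ 0 (no pivoting):

```python
import numpy as np

def ldlt(A):
    """Factor symmetric A as A = R^T D R: R unit upper triangular, D = diag(d)."""
    n = A.shape[0]
    R = np.eye(n)
    d = np.zeros(n)
    for k in range(n):
        # diagonal entry: a_kk minus the already-factored contributions
        d[k] = A[k, k] - d[:k] @ R[:k, k] ** 2
        for j in range(k + 1, n):
            # kth row of R, from a_kj = d_k r_kj + sum_s d_s r_sk r_sj
            R[k, j] = (A[k, j] - (d[:k] * R[:k, k]) @ R[:k, j]) / d[k]
    return R, d

A = np.array([[4.0, 2.0], [2.0, 3.0]])
R, d = ldlt(A)   # then solve R^T y = b and D R x = y, per (4.26)
# check: np.allclose(R.T @ np.diag(d) @ R, A) -> True
```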

  26. 4.6 Iterative Methods • An iterative technique to solve the linear system Ax = b starts with an initial approximation x(0) = (x1(0), x2(0), … , xn(0))T to the solution x and generates a sequence of vectors {x(k)}∞k=0 that converges to x • The process stops when x(k) is sufficiently close to x; x(k) is then taken as the solution of the system • Stopping criterion: e.g. ||x(k) - x(k-1)|| < ε, where ε is a small enough positive constant • Below: a summary of the methods • No matter which method is used, the recurrence formula is xi(k+1) = xi(k) + ei(k) (4.29) • Depending on what ei(k) is, there are Jacobi, Gauss-Seidel, SOR and other methods

  27. 4.6.1 Jacobi Iterative Method • In the Jacobi method only the components of x(k) are used in ei(k): xi(k+1) = (bi - Σj≠i aij xj(k))/aii (4.30) • A sufficient condition for convergence is strict diagonal dominance: |aii| > Σj≠i |aij| for all i (4.31)
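A minimal NumPy sketch of (4.30), using the stopping criterion from the previous slide (jacobi, eps and max_iter are illustrative names/parameters):

```python
import numpy as np

def jacobi(A, b, eps=1e-10, max_iter=500):
    """Jacobi iteration: x^(k+1) uses only components of x^(k)."""
    x = np.zeros(len(b))
    D = np.diag(A)                  # the diagonal entries a_ii
    R = A - np.diagflat(D)          # the off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # componentwise update (4.30)
        if np.linalg.norm(x_new - x, np.inf) < eps:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi did not converge within max_iter sweeps")
```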

  28. 4.6.2 Gauss-Seidel Method • An improvement on the Jacobi method • Jacobi method: to compute xi(k), only the components of x(k-1) are used • But within the kth sweep the components xj(k) (j = 1, …, i-1) have already been computed and are likely to be better approximations to the actual solution → it is more reasonable to compute xi(k) using the most recently calculated values: xi(k) = (bi - Σj<i aij xj(k) - Σj>i aij xj(k-1))/aii (4.32) – the Gauss-Seidel iterative technique
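The same sketch updated in place, so each component immediately reuses the freshest values, per (4.32) (gauss_seidel is an illustrative name):

```python
import numpy as np

def gauss_seidel(A, b, eps=1e-10, max_iter=500):
    """Gauss-Seidel iteration: x[:i] already holds this sweep's new values."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # new values x[:i], old values x_old[i+1:], per (4.32)
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x_old[i + 1:]) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < eps:
            return x
    raise RuntimeError("Gauss-Seidel did not converge within max_iter sweeps")
```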

  29. 4.6.3 SOR (Successive Over-Relaxation) Method • Relaxation methods: improve xi(k+1) by weighting the difference between the Gauss-Seidel value and xi(k): xi(k+1) = (1 - w)xi(k) + w·xiGS(k+1) (4.33), where xiGS(k+1) is the Gauss-Seidel update and w is the relaxation coefficient • 0<w<1: under-relaxation methods (can be used to obtain convergence for some systems that are not convergent by the Gauss-Seidel method) • w>1: over-relaxation methods (used to accelerate the convergence of systems that are convergent by the Gauss-Seidel technique) – the SOR method • w=1: identical to the Gauss-Seidel method • Usually: 1<w<2
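A sketch of (4.33) built on the Gauss-Seidel update above (sor and the default w = 1.5 are illustrative):

```python
import numpy as np

def sor(A, b, w=1.5, eps=1e-10, max_iter=500):
    """SOR: blend the Gauss-Seidel value with the current one;
    w = 1 reduces to plain Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x_old[i + 1:]) / A[i, i]
            x[i] = (1.0 - w) * x_old[i] + w * gs    # relaxation step (4.33)
        if np.linalg.norm(x - x_old, np.inf) < eps:
            return x
    raise RuntimeError("SOR did not converge within max_iter sweeps")
```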
