Numerical Analysis EE, NCKU Tien-Hao Chang (Darby Chang)
In the previous slide • Error estimation in systems of equations • vector/matrix norms • LU decomposition • split a matrix into the product of a lower and an upper triangular matrix • efficient when dealing with many right-hand-side vectors • Direct factorization • as a system of equations • Crout decomposition • Doolittle decomposition
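The efficiency claim above can be made concrete: once L and U are computed, each additional right-hand side costs only a forward and a back substitution. A minimal sketch of the Doolittle variant (unit diagonal on L, no pivoting; the function names are illustrative, not from the course):

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle LU decomposition: A = L @ U with unit diagonal on L.
    No pivoting, so A must not produce a zero pivot."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i, n):          # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):      # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def lu_solve(L, U, b):
    """Reuse L and U for each new right-hand side b."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                 # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):     # back substitution: U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

The factorization is done once; `lu_solve` can then be called for every right-hand-side vector at a fraction of the cost.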
In this slide • Special matrices • strictly diagonally dominant matrix • symmetric positive definite matrix • Cholesky decomposition • tridiagonal matrix • Iterative techniques • Jacobi, Gauss-Seidel and SOR methods • conjugate gradient method • Nonlinear systems of equations • (Exercise 3)
3.7 Special matrices
Special matrices • Linear systems • which arise in practice and/or in numerical methods • the coefficient matrices often have special properties or structure • Strictly diagonally dominant matrix • Symmetric positive definite matrix • Tridiagonal matrix
Symmetric positive definite: relations to • Eigenvalues • Leading principal sub-matrices
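Both characterizations above can be checked directly: a symmetric matrix is positive definite if and only if all its eigenvalues are positive, and (by Sylvester's criterion) if and only if every leading principal sub-matrix has positive determinant. A small sketch of both tests (helper names are illustrative):

```python
import numpy as np

def is_spd_eigen(A, tol=1e-12):
    """SPD test via eigenvalues: symmetric and all eigenvalues > 0."""
    if not np.allclose(A, A.T):
        return False
    return bool(np.all(np.linalg.eigvalsh(A) > tol))

def is_spd_minors(A, tol=1e-12):
    """SPD test via Sylvester's criterion: every leading principal
    minor (determinant of the top-left k-by-k block) is positive."""
    if not np.allclose(A, A.T):
        return False
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))
```

For well-conditioned matrices the two tests agree; the eigenvalue test is the numerically safer choice in practice.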
Cholesky decomposition • For symmetric positive definite matrices • greater efficiency can be obtained • exploit the symmetry of the matrix • Rather than the LU form, we factor the matrix into the form A = LL^T
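A minimal sketch of the A = LL^T factorization described above; by exploiting symmetry it computes only one triangular factor, roughly half the work of LU (no pivoting is needed because A is assumed SPD):

```python
import numpy as np

def cholesky(A):
    """Factor a symmetric positive definite A as A = L @ L.T,
    with L lower triangular and positive diagonal."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i):
            # off-diagonal entry of row i
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        # diagonal entry; the argument is positive when A is SPD
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
    return L
```

If the quantity under the square root ever becomes non-positive, the input matrix was not positive definite; this is in fact a common practical test for positive definiteness.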
Tridiagonal • Only O(n) operations • factor step • solve step
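The O(n) factor/solve structure for tridiagonal systems is usually implemented as the Thomas algorithm: a forward sweep eliminates the sub-diagonal (the factor step), then a backward sweep recovers the solution (the solve step). A sketch, assuming the system is well-conditioned enough that no pivoting is needed:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(n).
    a: sub-diagonal (length n-1), b: diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n)."""
    n = len(b)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]                       # factor step (forward sweep)
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.zeros(n)                           # solve step (back substitution)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each sweep touches every row once, both steps together cost a constant number of operations per unknown.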
Before entering 3.8 • So far, we have learnt three algorithms in Chapter 3 • Gaussian elimination • LU decomposition • direct factorization • Are they algorithms? • What are the differences from those algorithms in Chapter 2? • they report exact solutions rather than approximate solutions
3.8 Iterative techniques for linear systems
Iterative techniques • Analytic techniques are slow • especially for systems with large but sparse coefficient matrices • As an added bonus, iterative techniques are less sensitive to roundoff error
Iteration matrix: immediate questions • When does x = Tx + c guarantee a unique solution? • When does x(k+1) = Tx(k) + c guarantee convergence? • How quickly does it converge? • How to generate T and c?
Assume that (I − T) is singular; then there exists a nonzero vector x such that (I − T)x = 0 • so Tx = x, and 1 is an eigenvalue of T • but ρ(T) < 1, a contradiction
(in section 2.3, with proof) Recall that T^k converges to 0 as k goes to infinity if and only if ρ(T) < 1
Iteration matrix: for these questions • We know that when ρ(T) < 1, x(k+1) = Tx(k) + c will converge linearly to a unique solution with any x(0) • What is missing? • remember the problem is to solve Ax = b • How to generate T and c? • different algorithms construct different iteration matrices T (and vectors c)
Splitting methods • Split A = M − N; then Ax = b becomes x(k+1) = M^-1(Nx(k) + b) • A class of iteration methods • Jacobi method • Gauss-Seidel method • SOR method
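The first two splittings can be sketched compactly. Jacobi takes M to be the diagonal of A; Gauss-Seidel takes M to be the lower triangle, which in code simply means using each updated component as soon as it is available. A minimal sketch (function names and stopping rule are illustrative; convergence is guaranteed, e.g., for strictly diagonally dominant A):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi: M = diag(A), so x_new = D^-1 (b - R x) with R = A - D."""
    x = np.zeros(len(b)) if x0 is None else x0.astype(float)
    D = np.diag(A)                     # diagonal entries as a vector
    R = A - np.diag(D)                 # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel: use updated components as soon as available."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x
```

SOR follows the same pattern with an extra relaxation parameter blending the Gauss-Seidel update with the previous iterate.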
3.9 Conjugate gradient method
Conjugate gradient method • Not all iterative methods are based on the splitting concept • The conjugate gradient method is instead based on the minimization of an associated quadratic functional
Choose the search direction • as with the tangent line in Newton's method • the gradient of the quadratic functional at the current iterate • Choose the step size • as the root of the tangent line
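The two choices above (a direction, then an exact step size along it) are the skeleton of the conjugate gradient method. For SPD A, minimizing phi(x) = 0.5 x^T A x - x^T b is equivalent to solving Ax = b, the residual r = b - Ax is the negative gradient, and each step length alpha minimizes phi exactly along the current direction. A minimal sketch (names and stopping rule are illustrative):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """CG for SPD A: minimizes phi(x) = 0.5 x^T A x - x^T b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x                    # residual = negative gradient of phi
    d = r.copy()                     # first search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ad = A @ d
        alpha = rs / (d @ Ad)        # exact minimization along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d    # new direction, A-conjugate to d
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n steps, because the search directions are mutually A-conjugate; in practice it is used as an iterative method stopped by a residual tolerance.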
[figure] Global optimization problem