
MA2213 Lecture 6

MA2213 Lecture 6. Linear Equations (Iterative Solvers). Iteration Methods, pp. 303-305. Many science and engineering applications require solutions of large linear systems.



  1. MA2213 Lecture 6 Linear Equations (Iterative Solvers)

  2. Iteration Methods (pp. 303-305)

Many science and engineering applications require solutions of large linear systems $Ax = b$. Direct methods, e.g. Gaussian elimination, produce the exact solution (except for roundoff error) in a fixed number of steps, but for large systems they require excessive computation time and computer memory.

  3. Iteration Methods

Such systems are usually solved by iterative methods, which produce a sequence of approximations $x^{(1)}, x^{(2)}, \ldots$ to the solution. Often the matrix $A$ is very sparse (most entries are 0), and iterative methods are very efficient for sparse matrices.

  4. Jacobi Method (p. 304)

The system

$9x_1 + x_2 + x_3 = b_1$
$2x_1 + 10x_2 + 3x_3 = b_2$
$3x_1 + 4x_2 + 11x_3 = b_3$

is equivalent to

$x_1 = \frac{1}{9}(b_1 - x_2 - x_3)$
$x_2 = \frac{1}{10}(b_2 - 2x_1 - 3x_3)$
$x_3 = \frac{1}{11}(b_3 - 3x_1 - 4x_2)$.

This suggests the Jacobi iteration sequence

$x_1^{(k+1)} = \frac{1}{9}(b_1 - x_2^{(k)} - x_3^{(k)})$
$x_2^{(k+1)} = \frac{1}{10}(b_2 - 2x_1^{(k)} - 3x_3^{(k)})$
$x_3^{(k+1)} = \frac{1}{11}(b_3 - 3x_1^{(k)} - 4x_2^{(k)})$

for $k = 0, 1, 2, \ldots$, starting from an initial estimate $x^{(0)}$.

  5. MATLAB Code for Jacobi Method

First pull down the File menu and choose New m-file, or Open an existing m-file; then type and save your program below. I suggest using the same name as the name of the function.

function x_final = jacobi_special(x_init,b,num_iter)
% function x_final = jacobi_special(x_init,b,num_iter)
%
% Jacobi iteration for the example system: each component of the
% new iterate is computed from the previous iterate only.
% Column k+1 of x holds the k-th iterate.
x(:,1) = x_init;
for k = 1:num_iter
    x(1,k+1) = (1/9)*(b(1) - x(2,k) - x(3,k));
    x(2,k+1) = (1/10)*(b(2) - 2*x(1,k) - 3*x(3,k));
    x(3,k+1) = (1/11)*(b(3) - 3*x(1,k) - 4*x(2,k));
end
x_final = x;

>> x_init = [0 0 0]';
>> b = [10 19 0]';
>> x_final = jacobi_special(x_init,b,20)'
x_final =
    0        0        0
    1.1111   1.9000   0
    0.9000   1.6778  -0.9939
    1.0351   2.0182  -0.8556
    0.9819   1.9496  -1.0162
    1.0074   2.0085  -0.9768
    0.9965   1.9915  -1.0051
    1.0015   2.0022  -0.9960
    0.9993   1.9985  -1.0012
    1.0003   2.0005  -0.9993
    0.9999   1.9997  -1.0003
    1.0001   2.0001  -0.9999
    1.0000   1.9999  -1.0001
    1.0000   2.0000  -1.0000
    1.0000   2.0000  -1.0000
    1.0000   2.0000  -1.0000

Always compare your numerical results with results available either in the textbook or elsewhere.

  6. Textbook Results for Jacobi Method (p. 305)

For $b = (10, 19, 0)^T$ the solution is $x = (1, 2, -1)^T$. The Jacobi iteration gives the sequence of iterates tabulated on p. 305 of the textbook, which agrees with the MATLAB output above.

  7. Matrix Splitting for Jacobi Method (p. 306)

If $A = D - L - U$ with $D$ diagonal and $L$, $U$ strictly lower and strictly upper triangular, then $Ax = b$ is equivalent to $x = Bx + c$, where $B = D^{-1}(L + U)$ and $c = D^{-1}b$, and the Jacobi iteration is $x^{(k+1)} = Bx^{(k)} + c$.

  8. Jacobi Method

If certain conditions on the matrix $B$ hold, then the following iteration produces a sequence $x^{(0)}, x^{(1)}, x^{(2)}, \ldots$ that converges to the $x$ that satisfies $x = Bx + c$: choose an initial 'guess' $x^{(0)}$, compute $x^{(k+1)} = Bx^{(k)} + c$ for $k = 0, 1, 2, \ldots$, and stop after a specified number of iterations or when $\|x^{(k+1)} - x^{(k)}\|$ is sufficiently small.

Question 1. What is a 'closed' formula for $x^{(k)}$?
Question 2. When will the sequence converge?
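The splitting form suggests a general MATLAB implementation. A minimal sketch (the function name jacobi_split, the tolerance test, and the argument list are illustrative, not from the slides):

function x = jacobi_split(A,b,x,num_iter,tol)
% Jacobi iteration in splitting form: x <- B*x + c with
% B = D^{-1}(L+U) and c = D^{-1}b, where A = D - L - U.
D = diag(diag(A));       % diagonal part of A
B = D \ (D - A);         % D - A = L + U, so B = D^{-1}(L+U)
c = D \ b;
for k = 1:num_iter
    x_new = B*x + c;
    if norm(x_new - x, inf) < tol   % stop when the update is small
        x = x_new;
        return
    end
    x = x_new;
end

For the example system, jacobi_split([9 1 1; 2 10 3; 3 4 11], [10 19 0]', [0 0 0]', 20, 1e-4) reproduces the iterates above.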

  9. Gauss-Seidel Method (p. 305)

For the same system of equations, the Gauss-Seidel method results from using the new (updated) entries of $x$ as soon as possible. This gives the Gauss-Seidel iteration

$x_1^{(k+1)} = \frac{1}{9}(b_1 - x_2^{(k)} - x_3^{(k)})$
$x_2^{(k+1)} = \frac{1}{10}(b_2 - 2x_1^{(k+1)} - 3x_3^{(k)})$
$x_3^{(k+1)} = \frac{1}{11}(b_3 - 3x_1^{(k+1)} - 4x_2^{(k+1)})$

for $k = 0, 1, 2, \ldots$, starting from an initial estimate $x^{(0)}$.

  10. Matrix Splitting for Gauss-Seidel (p. 306)

If $A = D - L - U$ with $D$ diagonal and $L$, $U$ strictly lower and strictly upper triangular, then $Ax = b$ is equivalent to $x = Gx + c$, where $G = (D - L)^{-1}U$ and $c = (D - L)^{-1}b$, and the Gauss-Seidel iteration is $x^{(k+1)} = Gx^{(k)} + c$.

  11. Gauss-Seidel Method

If certain conditions on the matrix $G$ hold, then the following iteration produces a sequence that converges to the $x$ that satisfies $x = Gx + c$: choose an initial 'guess' $x^{(0)}$, compute $x^{(k+1)} = Gx^{(k)} + c$ for $k = 0, 1, 2, \ldots$, and stop after a specified number of iterations or when $\|x^{(k+1)} - x^{(k)}\|$ is sufficiently small.

Question 3. How can $x^{(k+1)}$ be computed without computing the inverse of the matrix $D - L$? (See the sketch below.)
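A minimal MATLAB sketch (the function name gauss_seidel_sketch is illustrative). It also suggests an answer to Question 3: writing the iteration as $(D - L)x^{(k+1)} = Ux^{(k)} + b$ and sweeping through the components in order is exactly forward substitution, so $(D - L)^{-1}$ is never formed.

function x = gauss_seidel_sketch(A,b,x,num_iter)
% Gauss-Seidel iteration: each component is updated in order,
% so the newest values are used as soon as they are available.
n = length(b);
for k = 1:num_iter
    for i = 1:n
        s = A(i,:)*x - A(i,i)*x(i);   % off-diagonal terms, newest values
        x(i) = (b(i) - s)/A(i,i);     % update component i in place
    end
end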

  12. Convergence of Iterative Methods

The equation $x = Bx + c$ for the solution and the iteration $x^{(k+1)} = Bx^{(k)} + c$ imply that $x - x^{(k+1)} = B(x - x^{(k)})$, therefore $x - x^{(k)} = B^k(x - x^{(0)})$.

Question 4. When will the sequence converge to the solution regardless of the initial estimate $x^{(0)}$?

  13. Convergence Conditions

Answer to Question 4. Convergence always occurs iff $B^k \to 0$ as $k \to \infty$.
Question 5. What does this statement mean?
Question 6. When does that happen?
Answer to Question 6. We need some mathematics!

  14. Complex Numbers (Review!)

Definition. The real numbers $\mathbb{R}$ are a field under the operations of addition and multiplication.
Definition. The complex numbers are the set $\mathbb{C} = \{a + bi : a, b \in \mathbb{R}\}$ with addition $(a+bi) + (c+di) = (a+c) + (b+d)i$ and multiplication $(a+bi)(c+di) = (ac - bd) + (ad + bc)i$.
Definition. The absolute value is $|a + bi| = \sqrt{a^2 + b^2}$.
Question 7. Why does $|zw| = |z|\,|w|$?
Question 8. Why are the complex numbers a field?
Question 9. Why is the set $\mathbb{C}$ with addition and scalar multiplication a vector space with dimension 2 over the field $\mathbb{R}$?
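A quick MATLAB check of these definitions (the particular numbers are arbitrary):

z = 3 + 4i; w = 1 - 2i;
z + w                      % addition: 4 + 2i
z * w                      % multiplication: (3*1-4*(-2)) + (3*(-2)+4*1)i = 11 - 2i
abs(z)                     % absolute value: sqrt(3^2 + 4^2) = 5
abs(z*w) - abs(z)*abs(w)   % 0: the absolute value is multiplicative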

  15. Characteristic Polynomial of a Matrix

For $A \in \mathbb{R}^{n \times n}$ define the function $p_A : \mathbb{C} \to \mathbb{C}$ by $p_A(\lambda) = \det(\lambda I - A)$. $p_A$ is always a monic polynomial of degree $n$; its roots $\lambda_1, \ldots, \lambda_n \in \mathbb{C}$ satisfy $p_A(\lambda) = (\lambda - \lambda_1) \cdots (\lambda - \lambda_n)$. Remark. $p_A$ is called the characteristic polynomial of $A$.

Example. For $n = 2$, $p_A(\lambda) = \lambda^2 - \mathrm{trace}(A)\,\lambda + \det(A)$, where $\mathrm{trace}(A) = a_{11} + a_{22}$ is called the trace of $A$.
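In MATLAB, poly(A) returns the coefficients of the characteristic polynomial and roots recovers its roots; a small check (the matrix is an arbitrary illustration):

A = [2 1; 1 2];
p = poly(A)        % [1 -4 3], i.e. lambda^2 - 4*lambda + 3
roots(p)           % 3 and 1
trace(A), det(A)   % 4 and 3 match the linear and constant coefficients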

  16. Eigenvalues and Spectral Radius

Definition. $\lambda \in \mathbb{C}$ is an eigenvalue of a matrix $A$ iff $p_A(\lambda) = 0$.
Remark. The word eigen is German for 'own' (sein eigenes Auto: his own car, a car of his own).
Definition. The spectrum of a matrix $A$ is $\sigma(A) = \{\lambda \in \mathbb{C} : \lambda$ is an eigenvalue of $A\}$.
Definition. The spectral radius of a matrix $A$ is $\rho(A) = \max\{|\lambda| : \lambda \in \sigma(A)\}$.
Reference: p. 380, Gilbert Strang, Linear Algebra and its Applications, 3rd Edition, Saunders, 1988.
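MATLAB has no built-in spectral radius function, but eig makes it a one-liner (a sketch):

rho = @(A) max(abs(eig(A)));   % spectral radius = largest |eigenvalue|
rho([2 1; 1 2])                % 3, for the example matrix above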

  17. Eigenvectors and Eigenspaces

Definition. If $\lambda$ is an eigenvalue of a matrix $A$, the set $E_\lambda = \{v : Av = \lambda v\}$ is called the eigenspace of $A$ (with eigenvalue $\lambda$). A nonzero vector in $E_\lambda$ is called an eigenvector of $A$ (with eigenvalue $\lambda$).
Question. Why is $E_\lambda$ a subspace of $\mathbb{C}^n$?
Result. If $\lambda$ is an eigenvalue of a matrix $A$, then there exists an eigenvector of $A$ with eigenvalue $\lambda$.
Proof. $p_A(\lambda) = \det(\lambda I - A) = 0$, so $\lambda I - A$ is singular, so $(\lambda I - A)v = 0$ has a nonzero solution $v$, and $Av = \lambda v$.

  18. Examples and Applications

Example 1. $A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, so $p_A(\lambda) = \lambda^2 - \lambda - 1$ with roots $\lambda = \frac{1 \pm \sqrt{5}}{2}$.
Question. What is $\rho(A)$?
Remark. $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$ is called the Golden Ratio since $\varphi^2 = \varphi + 1$.
Remark. The powers of $A$ generate the Fibonacci sequence $1, 1, 2, 3, 5, 8, \ldots$, since $\begin{pmatrix} F_{k+1} \\ F_k \end{pmatrix} = A \begin{pmatrix} F_k \\ F_{k-1} \end{pmatrix}$.
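A MATLAB illustration of this connection (assuming the Fibonacci matrix above):

A = [1 1; 1 0];
eig(A)          % 1.6180... and -0.6180..., i.e. (1 +- sqrt(5))/2
v = [1; 1];     % [F_2; F_1]
for k = 2:7
    v = A*v;    % advances to [F_{k+1}; F_k]
end
v(1)            % F_8 = 21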

  19. Examples and Applications

Example 2. $A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ (rotation by $\theta$), so $p_A(\lambda) = \lambda^2 - 2\cos\theta\,\lambda + 1$ with roots $\lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$ by Euler's Formula.
Question. What is $\rho(A)$?

  20. Convergence Conditions

Answer to Question 6. $B^k \to 0$ as $k \to \infty$ iff $\rho(B) < 1$.
Question 7. How can we compute $\rho(B)$?
Answer to Question 7. We will examine that problem in Week 8.
Question 8. What can we do about convergence now?
Answer to Question 8. Learn about matrix norms.

  21. Vector Norms - Which One?

>> help norm
 NORM   Matrix or vector norm.
    For matrices...
      NORM(X) is the largest singular value of X, max(svd(X)).
      NORM(X,2) is the same as NORM(X).
      NORM(X,1) is the 1-norm of X, the largest column sum,
              = max(sum(abs(X))).
      NORM(X,inf) is the infinity norm of X, the largest row sum,
              = max(sum(abs(X'))).
      NORM(X,'fro') is the Frobenius norm, sqrt(sum(diag(X'*X))).
      NORM(X,P) is available for matrix X only if P is 1, 2, inf or 'fro'.
    For vectors...
      NORM(V,P) = sum(abs(V).^P)^(1/P).
      NORM(V) = norm(V,2).
      NORM(V,inf) = max(abs(V)).
      NORM(V,-inf) = min(abs(V)).
    See also COND, RCOND, CONDEST, NORMEST.

  22. Vector Norms

A vector norm on $\mathbb{R}^n$ is a function $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$ that satisfies the following three properties:
(1) $\|x\| \geq 0$, and $\|x\| = 0$ iff $x = 0$;
(2) $\|\alpha x\| = |\alpha|\,\|x\|$ whenever $\alpha \in \mathbb{R}$, $x \in \mathbb{R}^n$;
(3) $\|x + y\| \leq \|x\| + \|y\|$ (the triangle inequality).
Remark. A vector norm measures the size of a vector.

  23. Examples of Vector Norms

Euclidean Norm: $\|x\|_2 = \left(\sum_{i=1}^n x_i^2\right)^{1/2}$
p Norm: $\|x\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}$
Max Norm: $\|x\|_\infty = \max_{1 \leq i \leq n} |x_i|$
Remark. If $S$ is nonsingular, then $\|x\|_S = \|Sx\|$ is a vector norm on $\mathbb{R}^n$ for any vector norm $\|\cdot\|$.
Remark. The textbook considers ONLY the $\infty$ norm (see page 298), and they denote this norm by $\|\cdot\|$.
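Comparing these norms in MATLAB on an arbitrary vector:

v = [3 -4 1]';
norm(v,2)      % Euclidean norm: sqrt(9+16+1) = 5.0990
norm(v,1)      % 1-norm: 3+4+1 = 8
norm(v,inf)    % max norm: 4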

  24. Matrix Norms

Given ANY vector norm $\|\cdot\|$ on $\mathbb{R}^n$, we can construct a CORRESPONDING matrix norm on $\mathbb{R}^{n \times n}$ by $\|A\| = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|}$.

Theorem 1. If $A \in \mathbb{R}^{n \times n}$ and $x \in \mathbb{R}^n$, then $\|Ax\| \leq \|A\|\,\|x\|$.
Proof. If $x = 0$ then $\|Ax\| = 0 = \|A\|\,\|x\|$. If $x \neq 0$ then $\frac{\|Ax\|}{\|x\|} \leq \max_{y \neq 0} \frac{\|Ay\|}{\|y\|} = \|A\|$.

  25. Matrix Norms

Theorem 2. If $A, B \in \mathbb{R}^{n \times n}$, then
(1) $\|A + B\| \leq \|A\| + \|B\|$
(2) $\|AB\| \leq \|A\|\,\|B\|$
(3) $\|A^k\| \leq \|A\|^k$ for $k = 1, 2, 3, \ldots$
Proofs. (1) follows from the triangle inequality for the vector norm; (2) follows from $\|ABx\| \leq \|A\|\,\|Bx\| \leq \|A\|\,\|B\|\,\|x\|$ (Theorem 1 twice); (3) follows from (2) by induction.

  26. Examples of Matrix Norms

Theorem 3. The matrix norm corresponding to the vector norm $\|\cdot\|_\infty$ is the maximum absolute row sum: $\|A\|_\infty = \max_{1 \leq i \leq n} \sum_{j=1}^n |a_{ij}|$.
Proof. For every $x \neq 0$, $\|Ax\|_\infty = \max_i |\sum_j a_{ij} x_j| \leq \left(\max_i \sum_j |a_{ij}|\right) \|x\|_\infty$, hence $\|A\|_\infty \leq \max_i \sum_j |a_{ij}|$. Choose $i$ that maximizes the row sum and construct $x$ by $x_j = \mathrm{sign}(a_{ij})$; then $\|x\|_\infty = 1$ and $(Ax)_i = \sum_j |a_{ij}|$, hence $\|A\|_\infty \geq \max_i \sum_j |a_{ij}|$.
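A MATLAB check of the row-sum formula, using the coefficient matrix of the example system:

A = [9 1 1; 2 10 3; 3 4 11];
norm(A,inf)           % 18
max(sum(abs(A),2))    % maximum absolute row sum: also 18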

  27. Application to Jacobi Iteration

For the example system, $B = D^{-1}(L + U)$ has rows $(0, -\frac{1}{9}, -\frac{1}{9})$, $(-\frac{2}{10}, 0, -\frac{3}{10})$, $(-\frac{3}{11}, -\frac{4}{11}, 0)$, so $\|B\|_\infty = \max\{\frac{2}{9}, \frac{5}{10}, \frac{7}{11}\} = \frac{7}{11} < 1$.

Question. Will Jacobi iteration converge? Yes: $\|x - x^{(k)}\|_\infty = \|B^k(x - x^{(0)})\|_\infty \leq \|B\|_\infty^k\,\|x - x^{(0)}\|_\infty \to 0$.
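Checking this in MATLAB:

A = [9 1 1; 2 10 3; 3 4 11];
D = diag(diag(A));
B = D \ (D - A)      % Jacobi iteration matrix B = D^{-1}(L+U)
norm(B,inf)          % 7/11 = 0.6364 < 1, so Jacobi iteration converges
max(abs(eig(B)))     % the spectral radius is smaller still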

  28. Matrix Norms and Spectral Radius

Two amazing results, whose full proofs will not be presented, hold for any matrix $A \in \mathbb{R}^{n \times n}$.
Result 1. $\lim_{k \to \infty} A^k = 0$ iff $\rho(A) < 1$.
Result 2. For EVERY matrix norm, $\rho(A) \leq \|A\|$.
We now prove half of Result 1. If $\rho(A) \geq 1$, choose an eigenvalue $\lambda$ with $|\lambda| \geq 1$ and an eigenvector $v \neq 0$, so $A^k v = \lambda^k v$. Therefore $\|A^k v\| = |\lambda|^k\,\|v\| \geq \|v\| > 0$, hence $A^k$ does not converge to $0$.

  29. Condition Number of a Matrix

Definition. The condition number of a nonsingular matrix $A$ with respect to a matrix norm $\|\cdot\|$ is $\mathrm{cond}(A) = \|A\|\,\|A^{-1}\|$.

>> help cond
 COND   Condition number with respect to inversion.
    COND(X) returns the 2-norm condition number (the ratio of the
    largest singular value of X to the smallest).
    Large condition numbers indicate a nearly singular matrix.
    COND(X,P) returns the condition number of X in P-norm:
       NORM(X,P) * NORM(INV(X),P),
    where P = 1, 2, inf, or 'fro'.

For the $\infty$ norm, the Hilbert matrix (entries $H_{ij} = \frac{1}{i+j-1}$) has a condition number that grows rapidly with $n$; already for $n = 3$, $\mathrm{cond}(H) = \frac{11}{6} \cdot 408 = 748$.
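Verifying the $n = 3$ value in MATLAB:

H = hilb(3);                    % 3-by-3 Hilbert matrix
cond(H,inf)                     % 748
norm(H,inf)*norm(inv(H),inf)    % the same, by the definition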

  30. Error Analysis For Direct Methods

If $Ax = b$ and $A(x + \delta x) = b + \delta b$, where $\delta b$ is the error of the right side and $\delta x$ is the error of the solution, then $A\,\delta x = \delta b$, hence the absolute error satisfies $\|\delta x\| \leq \|A^{-1}\|\,\|\delta b\|$, and since $\|b\| \leq \|A\|\,\|x\|$, the relative error satisfies $\frac{\|\delta x\|}{\|x\|} \leq \mathrm{cond}(A)\,\frac{\|\delta b\|}{\|b\|}$.

  31. Error Analysis For Direct Methods

If $Ax = b$ and $(A + \delta A)(x + \delta x) = b$, where $\delta A$ is the error of the matrix and $\delta x$ is the error of the solution, then $A\,\delta x = -\delta A\,(x + \delta x)$, hence the absolute error satisfies $\|\delta x\| \leq \|A^{-1}\|\,\|\delta A\|\,\|x + \delta x\|$, hence the relative error satisfies $\frac{\|\delta x\|}{\|x + \delta x\|} \leq \mathrm{cond}(A)\,\frac{\|\delta A\|}{\|A\|}$, whenever $\|A^{-1}\|\,\|\delta A\| < 1$ (which guarantees that $A + \delta A$ is nonsingular).
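A numerical illustration of the bound from slide 30 (the vectors and perturbation are arbitrary choices):

A = hilb(3); x = [1 1 1]';
b = A*x;
db = 1e-6*[1 -1 1]';                      % small perturbation of the right side
dx = A\(b + db) - x;                      % resulting error in the solution
norm(dx,inf)/norm(x,inf)                  % observed relative error
cond(A,inf)*norm(db,inf)/norm(b,inf)      % bound: always at least as large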

  32. Successive Over Relaxation

Modify the Gauss-Seidel iteration: starting from an initial 'guess' $x^{(0)}$, let $x_{GS}^{(k+1)}$ denote the Gauss-Seidel update and set $x^{(k+1)} = (1 - \omega)\,x^{(k)} + \omega\,x_{GS}^{(k+1)}$, where $\omega = 1$ gives Gauss-Seidel, $1 < \omega < 2$ gives successive over relaxation, and $0 < \omega < 1$ gives successive under relaxation.
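A component-wise MATLAB sketch (the function name sor_sketch and its argument list are illustrative):

function x = sor_sketch(A,b,x,omega,num_iter)
% SOR: blend the old value with the Gauss-Seidel update,
%   x_i <- (1-omega)*x_i + omega*x_i^{GS}.
n = length(b);
for k = 1:num_iter
    for i = 1:n
        s = A(i,:)*x - A(i,i)*x(i);           % off-diagonal terms, newest values
        x_gs = (b(i) - s)/A(i,i);             % Gauss-Seidel update for component i
        x(i) = (1-omega)*x(i) + omega*x_gs;   % relaxed update
    end
end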

  33. Homework Due Lab 3

1. Develop a MATLAB program for piecewise linear interpolation. Then use it to compute and plot the piecewise linear interpolants for the functions: (i) sin(x) on the interval [0, 2 pi], (ii) 1/(1 + x*x) on the interval [-5, 5], based on N = 4, 8, 16, and 32 nodes. Also plot the error functions (the difference between the function and the interpolant) and discuss the behavior of the error as a function of N for each function.

2. Develop a MATLAB program for polynomial interpolation. Then use it (in the same way as the program that you developed in question 1 above) to study the functions sin(x) and 1/(1 + x*x). Then discuss which interpolation method appears to be better suited to interpolate each of the two functions.

Your answers should include well documented MATLAB code, carefully labeled plots, and clear logical discussions.

  34. Review Lectures 1-4 Problems

Question 1. Do problem 13 on page 79 of the textbook.
Question 2. Derive a closed form for the estimate of the solution of the equation below obtained by applying 2 iterations of Newton's Method with the initial estimate 0.
Question 3. Compute a linear combination of functions that interpolates at ...
Question 4. Compute the quadratic least squares approximation to the function over ...
Question 5. Use Gram-Schmidt to compute orthonormal functions over ... from the sequence ...
Question 6. Do problem 1 on page 215 of the textbook.
