
Orthogonal matrices



  1. Orthogonal matrices based on excellent video lectures by Gilbert Strang, MIT http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/detail/lecture17.htm Lecture 17

  2. Orthogonality • we’ve met so far • orthogonal vectors • When are two vectors orthogonal? • orthogonal subspaces • When are two subspaces orthogonal? Do you remember some examples? • today we’ll meet • orthogonal basis q1, …, qn • orthogonal matrix Q

  3. Orthonormal vectors • they are orthogonal, i.e. qiTqj = 0 if i ≠ j • the length of each of them is one • I can write this as one condition: • qiTqj = 0 if i ≠ j • qiTqj = 1 if i = j • for a general vector a, how do I get a vector of unit length in the same direction? • I divide each of its components by its norm |a| = (aTa)1/2
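
A minimal sketch of these conditions (assuming NumPy; the vectors are made up for illustration):

```python
import numpy as np

# hypothetical example vectors, not from the lecture
a = np.array([3.0, 4.0])
q = a / np.sqrt(a @ a)    # divide by the norm |a| = (a^T a)^(1/2)
print(q @ q)              # ~1.0 -> unit length

q1 = np.array([1.0, 0.0])
q2 = np.array([0.0, 1.0])
print(q1 @ q2)            # 0.0 -> orthonormal pair: q1^T q2 = 0
```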

  4. orthonormal vectors are independent • i.e. no linear combination of them (except the zero combination) gives 0 • having an orthonormal basis is nice, it makes all the calculations better • and I can put orthonormal vectors q1 … qn into a matrix Q (when I use Q, I always mean orthonormal columns!!!) • In general, Q is a rectangular matrix; we can have e.g. just two columns with many rows. • Such a matrix Q is called a matrix with orthonormal columns. • The name orthogonal matrix (it should really be orthonormal matrix, but that term is rarely used) is reserved for the case when Q is square.

  5. However, suppose the columns of A are not orthonormal. How do I make them orthonormal? • This can be done by the Gram-Schmidt procedure: A → Q • In the last lecture we’ve been talking about ATA, so it’s natural to look at QTQ. • What is QTQ? • QTQ = I

  6. If Q is square and QTQ = I, what can you tell me about its inverse? • Its inverse is its transpose … Q-1 = QT • Example:
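
The slide’s example is an image that did not survive the transcript; as a hypothetical stand-in, a 2×2 rotation matrix shows the property:

```python
import numpy as np

theta = np.pi / 3                           # any angle; rotations are orthogonal
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(Q.T @ Q, np.eye(2)))      # True: Q^T Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))   # True: Q^-1 = Q^T
```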

  7. What’s the good of having Q? What formulas become easier? • I want to project onto Q’s column space. • What becomes of the projection matrix P = A(ATA)-1AT? • P = QQT • So all the messy equations from the last lecture become nice and cute when I have a matrix Q. • What do I mean by that? To compute ATA I must compute a lot of inner products. But if I have Q instead of A, QTQ = I, so the normal equations simplify: the i-th component of x̂ is just qiT times b.
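
A minimal sketch (NumPy; Q and b are made-up examples): projecting onto Q’s column space needs no matrix inversion at all:

```python
import numpy as np

# two orthonormal columns spanning the xy-plane in R^3 (illustrative choice)
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
b = np.array([2.0, 3.0, 5.0])

P = Q @ Q.T                  # projection matrix P = Q Q^T, no (A^T A)^-1 needed
x_hat = Q.T @ b              # i-th component of x_hat is q_i^T b
print(P @ b)                 # [2. 3. 0.] -> projection of b onto the plane
print(x_hat)                 # [2. 3.]
```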

  8. If I have just a matrix A with independent columns, I use the Gram-Schmidt orthonormalization procedure to calculate Q. • The column spaces of A and Q are the same. • And the connection between A and Q is a matrix R, and it can be shown that R is upper triangular. A = QR
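
In practice one rarely codes Gram-Schmidt by hand; a sketch using NumPy’s built-in QR factorization (A is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])              # independent columns, picked for illustration

Q, R = np.linalg.qr(A)                  # A = QR
print(np.allclose(Q @ R, A))            # True
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: orthonormal columns
print(np.allclose(R, np.triu(R)))       # True: R is upper triangular
```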

  9. Determinants based on excellent video lectures by Gilbert Strang, MIT http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/detail/lecture18.htm Lecture 18

  10. det A = |A| … the determinant of a matrix is a number associated with every square matrix A • the determinant is a test for invertibility • a matrix is invertible if |A| ≠ 0 • a matrix is singular if |A| = 0

  11. In our quest for determinants, we start with three key properties.

  12. Property 1 det I = 1 • Property 2 Exchanging two rows of a matrix reverses the sign of the det • Property 3 The determinant is a linear function of each row separately: multiplying row #1 by a number t multiplies the det by t, and the det of a matrix whose row #1 is the sum of row #1 of A and row #1 of A’ (all other rows shared) is det A + det A’

  13. The det of a triangular matrix is the product of its diagonal entries (except possibly for the sign). The upper triangular matrix U is the result of Gauss elimination, and det A is the product of the pivots on U’s diagonal, with the sign set by the number of row exchanges. So now we know how any reasonable software (Matlab, e.g.) calculates det: do the elimination, get U and multiply the diagonal elements. • det (A) = det (AT) • because of this property, all properties about rows (exchanging, all rows zero) are also valid about columns
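
A sketch of exactly this recipe, assuming SciPy for the elimination step (the matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])

P, L, U = lu(A)                      # elimination: A = P @ L @ U
sign = np.linalg.det(P)              # +1 or -1, one flip per row exchange
print(sign * np.prod(np.diag(U)))    # product of the pivots, with the sign
print(np.linalg.det(A))              # the same number
```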

  14. There exists an ugly formula (the Leibniz formula) involving a sum of terms (products of certain elements of A); determining the sign of each product involves permutations of numbers (i.e. combinatorics). • For those (not many, I guess) interested, I refer to http://en.wikipedia.org/wiki/Determinant#n-by-n_matrices

  15. Eigenvalues and eigenvectors based on excellent video lectures by Gilbert Strang, MIT http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/detail/lecture21.htm Lecture 21, 22

  16. Introduction • Matrices A are square and we’re looking for some special numbers, eigenvalues, and some special vectors, eigenvectors. • Matrix A acts on a vector x, the outcome is the vector Ax. • And the vectors I'm specially interested in are the ones that come out in the same direction they went in. • That won’t be typical. Most vectors Ax point in a different direction. • But some vectors Ax will be parallel to x. These are called eigenvectors. • By parallel I mean Ax = λx. • λ is called an eigenvalue, and there is not just one eigenvalue for a given A.

  17. Example: eigenvalues and eigenvectors of the projection matrix P (projection onto a plane). Is the vector b an eigenvector? No, it isn’t: its projection Pb points in another direction. Can you propose some eigenvector x? Any vector x in the plane is unchanged by P (Px = x), and this is telling me that λ = 1. Are there any other eigenvectors? I expect the answer to be yes; I am in 3D and I hope to get three independent eigenvectors. Two of them are in the plane, but where is the third? Perpendicular to the plane: Px = 0, so λ = 0. λ = 0, 1

  18. I will jump ahead a bit and tell you some nice things about eigenvalues. • An n x n matrix has n eigenvalues. • The sum of the eigenvalues is equal to the sum of the diagonal elements (the trace). • The product of the eigenvalues is the determinant. • Eigenvalues can be complex numbers, but the eigenvalues of symmetric matrices are always real numbers. • And let me introduce antisymmetric matrices. These are matrices for which AT = -A. • All their eigenvalues are purely imaginary.
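
A quick NumPy check of the trace and determinant facts (arbitrary example matrix):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
lam = np.linalg.eigvals(A)

print(np.isclose(lam.sum(), np.trace(A)))        # True: sum of eigenvalues = trace
print(np.isclose(lam.prod(), np.linalg.det(A)))  # True: product = determinant
```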

  19. What else can go wrong? • An n x n matrix has n eigenvalues, but they don’t have to be different. The number of eigenvalues sharing a given value, i.e. the multiplicity of the corresponding root of the characteristic polynomial, is the algebraic multiplicity of that eigenvalue. • If some eigenvalues are equal, the matrix may (but doesn’t have to!) run short of independent eigenvectors. The number of linearly independent eigenvectors belonging to an eigenvalue is its geometric multiplicity. When it falls below the algebraic multiplicity, the matrix is called degenerate, and again it’s a bad thing.

  20. Now the question: how do I find eigenvalues and eigenvectors? • How do I solve Ax = λx, where I have two unknowns (λ and x) in one equation? • rewrite as (A - λI)x = 0 • I don’t know λ or x, but I do know something here. I know that the matrix (A - λI) (i.e. matrix A shifted on the diagonal by some λ) must be singular. Otherwise the only solution would be just x = 0, not a particularly interesting eigenvector. • And what do I know about singular matrices? • Their determinant is zero!

  21. det(A - λI) = 0 • This is called the characteristic (or eigenvalue) equation. And I got rid of x in the equation! • Now I am in a position to find λ first (I find n eigenvalues, they don’t necessarily differ), and then to find the eigenvectors. • Eigenvectors are found by Gauss elimination, as the solution of (A - λI)x = 0.

  22. Example • Find the eigenvalues and eigenvectors for the given matrix. • Form A – λI. How will it look? • What is det(A – λI)? It is the characteristic polynomial.

  23. Find the roots of the polynomial; it must equal zero (the determinant must be equal to zero). • What are the roots of λ2 – λ – 2? • Isn’t it (λ + 1)(λ - 2)? So λ1 = -1, λ2 = 2

  24. For each λ find an eigenvector, generally by Gauss elimination, as the solution of (A - λI)x = 0. • However, we can guess in our case. What is A – λI for λ1 = -1? • and solve (A – λI)x = 0 • so what is the vector x1? • x1 = [1, 1]T (the eigenvector corresponding to λ1 = -1) • and similarly we proceed for λ2 = 2
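
The example matrix itself is an image missing from the transcript; any matrix with characteristic polynomial λ2 – λ – 2 serves for a sketch, e.g. this hypothetical stand-in:

```python
import numpy as np

A = np.array([[ 0.0, -1.0],
              [-2.0,  1.0]])      # hypothetical stand-in: trace 1, det -2

lam, S = np.linalg.eig(A)
print(lam)                        # -1 and 2 (NumPy's ordering may vary)

i = int(np.argmin(lam))           # pick lambda = -1
v = S[:, i]
print(v / v[0])                   # [1. 1.] -> eigenvector for lambda = -1

# the same roots straight from lambda^2 - lambda - 2 = 0
print(np.roots([1.0, -1.0, -2.0]))
```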

  25. If I have Ax = λx and Bx = αx, what about the eigenvalues of A + B? • (A + B)x = (λ + α)x. Right? • WRONG! • What’s wrong? • B will, generally, have different eigenvectors than A. A and B are different matrices. • Or A · B: its eigenvalues are, in general, not λ·α either. • So that was a word of caution. In these cases, the eigenvalue problem must be solved for A + B (or A·B) directly.

  26. What to do with them? • We’ve got eigenvalues/eigenvectors, but what do we do with them? • A small terminological diversion • A square matrix A is called diagonalizable if there exists an invertible matrix P such that P−1AP is a diagonal matrix. • Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix. • Eigendecomposition is used for diagonalization.

  27. Construct a matrix S, which will have the eigenvectors as columns. • Start with the multiplication AS: its columns are Ax1 = λ1x1, …, Axn = λnxn. • But I can factor the eigenvalues out of this matrix. How? By pulling them out to the right as a diagonal matrix Λ: AS = SΛ

  28. And now I can multiply by S-1, but only if S is invertible, i.e. if the n eigenvectors are independent. S-1AS = Λ A = SΛS-1 • There is a small number of matrices that don’t have n independent eigenvectors. Such matrices ARE NOT diagonalizable (defective matrices). • But if the eigenvalues are distinct, their eigenvectors are independent and EVERY such matrix is diagonalizable.
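
A minimal NumPy sketch of the factorization (arbitrary matrix with distinct eigenvalues):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])                # distinct eigenvalues -> diagonalizable

lam, S = np.linalg.eig(A)                 # columns of S are the eigenvectors
Lam = np.diag(lam)

print(np.allclose(np.linalg.inv(S) @ A @ S, Lam))   # True: S^-1 A S = Lambda
print(np.allclose(S @ Lam @ np.linalg.inv(S), A))   # True: A = S Lambda S^-1
```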

  29. Which matrices are diagonalizable (i.e. can be eigendecomposed)? • all λ’s are different (λ can be zero, the matrix is still diagonalizable. But it is singular.) • if some λ’s are the same, I must be careful, I must check the eigenvectors. • If I have repeated eigenvalues, I may or may not have n independent eigenvectors. • If the eigenvectors are independent, then it is diagonalizable (even if I have some repeated eigenvalues) … geometric multiplicity = algebraic • Diagonalizability is concerned with eigenvectors. (independent) • Invertibility is concerned with eigenvalues. (λ = 0)

  30. Example of a matrix with equal eigenvalues, but with plenty of different eigenvectors • The identity matrix I (n x n): • eigenvalues? • all ones (n ones) • but there is no shortage of eigenvectors • it’s got plenty of eigenvectors, I choose n independent ones

  31. diagonal matrix • eigenvalues are on the diagonal • eigenvectors are columns with 0 and 1 • triangular matrix • eigenvalues are sitting along the diagonal • symmetric matrix (n x n) • real eigenvalues, real eigenvectors • n independent eigenvectors • they can be chosen such that they are orthonormal (A = QΛQT) • orthogonal matrix • eigenvalues have absolute value 1 (|λ| = 1)

  32. Some facts about eigen • What does it mean if we have λ = 0? • Ax = λx = 0 • So what follows from this? • A has linearly dependent rows/columns. • if A has λ = 0, then A is singular • if A has eigenvalue λi, then A-1 has eigenvalue λi-1, and the eigenvectors of A and A-1 are the same • The sum of the eigenvalues is equal to the sum of the diagonal elements (the trace). • The product of the eigenvalues is the determinant.

  33. By some unfortunate accident a new species of frog has been introduced into an area where it has too few natural predators. In an attempt to restore the ecological balance, a team of scientists is considering introducing a species of bird which feeds on this frog. Experimental data suggest that the populations of frogs and birds from one year to the next can be modeled by linear relationships. Specifically, it has been found that if the quantities Fk and Bk represent the populations of the frogs and birds in the kth year, then Fk+1 and Bk+1 are fixed linear combinations of Fk and Bk. The question is this: in the long run, will the introduction of the birds reduce or eliminate the frog population growth?

  34. So this system evolves in time according to x(k+1) = Ax(k) (k = 0, 1, …). Such a system is called a discrete linear dynamical system, and the matrix A is called the transition matrix. • So if we need to know the state of the system in time k = 50, we have to compute x(50) = A50x(0). The 50th power of matrix A, that’s a lot of matrix multiplications.

  35. However, it can be easily proved that if matrix A is diagonalizable (i.e. A = PDP−1, where D is a diagonal matrix), then Ak = PDkP-1, and Dk is easy peasy. • That’s one practical application of matrix diagonalization – matrix powers. • And in the last lecture we have shown that solving the eigenproblem and diagonalization are equivalent! A = SΛS-1 • Whenever you read “matrix was diagonalized”, it means its eigenvalues/vectors were found.
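
A sketch of the trick in NumPy (the transition matrix and initial populations are hypothetical stand-ins, not the slide’s frog/bird data):

```python
import numpy as np

A  = np.array([[0.6, 0.1],
               [0.4, 0.9]])        # hypothetical transition matrix
x0 = np.array([100.0, 10.0])       # hypothetical initial populations

lam, P = np.linalg.eig(A)          # A = P D P^-1
D50 = np.diag(lam ** 50)           # powering a diagonal matrix is easy peasy
A50 = P @ D50 @ np.linalg.inv(P)

print(A50 @ x0)                                         # x(50)
print(np.allclose(A50, np.linalg.matrix_power(A, 50)))  # True: matches brute force
```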

  36. Eigendecomposition demonstration • http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/tools/ • http://ocw.mit.edu/ans7870/18/18.06/javademo/Eigen/

  37. Symmetric matrices based on excellent video lectures by Gilbert Strang, MIT http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/detail/lecture25.htm Lecture 25 http://ocw.mit.edu/OcwWeb/Mathematics/18-06Spring-2005/VideoLectures/detail/lecture27.htm Lecture 27

  38. Eigenvalues/eigenvectors • eigenvalues are real • eigenvectors are perpendicular (orthogonal) • what do I mean by this? • have a look at the identity matrix I, it certainly is symmetric, what are its eigenvalues? • ones • and the eigenvectors? • every vector is an eigenvector • so I may choose those that are orthogonal • If the eigenvalues of a symmetric matrix are different, its eigenvectors are orthogonal. If some eigenvalues are equal, orthogonal eigenvectors may still be chosen.

  39. usual case: A = SΛS-1 • symmetric case: A = QΛQT • so a symmetric matrix can be factored into this form: orthogonal times diagonal times transpose of orthogonal • this factorization completely displays the eigenvalues, the eigenvectors, and the symmetry (QΛQT is clearly symmetric) • This is called the spectral theorem. • The spectrum is the set of eigenvalues of a matrix.
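
NumPy has a dedicated routine for the symmetric case; a minimal sketch (arbitrary symmetric matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                     # symmetric

lam, Q = np.linalg.eigh(A)                     # eigh is for symmetric/Hermitian A
print(lam)                                     # [1. 3.] -> real eigenvalues
print(np.allclose(Q.T @ Q, np.eye(2)))         # True: orthonormal eigenvectors
print(np.allclose(Q @ np.diag(lam) @ Q.T, A))  # True: A = Q Lambda Q^T
```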

  40. Every symmetric matrix breaks up into these pieces, with real λ and orthonormal eigenvectors: A = λ1q1q1T + … + λnqnqnT. • q1q1T – what kind of matrix is this? • OK, I don’t expect you to remember what we were talking about two weeks ago. • It was about matrices of the form qqT/qTq. • In our case qTq is what? • one • And q1q1T is a projection matrix!

  41. Every symmetric matrix is a combination of perpendicular projection matrices. • That's another way that people like to think of the spectral theorem.
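
Continuing the sketch above, the weighted projection pieces really do add up to A:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, Q = np.linalg.eigh(A)

# each q_i q_i^T is a rank-one projection; weight it by lambda_i and sum
pieces = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(len(lam)))
print(np.allclose(pieces, A))    # True: A = sum_i lambda_i q_i q_i^T
```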

  42. For symmetric matrices the eigenvalues are real. Question: are they positive or negative? • Symmetric matrices with all eigenvalues positive are called positive definite. • Can I answer this question without actually computing the whole spectrum? • What’s the sign of the determinant of such matrices? • The determinant is positive. • But that’s not enough!

  43. So I have to check all the leading subdeterminants; they must all be positive. • Matrices with eigenvalues greater than or equal to zero are called positive semidefinite.

  44. Definition: positive definite matrices have xTAx > 0 for every x (where x is any non-zero real vector). • So we have three tests of positive definiteness: • all eigenvalues positive • determinant and all leading subdeterminants positive • xTAx > 0 for every nonzero x
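
A sketch of all three tests in NumPy (the matrix is an arbitrary positive definite example):

```python
import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

# test 1: all eigenvalues positive
print(np.all(np.linalg.eigvalsh(A) > 0))                   # True

# test 2: determinant and all leading subdeterminants positive
print(all(np.linalg.det(A[:k, :k]) > 0 for k in (1, 2)))   # True

# test 3 (spot check only): x^T A x > 0 for a random nonzero x
x = np.random.randn(2)
print(x @ A @ x > 0)                                       # True
```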

  45. Symmetric matrices are good, but positive definite matrices are the best! • Positive definite matrices are very important in applications. • Positive definite matrices can be factorized using the Cholesky decomposition M = LLT = RTR (L – lower triangular, R – upper). • So we need to store just one factor (L or R). • This factorization takes into account the symmetry of M. • In least squares ATAx = ATb, ATA is always symmetric positive definite provided that all columns of A are independent.
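
And a final sketch of the Cholesky factorization (arbitrary positive definite M):

```python
import numpy as np

M = np.array([[4.0, 2.0],
              [2.0, 3.0]])             # symmetric positive definite

L = np.linalg.cholesky(M)              # lower triangular factor
print(np.allclose(L @ L.T, M))         # True: M = L L^T
print(np.allclose(L, np.tril(L)))      # True: L is lower triangular
# R = L.T is the upper triangular factor: M = R^T R
```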
