
MAT208



Presentation Transcript


  1. MAT208 Gram-Schmidt Process Fall 2009

  2. Orthonormal Basis • Let S = { v1, v2, …, vn } be a basis for an inner product space V. Then S is an orthonormal basis for V if a) (vi, vj) = 0 for i ≠ j, and b) (vi, vi) = 1 for all i.

  3. Theorem Let S = { v1, v2, …, vn } be an orthonormal basis for an inner product space V and let v be any vector in V. Then v = c1v1 + c2v2 + … + cnvn, where ci = (v, vi) for all i.
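As a numerical illustration of the theorem, here is a minimal NumPy sketch; the orthonormal basis of R^2 and the vector v below are made-up examples, and the inner product is the standard dot product.

```python
import numpy as np

# A made-up orthonormal basis of R^2 under the standard dot product.
v1 = np.array([1.0, 1.0]) / np.sqrt(2)
v2 = np.array([1.0, -1.0]) / np.sqrt(2)

v = np.array([3.0, 1.0])                 # an arbitrary vector to expand

# By the theorem, the coordinates are just inner products with the basis vectors.
c1 = np.dot(v, v1)
c2 = np.dot(v, v2)

print(c1, c2)                            # approximately 2.8284 and 1.4142
print(np.allclose(c1 * v1 + c2 * v2, v)) # True: v = c1 v1 + c2 v2
```

Each coefficient comes from a single inner product; no linear system has to be solved, which is the convenience referred to on slide 5.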

  4. Proof Take the inner product of both sides of v = c1v1 + c2v2 + … + cnvn with vi; by orthonormality every term on the right vanishes except the i-th, which equals ci(vi, vi) = ci, so ci = (v, vi). The expansion is written out below.
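Written out in LaTeX, the expansion uses only linearity of the inner product in its first argument and the orthonormality conditions from slide 2:

```latex
(v, v_i) = \Bigl(\sum_{j=1}^{n} c_j v_j,\; v_i\Bigr)
         = \sum_{j=1}^{n} c_j\,(v_j, v_i)
         = c_i\,(v_i, v_i)
         = c_i .
```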

  5. Gram-Schmidt Process • There is more at issue here than just convenience in computing coefficients. Orthogonal bases can make computations more numerically stable. • If S = { u1, u2, …, un } is a basis (not orthonormal) for an inner product space V, is there a way to convert it to an orthonormal basis?

  6. Gram-Schmidt Process • Replace the basis with an orthonormal basis: set v1 = u1, and for k = 2, …, n let vk = uk − ((uk, v1)/(v1, v1)) v1 − … − ((uk, vk−1)/(vk−1, vk−1)) vk−1; finally, normalize each vk. A sketch of this procedure in code appears below.
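A minimal NumPy sketch of the classical Gram-Schmidt procedure, assuming the standard dot product on R^n; the function name gram_schmidt and the example basis are illustrative, not from the slides.

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn a list of linearly independent
    vectors into an orthonormal list spanning the same subspace."""
    orthonormal = []
    for u in vectors:
        w = np.array(u, dtype=float)
        # Subtract the components of u along the directions already determined.
        for q in orthonormal:
            w = w - np.dot(w, q) * q
        # Normalize (assumes the input vectors are linearly independent).
        orthonormal.append(w / np.linalg.norm(w))
    return orthonormal

# Example: orthonormalize a made-up basis of R^3.
basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
for q in gram_schmidt(basis):
    print(q)
```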

  7. Gram-Schmidt Process Example (continued) • The orthonormal set is obtained by normalizing each of the orthogonal vectors computed in the previous step.

  8. Comments • The key idea in Gram-Schmidt is to subtract from every new vector uk its components in the directions already determined, v1, v2, …, vk−1. • When doing Gram-Schmidt by hand, it simplifies the calculation to multiply the newly computed vk by an appropriate scalar to clear fractions in its components; scaling does not affect orthogonality. The resulting vectors are normalized at the end of the computation.
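For instance (a made-up vector, just to illustrate the fraction-clearing tip): if a step produces

```latex
v_3 = \left(\tfrac{1}{2},\, -\tfrac{1}{3},\, 1\right)
\qquad\longrightarrow\qquad
6\,v_3 = (3,\, -2,\, 6),
```

then one may continue with 6v3 instead: it is still orthogonal to v1 and v2, and normalizing at the end gives the same unit vector, since 6v3/‖6v3‖ = v3/‖v3‖.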

  9. QR Factorization • In the Gram-Schmidt example, the original basis vectors and the resulting orthonormal vectors can be interpreted as the columns of matrices A and Q, respectively. With this interpretation, A = QR for an upper triangular matrix R. This is called the QR factorization of A.
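A short sketch of how Q and R arise from the Gram-Schmidt vectors; it assumes the gram_schmidt function from the earlier sketch is in scope, and the matrix A below is a made-up example whose columns play the role of the original basis u1, u2, u3.

```python
import numpy as np

# Columns of A are the original (non-orthonormal) basis vectors.
A = np.column_stack([np.array([1.0, 1.0, 0.0]),
                     np.array([1.0, 0.0, 1.0]),
                     np.array([0.0, 1.0, 1.0])])

Q = np.column_stack(gram_schmidt(list(A.T)))  # orthonormal columns from Gram-Schmidt
R = Q.T @ A                                   # entries r_ij = (v_i, u_j)

print(np.allclose(A, Q @ R))                  # True: A = QR
print(np.allclose(R, np.triu(R)))             # True: R is upper triangular
```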

  10. Comments • Computer programs that compute the QR factorization use an algorithm that is different from that of the proof, which is essentially Gram-Schmidt. • MATLAB's qr function, applied to an m × n matrix A, returns an m × m matrix Q with orthonormal columns and an m × n matrix R that is upper triangular (when m ≥ n its last m − n rows are zero). If A has rank n, the first n columns of Q form an orthonormal basis for the column space of A, and A = QR.
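NumPy provides a comparable routine; a small sketch with a made-up 3 × 2 matrix, where mode='complete' requests the full m × m Q described above:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                 # made-up 3 x 2 example, rank 2

Q, R = np.linalg.qr(A, mode='complete')    # Q is 3 x 3, R is 3 x 2 upper triangular

print(Q.shape, R.shape)                    # (3, 3) (3, 2)
print(np.allclose(A, Q @ R))               # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(3)))     # True: columns of Q are orthonormal
```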

  11. Definitions • A square matrix Q that has orthonormal columns is called an orthogonal matrix. • Because of the orthonormal columns, QᵀQ = I; therefore Q⁻¹ = Qᵀ.
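A quick numerical check of these identities, using a 2 × 2 rotation matrix as a made-up example of an orthogonal matrix:

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation matrix is orthogonal

print(np.allclose(Q.T @ Q, np.eye(2)))            # True: Q^T Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))         # True: Q^(-1) = Q^T
```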
