Eigenvalue problems, 4th Seminar
An introduction to iterative projection methods
Luiza Bondar
23rd of November 2005
Seminar series
• Introduction (Erwin)
• Perturbation analysis (Nico)
• Direct (global) methods (Peter)
• Introduction to projection methods (Luiza): theoretical background
• Krylov subspace methods 1 (Mark)
• Krylov subspace methods 2 (Willem)
Outline • Introduction • The power method • Projection Methods • Subspace iteration • Summary
Introduction
Direct methods (Schur decomposition, QR iteration, Jacobi method, method of Sturm sequences) compute all the eigenvalues and the corresponding eigenvectors. What if we DON'T need all the eigenvalues? Example: compute the page rank of the documents on the WWW.
Introduction
The Web as a graph: pages are nodes, links are edges.
Introduction
Web graph: 1.4 billion nodes (pages), 6.6 billion edges (links). The page rank of page i is the probability that a surfer will visit page i; the page rank is thus a vector of dimension N = 1.4 billion. The page rank is a dominant eigenvector of a sparse 1.4 billion × 1.4 billion matrix, so it makes little sense to compute all the eigenvectors.
The power method
computes the dominant eigenvalue and an associated eigenvector.
Some background: consider that $A$ has $p$ distinct eigenvalues $\lambda_1, \dots, \lambda_p$. Let $m_i$ be the algebraic multiplicity of $\lambda_i$ and $P_i$ the spectral projection onto the invariant subspace associated with $\lambda_i$. The eigenvalue $\lambda_i$ is semi-simple if its algebraic and geometric multiplicities coincide; in that case $A^k P_i = \lambda_i^k P_i$.
The power method
Consider that the dominant eigenvalue $\lambda_1$ (with $|\lambda_1| > |\lambda_i|$ for $i \ge 2$) is unique and semi-simple. Choose an initial vector $x_0$ such that $P_1 x_0 \neq 0$ and take it normalised, $\|x_0\| = 1$. Then repeat: compute $x_{k+1} = A x_k / \|A x_k\|$; if converged, stop, otherwise iterate again (a sketch follows below).
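A minimal NumPy sketch of this loop; the function name, tolerance, iteration cap, and the Rayleigh-quotient stopping test are my own choices, not from the slides:

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=1000):
    """Power method: repeated multiplication by A plus normalisation.
    Assumes a unique, semi-simple dominant eigenvalue and an initial
    vector with a nonzero component in the dominant direction."""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y                   # Rayleigh-quotient eigenvalue estimate
        x = y / np.linalg.norm(y)         # normalise to avoid over/underflow
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, x             # converged
        lam = lam_new
    return lam, x

# toy usage: dominant eigenpair of a random symmetric matrix
rng = np.random.default_rng(0)
S = rng.standard_normal((50, 50))
lam, v = power_method((S + S.T) / 2, rng.standard_normal(50))
```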
The power method
To see why the method converges, use $A^k x_0 = \sum_{i=1}^{p} \lambda_i^k P_i x_0 = \lambda_1^k \big( P_1 x_0 + \sum_{i=2}^{p} (\lambda_i/\lambda_1)^k P_i x_0 \big)$. The convergence of each term in the sum is given by $|\lambda_i/\lambda_1|^k \to 0$ for $i \ge 2$, so the normalised iterate $x_k = A^k x_0 / \|A^k x_0\|$ converges to the direction of $P_1 x_0$, an eigenvector associated with $\lambda_1$. The power method is used by Google to compute the page rank.
The power method
• the convergence of the method is given by the ratio $|\lambda_2/\lambda_1|$
• the convergence might be very slow if $\lambda_1$ and $\lambda_2$ are close to one another
• if the dominant eigenvalue is multiple but semi-simple, then the algorithm provides only one eigenvalue and a corresponding eigenvector
• it does not converge if the dominant eigenvalue is complex and the original matrix is real (2 eigenvalues with the same modulus)
IMPROVEMENT: the shifted power method. LED TO: projection methods.
The power method
Shifted power method: iterate with $A + \sigma I$ instead of $A$.
Example
• let $\lambda_1$ be the dominant eigenvalue of a matrix $A$ that also has the eigenvalue $-\lambda_1$
• then the power method does not converge when applied to $A$
• but the power method converges for a suitable shift $\sigma$, since the dominant eigenvalue $\lambda_1 + \sigma$ of $A + \sigma I$ is then unique
Other variants of the power method
• inverse power method (iterates with $A^{-1}$) → smallest eigenvalue
• inverse power method with shift (iterates with $(A - \sigma I)^{-1}$) → eigenvalue closest to the shift
The power method
Inverse power method: apply the power method to $A^{-1}$, whose dominant eigenvalue is $1/\lambda_n$ ($\lambda_n$ being the eigenvalue of $A$ smallest in modulus); the eigenvalue estimates then converge to the smallest eigenvalue $\lambda_n$ and the iterates converge to an associated eigenvector.
Inverse power method with shift: apply the power method to $(A - \sigma I)^{-1}$, whose eigenvalues are $1/(\lambda_i - \sigma)$; the estimates then converge to the eigenvalue closest to $\sigma$ and the iterates converge to an eigenvector associated with it (see the sketch below).
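A sketch of the shifted variant with SciPy (with $\sigma = 0$ it reduces to the plain inverse power method); names and tolerances are again my own, and the shift is assumed not to be an eigenvalue so that the shifted matrix is invertible:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shifted_inverse_power(A, sigma, x0, tol=1e-10, max_iter=500):
    """Power method applied to (A - sigma I)^{-1}: the estimates converge
    to the eigenvalue of A closest to the shift sigma.

    The shifted matrix is factorised once and the factors are reused
    in every iteration instead of forming the inverse explicitly."""
    lu = lu_factor(A - sigma * np.eye(A.shape[0]))
    x = x0 / np.linalg.norm(x0)
    lam = sigma
    for _ in range(max_iter):
        y = lu_solve(lu, x)               # y = (A - sigma I)^{-1} x
        mu = x @ y                        # dominant eigenvalue of the inverse
        x = y / np.linalg.norm(y)
        lam_new = sigma + 1.0 / mu        # map back: mu = 1/(lambda - sigma)
        if abs(lam_new - lam) < tol * max(abs(lam_new), 1.0):
            return lam_new, x
        lam = lam_new
    return lam, x
```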
The power method
The power method does not converge if the dominant eigenvalue is complex and the original matrix is real (2 eigenvalues with the same modulus). But after a certain $k$, the pair $x_k, x_{k+1}$ contains approximations to the complex pair of eigenvectors.
IDEA: extract the eigenvectors by performing a projection onto the subspace $\mathrm{span}\{x_k, x_{k+1}\}$, as in the sketch below.
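A small self-contained demo of this idea (the test matrix, iteration count, and seed are my own choices): a real matrix whose dominant eigenvalues are the complex pair $2e^{\pm i\theta}$; the power iterate never settles, but a 2-dimensional projection recovers the pair.

```python
import numpy as np

# Real 5x5 test matrix whose dominant eigenvalues are the complex
# conjugate pair 2*exp(+/- i*theta): a scaled 2x2 rotation block,
# padded with smaller real eigenvalues.
theta = 0.7
A = np.zeros((5, 5))
A[:2, :2] = 2.0 * np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])
A[2:, 2:] = np.diag([0.5, 0.3, 0.1])

# Plain power iterations keep rotating inside the invariant plane
# of the complex pair instead of converging.
x = np.random.default_rng(1).standard_normal(5)
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)

# But span{x_k, A x_k} approximates that invariant plane, so a
# projection onto it reduces the task to a 2x2 eigenvalue problem.
V, _ = np.linalg.qr(np.column_stack([x, A @ x]))
ritz = np.linalg.eigvals(V.T @ A @ V)
print(ritz)          # approximately 2*exp(+1j*theta), 2*exp(-1j*theta)
```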
Projection methods (Introduction)
Find $\tilde{\lambda}$ and $\tilde{u} = \alpha x_k + \beta x_{k+1}$ such that $A\tilde{u} \approx \tilde{\lambda}\tilde{u}$. Writing $\tilde{u}$ as a combination of the two iterates introduces 2 degrees of freedom ($\alpha$ and $\beta$), so we must impose 2 more constraints. One choice is to impose orthogonality conditions (Galerkin), i.e., $(A\tilde{u} - \tilde{\lambda}\tilde{u}) \perp x_k$ and $(A\tilde{u} - \tilde{\lambda}\tilde{u}) \perp x_{k+1}$: a projection method.
Projection methods (Introduction)
Generalization: find $\tilde{\lambda} \in \mathbb{C}$ and $\tilde{u} \in K$ such that $(A\tilde{u} - \tilde{\lambda}\tilde{u}) \perp L$, with $\dim K = \dim L = m$. $K$ is the right subspace, $L$ is the left subspace. A projection technique thus seeks an approximate eigenpair $(\tilde{\lambda}, \tilde{u})$ with $\tilde{u} \in K$ and the residual orthogonal to $L$: an orthogonal projection if $L = K$, an oblique projection if $L \neq K$. A way to construct $K$ is the Krylov subspace $K_m(A, v) = \mathrm{span}\{v, Av, \dots, A^{m-1}v\}$ (inspired by the power method).
Projection methods (orthogonal)
Consider an orthonormal basis $V = [v_1, \dots, v_m]$ of $K$; the approximate eigenvector can be written as $\tilde{u} = Vy$. The Galerkin condition $V^H (AVy - \tilde{\lambda} Vy) = 0$ gives the $m \times m$ problem $B_m y = \tilde{\lambda} y$ with $B_m = V^H A V$:
• if $\tilde{\lambda}$ is an eigenvalue of $B_m$, then $\tilde{\lambda}$ is an approximate eigenvalue of $A$
• if $y$ is an eigenvector of $B_m$, then $\tilde{u} = Vy$ is an approximate eigenvector of $A$
Arnoldi's method and the hermitian Lanczos algorithm are orthogonal projection methods. A sketch follows below.
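A sketch of this orthogonal projection (Rayleigh–Ritz) procedure on a Krylov subspace, as suggested on the previous slide. This is a deliberately naive construction, assuming a small $m$: the basis is built from explicit powers of $A$ and orthonormalised in one shot with QR, whereas Arnoldi builds the same basis stably one vector at a time.

```python
import numpy as np

def rayleigh_ritz(A, v0, m):
    """Orthogonal projection (Rayleigh-Ritz) onto the Krylov subspace
    K_m = span{v0, A v0, ..., A^{m-1} v0}.

    Keep m small: the power basis becomes ill-conditioned quickly."""
    n = A.shape[0]
    K = np.empty((n, m), dtype=complex)
    K[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(1, m):
        K[:, j] = A @ K[:, j - 1]         # next power of A applied to v0
    V, _ = np.linalg.qr(K)                # orthonormal basis V of K_m
    B = V.conj().T @ A @ V                # projected m x m matrix B = V^H A V
    theta, Y = np.linalg.eig(B)           # Ritz values: approximate eigenvalues
    return theta, V @ Y                   # Ritz vectors: u = V y
```

The residual norm $\|A\tilde{u} - \tilde{\lambda}\tilde{u}\|$ of each Ritz pair indicates how good the approximation is.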
Projection methods (oblique)
Search for $\tilde{\lambda}$ and $\tilde{u} \in K$ such that $(A\tilde{u} - \tilde{\lambda}\tilde{u}) \perp L$. Let $V = [v_1, \dots, v_m]$ be a basis of $K$ and $W = [w_1, \dots, w_m]$ a basis of $L$, chosen such that $W^H V = I$ (biorthogonal). The approximate eigenvector can be written as $\tilde{u} = Vy$. The condition $(A\tilde{u} - \tilde{\lambda}\tilde{u}) \perp L$ leads to the approximate eigenvalue problem $W^H A V y = \tilde{\lambda} y$. The nonhermitian Lanczos algorithm is an oblique projection method.
Projection methods (orthogonal)
How accurate can an orthogonal projection method be? Let $(\lambda, u)$ be an exact eigenpair of $A$ and $P$ the orthogonal projection onto $K$. With $A_m = PAP$ and $\gamma = \|PA(I - P)\|_2$ one has $\|(A_m - \lambda I)Pu\|_2 \le \gamma \, \|(I - P)u\|_2$: the achievable accuracy is governed by the distance $\|(I - P)u\|_2$ of the exact eigenvector from the subspace $K$.
Projection methods (orthogonal)
Hermitian case: sharper bounds hold. In particular, the approximate eigenvalues are accurate to second order: the eigenvalue error behaves like the square of the distance of the exact eigenvector from $K$.
Subspace iteration
A generalization of the power method: start with an initial system of $m$ vectors $X_0 = [x_1, \dots, x_m]$ instead of only one vector, and compute the matrix $A^k X_0$. If each of the $m$ columns is normalised individually, in the same way as for the power method, then each of these vectors will converge to the SAME eigenvector, the one associated with the dominant eigenvalue (provided that $|\lambda_1| > |\lambda_2|$).
Note: in finite precision $A^k X_0$ loses its linear independence.
IDEA: restore the linear independence by performing a QR factorisation.
Subspace iteration
Start with $X_0$ with orthonormal columns. Repeat: compute $Z = A X_k$; QR factorize $Z = QR$ and take $X_{k+1} = Q$; if not converged, iterate again. On convergence, recover the first $m$ eigenvalues and corresponding eigenvectors of $A$ from the projected $m \times m$ problem $X^H A X$ (see the sketch below).
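A minimal NumPy sketch of this loop; the residual-based convergence test, tolerance, and seed are my own choices:

```python
import numpy as np

def subspace_iteration(A, m, tol=1e-8, max_iter=500, seed=0):
    """Sketch of subspace (orthogonal) iteration: a block power method
    with a QR factorisation after every step to restore the linear
    independence of the columns."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, m)))   # orthonormal start
    for _ in range(max_iter):
        X, _ = np.linalg.qr(A @ X)        # multiply by A, re-orthonormalise
        B = X.conj().T @ A @ X            # projected m x m matrix
        if np.linalg.norm(A @ X - X @ B) < tol * np.linalg.norm(B):
            break                         # columns span an invariant subspace
    # recover approximate eigenpairs of A from the small projected problem
    theta, Y = np.linalg.eig(B)
    return theta, X @ Y
```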
Subspace iteration
• the $i$-th column of $X_k$ converges to a Schur vector associated with the eigenvalue $\lambda_i$
• the convergence of the $i$-th column is given by the factor $|\lambda_{i+1}/\lambda_i|$
• the speed of convergence for an eigenvalue thus depends on how close it is to the next one
Variants of the subspace iteration method
• take the dimension of the subspace $m$ larger than the number nev of eigenvalues wanted
• perform "locking", i.e., as soon as an eigenvalue has converged, stop multiplying the corresponding vector by $A$ in the subsequent iterations
Subspace iteration
Some very theoretical result on the residual norm. Let $P$ be the spectral projection onto the subspace spanned by the eigenvectors associated with the first $m$ eigenvalues of $A$, and let $P_k$ be the orthogonal projection onto $\mathrm{span}(X_k)$. Assume that the projected initial vectors $P x_1, \dots, P x_m$ are linearly independent. Then for any eigenvector $u_i$ associated with one of the first $m$ eigenvalues of $A$ there is a unique $s_i \in \mathrm{span}(X_0)$ such that $P s_i = u_i$, and $\|(I - P_k) u_i\| \to 0$ as $k \to \infty$.
Summary
• The power method can be used to compute the dominant eigenvalue (real) and a corresponding eigenvector.
• Variants of the power method can compute the smallest eigenvalue or the eigenvalue closest to a given number (the shift).
• General projection methods approximate the eigenvectors of a matrix by vectors belonging to a subspace of approximants whose dimension is smaller than the dimension of the matrix.
• The subspace iteration method is a generalization of the power method that computes a given number of dominant eigenvalues and their corresponding eigenvectors.
Last minute questions answered by Tycho van Noorden and Sorin Pop.