QR-RLS Algorithm Cy Shimabukuro EE 491D 05-13-05
Overview • What is QR-RLS • Different methods of Computation • Simulation • Results
QR-RLS? The QR-RLS algorithm is used to solve linear least-squares problems. The QR decomposition is the basis of the method; the same decomposition also underlies the QR algorithm, a procedure that produces the eigenvalues of a matrix.
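That eigenvalue connection can be seen with the basic, unshifted QR iteration. The short MATLAB sketch below uses an assumed 3-by-3 symmetric test matrix (not from the slides): repeatedly factoring and re-multiplying drives the iterate toward a matrix whose diagonal holds the eigenvalues.

A0 = [4 1 0; 1 3 1; 0 1 2];    % assumed symmetric test matrix
A  = A0;
for k = 1:100
    [Q, R] = qr(A);            % factor the current iterate
    A = R * Q;                 % similarity transform: A <- Q' * A * Q
end
disp(sort(diag(A)).')          % diagonal converges to the eigenvalues
disp(sort(eig(A0)).')          % agrees with MATLAB's eig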
Advantage Using the QR method is not about speed, but about numerical stability How? • It proceeds by numerically well-behaved orthogonal transformations • It works directly with the data from the decomposition, eliminating the need to form the correlation matrix
Computing QR Decomp. • Gram-Schmidt Process • Householder Transformation • a.k.a Householder reflection • Givens Rotation
Gram-Schmidt • A method of orthogonalizing a set of vectors • This method is numerically unstable • The computed vectors are not truly orthogonal because of rounding errors • Loss of orthogonality is bad, as the sketch below illustrates
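A minimal MATLAB sketch of classical Gram-Schmidt (the function name and the Hilbert-matrix test are assumptions, not from the slides); the final check shows how far the computed Q drifts from orthogonality on an ill-conditioned input.

function [Q, R] = cgs_qr(A)
    % Classical Gram-Schmidt: orthogonalize the columns of A one at a time.
    [n, m] = size(A);
    Q = zeros(n, m);
    R = zeros(m, m);
    for j = 1:m
        v = A(:, j);
        for i = 1:j-1
            R(i, j) = Q(:, i)' * A(:, j);   % component along an earlier direction
            v = v - R(i, j) * Q(:, i);      % remove that component
        end
        R(j, j) = norm(v);
        Q(:, j) = v / R(j, j);              % normalize what remains
    end
end

% Save the function as cgs_qr.m, then run:
A = hilb(8);                     % Hilbert matrix: badly conditioned
[Q, R] = cgs_qr(A);
disp(norm(Q' * Q - eye(8)))      % noticeably nonzero: orthogonality is lost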
Householder • Used to calculate QR decompositions • A reflection of a vector about a plane (more generally, a hyperplane) through the origin • The hyperplane is defined by a unit vector orthogonal to it (the Householder vector)
Householder • Used to zero out subdiagonal elements • A is decomposed as A = QR, where Q^T = Hn…H2H1 is the orthogonal product of Householder reflections and R is upper triangular • The overdetermined system Ax = b is transformed into the easy-to-solve triangular system Rx = Q^T b
Householder • Properties of each Householder reflection H: • Symmetric: H = H^T • Orthogonal: H^{-1} = H^T • Therefore also involutory: H^2 = I • The Householder transformation method has better numerical stability than Gram-Schmidt
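A minimal Householder QR sketch in MATLAB (the function name and sign convention are assumptions, not the original material): each reflector H_k = I - 2*v*v' zeros the subdiagonal of one column, and accumulating the reflectors gives Q.

function [Q, R] = house_qr(A)
    % Householder QR: R becomes upper triangular, Q accumulates the reflectors.
    [n, m] = size(A);
    Q = eye(n);
    R = A;
    for k = 1:min(n-1, m)
        x = R(k:n, k);
        s = sign(x(1));  if s == 0, s = 1; end
        v = x;  v(1) = v(1) + s * norm(x);   % sign chosen to avoid cancellation
        if norm(v) > 0
            v = v / norm(v);
            R(k:n, :) = R(k:n, :) - 2 * v * (v' * R(k:n, :));   % R <- H_k * R
            Q(:, k:n) = Q(:, k:n) - 2 * (Q(:, k:n) * v) * v';   % Q <- Q * H_k
        end
    end
end

For an overdetermined Ax = b this ties back to the previous slide: the least-squares solution follows from the triangular system R(1:m, 1:m) * x = c(1:m), where c = Q' * b.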
Givens Rotation • Another transformation used to find the Q matrix • The method zeros out one element of the matrix at a time • Most useful because: • No new matrix has to be built; the original is manipulated in place • Less work, since only the elements that need to be zeroed are touched • Much more easily parallelized
The Matrix • The Givens rotation acts on two rows at a time through the 2-by-2 block [c s; -s c], where ‘c’ represents cos(θ) and ‘s’ represents sin(θ)
Properties • The cosine parameter c is always real, but the sine parameter s is complex when dealing with complex data • The parameters c and s are always constrained by the trigonometric relation c^2 + |s|^2 = 1 • The Givens rotation is non-Hermitian • The Givens rotation is unitary • The Givens rotation is length preserving
How Givens Rotations Work • Each rotation Gi is an identity matrix with the 2-by-2 rotation block embedded in two selected rows and columns • Applying the sequence Gm Gm-1 Gm-2 ... G2 G1 to a matrix U zeros the subdiagonal elements one at a time, column by column, until the output is upper triangular
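A minimal MATLAB sketch (names assumed, not from the slides): the pair (c, s) is computed so that the rotation annihilates one chosen entry, and the same 2-by-2 block applied to two rows of a larger matrix zeros a single subdiagonal element.

function [c, s] = givens_cs(a, b)
    % Rotation parameters so that [c s; -s c] * [a; b] = [r; 0].
    if b == 0
        c = 1;  s = 0;
    else
        r = hypot(a, b);     % sqrt(a^2 + b^2), computed without overflow
        c = a / r;
        s = b / r;
    end
end

% Zeroing one subdiagonal element by rotating two rows of a matrix:
A = magic(4);                              % assumed test matrix
[c, s] = givens_cs(A(1, 1), A(2, 1));
A([1 2], :) = [c s; -s c] * A([1 2], :);   % A(2,1) is now (numerically) zero
disp(A)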
QR-RLS Algorithm • Data matrix: A(n) is the n-by-M matrix whose i-th row is the tap-input vector, i.e. the M most recent input samples at time i - M represents the number of FIR filter coefficients
• Φ(n) = A^H(n) Λ(n) A(n) represents the exponentially weighted correlation matrix • Λ(n) = diag(λ^(n-1), λ^(n-2), …, λ, 1) is the exponential weighting matrix • λ is the exponential weighting factor, 0 < λ ≤ 1
Simulations • QR decomposition RLS adaptation algorithm • Program used: MATLAB
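A minimal QR-RLS system-identification sketch in MATLAB (the filter length, signals, and initialization are assumptions, not the original EE 491D simulation): at every sample the triangular factor R and the vector z are updated with Givens rotations, and the weight estimate follows by back substitution.

rng(0);                              % fixed seed, for reproducibility only
M      = 4;                          % number of FIR filter coefficients
N      = 500;                        % number of samples
lambda = 0.99;                       % exponential weighting factor
w_true = [0.5; -0.3; 0.2; 0.1];      % unknown FIR system to identify (assumed)
x = randn(N, 1);                     % input signal
d = filter(w_true, 1, x) + 0.01 * randn(N, 1);   % desired signal plus noise

R = 1e-3 * eye(M);                   % small regularizing initial factor
z = zeros(M, 1);
for n = M:N
    u = x(n:-1:n-M+1);               % tap-input vector u(n)
    pre = [sqrt(lambda) * R, sqrt(lambda) * z;   % pre-array: old factor plus new row
           u.',              d(n)];
    for k = 1:M                      % annihilate the bottom row with Givens rotations
        a = pre(k, k);  b = pre(M+1, k);
        r = hypot(a, b);
        if r > 0
            c = a / r;  s = b / r;
            pre([k, M+1], :) = [c, s; -s, c] * pre([k, M+1], :);
        end
    end
    R = pre(1:M, 1:M);               % updated triangular factor
    z = pre(1:M, M+1);               % updated right-hand side
end
w = R \ z;                           % back substitution gives the weight estimate
disp([w_true, w])                    % the two columns should agree closely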
Summary • QR decomposition is one of the best numerical procedures for solving the recursive least-squares estimation problem • QR decomposition operates on the input data only • QR decomposition involves the use of only numerically well-behaved unitary rotations
• QR-RLS keeps numerical error very small • It has good numerical properties and good stability • It is reliable