
3.14 Least Squares Adaptive Filter 3.14.1 Vector Space Analysis for Least Square Filter

Understand the Recursive Least Squares (RLS) problem, projection matrices, orthogonal projections, and time updates in least squares adaptive filtering. Discover how to optimize filters for nonstationary environments effectively.



  1. 3.14 Least Squares Adaptive Filter  3.14.1 Vector Space Analysis for Least Square Filter. The error $e(i) = d(i) - \mathbf{x}^T(i)\,\mathbf{w}(n)$ signifies the error in predicting $d(i)$ using the coefficients of the filter computed at time $n$.

  2. The Recursive Least Squares (RLS) Problem: find the set of coefficients $\mathbf{w}(n)$ of the filter such that the cumulative error measure $J(n) = \sum_{i=1}^{n} \lambda^{\,n-i}\,|e(i)|^2$ is minimized, where $e(i) = d(i) - \mathbf{x}^T(i)\,\mathbf{w}(n)$ and the data is prewindowed ($x(i) = 0$ for $i \le 0$).

  3. The vector and matrix representation for the RLS problem. (1) The desired vector: $\mathbf{d}(n) = [\,d(1),\, d(2),\, \ldots,\, d(n)\,]^T$. Example: for $n = 3$, $\mathbf{d}(3) = [\,d(1), d(2), d(3)\,]^T$.

  4. (2) The weight vector: $\mathbf{w}(n) = [\,w_1(n),\, w_2(n),\, \ldots,\, w_N(n)\,]^T$. (3) The data matrix (prewindowed data): $\mathbf{X}(n)$, the $n \times N$ matrix whose $i$-th row is the data vector $\mathbf{x}^T(i) = [\,x(i),\, x(i-1),\, \ldots,\, x(i-N+1)\,]$, with $x(i) = 0$ for $i \le 0$.

  5. (5) The error vector $\mathbf{e}(n) = [\,e(1),\, \ldots,\, e(n)\,]^T$; its element $e(i) = d(i) - \mathbf{x}^T(i)\,\mathbf{w}(n)$ signifies the error in predicting $d(i)$ using the coefficients of the filter computed at time $n$. (6) The vector and matrix representation: $\mathbf{e}(n) = \mathbf{d}(n) - \mathbf{X}(n)\,\mathbf{w}(n)$, with cumulative error measure $J(n) = \mathbf{e}^T(n)\,\boldsymbol{\Lambda}(n)\,\mathbf{e}(n)$, where $\boldsymbol{\Lambda}(n) = \mathrm{diag}(\lambda^{\,n-1},\, \ldots,\, \lambda,\, 1)$.

  6. Although $\lambda$ is very important in the application of RLS to nonstationary data, it is often set equal to 1 if it is known that the data is stationary. A value of $\lambda = 1$ signifies that all data is equally weighted in the computation of $\mathbf{w}(n)$, and this case is often referred to as the prewindowed case. Progressively smaller values of $\lambda$ compute $\mathbf{w}(n)$ based upon effectively smaller windows of data, which is beneficial in nonstationary environments. For these reasons, $\lambda$ is sometimes called the fade factor or forgetting factor of the filter. Values of $\lambda$ slightly less than 1 have proved effective in tracking local nonstationarities.
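
To make the role of the forgetting factor concrete, here is a small illustrative sketch (not part of the original slides; the signal model, filter length, and the two $\lambda$ values are assumptions of my own). It solves the exponentially weighted least squares problem directly from the weighted normal equations and shows that $\lambda < 1$ tracks a coefficient change that $\lambda = 1$ averages over.

```python
# Illustrative sketch (not from the slides): effect of the forgetting factor lambda
# on the exponentially weighted LS cost J(n) = sum_i lambda**(n-i) * e(i)**2.
import numpy as np

rng = np.random.default_rng(0)
N = 4                         # filter length, assumed for the demo
n_samples = 400
x = rng.standard_normal(n_samples)

# Unknown system whose coefficients change halfway through (nonstationary scenario)
w_true_1 = np.array([1.0, -0.5, 0.25, 0.1])
w_true_2 = np.array([0.3, 0.8, -0.2, 0.05])

def regressor(x, i, N):
    """Prewindowed data vector x(i) = [x(i), x(i-1), ..., x(i-N+1)], zeros for i < 0."""
    v = np.zeros(N)
    for k in range(N):
        if i - k >= 0:
            v[k] = x[i - k]
    return v

X = np.array([regressor(x, i, N) for i in range(n_samples)])
d = np.where(np.arange(n_samples) < n_samples // 2,
             X @ w_true_1, X @ w_true_2) + 0.01 * rng.standard_normal(n_samples)

def weighted_ls(X, d, lam):
    """Solve min_w sum_i lam**(n-i) |d(i) - x^T(i) w|^2 via weighted normal equations."""
    n = len(d)
    weights = lam ** np.arange(n - 1, -1, -1)      # lam**(n-1), ..., lam, 1
    Xw = X * weights[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ d)

for lam in (1.0, 0.95):
    w_hat = weighted_ls(X, d, lam)
    print(f"lambda = {lam}: w(n) = {np.round(w_hat, 3)}")
# With lambda = 1 the estimate mixes both regimes; with lambda < 1 it follows w_true_2.
```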

  7. (7) The optimal weight vector of the LS filter: $\mathbf{w}(n) = [\mathbf{X}^T(n)\,\boldsymbol{\Lambda}(n)\,\mathbf{X}(n)]^{-1}\,\mathbf{X}^T(n)\,\boldsymbol{\Lambda}(n)\,\mathbf{d}(n)$. For $\lambda = 1$ this reduces to the normal equations solution $\mathbf{w}(n) = [\mathbf{X}^T(n)\,\mathbf{X}(n)]^{-1}\,\mathbf{X}^T(n)\,\mathbf{d}(n)$.

  8. (8) The LS estimation (the output of the LS filter): $\hat{\mathbf{d}}(n) = \mathbf{X}(n)\,\mathbf{w}(n)$; for $\lambda = 1$, $\hat{\mathbf{d}}(n) = \mathbf{X}(n)\,[\mathbf{X}^T(n)\,\mathbf{X}(n)]^{-1}\,\mathbf{X}^T(n)\,\mathbf{d}(n)$.
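
As a quick numerical sketch of (7) and (8) (with $\lambda = 1$; the data and filter length are assumptions of my own, not from the slides), the weight vector can be obtained from the normal equations and the LS estimate as the corresponding filter output:

```python
# Sketch: optimal LS weight vector and LS estimate for lambda = 1 (assumed demo data).
import numpy as np

rng = np.random.default_rng(1)
n, N = 50, 3                               # number of samples and filter taps, assumed
X = rng.standard_normal((n, N))            # data matrix X(n), rows are x^T(i)
d = X @ np.array([0.7, -0.2, 0.4]) + 0.05 * rng.standard_normal(n)   # desired vector d(n)

# (7) w(n) = (X^T X)^{-1} X^T d(n), solved without forming an explicit inverse
w = np.linalg.solve(X.T @ X, X.T @ d)

# (8) LS estimate d_hat(n) = X(n) w(n)
d_hat = X @ w

print("w(n)     =", np.round(w, 4))
print("||e(n)|| =", round(np.linalg.norm(d - d_hat), 4))
```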

  9. (9) Vector Space Analysis for Least Square Filter. The data vector: $\mathbf{x}(n) = [\,x(1),\, x(2),\, \ldots,\, x(n)\,]^T$. The time-shifted vector (prewindowed): $\mathbf{x}_k(n) = [\,0,\, \ldots,\, 0,\, x(1),\, \ldots,\, x(n-k)\,]^T$, the data vector delayed by $k$ samples. The data vector space: the $N$-dimensional subspace $U(n)$ spanned by the columns of $\mathbf{X}(n)$.

  10. Example

  11. $\hat{\mathbf{d}}(n) = \mathbf{X}(n)\,\mathbf{w}(n)$ may be interpreted as a linear combination of the columns of $\mathbf{X}(n)$.

  12. The Least Squares (RLS) Problem (Summary)
  • Suppose the $N$ column vectors of the data matrix $\mathbf{X}(n)$ span an $N$-dimensional subspace $U(n)$.
  • Find the vector $\hat{\mathbf{d}}(n)$ in $U(n)$ that is closest to the desired vector $\mathbf{d}(n)$. The desired vector lies in the $n$-dimensional linear vector space.
  • Solution: $\hat{\mathbf{d}}(n)$ is the projection of $\mathbf{d}(n)$ onto the subspace $U(n)$.

  13.
  • $\hat{\mathbf{d}}(n)$ can be expressed as a linear combination of the basis vectors of the subspace $U(n)$.
  • $\hat{\mathbf{d}}(n)$ is closest to $\mathbf{d}(n)$: the length of the error vector $\mathbf{e}(n) = \mathbf{d}(n) - \hat{\mathbf{d}}(n)$ is the norm $\|\mathbf{e}(n)\|$.
  (4) The entire $n$-dimensional linear vector space containing $\mathbf{d}(n)$ may be defined as the direct sum of the two orthogonal subspaces $U(n)$ and $U^{\perp}(n)$.

  14. $\mathbf{e}(n) = \mathbf{d}(n) - \hat{\mathbf{d}}(n)$ is the complement of the projection of $\mathbf{d}(n)$ onto $U(n)$, i.e., the projection of $\mathbf{d}(n)$ onto $U^{\perp}(n)$. $U^{\perp}(n)$ is the orthogonal complement space of $U(n)$.
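
The geometric picture in slides 12 through 14 can be checked numerically. In this sketch (random data of my own choosing), the LS estimate is obtained by applying the projection matrix to $\mathbf{d}(n)$, the error vector by applying the orthogonal projection matrix, and the error comes out orthogonal to every column of the data matrix:

```python
# Sketch: the LS estimate as a projection of d(n) onto the column space of X(n).
import numpy as np

rng = np.random.default_rng(2)
n, N = 8, 3
X = rng.standard_normal((n, N))                    # data matrix, columns span U(n)
d = rng.standard_normal(n)                         # desired vector

P = X @ np.linalg.inv(X.T @ X) @ X.T               # projection matrix onto U(n)
P_perp = np.eye(n) - P                             # orthogonal projection matrix

d_hat = P @ d                                      # projection of d(n): the LS estimate
e = P_perp @ d                                     # error vector, lives in U_perp(n)

print("d_hat + e == d ?           ", np.allclose(d_hat + e, d))
print("e orthogonal to cols of X ?", np.allclose(X.T @ e, 0))
```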

  15. 3.14.2 Projection Matrix and Orthogonal Projection Matrix. The projection matrix onto $U(n)$: $\mathbf{P}(n) = \mathbf{X}(n)\,[\mathbf{X}^T(n)\,\mathbf{X}(n)]^{-1}\,\mathbf{X}^T(n)$. The orthogonal projection matrix: $\mathbf{P}^{\perp}(n) = \mathbf{I} - \mathbf{P}(n)$.

  16. Properties of the Projection Matrix and the Orthogonal Projection Matrix: $\mathbf{P}^T(n) = \mathbf{P}(n)$ and $[\mathbf{P}^{\perp}(n)]^T = \mathbf{P}^{\perp}(n)$ (symmetry); $\mathbf{P}^2(n) = \mathbf{P}(n)$ and $[\mathbf{P}^{\perp}(n)]^2 = \mathbf{P}^{\perp}(n)$ (idempotence); $\mathbf{P}(n)\,\mathbf{X}(n) = \mathbf{X}(n)$ and $\mathbf{P}^{\perp}(n)\,\mathbf{X}(n) = \mathbf{0}$.

  17. Proof: suppose an arbitrary vector $\mathbf{v}$; then $\mathbf{P}(n)\,\mathbf{v}$ lies in $U(n)$ and $\mathbf{P}^{\perp}(n)\,\mathbf{v}$ lies in $U^{\perp}(n)$.
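
These properties are easy to confirm numerically. The following sketch (random demo data, my own example) checks symmetry, idempotence, the action on the data matrix, and the behavior of an arbitrary vector $\mathbf{v}$:

```python
# Sketch: numerical check of the projection matrix properties (random demo data).
import numpy as np

rng = np.random.default_rng(3)
n, N = 10, 4
X = rng.standard_normal((n, N))

P = X @ np.linalg.inv(X.T @ X) @ X.T
P_perp = np.eye(n) - P

print("P symmetric      :", np.allclose(P, P.T))
print("P idempotent     :", np.allclose(P @ P, P))
print("P_perp idempotent:", np.allclose(P_perp @ P_perp, P_perp))
print("P X = X          :", np.allclose(P @ X, X))
print("P_perp X = 0     :", np.allclose(P_perp @ X, 0))

# For an arbitrary vector v, P v lies in the column space of X and P_perp v is orthogonal to it
v = rng.standard_normal(n)
print("X^T (P_perp v)=0 :", np.allclose(X.T @ (P_perp @ v), 0))
```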

  18. The current data subspace is $U(n)$, which has the projection matrix $\mathbf{P}_U(n)$ and the orthogonal projection matrix $\mathbf{P}_U^{\perp}(n)$. Suppose the additional vector $\mathbf{v}$ is added to the set of basis vectors of $U(n)$. The specific vector $\mathbf{v}$ differs depending on the application. For example,

  19. $U_m(n)$, spanned by the first $m$ time-shifted data vectors, is the data subspace for an order-$m$ least squares filter, while $U_m(n)$ augmented by the next time-shifted vector is the data subspace for an order-$(m+1)$ least squares filter.

  20. In general, the vector $\mathbf{v}$ will usually provide some information not contained in the basis vectors of $U(n)$. As the data subspace changes from $U(n)$ to $U(n) \oplus \{\mathbf{v}\}$, it is necessary to find the “new” projection matrices $\mathbf{P}_{U,v}(n)$ and $\mathbf{P}_{U,v}^{\perp}(n)$ in terms of the old projection matrices $\mathbf{P}_U(n)$ and $\mathbf{P}_U^{\perp}(n)$. If this can be done in a recursive manner, then the groundwork will be laid for the fast RLS filter updates.

  21. The vector $\mathbf{v}$ itself cannot be guaranteed to be orthogonal to $U(n)$. However, if $\mathbf{v}$ may be used to create a vector $\mathbf{P}_U^{\perp}(n)\,\mathbf{v}$ that is orthogonal to $U(n)$, then the projection matrix may be decomposed into (5) $\mathbf{P}_{U,v}(n) = \mathbf{P}_U(n) + \dfrac{\mathbf{P}_U^{\perp}(n)\,\mathbf{v}\,\mathbf{v}^T\,\mathbf{P}_U^{\perp}(n)}{\mathbf{v}^T\,\mathbf{P}_U^{\perp}(n)\,\mathbf{v}}$.

  22. Proving formula 5
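
Decomposition (5) can also be checked numerically. In the sketch below (random subspace and vector of my own choosing), the projection onto the augmented subspace $\mathrm{span}\{U, \mathbf{v}\}$ equals the old projection plus the rank-one term built from $\mathbf{P}_U^{\perp}\mathbf{v}$:

```python
# Sketch: verify the projection matrix decomposition (5) when a vector v is added to U.
import numpy as np

rng = np.random.default_rng(4)
n, N = 12, 3
U = rng.standard_normal((n, N))            # basis vectors of the current subspace U
v = rng.standard_normal(n)                 # additional vector to be appended

def proj(A):
    """Projection matrix onto the column space of A."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

P_U = proj(U)
P_U_perp = np.eye(n) - P_U

# Left-hand side: projection onto the augmented subspace span{U, v}
P_Uv = proj(np.column_stack([U, v]))

# Right-hand side of (5): old projection plus a rank-one term built from P_U_perp v
q = P_U_perp @ v
P_Uv_update = P_U + np.outer(q, q) / (v @ q)

print("Decomposition (5) holds:", np.allclose(P_Uv, P_Uv_update))
```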

  23. (6) $\mathbf{P}_{U,v}^{\perp}(n) = \mathbf{P}_U^{\perp}(n) - \dfrac{\mathbf{P}_U^{\perp}(n)\,\mathbf{v}\,\mathbf{v}^T\,\mathbf{P}_U^{\perp}(n)}{\mathbf{v}^T\,\mathbf{P}_U^{\perp}(n)\,\mathbf{v}}$, which follows from (5) since $\mathbf{P}_{U,v}^{\perp}(n) = \mathbf{I} - \mathbf{P}_{U,v}(n)$.

  24. (7)

  25. Proving

  26. By a judicious selection of the subspace $U$ and the vector $\mathbf{v}$ in properties (6), (7), and (8), all time and order updates needed for both the LSL and transversal least squares filters may be derived.

  27. 3.14.3 Time Update. (1) Time Update of the Projection Matrix. Definition: the unit time vector $\boldsymbol{\pi}(n) = [\,0,\, 0,\, \ldots,\, 0,\, 1\,]^T$ (length $n$, with a 1 in the current-time position). The projection matrix of $\boldsymbol{\pi}(n)$: $\mathbf{P}_{\pi}(n) = \boldsymbol{\pi}(n)\,\boldsymbol{\pi}^T(n)$.

  28. The orthogonal projection matrix of $\boldsymbol{\pi}(n)$: $\mathbf{P}_{\pi}^{\perp}(n) = \mathbf{I} - \boldsymbol{\pi}(n)\,\boldsymbol{\pi}^T(n)$.

  29. Time Update of Projection Matrix

  30. Appending the vector $\boldsymbol{\pi}(n)$ to the current data matrix $\mathbf{X}(n)$ provides a method for decomposing a projection matrix into its past and current components.
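
A brief sketch of this idea (random data matrix of my own choosing): appending the unit time vector and applying decomposition (5) with $\mathbf{v} = \boldsymbol{\pi}(n)$ splits the augmented projection into the old projection plus a rank-one term carried by the current time sample.

```python
# Sketch: appending the unit time vector pi(n) to the data matrix X(n) and splitting
# the augmented projection matrix via decomposition (5) with v = pi(n).
import numpy as np

rng = np.random.default_rng(9)
n, N = 10, 3
X = rng.standard_normal((n, N))
pi = np.zeros(n)
pi[-1] = 1.0                                        # unit time vector

def proj(A):
    return A @ np.linalg.inv(A.T @ A) @ A.T

P = proj(X)
q = (np.eye(n) - P) @ pi                            # P_perp(n) pi(n)
P_aug = proj(np.column_stack([X, pi]))              # projection onto span{X(n), pi(n)}

# Old ("past") projection plus a current-time rank-one term
print("P_{X,pi} = P + q q^T / (pi^T q):",
      np.allclose(P_aug, P + np.outer(q, q) / (pi @ q)))
```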

  31. (2) The Angle Parameter. Definition: $\gamma(n) = \boldsymbol{\pi}^T(n)\,\mathbf{P}_U^{\perp}(n)\,\boldsymbol{\pi}(n)$.

  32. The relationship between $\mathbf{P}_U^{\perp}(n)$ and $\mathbf{P}_{U,\pi}^{\perp}(n)$ expressed by the angle parameter. Property (6) of the projection matrix with $\mathbf{v} = \boldsymbol{\pi}(n)$: $\mathbf{P}_{U,\pi}^{\perp}(n) = \mathbf{P}_U^{\perp}(n) - \dfrac{\mathbf{P}_U^{\perp}(n)\,\boldsymbol{\pi}(n)\,\boldsymbol{\pi}^T(n)\,\mathbf{P}_U^{\perp}(n)}{\gamma(n)}$.

  33. On the other hand:
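
The angle parameter can be computed directly from its definition. The sketch below (prewindowed random demo data, an assumed example) forms $\gamma(n) = \boldsymbol{\pi}^T(n)\,\mathbf{P}^{\perp}(n)\,\boldsymbol{\pi}(n)$ and checks two standard facts: it lies between 0 and 1, and it equals 1 minus the projection of the unit time vector onto the data subspace.

```python
# Sketch: the unit time vector pi(n) and the angle parameter gamma(n) = pi^T P_perp pi.
import numpy as np

rng = np.random.default_rng(5)
n, N = 15, 4
x = rng.standard_normal(n)

# Prewindowed data matrix: row i is [x(i), x(i-1), ..., x(i-N+1)], zeros before time 0
X = np.zeros((n, N))
for i in range(n):
    for k in range(N):
        if i - k >= 0:
            X[i, k] = x[i - k]

pi = np.zeros(n)
pi[-1] = 1.0                                   # unit time vector: 1 at the current time

P = X @ np.linalg.inv(X.T @ X) @ X.T           # projection onto the data subspace U(n)
P_perp = np.eye(n) - P

gamma = pi @ P_perp @ pi                       # angle parameter gamma(n)
print("gamma(n)              =", round(gamma, 4))
print("0 <= gamma <= 1       :", 0.0 <= gamma <= 1.0)
print("gamma = 1 - pi^T P pi :", np.isclose(gamma, 1.0 - pi @ P @ pi))
```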

  34. 3.15 Least Squares Lattice (LSL) Adaptive Algorithm. Forward prediction: predict the current sample $x(i)$ from the $m$ previous samples $x(i-1), \ldots, x(i-m)$.

  35. (prewindowed data) $\mathbf{a}_m(n)$: the LS forward prediction coefficients.

  36. The Forward Prediction Vector: the LS prediction of the data vector, formed as a linear combination (with coefficients $\mathbf{a}_m(n)$) of the $m$ time-shifted (past) data vectors, i.e., the projection of the data vector onto the past data subspace.

  37. The Forward Prediction Error (FPE) Vector: $\mathbf{f}_m(n)$, the data vector minus its forward prediction; equivalently, the orthogonal projection of the data vector onto the complement of the past data subspace.

  38. The current time FPE scalar sample: $f_m(n) = \boldsymbol{\pi}^T(n)\,\mathbf{f}_m(n)$. The benefit of writing $f_m(n)$ as in the above equation is that the FPE is now in the form required by property (8) for the LS update of a scalar. This will be very important in the order updates for the LS lattice.
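
A small sketch of order-$m$ LS forward prediction (the test signal and order are my own assumptions): the FPE vector is what remains of the data after the best LS combination of the $m$ previous samples is removed, and its last element is the current-time FPE sample.

```python
# Sketch: LS forward prediction error (FPE) of order m on prewindowed demo data.
import numpy as np

rng = np.random.default_rng(6)
n, m = 200, 3
x = np.convolve(rng.standard_normal(n), [1.0, 0.6, 0.3])[:n]   # correlated test signal

# Past-data matrix: row i holds [x(i-1), ..., x(i-m)], with zeros before time 0
X_past = np.zeros((n, m))
for i in range(n):
    for k in range(1, m + 1):
        if i - k >= 0:
            X_past[i, k - 1] = x[i - k]

# LS forward prediction coefficients a_m(n) and the FPE vector f_m(n)
a = np.linalg.solve(X_past.T @ X_past, X_past.T @ x)
f = x - X_past @ a                        # forward prediction error vector
print("forward coefficients a_m(n):", np.round(a, 3))
print("current-time FPE  f_m(n)   :", round(f[-1], 4))   # last element = pi^T f_m(n)
```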

  39. Backward prediction: predict the sample $x(i-m)$ from the $m$ more recent samples $x(i), x(i-1), \ldots, x(i-m+1)$.

  40. The backward prediction vector: the LS prediction of the $m$-sample-delayed data vector, formed as a linear combination of the more recent time-shifted data vectors; its coefficients are the LS backward prediction coefficients.

  41. The Backward Prediction Error (BPE) Vector: $\mathbf{b}_m(n)$, the $m$-sample-delayed data vector minus its backward prediction. The current time BPE scalar sample: $b_m(n) = \boldsymbol{\pi}^T(n)\,\mathbf{b}_m(n)$.
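
Backward prediction admits the same kind of sketch (assumed demo signal and order). It also checks the property that underlies the lattice structure: the BPE vectors of different orders are mutually orthogonal.

```python
# Sketch: order-m LS backward prediction errors on prewindowed demo data, plus a check
# that BPE vectors of different orders are orthogonal to one another.
import numpy as np

rng = np.random.default_rng(8)
n, M = 100, 4
x = np.convolve(rng.standard_normal(n), [1.0, 0.7])[:n]       # correlated test signal

def delayed(x, k):
    """k-sample delayed (prewindowed) data vector."""
    return np.concatenate([np.zeros(k), x[:len(x) - k]])

cols = [delayed(x, k) for k in range(M + 1)]                   # delays 0..M
bpe = [cols[0]]                                                # b_0(n) = data vector itself
for m in range(1, M + 1):
    U = np.column_stack(cols[:m])                              # span of delays 0..m-1
    P_perp = np.eye(n) - U @ np.linalg.inv(U.T @ U) @ U.T
    bpe.append(P_perp @ cols[m])                               # BPE vector b_m(n)

G = np.array([[bi @ bj for bj in bpe] for bi in bpe])
print("BPE vectors mutually orthogonal:",
      np.allclose(G - np.diag(np.diag(G)), 0, atol=1e-8))
print("current-time BPE samples b_m(n):", np.round([b[-1] for b in bpe], 4))
```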

  42. FPE and BPE Residuals: the minimum cumulative squared prediction errors, $F_m(n) = \|\mathbf{f}_m(n)\|^2$ and $B_m(n) = \|\mathbf{b}_m(n)\|^2$.

  43. 3.15.2 LS Lattice Structure of the PE Filter. The structure is derived from the order updates for the FPE and BPE.
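
The LSL order-update equations themselves are not reproduced here, but the resulting structure is the familiar lattice prediction-error filter: each stage takes the previous stage's forward error and delayed backward error and combines them through a pair of reflection coefficients. The sketch below uses fixed, assumed reflection coefficients purely to show the signal flow; in the LS lattice these coefficients are computed recursively from the data at every time step.

```python
# Sketch of the lattice prediction-error (PE) filter structure that the FPE/BPE
# order updates lead to. The reflection coefficients kf, kb are fixed, assumed
# values for illustration only.
import numpy as np

def lattice_pe_filter(x, kf, kb):
    """Run an M-stage lattice PE filter over the signal x.

    Stage m (m = 1..M) combines the previous stage's forward error f_{m-1}(n)
    and delayed backward error b_{m-1}(n-1):
        f_m(n) = f_{m-1}(n) + kf[m-1] * b_{m-1}(n-1)
        b_m(n) = b_{m-1}(n-1) + kb[m-1] * f_{m-1}(n)
    Returns the final-stage forward prediction errors f_M(n).
    """
    M = len(kf)
    b_prev = np.zeros(M + 1)          # b_m(n-1) for m = 0..M
    f_out = np.zeros_like(x)
    for n, sample in enumerate(x):
        f = np.zeros(M + 1)
        b = np.zeros(M + 1)
        f[0] = b[0] = sample          # stage 0: f_0(n) = b_0(n) = x(n)
        for m in range(1, M + 1):
            f[m] = f[m - 1] + kf[m - 1] * b_prev[m - 1]
            b[m] = b_prev[m - 1] + kb[m - 1] * f[m - 1]
        b_prev = b
        f_out[n] = f[M]
    return f_out

# Usage with assumed coefficients and a random test signal
rng = np.random.default_rng(7)
x = rng.standard_normal(64)
print(lattice_pe_filter(x, kf=[-0.5, 0.2], kb=[-0.4, 0.1])[:5])
```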
