Adaptive Signal Processing
Professor A G Constantinides

• Problem: Equalise through an FIR filter the distorting effect of a communication channel that may be changing with time.
• If the channel were fixed, a possible solution could be based on the Wiener filter approach.
• In that case we need to know the correlation matrix of the transmitted signal and the cross-correlation vector between the input and the desired response.
• When the filter is operating in an unknown environment, these required quantities need to be estimated from the accumulated data.
• The problem is particularly acute when not only the environment is changing but the data involved are also non-stationary.
• In such cases we need to follow the behaviour of the signals over time and adapt the correlation parameters as the environment changes.
• This essentially produces a temporally adaptive filter.
• A possible framework is:
[Block diagram: the input signal feeds an FIR filter; the filter output is compared with the desired response, and the resulting error drives the adaptive algorithm, which adjusts the filter weights.]
• Applications are many:
• Digital communications
• Channel equalisation
• Adaptive noise cancellation
• Adaptive echo cancellation
• System identification
• Smart antenna systems
• Blind system equalisation
• and many, many others
Applications
Echo Cancellers in Local Loops
[Diagram: two transceivers (Tx1/Rx1 and Tx2/Rx2) connected through hybrids over the local loop; at each end an echo canceller, driven by an adaptive algorithm, subtracts the estimated echo from the received signal.]
Adaptive Noise Canceller
[Diagram: the primary signal (signal + noise) has the FIR-filtered reference noise subtracted from it; the residual drives the adaptive algorithm.]
System Identification
[Diagram: the same input signal drives both an unknown system and an adaptive FIR filter; the difference between their outputs drives the adaptive algorithm.]
System Equalisation
[Diagram: the signal passes through the unknown system and then the adaptive FIR filter; the filter output is compared with a delayed version of the original signal, and the error drives the adaptive algorithm.]
Adaptive Predictors
[Diagram: the adaptive FIR filter operates on a delayed version of the signal and predicts the current sample; the prediction error drives the adaptive algorithm.]
Adaptive Arrays
[Diagram: a linear combiner of the array-element outputs, adapted to suppress the interference.]
• Basic principles:
1) Form an objective function (performance criterion).
2) Find the gradient of the objective function with respect to the FIR filter weights.
3) Form a differential/difference equation from the gradient.
• Several different approaches can be used at step 3.
• Let the desired signal be $d(n)$, the input signal $x(n)$, and the output $y(n)$.
• Now form the vectors
$\mathbf{x}(n) = [x(n)\;\; x(n-1)\;\cdots\; x(n-m+1)]^T$
$\mathbf{w}(n) = [w_0(n)\;\; w_1(n)\;\cdots\; w_{m-1}(n)]^T$
• so that
$y(n) = \mathbf{w}^T(n)\,\mathbf{x}(n)$
• We then form the objective function
$J = E[e^2(n)]$
• where $e(n) = d(n) - y(n)$.
• We wish to minimise this function at the instant $n$.
• Using steepest descent we write
$\mathbf{w}(n+1) = \mathbf{w}(n) - \frac{\mu}{2}\,\nabla_{\mathbf{w}} J$
• But
$\nabla_{\mathbf{w}} J = -2\mathbf{p} + 2\mathbf{R}\,\mathbf{w}(n)$
• where $\mathbf{R} = E[\mathbf{x}(n)\mathbf{x}^T(n)]$ is the input correlation matrix and $\mathbf{p} = E[d(n)\mathbf{x}(n)]$ is the cross-correlation vector.
• So that the "weights update equation" is
$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,[\mathbf{p} - \mathbf{R}\,\mathbf{w}(n)]$
• Since the objective function is quadratic, this iteration converges to the unique minimum (for a suitable step size).
• The equation is not practical.
• If we knew $\mathbf{R}$ and $\mathbf{p}$ a priori we could find the required solution (Wiener) as
$\mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{p}$
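To make this concrete, here is a minimal sketch in Python/NumPy of the idealised update when $\mathbf{R}$ and $\mathbf{p}$ are known exactly (the function name `steepest_descent` and the synthetic $\mathbf{R}$, $\mathbf{p}$ are illustrative assumptions, not from the slides):

```python
import numpy as np

def steepest_descent(R, p, mu, iters):
    """Idealised weight update w(n+1) = w(n) + mu (p - R w(n)),
    usable only when R and p are known a priori."""
    w = np.zeros(len(p))
    for _ in range(iters):
        w = w + mu * (p - R @ w)
    return w  # tends to the Wiener solution R^{-1} p

# Example: a 2-tap problem with known statistics
R = np.array([[1.0, 0.5], [0.5, 1.0]])
p = np.array([0.9, 0.4])
w = steepest_descent(R, p, mu=0.1, iters=500)
# w is now close to np.linalg.solve(R, p)
```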
• However, these quantities are not known.
• Approximate expressions are obtained by ignoring the expectations in the earlier complete forms.
• This is very crude. However, because the update equation accumulates such quantities progressively, we expect the crude form to improve.
The LMS Algorithm
• Thus we have the instantaneous estimates
$\hat{\mathbf{R}}(n) = \mathbf{x}(n)\mathbf{x}^T(n), \qquad \hat{\mathbf{p}}(n) = d(n)\,\mathbf{x}(n)$
• where the error is
$e(n) = d(n) - \mathbf{w}^T(n)\,\mathbf{x}(n)$
• and hence we can write
$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e(n)\,\mathbf{x}(n)$
• This is sometimes called stochastic gradient descent.
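A minimal sketch of the LMS update above (Python/NumPy; the function name `lms` and the simulated 3-tap channel are illustrative assumptions):

```python
import numpy as np

def lms(x, d, m, mu):
    """LMS adaptive FIR filter: w(n+1) = w(n) + mu e(n) x(n).
    x: input signal, d: desired response, m: taps, mu: step size."""
    w = np.zeros(m)
    e = np.zeros(len(x))
    for n in range(m - 1, len(x)):
        xn = x[n - m + 1:n + 1][::-1]   # tap vector [x(n), ..., x(n-m+1)]
        e[n] = d[n] - w @ xn            # error e(n) = d(n) - w^T x(n)
        w = w + mu * e[n] * xn          # stochastic-gradient update
    return w, e

# Example: identify a hypothetical 3-tap system
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([1.0, 0.5, -0.3])     # "unknown" system
d = np.convolve(x, h)[:len(x)]     # desired response
w, e = lms(x, d, m=3, mu=0.01)     # w converges towards h
```

The per-sample cost is O(m), which is the main attraction of LMS over the least-squares methods discussed later.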
Convergence
• The parameter $\mu$ is the step size, and it should be selected carefully.
• If it is too small, convergence takes too long; if too large, it can lead to instability.
• Write the autocorrelation matrix in its eigen-factorised form
$\mathbf{R} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$
• where $\mathbf{Q}$ is orthogonal and $\boldsymbol{\Lambda}$ is diagonal, containing the eigenvalues of $\mathbf{R}$.
• The error in the weights with respect to their optimal values is given by (using the Wiener solution for $\mathbf{p}$)
$\mathbf{v}(n) = \mathbf{w}(n) - \mathbf{w}_{\mathrm{opt}}$
• We obtain
$\mathbf{v}(n+1) = (\mathbf{I} - \mu\mathbf{R})\,\mathbf{v}(n)$
• Or equivalently
$\mathbf{v}(n+1) = (\mathbf{I} - \mu\,\mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T)\,\mathbf{v}(n)$
• i.e.
$\mathbf{Q}^T\mathbf{v}(n+1) = (\mathbf{I} - \mu\boldsymbol{\Lambda})\,\mathbf{Q}^T\mathbf{v}(n)$
• Form a new variable
$\tilde{\mathbf{v}}(n) = \mathbf{Q}^T\mathbf{v}(n)$
• so that
$\tilde{\mathbf{v}}(n+1) = (\mathbf{I} - \mu\boldsymbol{\Lambda})\,\tilde{\mathbf{v}}(n), \qquad \text{i.e. } \tilde{v}_k(n+1) = (1 - \mu\lambda_k)\,\tilde{v}_k(n)$
• Thus each element of the new variable depends on its previous value via a scaling constant.
• The equation therefore has an exponential form in the time domain, and the largest coefficient on the right-hand side dominates.
• We require that
$|1 - \mu\lambda_k| < 1 \quad \text{for all } k$
• or
$0 < \mu < \dfrac{2}{\lambda_{\max}}$
• In practice we take a much smaller value than this.
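A sketch of how this bound might be estimated from data (assuming a stationary input; the function name `mu_bound` is illustrative):

```python
import numpy as np

def mu_bound(x, m):
    """Estimate 2 / lambda_max from data: build the m x m sample
    autocorrelation matrix and take its largest eigenvalue."""
    X = np.lib.stride_tricks.sliding_window_view(x, m)
    R = (X.T @ X) / len(X)           # sample estimate of E[x x^T]
    lam_max = np.linalg.eigvalsh(R).max()
    return 2.0 / lam_max
```

A common rule of thumb replaces $\lambda_{\max}$ by $\mathrm{tr}(\mathbf{R}) = m\,\sigma_x^2$: since $\mathrm{tr}(\mathbf{R}) \ge \lambda_{\max}$, the resulting step size is safer and needs no eigen-decomposition.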
Estimates
• It can then be seen that, as $n \to \infty$, the weight update equation yields
$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\mathbf{x}(n)\,[d(n) - \mathbf{x}^T(n)\,\mathbf{w}(n)]$
• and on taking expectations of both sides of it we have
$E[\mathbf{w}(n+1)] = E[\mathbf{w}(n)] + \mu\,(\mathbf{p} - \mathbf{R}\,E[\mathbf{w}(n)])$
• or, at convergence,
$E[\mathbf{w}(\infty)] = \mathbf{R}^{-1}\mathbf{p}$
Limiting Forms
• This indicates that the solution ultimately tends to the Wiener form,
• i.e. the estimate is unbiased.
Misadjustment
• The misadjustment measures the excess mean square error in the objective function due to gradient noise.
• Assuming uncorrelatedness, set
$J_{\min} = \sigma_d^2 - \mathbf{p}^T\mathbf{R}^{-1}\mathbf{p}$
• where $\sigma_d^2$ is the variance of the desired response; the second term is zero when the input and desired response are uncorrelated.
• The misadjustment is then defined as
$M = \dfrac{J_{\mathrm{excess}}(\infty)}{J_{\min}}$
• It can be shown that the misadjustment is given by
$M \simeq \dfrac{\mu}{2}\,\mathrm{tr}(\mathbf{R})$
Normalised LMS
• To make the step size respond to the signal needs, take
$\mu(n) = \dfrac{\tilde{\mu}}{\mathbf{x}^T(n)\,\mathbf{x}(n)}$
• In this case
$\mathbf{w}(n+1) = \mathbf{w}(n) + \dfrac{\tilde{\mu}}{\mathbf{x}^T(n)\,\mathbf{x}(n)}\, e(n)\,\mathbf{x}(n)$
• and the misadjustment is proportional to the step size $\tilde{\mu}$.
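A minimal NLMS sketch under the same conventions as the LMS code above (the small constant `eps` guarding against division by zero is an implementation assumption):

```python
import numpy as np

def nlms(x, d, m, mu_tilde, eps=1e-8):
    """Normalised LMS: the effective step size mu_tilde / ||x(n)||^2
    makes the adaptation speed insensitive to the input power."""
    w = np.zeros(m)
    e = np.zeros(len(x))
    for n in range(m - 1, len(x)):
        xn = x[n - m + 1:n + 1][::-1]
        e[n] = d[n] - w @ xn
        w = w + (mu_tilde / (xn @ xn + eps)) * e[n] * xn
    return w, e
```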
Transform-Based LMS
[Block diagram: the input is transformed (e.g. by a DFT), LMS adaptation is applied to the transform-domain coefficients, and the output is recovered through the inverse transform.]
Least Squares Adaptive Filtering
• With
$\mathbf{R}(n) = \sum_{i=1}^{n} \mathbf{x}(i)\mathbf{x}^T(i), \qquad \mathbf{p}(n) = \sum_{i=1}^{n} d(i)\,\mathbf{x}(i)$
• we have the least squares solution
$\mathbf{w}(n) = \mathbf{R}^{-1}(n)\,\mathbf{p}(n)$
• However, this is computationally very intensive to implement directly.
• Alternative forms make use of recursive estimates of the matrices involved.
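A sketch of the direct (batch) solution, which makes the cost visible (illustrative names; `np.linalg.solve` is used rather than forming $\mathbf{R}^{-1}$ explicitly):

```python
import numpy as np

def batch_ls(x, d, m):
    """Direct least-squares solution w = R(n)^{-1} p(n) with
    R(n) = sum_i x(i) x(i)^T and p(n) = sum_i d(i) x(i)."""
    X = np.stack([x[n - m + 1:n + 1][::-1] for n in range(m - 1, len(x))])
    R = X.T @ X              # accumulated autocorrelation matrix
    p = X.T @ d[m - 1:len(x)]  # accumulated cross-correlation vector
    return np.linalg.solve(R, p)
```

Each new sample requires re-solving an m x m system, i.e. O(m^3) work per update, which is what the recursive form below avoids.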
Recursive Least Squares (RLS)
• Firstly we note that
$\mathbf{R}(n) = \mathbf{R}(n-1) + \mathbf{x}(n)\mathbf{x}^T(n), \qquad \mathbf{p}(n) = \mathbf{p}(n-1) + d(n)\,\mathbf{x}(n)$
• We now use the Matrix Inversion Lemma (or the Sherman–Morrison formula).
• Let
$\mathbf{P}(n) = \mathbf{R}^{-1}(n)$
• Then
$\mathbf{P}(n) = \mathbf{P}(n-1) - \dfrac{\mathbf{P}(n-1)\,\mathbf{x}(n)\,\mathbf{x}^T(n)\,\mathbf{P}(n-1)}{1 + \mathbf{x}^T(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)}$
• The quantity
$\mathbf{k}(n) = \dfrac{\mathbf{P}(n-1)\,\mathbf{x}(n)}{1 + \mathbf{x}^T(n)\,\mathbf{P}(n-1)\,\mathbf{x}(n)}$
is known as the Kalman gain.
• Now use $\mathbf{P}(n)$ in the computation of the filter weights.
• From the earlier expression for the updates we have
$\mathbf{w}(n) = \mathbf{P}(n)\,\mathbf{p}(n) = \mathbf{P}(n)\,[\mathbf{p}(n-1) + d(n)\,\mathbf{x}(n)]$
• and hence
$\mathbf{w}(n) = \mathbf{w}(n-1) + \mathbf{k}(n)\,[d(n) - \mathbf{x}^T(n)\,\mathbf{w}(n-1)]$
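Putting the three recursions together, a minimal RLS sketch (the initialisation $\mathbf{P}(0) = \delta\mathbf{I}$ with a large $\delta$ is a standard but assumed choice):

```python
import numpy as np

def rls(x, d, m, delta=100.0):
    """Recursive least squares: P(n) tracks R^{-1}(n) via the
    matrix inversion lemma; k(n) is the Kalman gain."""
    w = np.zeros(m)
    P = delta * np.eye(m)            # P(0) = delta I
    e = np.zeros(len(x))
    for n in range(m - 1, len(x)):
        xn = x[n - m + 1:n + 1][::-1]
        Px = P @ xn
        k = Px / (1.0 + xn @ Px)     # Kalman gain k(n)
        e[n] = d[n] - w @ xn         # a priori error
        w = w + k * e[n]             # weight update
        P = P - np.outer(k, Px)      # inversion-lemma update of P(n)
    return w, e
```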
Kalman Filters
• The Kalman filter solves a sequential estimation problem and is normally derived from either
• the Bayes approach, or
• the innovations approach.
• Essentially both lead to the same equations as RLS, but the underlying assumptions are different.
• The problem is normally stated as: given a sequence of noisy observations, estimate the sequence of state vectors of a linear system driven by noise.
• Standard formulation:
$\mathbf{x}(n+1) = \mathbf{A}(n)\,\mathbf{x}(n) + \mathbf{v}(n)$ (state equation)
$\mathbf{y}(n) = \mathbf{C}(n)\,\mathbf{x}(n) + \mathbf{u}(n)$ (observation equation)
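A sketch of one predict/update cycle for this model (Python/NumPy; the covariance arguments `Q` and `Rv` for the state and observation noise are assumptions that the slides leave implicit):

```python
import numpy as np

def kalman_step(x_est, P, y, A, C, Q, Rv):
    """One cycle for x(n+1) = A x(n) + v(n), y(n) = C x(n) + u(n)."""
    # Predict the state and its error covariance
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    # Correct using the innovation y - C x_pred
    S = C @ P_pred @ C.T + Rv               # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_est)) - K @ C) @ P_pred
    return x_new, P_new
```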
• Kalman filters may be seen as RLS with the following correspondence between state-space and RLS quantities:
• State-update matrix ↔ identity matrix (the weights are modelled as constant)
• State-noise variance ↔ zero
• Observation matrix ↔ input vector $\mathbf{x}^T(n)$
• Observations ↔ desired response $d(n)$
• State estimate ↔ filter weights $\mathbf{w}(n)$
Cholesky Factorisation
• In situations where storage and, to some extent, computational demand are at a premium, one can use the Cholesky factorisation technique for a positive definite matrix.
• Express $\mathbf{R} = \mathbf{L}\mathbf{L}^T$, where $\mathbf{L}$ is lower triangular.
• There are many techniques for determining the factorisation.
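A minimal sketch of one such technique, the Cholesky–Banachiewicz recursion (a direct implementation for illustration; in practice a library routine such as `numpy.linalg.cholesky` would be used):

```python
import numpy as np

def cholesky(R):
    """Factorise a symmetric positive-definite R as L L^T,
    with L lower triangular (Cholesky-Banachiewicz ordering)."""
    m = R.shape[0]
    L = np.zeros_like(R, dtype=float)
    for i in range(m):
        for j in range(i + 1):
            s = R[i, j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L
```

Only the triangular factor $\mathbf{L}$ needs to be stored, roughly halving the storage of the full matrix, which is the premium-storage motivation mentioned above.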