
SEQUENTIAL STATE-SPACE FILTERS FOR SPEECH ENHANCEMENT



1. SEQUENTIAL STATE-SPACE FILTERS FOR SPEECH ENHANCEMENT
S. Patil, S. Srinivasan, S. Prasad, R. Irwin, G. Lazarou and J. Picone
Intelligent Electronic Systems, Center for Advanced Vehicular Systems
Mississippi State University
URL: http://www.cavs.msstate.edu/hse/ies/publications/conferences/ieee_secon/2006/state_space_algorithms/

2. Recursive State-Space Filters
• Signal model: the state $x(n)$ is modeled as the output of a linear system excited by white noise $v_1(n)$: $x(n+1) = A\,x(n) + v_1(n)$.
• Observation model: relates the observable output of the system, $y(n)$, to the state $x(n)$: $y(n) = C\,x(n) + v_2(n)$.
• The filtering problem is posed as follows: using the set of observed data $\{y(1), \ldots, y(n)\}$, find for each $n \ge 1$ the MMSE estimate of the state vector $x(n)$, and then estimate the clean signal from the signal model (a short simulation sketch follows).
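As a minimal sketch (not part of the original slides), the signal and observation models above can be simulated directly; the matrices A and C and the noise covariances Q and R below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative model parameters (assumptions, not from the paper)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # state-evolution matrix
C = np.array([[1.0, 0.0]])          # observation matrix
Q = 0.01 * np.eye(2)                # covariance of process noise v1(n)
R = np.array([[0.1]])               # covariance of measurement noise v2(n)

rng = np.random.default_rng(0)
x = np.zeros(2)
states, observations = [], []
for n in range(200):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)  # x(n+1) = A x(n) + v1(n)
    y = C @ x + rng.multivariate_normal(np.zeros(1), R)  # y(n) = C x(n) + v2(n)
    states.append(x.copy())
    observations.append(y.copy())
```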

3. Auto-Regressive Model for Speech
• It is common to model speech data as being generated by an AR process of order $p$: $x(n) = \sum_{i=1}^{p} a_i\,x(n-i) + v(n)$, where the $a_i$ are the AR coefficients and $v(n)$ is the driving/process noise.
• This framework can be cast in a conventional state-space setup: stack the $p$ most recent samples into the state vector, place the AR coefficients in the first row of a companion-form state-evolution matrix, and use a measurement function that reads the newest sample from the noisy observations to produce the filtered estimates (see the sketch below).
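A minimal sketch of the companion-form construction implied above (the helper name and the example AR(2) coefficients are assumptions for illustration):

```python
import numpy as np

def ar_to_state_space(a):
    """Map AR coefficients a = [a1, ..., ap] into companion form, so the
    state x(n) = [s(n), s(n-1), ..., s(n-p+1)]^T evolves as
    x(n+1) = F x(n) + [v(n), 0, ..., 0]^T with observation y(n) = H x(n)."""
    p = len(a)
    F = np.zeros((p, p))
    F[0, :] = a                  # first row carries the AR coefficients
    F[1:, :-1] = np.eye(p - 1)   # sub-diagonal shifts past samples down
    H = np.zeros((1, p))
    H[0, 0] = 1.0                # only the newest sample is observed
    return F, H

F, H = ar_to_state_space([1.5, -0.7])   # example AR(2) coefficients (assumed)
```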

4. Kalman Filters
• Linear recursive filters in which a Gaussian random (state) vector is propagated through a linear state-space model.
• Objective: find a 'filtered' estimate of the state vector $x$ at time $k$, expressed as a linear combination of the measurements up to time $k$, that minimizes the quadratic cost function $J(k) = E\big[(x(k) - \hat{x}(k))^T S\,(x(k) - \hat{x}(k))\big]$, where $S$ is a symmetric nonnegative definite weighting matrix.
• The recursion alternates between two stages: prediction (propagate the state estimate and its error covariance through the model) and estimation/filtering (correct the prediction using the new measurement, weighted by the Kalman gain); a sketch follows.
• $P$, $Q$, and $R$ represent the covariance matrices of the state-estimation error, the process noise, and the measurement noise, respectively.
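A minimal sketch of one prediction/estimation cycle of the standard Kalman recursion (the function name and interface are illustrative, not the paper's code):

```python
import numpy as np

def kalman_step(x, P, y, F, H, Q, R):
    """One predict/update cycle of the standard Kalman recursion."""
    # Prediction: propagate the estimate and its error covariance
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Estimation / filtering: correct using the new measurement
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)    # a posteriori estimate
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```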

5. Nonlinear Recursive State-Space Filters
• Nonlinear state estimation: the model becomes $x(n+1) = F(x(n)) + v_1(n)$ and $y(n) = H(x(n)) + v_2(n)$, where $F$ and $H$ are (nonlinear) vector functions.
• Each recursion predicts the state vector, predicts the observation (and hence the innovation), computes the Kalman gain, and forms the a posteriori estimate.
• When propagating a GRV through a linear model (Kalman filtering), it is easy to compute the expectation $E[\cdot]$.
• If the same GRV passes through a nonlinear model, the resulting estimate may no longer be Gaussian, and calculating $E[\cdot]$ is no longer easy.
• Computing $E[\cdot]$ in the recursive filtering process requires an estimate of the pdf of the propagating random vector.

6. Unscented Kalman Filters
• Extended Kalman Filters (EKFs) avoid this difficulty by linearizing the state-space equations and reducing the estimation equations to point estimates.
• The covariance matrices ($P$) are computed by linearizing the dynamics.
• The state distributions are approximated by a GRV and propagated analytically through a 'first-order linearization' of the nonlinear system.
• If the assumption of local linearity is violated, the resulting filter will be unstable.
• The linearization process itself involves computing Jacobians and Hessians, which is nontrivial (see the sketch below).
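As a hedged illustration of the linearization burden mentioned above, a finite-difference Jacobian is one common substitute for analytic derivatives; this sketch is not from the original work:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x, the kind of linearization an
    EKF needs at every step when analytic derivatives are unavailable."""
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - fx) / eps
    return J

# EKF covariance prediction through the linearized dynamics:
# P_pred = J @ P @ J.T + Q, with J evaluated at the current estimate.
```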

7. The Unscented Transform
• Problem statement: calculate the statistics of a random variable that has undergone a nonlinear transformation.
• Approach: it is easier to approximate a distribution than to approximate an arbitrary nonlinear transformation.
• Consider a nonlinearity $y = g(x)$.
• The unscented transform deterministically chooses a set of 'sigma' points in the original ($x$) space which, when mapped into the transformed ($y$) space, generate accurate sample means and covariance matrices in the transformed space.
• An accurate estimate of these statistics allows us to use the Kalman filter in its usual setup, with the expectations replaced by sample means and covariance matrices as appropriate.

8. Unscented Kalman Filters (UKFs)
• Instead of propagating a GRV through the "linearized" system, the UKF focuses on efficiently estimating the statistics of the random vector being propagated through the nonlinear function.
• To compute the statistics of $y = g(x)$, we 'cleverly' sample this function at $2L+1$ deterministically chosen sigma points about the mean of $x$, where $L$ is the state dimension (a sketch follows).
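A minimal sketch of the deterministic sigma-point construction (the scaled-UT parameters alpha and kappa are common defaults, not values given in the slides):

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, kappa=0.0):
    """Return the 2L+1 sigma points of the scaled unscented transform."""
    L = mean.size
    lam = alpha**2 * (L + kappa) - L          # scaling parameter
    S = np.linalg.cholesky((L + lam) * cov)   # matrix square root
    pts = [mean]
    for i in range(L):
        pts.append(mean + S[:, i])            # + i-th column of the root
        pts.append(mean - S[:, i])            # - i-th column of the root
    return np.array(pts), lam
```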

9. Unscented Kalman Filters (UKFs)
• It can be proved that, with these sigma points as samples, suitably 'weighted' sample means and covariance matrices match the true values.
• We use this set of equations to predict the states and observations in the nonlinear state-estimation model: the expected values $E[\cdot]$ and the covariance matrices are replaced with their weighted sample versions (see the sketch below).
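A minimal sketch of the weighted sample statistics, using the standard scaled-UT weights (beta = 2 is the usual choice for Gaussian priors; these defaults are assumptions, not slide content):

```python
import numpy as np

def unscented_statistics(points, lam, alpha=1e-3, beta=2.0):
    """Weighted sample mean and covariance of (transformed) sigma points;
    lam and alpha must match the values used to generate the points."""
    L = (len(points) - 1) // 2
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))   # mean weights
    Wc = Wm.copy()                                   # covariance weights
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)
    mean = Wm @ points
    diff = points - mean
    cov = (Wc[:, None] * diff).T @ diff
    return mean, cov
```

In a UKF, the sigma points are first propagated through the state-evolution or measurement function, and these weighted statistics then stand in for the expectations and covariances in the recursion.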

10. Particle Filters
• Increasingly employed for sequential state estimation in systems with a nonlinear state-space model.
• They do not restrict the statistics of the propagating random vector to a Gaussian distribution.
• They are based on sequential Monte Carlo techniques: represent the pdfs using particles, update these particles with each new observation, use the 'learned' pdfs for state estimation, and use the state estimates for prediction/filtering/smoothing.

11. Particle Filters
• Probability distributions are represented by randomly drawn samples (particles).
• These particles are recursively updated using Bayesian estimation procedures.
• Direct generation of particles for an arbitrary distribution is difficult, so particles are generated from another distribution, the importance density function, chosen to make drawing samples easy.
• The accuracy of the density estimation using particles, and hence of the state estimation and filtering, improves as the number of particles increases.
• 100 particles were used for density estimation in this setup (a sketch follows).
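A minimal bootstrap particle filter sketch for a scalar state, mirroring the 100-particle setting above; the Gaussian likelihood and the use of the transition density as the importance density are standard bootstrap assumptions, not details confirmed by the slides:

```python
import numpy as np

def bootstrap_particle_filter(ys, f, h, q_std, r_std, n_particles=100):
    """Bootstrap PF: the transition density serves as the importance
    density, so the weight update reduces to the measurement likelihood."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)    # initial cloud
    estimates = []
    for y in ys:
        # Propagate particles through the (possibly nonlinear) dynamics
        particles = f(particles) + rng.normal(0.0, q_std, n_particles)
        # Weight by the Gaussian measurement likelihood p(y | particle)
        w = np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2) + 1e-300
        w /= w.sum()
        estimates.append(w @ particles)              # MMSE estimate
        # Resample to combat weight degeneracy
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(estimates)
```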

12. Experimental Setup
• A comparative analysis of the performance of three state-space sequential filtering algorithms is presented.
• Three test signals were used: an auto-regressive process with known driving parameters, a single-tone sinusoid, and sample data from an acoustic utterance.
• An auto-regressive model (order = 10) was used to represent these signals.
• Filtering was performed on a per-frame basis, so that the stationarity assumption is not violated.
• The AR coefficients are learned from the Yule-Walker equations (a sketch follows).
• The filter in each frame is initialized with the last 10 samples of the previously filtered frame.
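A minimal sketch of the per-frame Yule-Walker estimate described above (frame handling and the biased autocorrelation estimator are assumptions):

```python
import numpy as np

def yule_walker(frame, order=10):
    """Estimate AR coefficients a by solving R a = r from one signal
    frame, with R the Toeplitz autocorrelation matrix (Yule-Walker)."""
    s = np.asarray(frame, dtype=float)
    s = s - s.mean()
    r = np.correlate(s, s, mode='full')[s.size - 1:] / s.size  # biased acf
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])                      # Toeplitz
    return np.linalg.solve(R, r[1:order + 1])
```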

13. Experimental Results
• Figure 3: Output MSE vs. input SNR plot for a signal generated from an AR model.
• UKF performs better at low SNRs; KF performs better at high SNRs.

14. Conclusions and Future Work
• A comparison of three sequential state-space filtering algorithms.
• Performance was studied as a function of SNR: at high SNRs, KF consistently outperforms UKF and PF; at low SNRs, UKF and PF perform better than KF.
• This experimental design is ready for application to speech enhancement and robust speech recognition.
• The code is publicly available and suitable for use in large-scale speech enhancement / recognition applications.
• This work can be extended to a nonlinear AR model using a perceptron to learn the nonlinear vector functions, allowing nonlinear state-evolution and measurement functions to be modeled.
• Another attractive model for speech recognition applications is a linear AR model combined with a nonlinear observation model.

15. Resources
• Pattern Recognition Applet: compare popular linear and nonlinear algorithms on standard or custom data sets.
• Speech Recognition Toolkits: a state-of-the-art ASR toolkit for testing the efficacy of these algorithms on recognition tasks.
• Foundation Classes: generic C++ implementations of many popular statistical modeling approaches.

16. References
• L. Rabiner and B. Juang, Fundamentals of Speech Recognition, Prentice Hall, Englewood Cliffs, NJ, 1993.
• M. Gabrea, "Robust Adaptive Kalman Filtering-Based Speech Enhancement Algorithm," Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 1, pp. I-301–I-304, May 2004.
• E. Wan and R. van der Merwe, "The Unscented Kalman Filter for Nonlinear Estimation," Proceedings of the Adaptive Systems for Signal Processing, Communications and Control Symposium, pp. 153-158, October 2000.
• G. Welch and G. Bishop, "An Introduction to the Kalman Filter," Technical Report TR 95-041, Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA, April 2004.
• S. Haykin and E. Moulines, "From Kalman to Particle Filters," Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), Philadelphia, PA, USA, March 2005.
• R. van der Merwe, N. de Freitas, A. Doucet, and E. Wan, "The Unscented Particle Filter," Technical Report CUED/F-INFENG/TR 380, Cambridge University Engineering Department, Cambridge, U.K., August 2000.
• P. Djuric, J. Kotecha, J. Zhang, Y. Huang, T. Ghirmai, M. Bugallo, and J. Miguez, "Particle Filtering," IEEE Signal Processing Magazine, vol. 20, no. 5, pp. 19-38, September 2003.
• M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, February 2002.
