Bayesian Methods for Speech Enhancement
I. Andrianakis, P. R. White
Signal Processing and Control Group, Institute of Sound and Vibration Research, University of Southampton
Progress from last meeting
We have gathered a number of existing Bayesian methods for speech enhancement, added a number of our own ideas, and compiled a framework of Bayesian algorithms with different priors and cost functions. These algorithms were implemented, and simulations were carried out to assess their performance.
Elements of Bayesian Estimation
A central concept in Bayesian estimation is the posterior density, given by Bayes' rule as the product of the likelihood p(y|x) and the prior p(x), normalised by the evidence p(y):
p(x|y) = p(y|x) p(x) / p(y)
Elements of Bayesian Estimation II
Another important element is the choice of cost function, which leads to different estimation rules:
• Square-error cost function → MMSE
• Uniform cost function → MAP
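The two rules can be made concrete with a toy numerical example (our illustration, not from the slides): for a discretised posterior over a spectral amplitude, the square-error cost yields the posterior mean and the uniform cost yields the posterior mode.

```python
import numpy as np

# Toy discretised posterior over an amplitude x (example density, chosen
# for illustration only; any posterior on a grid works the same way).
x = np.linspace(0.0, 5.0, 2001)            # candidate amplitude values
dx = x[1] - x[0]
posterior = x * np.exp(-((x - 1.0) ** 2))  # unnormalised example posterior
posterior /= posterior.sum() * dx          # normalise to integrate to 1

x_mmse = np.sum(x * posterior) * dx        # posterior mean -> MMSE estimate
x_map = x[np.argmax(posterior)]            # posterior mode -> MAP estimate
```

For this asymmetric posterior the two estimates differ, which is exactly why the choice of cost function matters.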
Motivation for this work
A number of successful Bayesian algorithms already exist in the literature:
• Ephraim: MMSE in the amplitude domain with Rayleigh priors
• Rainer: MMSE in the DFT domain with Gamma priors
• Lotter: MAP in the amplitude domain with Gamma priors
Some of our ideas fitted into the framework that seemed to be forming. It was interesting to "complete" the framework and test the algorithms for ourselves!
What have we examined
Estimation rules:
• MMSE
• MAP
Domains:
• Amplitude
• DFT
Likelihood (noise pdf):
• Gaussian
Priors - Chi
A number of instances of the Chi priors are shown in the figure. Strictly speaking, the figure shows the 2-sided Chi pdf; the 1-sided Chi is just the right half scaled by a factor of 2.
Priors - Gamma
…and a number of instances of the Gamma priors. Note that the Gamma pdf is spikier than the Chi for the same value of the shape parameter.
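The comparison can be checked numerically; a sketch using scipy (the slides' exact parameterisation of the priors is not shown here, so the mapping of the shape parameter onto scipy's `df` and `a` arguments below is our assumption):

```python
import numpy as np
from scipy.stats import chi, gamma

# Compare a one-sided Chi prior and a Gamma prior at the same shape
# parameter (0.5 here, chosen for illustration).
shape = 0.5
x = np.linspace(0.01, 3.0, 300)

chi_pdf = chi.pdf(x, df=2 * shape)  # Chi pdf; df mapping is an assumption
gamma_pdf = gamma.pdf(x, a=shape)   # Gamma pdf with shape parameter a

# Near the origin the Gamma pdf rises much faster ("spikier"), so it
# concentrates more prior mass on small spectral amplitudes.
print(gamma_pdf[0] > chi_pdf[0])
```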
Categorisation of the examined algorithms
Eight algorithms were examined, one for each combination of:
• Domain: DFT or Amplitude
• Rule: MMSE or MAP
• Prior: Chi or Gamma
In all of the above algorithms the prior shape parameter can be either fixed or estimated adaptively.
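For reference, the classical Gaussian-prior special case (not one of the Chi/Gamma variants above) has a closed-form MMSE estimator in the DFT domain, the Wiener gain; a minimal sketch:

```python
import numpy as np

# Classical baseline, for orientation only: with a Gaussian prior and
# Gaussian noise, the MMSE estimate of a DFT coefficient is the noisy
# coefficient scaled by the Wiener gain xi / (1 + xi), where xi is the
# a-priori SNR (speech variance / noise variance) in each frequency bin.
def wiener_mmse(noisy_dft, speech_var, noise_var):
    xi = speech_var / noise_var   # a-priori SNR per bin
    gain = xi / (1.0 + xi)        # Wiener gain in [0, 1)
    return gain * noisy_dft

# Two example bins: one speech-dominated, one noise-dominated.
noisy = np.array([1.0 + 1.0j, 0.2 - 0.1j])
clean_est = wiener_mmse(noisy,
                        speech_var=np.array([4.0, 0.01]),
                        noise_var=np.array([1.0, 1.0]))
```

The speech-dominated bin is largely preserved (gain 0.8), while the noise-dominated bin is strongly attenuated, which is the qualitative behaviour all the estimators in the table share.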
Results
In the following we present results from simulations performed with the above algorithms. We first show results for fixed prior shapes, and then examine the case where the priors change shape adaptively.
Results for DFT algorithms and fixed prior shape
[Plots: SegSNR and PESQ]
Input SegSNR was 0 dB. Graphs for other input SNRs look similar.
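SegSNR, used in the plots above, averages frame-wise SNRs between the clean and enhanced signals; a common formulation is sketched below (frame length and clamping thresholds are our assumptions, not taken from the slides):

```python
import numpy as np

def seg_snr(clean, enhanced, frame_len=256, floor=-10.0, ceil=35.0):
    """Mean of per-frame SNRs in dB, each clamped to [floor, ceil] dB."""
    snrs = []
    for start in range(0, len(clean) - frame_len + 1, frame_len):
        c = clean[start:start + frame_len]
        e = enhanced[start:start + frame_len]
        noise_energy = np.sum((c - e) ** 2) + 1e-12   # avoid divide-by-zero
        snr = 10.0 * np.log10(np.sum(c ** 2) / noise_energy + 1e-12)
        snrs.append(np.clip(snr, floor, ceil))
    return float(np.mean(snrs))

x = np.ones(512)
print(seg_snr(x, 0.9 * x))   # each frame: 10*log10(1/0.01) ≈ 20 dB
```

The clamping keeps silent frames (near-infinite or strongly negative SNR) from dominating the average, which is why SegSNR tracks perceived quality better than global SNR. PESQ, by contrast, is a standardised perceptual model (ITU-T P.862) and is not reproduced here.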
Results for AMP algorithms and fixed prior shape
[Plots: SegSNR and PESQ]
Audio samples and spectrograms
In the following we present audio samples and spectrograms of speech enhanced with the algorithms examined so far. The clean and noisy speech segments used in the simulations are given below:
Clean Speech | Noisy Speech
Chi - DFT
[Spectrograms and audio for the MAP and MMSE estimators at shape parameter values 0.1, 0.5 and 1.5; across the panels SNR ranged from 6.98 to 8.78 dB and PESQ from 2.25 to 2.44.]
Gamma - DFT
[Spectrograms and audio for the MAP and MMSE estimators at shape parameter values 0.1, 1.0 and 1.5; across the panels SNR ranged from 8.24 to 8.97 dB and PESQ from 2.31 to 2.44.]
Chi - AMP
[Spectrograms and audio for the MAP and MMSE estimators at shape parameter values 0.1, 0.5 and 1.0; across the panels SNR ranged from 8.12 to 9.43 dB and PESQ from 2.35 to 2.48.]
Gamma - AMP
[Spectrograms and audio for the MAP estimator at shape parameter values 0.1, 0.5 and 1.8; across the panels SNR ranged from 8.99 to 9.28 dB and PESQ from 2.34 to 2.40.]
Results for adaptive prior shape
MMSE algorithms reduce the background noise, especially for low SNRs. MAP algorithms do not seem to improve their performance with adaptive values of the shape parameter. Some examples follow…
Results for adaptive prior shape

            Chi Dft MMSE          Gamma Dft MMSE        Chi Amp
Fixed       SNR 8.89, PESQ 2.42   SNR 8.99, PESQ 2.42   SNR 9.43, PESQ 2.48
Adaptive    SNR 8.96, PESQ 2.50   SNR 9.07, PESQ 2.50   SNR 9.54, PESQ 2.52

(Shape parameter values: 0.05, 0.3 and 0.1.)
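The slides do not specify how the shape parameter is adapted; one simple possibility (our illustration only) is a moment-matching fit, since for a Gamma density the shape parameter equals mean²/variance of the amplitudes it models:

```python
import numpy as np

# Illustration only: adapt a Gamma prior's shape parameter to recent data
# by moment matching. For Gamma(a, theta): mean = a*theta, var = a*theta^2,
# so a = mean^2 / var.
def gamma_shape_moment_fit(amplitudes):
    m = np.mean(amplitudes)
    v = np.var(amplitudes)
    return m * m / max(v, 1e-12)   # guard against zero variance

# Sanity check on synthetic Gamma-distributed amplitudes.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=0.5, scale=1.0, size=20000)
est = gamma_shape_moment_fit(samples)   # should recover a value near 0.5
```

In an enhancement system such a fit would be run per frame or per frequency band over a sliding window, trading adaptation speed against estimator variance.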