Auto-Regressive Hidden Markov Models

Continuous-density HMMs
– The random vector O observed during a hidden state j is drawn from a continuous pdf modeled by the mixture

      b_j(O) = Σ_{m=1}^{M} c_{jm} N(O, μ_{jm}, U_{jm}),

  where N(O, μ, U) is an elliptically symmetric density (e.g., Gaussian) with mean vector μ and covariance matrix U, and the mixture weights satisfy c_{jm} ≥ 0 and Σ_m c_{jm} = 1.
+ Avoids the quantization errors of discrete HMMs (e.g., codebook quantization) and yields better performance.

Auto-Regressive HMMs
– The random vector observed during a hidden state is drawn from a Gaussian auto-regressive process of order p:

      O_k = −Σ_{i=1}^{p} a_i O_{k−i} + e_k,

  where O_k is the k'th component of the observation vector O, {a_i} are the AR coefficients, and e_k is zero-mean Gaussian white noise.
– For a long observation vector, the AR density is governed by the distance

      δ(O, a) = r_a(0) r(0) + 2 Σ_{i=1}^{p} r_a(i) r(i),

  where r(i) is the autocorrelation of the observation samples and r_a(i) is the autocorrelation of the AR parameters (with a_0 = 1).
– HMM training learns the state-transition matrix and, for each mixture component, the mixture weight and the parameters of the basis density, using maximum-likelihood estimation.
+ AR modeling accounts for the correlation among the components of the observation vector.
+ Known to be effective for recognition of discrete speech utterances (e.g., isolated digit recognition).
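The two emission quantities above can be sketched in NumPy. This is a minimal illustration, not the full training procedure: the function names, argument shapes, and the choice of full-covariance Gaussians as the elliptically symmetric basis density are my own assumptions.

```python
import numpy as np

def log_mixture_density(x, c, mu, U):
    """Continuous-density HMM emission: log b_j(x) = log sum_m c_m N(x; mu_m, U_m).

    c: (M,) mixture weights, mu: (M, d) means, U: (M, d, d) covariances.
    Sketch only: uses full-covariance Gaussians as the basis density."""
    d = x.shape[0]
    logs = []
    for cm, m, S in zip(c, mu, U):
        diff = x - m
        _, logdet = np.linalg.slogdet(S)        # log |S|, numerically stable
        quad = diff @ np.linalg.solve(S, diff)  # Mahalanobis term
        logs.append(np.log(cm) - 0.5 * (d * np.log(2 * np.pi) + logdet + quad))
    logs = np.array(logs)
    mx = logs.max()
    return mx + np.log(np.exp(logs - mx).sum())  # log-sum-exp for stability

def ar_autocorr(a, p):
    """Autocorrelation of the AR parameters: r_a(i) = sum_{n=0}^{p-i} a_n a_{n+i}, a_0 = 1."""
    coeffs = np.concatenate(([1.0], a))
    return np.array([coeffs[: p + 1 - i] @ coeffs[i:] for i in range(p + 1)])

def sample_autocorr(O, p):
    """Autocorrelation of the observation samples: r(i) = sum_n O_n O_{n+i}."""
    K = len(O)
    return np.array([O[: K - i] @ O[i:] for i in range(p + 1)])

def ar_distance(O, a):
    """delta(O, a) = r_a(0) r(0) + 2 sum_{i=1}^{p} r_a(i) r(i)."""
    p = len(a)
    ra = ar_autocorr(a, p)
    r = sample_autocorr(O, p)
    return ra[0] * r[0] + 2.0 * (ra[1:] @ r[1:])
```

Note the appeal of δ(O, a): both autocorrelations are cheap inner products, so evaluating the AR emission density avoids the explicit covariance inversion that the general mixture form above requires.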
References:
• Lawrence R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE, Vol. 77, No. 2, February 1989.
• Biing-Hwang Juang and Lawrence R. Rabiner, "Mixture Autoregressive Hidden Markov Models for Speech Signals," IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-33, No. 6, December 1985.