This tutorial provides an introduction to the EM algorithm and its application in parameter estimation for Gaussian mixture and hidden Markov models. It covers the basic concepts, steps, and intuition behind the algorithm, as well as its implementation for two specific applications. The tutorial aims to emphasize intuition rather than mathematical rigor.
A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models
Jeff A. Bilmes, International Computer Science Institute, Berkeley CA, and Computer Science Division, Department of Electrical Engineering and Computer Science, U.C. Berkeley, April 1998
Presenter: Hsu Ting-Wei
Outline • Abstract • Maximum-likelihood • Basic EM • Finding Maximum Likelihood Mixture Densities Parameters via EM • Learning the parameters of an HMM, EM, and the Baum-Welch algorithm
Abstract • Using the Expectation-Maximization (EM) algorithm to solve the maximum-likelihood (ML) parameter estimation problem • EM parameter estimation procedures for two applications: • Finding the parameters of a mixture of Gaussian densities • Finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models • The emphasis is on intuition rather than mathematical rigor
Maximum-likelihood
• Assume a density function $p(x|\Theta)$ governed by the set of parameters $\Theta$, and a data set $\mathcal{X} = \{x_1, \dots, x_N\}$ drawn i.i.d. from this distribution (the incomplete data)
• Because the samples are independent, the likelihood function is
$$p(\mathcal{X}|\Theta) = \prod_{i=1}^{N} p(x_i|\Theta) = \mathcal{L}(\Theta|\mathcal{X})$$
• The ML estimate is the parameter value of maximum likelihood; in practice we maximize the log-likelihood, which is easier to work with:
$$\Theta^{*} = \operatorname*{argmax}_{\Theta}\, \log \mathcal{L}(\Theta|\mathcal{X})$$
• We can set the derivative of $\log \mathcal{L}(\Theta|\mathcal{X})$ to zero to find the maximum. When the resulting equation cannot be solved analytically, we turn to the EM algorithm.
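To make the solvable case concrete, here is a minimal sketch (not from the original slides) of ML estimation for a univariate Gaussian, where setting the derivative of the log-likelihood to zero yields closed-form answers:

```python
import numpy as np

# Toy data drawn i.i.d. from a Gaussian (the "incomplete data" X)
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=1000)

# Setting d/dTheta log L(Theta|X) = 0 for a Gaussian gives closed-form
# ML estimates: the sample mean and the (biased) sample variance.
mu_ml = x.mean()
var_ml = ((x - mu_ml) ** 2).mean()

print(f"ML estimates: mu = {mu_ml:.3f}, sigma^2 = {var_ml:.3f}")
```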
Basic EM • Two main applications of the EM algorithm: • When the data indeed has missing values, due to problems with or limitations of the observation process • When optimizing the likelihood function is analytically intractable, but the likelihood can be simplified by assuming the existence of values for additional, missing (or hidden) parameters • The second case is the more common one in the computational pattern recognition community
Basic EM (cont.)
• Assume the observed data $\mathcal{X}$ is incomplete, and that a complete data set $\mathcal{Z} = (\mathcal{X}, \mathcal{Y})$ exists, where $\mathcal{Y}$ holds the missing (unknown) values and $\mathcal{X}$ the observed values
• The joint density function is
$$p(z|\Theta) = p(x, y|\Theta) = p(y|x, \Theta)\, p(x|\Theta)$$
• With it we define the complete-data likelihood function
$$\mathcal{L}(\Theta|\mathcal{Z}) = \mathcal{L}(\Theta|\mathcal{X}, \mathcal{Y}) = p(\mathcal{X}, \mathcal{Y}|\Theta)$$
• Since $\mathcal{Y}$ is unknown, this likelihood is a random variable; $\mathcal{X}$ and $\Theta$ are constants. $\mathcal{L}(\Theta|\mathcal{X})$ is then called the incomplete-data likelihood function
Basic EM (cont.)
• The EM algorithm first finds the expected value of the complete-data log-likelihood $\log p(\mathcal{X}, \mathcal{Y}|\Theta)$ with respect to the unknown data $\mathcal{Y}$, given the observed data $\mathcal{X}$ and the current parameter estimates
• E-step: evaluation of the expectation
$$Q(\Theta, \Theta^{(i-1)}) = E\big[\log p(\mathcal{X}, \mathcal{Y}|\Theta) \mid \mathcal{X}, \Theta^{(i-1)}\big] = \int_{y \in \Upsilon} \log p(\mathcal{X}, y|\Theta)\, f(y|\mathcal{X}, \Theta^{(i-1)})\, dy$$
• Here $\Theta^{(i-1)}$ are the current parameters (constants), $\Theta$ is the new parameter to be optimized, and $\mathcal{Y}$ is a random variable governed by the distribution $f(y|\mathcal{X}, \Theta^{(i-1)})$; $\Upsilon$ is the space of values $y$ can take on
Basic EM (cont.)
• M-step: maximize the expectation computed in the E-step:
$$\Theta^{(i)} = \operatorname*{argmax}_{\Theta}\, Q(\Theta, \Theta^{(i-1)})$$
• These two steps are repeated as necessary
• Each iteration is guaranteed to increase the log-likelihood, and the algorithm is guaranteed to converge to a local maximum of the likelihood function
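The overall control flow is just an alternation of these two steps until the likelihood stops improving. A minimal sketch of that loop (the functions `e_step`, `m_step`, and `log_likelihood` are placeholders to be supplied by a concrete model, such as the mixture case below):

```python
def run_em(x, theta, e_step, m_step, log_likelihood, tol=1e-6, max_iter=200):
    """Generic EM loop: alternate E- and M-steps until the
    incomplete-data log-likelihood improves by less than `tol`."""
    prev_ll = log_likelihood(x, theta)
    for _ in range(max_iter):
        stats = e_step(x, theta)    # expectations over the hidden variables
        theta = m_step(x, stats)    # parameters maximizing Q(theta, theta_old)
        ll = log_likelihood(x, theta)
        if ll - prev_ll < tol:      # monotone non-decreasing by EM theory
            break
        prev_ll = ll
    return theta
```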
[Figure sequence, labels translated from Chinese: one EM iteration visualized on the objective function (the log-likelihood). Panel titles: maximizing the objective function from the current model parameters; finding the general form of the auxiliary function; finding the "best" auxiliary function at the current parameter value; taking the global maximum of the auxiliary function; then repeating these steps from the new parameter value.]
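The picture above is the standard lower-bound view of EM (not derived on the slides, but worth stating): for any distribution $q(y)$ over the hidden data, Jensen's inequality gives
$$\log p(\mathcal{X}|\Theta) = \log \sum_{y} q(y)\, \frac{p(\mathcal{X}, y|\Theta)}{q(y)} \;\ge\; \sum_{y} q(y)\, \log \frac{p(\mathcal{X}, y|\Theta)}{q(y)}$$
Choosing $q(y) = f(y|\mathcal{X}, \Theta^{(i-1)})$ makes the bound tight at $\Theta^{(i-1)}$ and equal to $Q(\Theta, \Theta^{(i-1)})$ plus an entropy term that does not depend on $\Theta$. Maximizing $Q$ in the M-step therefore can only increase the log-likelihood, which is why each auxiliary-function maximization in the figures moves the parameters uphill on the objective.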
Finding Maximum Likelihood Mixture Densities Parameters via EM
• Probabilistic model: a density built from $M$ component densities mixed together with $M$ mixing coefficients
$$p(x|\Theta) = \sum_{i=1}^{M} \alpha_i\, p_i(x|\theta_i), \qquad \sum_{i=1}^{M} \alpha_i = 1, \qquad \Theta = (\alpha_1, \dots, \alpha_M, \theta_1, \dots, \theta_M)$$
• Incomplete-data log-likelihood expression:
$$\log \mathcal{L}(\Theta|\mathcal{X}) = \log \prod_{i=1}^{N} p(x_i|\Theta) = \sum_{i=1}^{N} \log \left( \sum_{j=1}^{M} \alpha_j\, p_j(x_i|\theta_j) \right)$$
• This is difficult to optimize because it contains the log of a sum
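As a practical aside (not from the original slides), the log of a sum is also numerically delicate: component densities underflow easily, so the incomplete-data log-likelihood is usually evaluated with the log-sum-exp trick. A minimal sketch for a univariate Gaussian mixture, with names of my own choosing:

```python
import numpy as np

def mixture_log_likelihood(x, alphas, mus, sigmas):
    """Incomplete-data log-likelihood of a 1-D Gaussian mixture,
    computed stably via log-sum-exp over the M components."""
    x = np.asarray(x)[:, None]                       # shape (N, 1)
    log_norm = -0.5 * np.log(2 * np.pi * sigmas**2)  # per-component constants
    log_comp = np.log(alphas) + log_norm - 0.5 * ((x - mus) / sigmas) ** 2
    m = log_comp.max(axis=1, keepdims=True)          # log-sum-exp stabilizer
    return float((m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))).sum())
```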
Finding Maximum Likelihood Mixture Densities Parameters via EM (cont.)
• If we knew the values of $\mathcal{Y} = \{y_i\}_{i=1}^{N}$, where $y_i \in \{1, \dots, M\}$ indicates which component generated $x_i$, the likelihood would become
$$\log \mathcal{L}(\Theta|\mathcal{X}, \mathcal{Y}) = \sum_{i=1}^{N} \log\big(\alpha_{y_i}\, p_{y_i}(x_i|\theta_{y_i})\big)$$
which no longer contains a log of a sum
• Since we do not know the values of $\mathcal{Y}$, we treat it as a random vector; given a current parameter guess $\Theta^g$, Bayes's rule gives
$$p(y_i = j\,|\,x_i, \Theta^g) = \frac{\alpha_j^g\, p_j(x_i|\theta_j^g)}{\sum_{k=1}^{M} \alpha_k^g\, p_k(x_i|\theta_k^g)}$$
Finding Maximum Likelihood Mixture Densities Parameters via EM (cont.)
• E-step: taking the expectation of the complete-data log-likelihood under $p(y|\mathcal{X}, \Theta^g)$ and marginalizing out every $y_\ell$ with $\ell \ne i$ (each of those sums equals 1), the $Q$ function simplifies to
$$Q(\Theta, \Theta^g) = \sum_{i=1}^{N} \sum_{j=1}^{M} \log\big(\alpha_j\, p_j(x_i|\theta_j)\big)\, p(j\,|\,x_i, \Theta^g)$$
Finding Maximum Likelihood Mixture Densities Parameters via EM (cont.)
• M-step for the mixing coefficients: maximize $Q$ subject to $\sum_j \alpha_j = 1$ by adding a Lagrange multiplier $\lambda$ and solving
$$\frac{\partial}{\partial \alpha_j}\left[ Q(\Theta, \Theta^g) + \lambda\left(\sum_{j=1}^{M} \alpha_j - 1\right) \right] = 0$$
Summing the resulting equations over $j$ gives $\lambda = -N$, and hence
$$\alpha_j^{new} = \frac{1}{N} \sum_{i=1}^{N} p(j\,|\,x_i, \Theta^g)$$
Finding Maximum Likelihood Mixture Densities Parameters via EM (cont.)
• M-step for the means, with Gaussian components. Recall the Gaussian density (call it (*)):
$$p_j(x|\mu_j, \Sigma_j) = \frac{1}{(2\pi)^{d/2}\, |\Sigma_j|^{1/2}}\, e^{-\frac{1}{2}(x - \mu_j)^T \Sigma_j^{-1} (x - \mu_j)}$$
• Differentiate $Q$ with respect to $\mu_j$ using (*) and set the result to zero; substituting back into (*) and simplifying gives
$$\mu_j^{new} = \frac{\sum_{i=1}^{N} x_i\, p(j\,|\,x_i, \Theta^g)}{\sum_{i=1}^{N} p(j\,|\,x_i, \Theta^g)}$$
Finding Maximum Likelihood Mixture Densities Parameters via EM (cont.)
• M-step for the covariances: differentiating $Q$ with respect to $\Sigma_j$ via (*) and setting the derivative to zero gives
$$\Sigma_j^{new} = \frac{\sum_{i=1}^{N} p(j\,|\,x_i, \Theta^g)\,(x_i - \mu_j^{new})(x_i - \mu_j^{new})^T}{\sum_{i=1}^{N} p(j\,|\,x_i, \Theta^g)}$$
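Putting the three updates together, here is a minimal sketch of one full EM pass for a 1-D Gaussian mixture (a simplification of the multivariate case above; variable names are my own):

```python
import numpy as np

def gmm_em_step(x, alphas, mus, sigmas):
    """One EM iteration for a 1-D Gaussian mixture.
    x: (N,) data; alphas, mus, sigmas: (M,) current parameters."""
    x = np.asarray(x)
    # E-step: responsibilities p(j | x_i, Theta^g), shape (N, M)
    dens = (np.exp(-0.5 * ((x[:, None] - mus) / sigmas) ** 2)
            / (np.sqrt(2 * np.pi) * sigmas))
    resp = alphas * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: the closed-form updates derived above
    nk = resp.sum(axis=0)                           # effective counts per component
    alphas_new = nk / len(x)                        # alpha_j = (1/N) sum_i p(j|x_i)
    mus_new = (resp * x[:, None]).sum(axis=0) / nk  # responsibility-weighted means
    var_new = (resp * (x[:, None] - mus_new) ** 2).sum(axis=0) / nk
    return alphas_new, mus_new, np.sqrt(var_new)

# Example: a few iterations on synthetic two-component data
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 1.0, 500)])
alphas, mus, sigmas = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    alphas, mus, sigmas = gmm_em_step(x, alphas, mus, sigmas)
print(alphas, mus, sigmas)
```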
Learning the parameters of an HMM, EM, and the Baum-Welch algorithm
• A Hidden Markov Model is a probabilistic model of the joint probability of a collection of random variables $O_1, \dots, O_T, Q_1, \dots, Q_T$: the observations $O_t$ (continuous or discrete) and the states $Q_t$ ("hidden" and discrete)
• The model is $\lambda = (A, B, \pi)$: the state-transition matrix $A$, the observation probabilities $B$, and the initial-state distribution $\pi$
• Two assumptions: • First-order assumption: $P(q_t\,|\,q_{t-1}, q_{t-2}, \dots) = P(q_t\,|\,q_{t-1})$ • Output-independence assumption: $P(o_t\,|\,o_1, \dots, o_{t-1}, q_1, \dots, q_t) = P(o_t\,|\,q_t)$
• Three basic problems: evaluation, decoding, and learning (the last is solved by EM, i.e., Baum-Welch)
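For the discrete case, a minimal concrete representation (my own variable names) is just three arrays; the forward recursion below evaluates $P(O|\lambda)$, the first of the three basic problems:

```python
import numpy as np

# Toy discrete HMM: 2 hidden states, 3 observation symbols
pi = np.array([0.6, 0.4])            # initial-state distribution
A = np.array([[0.7, 0.3],            # A[i, j] = P(q_t = j | q_{t-1} = i)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],       # B[j, k] = P(o_t = v_k | q_t = j)
              [0.1, 0.3, 0.6]])

def forward(obs, pi, A, B):
    """Forward recursion: alpha[t, i] = P(o_1..o_t, q_t = i | lambda)."""
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

obs = [0, 1, 2, 1]
print(forward(obs, pi, A, B)[-1].sum())   # P(O | lambda)
```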
Learning the parameters of an HMM, EM, and the Baum-Welch algorithm (cont.)
• Estimation formulas are derived using the $Q$ function. The incomplete-data likelihood is $P(O|\lambda)$; the complete-data likelihood is $P(O, q|\lambda)$, where $q$ is the hidden state sequence
• E-step:
$$Q(\lambda, \lambda') = \sum_{q \in \mathcal{Q}} \log P(O, q|\lambda)\; P(O, q|\lambda')$$
where $\mathcal{Q}$ is the space of all state sequences of length $T$
• In the discrete case we know
$$P(O, q|\lambda) = \pi_{q_0} \prod_{t=1}^{T} a_{q_{t-1} q_t}\, b_{q_t}(o_t)$$
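Taking the log of the complete-data likelihood splits $Q$ into three independent terms, one per parameter group, which is why the M-step below can optimize $\pi$, $A$, and $B$ separately (a standard step in the derivation, made explicit here):
$$Q(\lambda, \lambda') = \sum_{q} \log \pi_{q_0}\, P(O, q|\lambda') + \sum_{q} \left(\sum_{t=1}^{T} \log a_{q_{t-1} q_t}\right) P(O, q|\lambda') + \sum_{q} \left(\sum_{t=1}^{T} \log b_{q_t}(o_t)\right) P(O, q|\lambda')$$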
Learning the parameters of an HMM, EM, and the Baum-Welch algorithm (cont.)
• M-step in the discrete case, initial-state probabilities: optimize the first term of $Q$ subject to $\sum_i \pi_i = 1$ by adding a Lagrange multiplier, giving
$$\pi_i = \frac{P(O, q_0 = i\,|\,\lambda')}{P(O|\lambda')}$$
Learning the parameters of an HMM, EM, and the Baum-Welch algorithm (cont.)
• M-step in the discrete case, transition probabilities: optimize the second term of $Q$ subject to $\sum_j a_{ij} = 1$ by adding a Lagrange multiplier, giving
$$a_{ij} = \frac{\sum_{t=1}^{T} P(O, q_{t-1} = i, q_t = j\,|\,\lambda')}{\sum_{t=1}^{T} P(O, q_{t-1} = i\,|\,\lambda')}$$
Learning the parameters of an HMM, EM, and the Baum-Welch algorithm (cont.)
• M-step in the discrete case, observation probabilities: optimize the third term of $Q$ subject to $\sum_k b_j(k) = 1$ by adding a Lagrange multiplier, giving
$$b_j(k) = \frac{\sum_{t=1}^{T} P(O, q_t = j\,|\,\lambda')\; \delta_{o_t, v_k}}{\sum_{t=1}^{T} P(O, q_t = j\,|\,\lambda')}$$
where $\delta_{o_t, v_k} = 1$ if $o_t = v_k$ and $0$ otherwise
• These ratios are computed efficiently with the forward-backward variables; doing so is exactly the Baum-Welch algorithm (see the sketch below)
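A minimal sketch of one Baum-Welch re-estimation pass for the discrete case, continuing the earlier snippet (it reuses `np`, `forward`, and the toy `pi`, `A`, `B`, `obs`; single sequence, no scaling, so illustrative rather than production-ready):

```python
def backward(obs, A, B):
    """Backward recursion: beta[t, i] = P(o_{t+1}..o_T | q_t = i, lambda)."""
    beta = np.ones((len(obs), A.shape[0]))
    for t in range(len(obs) - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

def baum_welch_step(obs, pi, A, B):
    """One EM (Baum-Welch) re-estimation pass on a single sequence."""
    alpha, beta = forward(obs, pi, A, B), backward(obs, A, B)
    p_obs = alpha[-1].sum()                       # P(O | lambda)
    gamma = alpha * beta / p_obs                  # gamma[t, i] = P(q_t = i | O)
    # xi[t, i, j] = P(q_t = i, q_{t+1} = j | O, lambda)
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * (B[:, obs[1:]].T * beta[1:])[:, None, :]) / p_obs
    pi_new = gamma[0]
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    obs_onehot = np.eye(B.shape[1])[obs]          # (T, K) indicator of symbols
    B_new = (gamma.T @ obs_onehot) / gamma.sum(axis=0)[:, None]
    return pi_new, A_new, B_new

for _ in range(20):
    pi, A, B = baum_welch_step(obs, pi, A, B)
```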
Learning the parameters of an HMM, EM, and the Baum-Welch algorithm (cont.)
• M-step in the continuous case (Gaussian mixture observation densities): define $\gamma_t(j, k)$, the probability of being in state $j$ at time $t$ with the $k$-th mixture component accounting for $o_t$:
$$\gamma_t(j, k) = \left[\frac{\alpha_t(j)\,\beta_t(j)}{\sum_{j'} \alpha_t(j')\,\beta_t(j')}\right] \left[\frac{c_{jk}\, \mathcal{N}(o_t;\, \mu_{jk}, \Sigma_{jk})}{\sum_{k'} c_{jk'}\, \mathcal{N}(o_t;\, \mu_{jk'}, \Sigma_{jk'})}\right]$$
• The re-estimation formulas then mirror the Gaussian-mixture updates derived earlier, with $\gamma_t(j, k)$ as the weights:
$$c_{jk} = \frac{\sum_{t} \gamma_t(j, k)}{\sum_{t} \sum_{k'} \gamma_t(j, k')}, \qquad \mu_{jk} = \frac{\sum_{t} \gamma_t(j, k)\, o_t}{\sum_{t} \gamma_t(j, k)}, \qquad \Sigma_{jk} = \frac{\sum_{t} \gamma_t(j, k)\,(o_t - \mu_{jk})(o_t - \mu_{jk})^T}{\sum_{t} \gamma_t(j, k)}$$
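A minimal sketch of computing $\gamma_t(j, k)$ from the forward-backward variables and per-component Gaussian densities (array shapes are my own convention: `T` frames, `N` states, `K` mixture components per state):

```python
import numpy as np

def mixture_occupancy(alpha, beta, c, comp_dens):
    """gamma[t, j, k]: probability of state j at time t with mixture
    component k accounting for o_t.
    alpha, beta: (T, N) forward/backward variables
    c: (N, K) mixture weights
    comp_dens[t, j, k] = N(o_t; mu_jk, Sigma_jk)."""
    state_post = alpha * beta
    state_post /= state_post.sum(axis=1, keepdims=True)  # P(q_t = j | O)
    comp_post = c[None] * comp_dens                      # (T, N, K)
    comp_post /= comp_post.sum(axis=2, keepdims=True)    # P(k | q_t = j, o_t)
    return state_post[:, :, None] * comp_post

# The continuous re-estimation formulas above are then time-averages
# weighted by gamma[t, j, k].
```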