This article explains the concepts of Hidden Markov Models (HMMs) and how they are used for inference, decoding, and parameter estimation. It also explores applications of HMMs, including speech recognition and part-of-speech tagging.
Hidden Markov Models David Meir Blei November 1, 1999
What is an HMM? A graphical model: circles indicate states; arrows indicate probabilistic dependencies between states.
What is an HMM? Green circles are hidden states, each dependent only on the previous state: “The past is independent of the future given the present.”
What is an HMM? Purple nodes are observed states, each dependent only on its corresponding hidden state.
HMM Formalism {S, K, π, A, B} S = {s1 … sN} are the values of the hidden states; K = {k1 … kM} are the values of the observations.
HMM Formalism {S, K, π, A, B} π = {πi} are the initial state probabilities; A = {aij} are the state transition probabilities; B = {bik} are the observation (emission) probabilities.
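To make the formalism concrete, here is a minimal sketch of how such a parameter set might be written down in Python with NumPy. The two-state weather example and all of its numbers are illustrative assumptions, not part of the original slides.

import numpy as np

# Hidden state values S = {Rainy, Sunny}; observation values K = {walk, shop, clean}.
# A common toy example, chosen here only for illustration.
states = ["Rainy", "Sunny"]
observations = ["walk", "shop", "clean"]

pi = np.array([0.6, 0.4])              # initial state probabilities, pi_i
A = np.array([[0.7, 0.3],              # transition probabilities, a_ij = P(x_{t+1}=j | x_t=i)
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],         # emission probabilities, b_ik = P(o_t=k | x_t=i)
              [0.6, 0.3, 0.1]])

# Each row of A and B is a probability distribution, so rows must sum to 1.
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)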
Inference in an HMM • Compute the probability of a given observation sequence. • Given an observation sequence, compute the most likely hidden state sequence. • Given an observation sequence and a set of possible models, determine which model most closely fits the data.
Decoding Given an observation sequence and a model, compute the probability of the observation sequence.
Decoding [Trellis: hidden states x1 … xT with observations o1 … oT]
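The quantity built up on the decoding slides can be written out with the notation above. The following is the standard decomposition, with μ = (A, B, π) denoting the model; it is a reconstruction in textbook form, not necessarily the exact equations shown on the original slides.

P(O \mid \mu) \;=\; \sum_{X} P(O \mid X, \mu)\, P(X \mid \mu)
           \;=\; \sum_{x_1 \cdots x_T} \pi_{x_1}\, b_{x_1 o_1} \prod_{t=1}^{T-1} a_{x_t x_{t+1}}\, b_{x_{t+1} o_{t+1}}

Evaluated directly, this sums over all N^T possible state sequences, which is exponential in T; the forward procedure on the next slides exploits the model's structure to avoid that.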
Forward Procedure • Special structure gives us an efficient solution using dynamic programming. • Intuition: the probability of the first t observations is the same for all possible state sequences of length t+1. • Define the forward variable αi(t), as in the sketch below.
Forward Procedure [Trellis: hidden states x1 … xT with observations o1 … oT]
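A sketch of the forward variable and its recursion in the standard textbook form, using the {π, A, B} notation defined earlier (the exact presentation on the original slides may differ in notation):

\alpha_i(t) \;=\; P(o_1 \cdots o_t,\; x_t = i \mid \mu)

\text{Initialization:}\quad \alpha_i(1) = \pi_i\, b_{i o_1}
\text{Induction:}\quad \alpha_j(t+1) = \Big[\sum_{i=1}^{N} \alpha_i(t)\, a_{ij}\Big]\, b_{j\, o_{t+1}}
\text{Termination:}\quad P(O \mid \mu) = \sum_{i=1}^{N} \alpha_i(T)

The recursion costs O(N²T), linear in the sequence length rather than exponential.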
Backward Procedure Probability of the rest of the observation sequence given the state at time t.
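The corresponding backward variable, again in standard form and with the same caveat about notation:

\beta_i(t) \;=\; P(o_{t+1} \cdots o_T \mid x_t = i,\, \mu)

\text{Initialization:}\quad \beta_i(T) = 1
\text{Induction:}\quad \beta_i(t) = \sum_{j=1}^{N} a_{ij}\, b_{j\, o_{t+1}}\, \beta_j(t+1)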
Decoding Solution Forward procedure, backward procedure, or a combination of the two.
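Putting the two together, the observation probability can be computed in any of three equivalent ways (standard identities implied by the definitions above):

P(O \mid \mu) = \sum_{i=1}^{N} \alpha_i(T) \qquad\text{(forward)}
P(O \mid \mu) = \sum_{i=1}^{N} \pi_i\, b_{i o_1}\, \beta_i(1) \qquad\text{(backward)}
P(O \mid \mu) = \sum_{i=1}^{N} \alpha_i(t)\, \beta_i(t) \quad\text{for any } t \qquad\text{(combination)}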
Best State Sequence Find the state sequence that best explains the observations: the Viterbi algorithm.
Viterbi Algorithm The state sequence that maximizes the probability of seeing the observations up to time t−1, landing in state j, and seeing the observation at time t.
Viterbi Algorithm Recursive computation.
Viterbi Algorithm Compute the most likely state sequence by working backwards.
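A minimal runnable sketch of the Viterbi algorithm as described on these slides, in Python with NumPy. The function name and array layout are illustrative choices, and a practical implementation would typically work in log space to avoid underflow.

import numpy as np

def viterbi(pi, A, B, obs):
    """Return the most likely hidden state sequence for observation indices `obs`.
    pi: (N,) initial probabilities; A: (N, N) transitions; B: (N, M) emissions."""
    N, T = len(pi), len(obs)
    delta = np.zeros((T, N))            # delta[t, j]: probability of the best path ending in state j at time t
    psi = np.zeros((T, N), dtype=int)   # psi[t, j]: predecessor of state j on that best path
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A      # scores[i, j] = delta[t-1, i] * a_ij
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    # Work backwards from the best final state to recover the full sequence.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

For example, with the toy pi, A, and B sketched earlier, viterbi(pi, A, B, [0, 2, 1]) returns a list of state indices for the observation sequence walk, clean, shop.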
Parameter Estimation • Given an observation sequence, find the model that is most likely to produce that sequence. • No analytic method. • Given a model and an observation sequence, update the model parameters to better fit the observations.
Parameter Estimation Probability of traversing an arc; probability of being in state i.
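In standard notation (which may differ from the symbols on the original slides), these two quantities are usually written as follows; both are computed from the forward and backward variables:

\xi_t(i, j) \;=\; P(x_t = i,\, x_{t+1} = j \mid O, \mu)
          \;=\; \frac{\alpha_i(t)\, a_{ij}\, b_{j\, o_{t+1}}\, \beta_j(t+1)}{P(O \mid \mu)}
          \qquad\text{(probability of traversing the arc } i \to j \text{ at time } t\text{)}

\gamma_i(t) \;=\; P(x_t = i \mid O, \mu) \;=\; \frac{\alpha_i(t)\, \beta_i(t)}{P(O \mid \mu)}
          \qquad\text{(probability of being in state } i \text{ at time } t\text{)}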
Parameter Estimation Now we can compute the new estimates of the model parameters.
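The resulting re-estimation formulas, in their standard Baum-Welch form (a reconstruction consistent with the quantities above, not a verbatim copy of the slides):

\hat{\pi}_i = \gamma_i(1)
\hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i, j)}{\sum_{t=1}^{T-1} \gamma_i(t)}
\hat{b}_{ik} = \frac{\sum_{t\,:\,o_t = k} \gamma_i(t)}{\sum_{t=1}^{T} \gamma_i(t)}

Iterating the expectation step (computing ξ and γ) and this maximization step is the Baum-Welch algorithm, an instance of EM; each iteration is guaranteed not to decrease the likelihood of the observations.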
HMM Applications Generating parameters for n-gram models; part-of-speech tagging; speech recognition.
The Most Important Thing We can use the special structure of this model to do a lot of neat math and solve problems that are otherwise not solvable.