Hidden Markov Models for Automatic Speech Recognition Dr. Mike Johnson Marquette University, EECE Dept.
Overview • Intro: The problem with sequential data • Markov chains • Hidden Markov Models • Key HMM algorithms • Evaluation • Alignment • Training / parameter estimation • Examples / applications
Big Picture View of Statistical Models • (diagram: from basic Gaussian models to HMMs)
Historical Method: Dynamic Time Warping • DTW performs a dynamic path search against a stored template • Can be solved using dynamic programming
Alternative: Sequential Modeling • Use a Markov chain (state machine) • (diagram: state machine S1, S2, S3, with a state distribution model for the data in each state)
Markov Chains (discrete-time & state) • A Markov chain is a discrete-time, discrete-state Markov process. The probability that the chain moves to any new state depends only on the current state; these are called transition probabilities. • Note: since the transition probabilities are fixed, there is also a time-invariance assumption. (Also false in practice, of course, but useful.)
Graphical representation • (diagram: states S1, S2, S3 with self-loops a11, a22, a33 and transitions a12, a13, a21, a23, a31, a32) • Markov chain parameters include: • Transition probability values aij • Initial state probabilities π1, π2, π3
Example: Weather Patterns • Probability of Rain, Clouds, or Sunshine modeled as a Markov chain with transition matrix A (shown on the slide). • Note: a matrix of this form (square, each row summing to 1) is called a stochastic matrix.
Two-step probabilities • If it's raining today, what's the probability of it raining two days from now? • Need two-step probabilities. Answer = 0.7·0.7 + 0.2·0.4 + 0.1·0.1 = 0.58 • These can also be read directly from A².
Steady-state • The N-step probabilities can be obtained from A^N, so A is sufficient to determine the likelihood of any possible state sequence. • What's the limiting case? Does it matter whether it was raining 1000 days ago? In A^1000 every row has converged to (approximately) the same stationary distribution, so the state 1000 days ago no longer matters.
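The two-step and steady-state claims above can be checked numerically. A minimal sketch: only the Rain row (0.7, 0.2, 0.1) and the rain-column entries 0.4 and 0.1 are given on the slides, so the remaining matrix entries below are illustrative assumptions.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    """Raise a square matrix to an integer power by repeated squaring."""
    result = [[float(i == j) for j in range(len(A))] for i in range(len(A))]
    base = A
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result

# States: 0 = Rain, 1 = Clouds, 2 = Sun
A = [[0.7, 0.2, 0.1],   # Rain row, from the slide
     [0.4, 0.4, 0.2],   # assumed (only the 0.4 is from the slide)
     [0.1, 0.3, 0.6]]   # assumed (only the 0.1 is from the slide)

A2 = mat_mul(A, A)
print(round(A2[0][0], 2))   # two-step P(Rain -> Rain) = 0.58

A1000 = mat_pow(A, 1000)
# every row has converged to the same stationary distribution
print([round(p, 4) for p in A1000[0]])
```

With these assumed values, A²[0][0] reproduces the slide's 0.58, and all rows of A^1000 are identical to machine precision.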
Probability of state sequence • The probability of any state sequence S = (s1, s2, …, sT) is given by: P(S) = πs1 · as1s2 · as2s3 ··· asT−1sT • Training: learn the transition probabilities by counting the state transitions in the training data.
Weather classification • Using a Markov chain for classification: • Train one Markov chain model for each class, e.g. a weather transition matrix for each city: Milwaukee, Phoenix, and Miami • Given a sequence of state observations, identify the most likely city by choosing the model that gives the highest overall probability.
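The count-based training and highest-probability classification described above can be sketched as follows. The city names match the slide; the toy state sequences ('R' = rain, 'C' = clouds, 'S' = sun) are invented for illustration.

```python
import math
from collections import Counter

STATES = ['R', 'C', 'S']

def train_chain(sequences):
    """Estimate transition probabilities by counting state pairs."""
    counts = Counter()
    totals = Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {(a, b): counts[(a, b)] / totals[a]
            for a in STATES for b in STATES if totals[a]}

def sequence_log_prob(A, seq):
    """Log-probability of a state sequence under transition dict A
    (initial-state term omitted; unseen transitions floored at 1e-12)."""
    return sum(math.log(max(A.get((a, b), 0.0), 1e-12))
               for a, b in zip(seq, seq[1:]))

milwaukee = train_chain(['RRCRRC', 'CRRRCC'])   # assumed rainy training data
phoenix = train_chain(['SSSCSS', 'SSCSSS'])     # assumed sunny training data

obs = 'RRCR'
models = {'Milwaukee': milwaukee, 'Phoenix': phoenix}
best = max(models, key=lambda name: sequence_log_prob(models[name], obs))
print(best)   # Milwaukee
```

A rainy observation sequence scores far higher under the rain-heavy transition matrix, so the Milwaukee model wins.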
Hidden states & HMMs • What if you can't directly observe the states? • But there are measurements/observations that relate to the probability of being in each state. • States hidden from view = Hidden Markov Model.
General Case HMM • si : state i • aij : P(si → sj) • ot : output at time t • bj(ot) : P(ot | sj) • πi : initial probability of state i • (diagram: states with output distributions b1(ot), b2(ot), b3(ot), b4(ot))
Weather HMM • Extend the weather Markov chain to an HMM: • We can't see whether it's raining, cloudy, or sunny. • But we can make some observations: • Humidity H • Temperature T • Pressure P • How do we calculate… • the probability of an observation sequence under a model? • How do we learn… • state transition probabilities for unseen states? • observation probabilities in each state?
Observation models • How do we characterize these observations? • Discrete/categorical observations: learn the probability mass function directly. • Continuous observations: assume a parametric model. • Our example: assume a Gaussian distribution • Need to estimate the mean and variance of the humidity, temperature, and pressure for each state (9 means and 9 variances for each city model)
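Fitting one Gaussian per state, as described above, reduces to computing a mean and variance from the observations assigned to that state. A minimal sketch; the humidity readings below are invented for illustration:

```python
import math

def gaussian_fit(values):
    """Maximum-likelihood mean and variance of a list of observations."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var

def gaussian_pdf(x, mean, var):
    """Evaluate the Gaussian density b_j(o_t) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

rainy_humidity = [0.9, 0.85, 0.95, 0.8]   # assumed observations for the Rain state
mean, var = gaussian_fit(rainy_humidity)
print(round(mean, 3))   # 0.875
```

Repeating this for humidity, temperature, and pressure in each of the three states yields the 9 means and 9 variances per city model.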
HMM classification • Using an HMM for classification: • Training: one HMM for each class • Transition matrix plus state means and variances (27 parameters) for each city • Classification: given a sequence of observations: • Evaluate P(O | model) for each city (much harder to compute for an HMM than for a Markov chain) • Choose the model that gives the highest overall probability.
Using HMMs for Speech Recognition • States represent the beginning, middle, and end of a phoneme • Gaussian Mixture Model in each state • (diagram: left-to-right HMM S1 → S2 → S3 → S4 → S5 with self-loops a22, a33, a44, skip transitions a13, a24, a35, and output distributions b2(·), b3(·), b4(·); S1 is the start state and S5 the end state)
Fundamental HMM Computations • Evaluation: Given a model λ and an observation sequence O = (o1, o2, …, oT), compute P(O | λ). • Alignment: Given λ and O, compute the 'correct' state sequence S = (s1, s2, …, sT), such as S* = argmaxS P(S | O, λ). • Training: Given a group of observation sequences, find an estimate of λ, such as λML = argmaxλ P(O | λ).
Evaluation: Forward/Backward algorithm • Define αi(t) = P(o1 o2 … ot, st = i | λ) • Define βi(t) = P(ot+1 ot+2 … oT | st = i, λ) • Each of these can be computed efficiently via a dynamic programming recursion, starting at t = 1 (for α) and at t = T (for β). Putting the forward and backward terms together: P(O | λ) = Σi αi(t) βi(t), for any t.
Forward Recursion • Initialization: αj(1) = πj bj(o1) • Recursion: αj(t+1) = [Σi αi(t) aij] bj(ot+1) • Termination: P(O | λ) = Σi αi(T)
Backward Recursion • Initialization: βi(T) = 1 • Recursion: βi(t) = Σj aij bj(ot+1) βj(t+1) • Termination: P(O | λ) = Σi πi bi(o1) βi(1)
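The forward and backward recursions can be sketched directly from their definitions. The 2-state, 2-symbol model parameters below are illustrative assumptions; the check at the end confirms that both directions yield the same P(O | λ).

```python
def forward(pi, A, B, obs):
    """alpha[t][i] = P(o_1..o_t, s_t = i | lambda)."""
    N = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append([sum(prev[i] * A[i][j] for i in range(N)) * B[j][o]
                      for j in range(N)])
    return alpha

def backward(pi, A, B, obs):
    """beta[t][i] = P(o_{t+1}..o_T | s_t = i, lambda)."""
    N = len(pi)
    beta = [[1.0] * N]                      # initialization: beta_i(T) = 1
    for o in reversed(obs[1:]):
        nxt = beta[0]
        beta.insert(0, [sum(A[i][j] * B[j][o] * nxt[j] for j in range(N))
                        for i in range(N)])
    return beta

# Toy 2-state, 2-symbol model (assumed values)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
obs = [0, 1, 0]

alpha = forward(pi, A, B, obs)
beta = backward(pi, A, B, obs)
p_forward = sum(alpha[-1])                                  # forward termination
p_backward = sum(pi[i] * B[i][obs[0]] * beta[0][i]          # backward termination
                 for i in range(len(pi)))
print(abs(p_forward - p_backward) < 1e-12)   # True
```

Each recursion fills one column of the trellis from the previous one, which is what gives the O(T·N²) complexity noted on the next slide.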
Note: Computation improvement • Direct computation: P(O | λ) is the sum of the observation probabilities over all N^T possible state sequences. Time complexity = O(T·N^T). • F/B algorithm: for each state at each time step, sum over all state values from the previous time step. Time complexity = O(T·N²).
From αi(t) and βi(t): • One-state occupancy probability: γi(t) = P(st = i | O, λ) = αi(t) βi(t) / P(O | λ) • Two-state occupancy probability: ξij(t) = P(st = i, st+1 = j | O, λ) = αi(t) aij bj(ot+1) βj(t+1) / P(O | λ)
Alignment: Viterbi algorithm • To find the single most likely state sequence S, use the Viterbi dynamic programming algorithm: • Initialization: δj(1) = πj bj(o1) • Recursion: δj(t) = maxi [δi(t−1) aij] bj(ot), with backpointer ψj(t) = argmaxi [δi(t−1) aij] • Termination: P* = maxi δi(T); backtrack through ψ to recover S*
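The Viterbi recursion replaces the forward algorithm's sums with maximizations and keeps backpointers for the backtrace. A minimal sketch on the same assumed toy model used for the forward/backward example:

```python
def viterbi(pi, A, B, obs):
    """Return (best_path_score, best_state_sequence)."""
    N = len(pi)
    delta = [pi[i] * B[i][obs[0]] for i in range(N)]   # initialization
    psi = []                                           # backpointers
    for o in obs[1:]:
        step = []
        new_delta = []
        for j in range(N):
            best_i = max(range(N), key=lambda i: delta[i] * A[i][j])
            step.append(best_i)
            new_delta.append(delta[best_i] * A[best_i][j] * B[j][o])
        psi.append(step)
        delta = new_delta
    # termination: backtrack from the best final state
    last = max(range(N), key=lambda i: delta[i])
    path = [last]
    for step in reversed(psi):
        path.insert(0, step[path[0]])
    return delta[last], path

# Toy 2-state, 2-symbol model (assumed values)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
score, path = viterbi(pi, A, B, [0, 1, 0])
print(path)   # [0, 1, 0]
```

Here the best path alternates with the observations because each state strongly prefers one symbol.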
Training • We need to learn the parameters of the model, given the training data. Possibilities include: • Maximum a Posteriori (MAP) • Maximum Likelihood (ML) • Minimum Error Rate
Expectation Maximization • Expectation Maximization (EM) can be used for ML estimation of parameters in the presence of hidden variables. Basic iterative process: • Compute the state-sequence likelihoods given the current parameters • Estimate new parameter values given those state-sequence likelihoods • Iterate until convergence
EM Training: Baum-Welch for Discrete Observations (e.g. VQ-coded) • Basic idea: using the current λ and the F/B equations, compute the state occupation probabilities γi(t) and ξij(t). Then compute new values: • π̂i = γi(1) • âij = Σt ξij(t) / Σt γi(t) • b̂j(k) = Σt:ot=k γj(t) / Σt γj(t)
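One Baum-Welch re-estimation pass for a discrete-output HMM can be sketched as follows. The model values are illustrative assumptions, and forward/backward are re-derived inline so the block is self-contained; the re-estimated rows must still sum to 1.

```python
def forward(pi, A, B, obs):
    N = len(pi)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append([sum(prev[i] * A[i][j] for i in range(N)) * B[j][o]
                      for j in range(N)])
    return alpha

def backward(A, B, obs, N):
    beta = [[1.0] * N]
    for o in reversed(obs[1:]):
        nxt = beta[0]
        beta.insert(0, [sum(A[i][j] * B[j][o] * nxt[j] for j in range(N))
                        for i in range(N)])
    return beta

def baum_welch_step(pi, A, B, obs):
    """One EM update of (pi, A, B) from state-occupancy probabilities."""
    N, T = len(pi), len(obs)
    alpha, beta = forward(pi, A, B, obs), backward(A, B, obs, N)
    p_obs = sum(alpha[-1])
    # gamma[t][i] = P(s_t = i | O, lambda)
    gamma = [[alpha[t][i] * beta[t][i] / p_obs for i in range(N)]
             for t in range(T)]
    # xi[t][i][j] = P(s_t = i, s_{t+1} = j | O, lambda)
    xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / p_obs
            for j in range(N)] for i in range(N)] for t in range(T - 1)]
    new_pi = gamma[0][:]
    new_A = [[sum(xi[t][i][j] for t in range(T - 1)) /
              sum(gamma[t][i] for t in range(T - 1))
              for j in range(N)] for i in range(N)]
    M = len(B[0])
    new_B = [[sum(gamma[t][j] for t in range(T) if obs[t] == k) /
              sum(gamma[t][j] for t in range(T))
              for k in range(M)] for j in range(N)]
    return new_pi, new_A, new_B

# Toy 2-state, 2-symbol model (assumed values)
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
obs = [0, 1, 0, 0, 1]
new_pi, new_A, new_B = baum_welch_step(pi, A, B, obs)
print([round(sum(row), 6) for row in new_A])   # rows still sum to 1
```

In practice this step is iterated, and EM guarantees the data likelihood never decreases between iterations.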
Update equations for Gaussian distributions: • μ̂j = Σt γj(t) ot / Σt γj(t) • σ̂j² = Σt γj(t)(ot − μ̂j)² / Σt γj(t) • GMMs are similar, but the mixture-component likelihoods must be incorporated along with the state likelihoods
Toy example: Genie and the urns • There are N urns in a nearby room; each contains many balls of M different colors. • A genie picks out a sequence of balls from the urns and shows you the result. Can you determine the sequence of urns they came from? • Model as an HMM with N states and M outputs: • the probabilities of picking from each urn are the state transition probabilities • the counts of the different colored balls in each urn define the output probability mass function for each state
Working out the Genie example • There are three baskets of colored balls • Basket one: 10 blue and 10 red • Basket two: 15 green, 5 blue, and 5 red • Basket three: 10 green and 10 red • The genie chooses from baskets at random • 25% chance of picking from basket one or two • 50% chance of picking from basket three
Two Questions • Assume that the genie reports a sequence of two balls as {blue, red}. • Answer two questions: • What is the probability that a two ball sequence will be {blue, red}? • What is the most likely sequence of baskets to produce the sequence {blue, red}?
Probability of {blue, red} • What is the total probability of {blue, red}? • Sum of the matrix values (over all basket sequences) = 0.074375 • What is the most likely sequence of baskets visited? • Argmax of the matrix values = {Basket 1, Basket 3} • Corresponding maximum likelihood = 0.03125
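Both answers can be reproduced by enumerating all basket sequences. The basket emission probabilities and per-draw basket probabilities come straight from the slides; each draw independently picks a basket.

```python
from itertools import product

basket_prob = {1: 0.25, 2: 0.25, 3: 0.50}
emit = {1: {'blue': 0.5, 'red': 0.5, 'green': 0.0},   # 10 blue, 10 red
        2: {'blue': 0.2, 'red': 0.2, 'green': 0.6},   # 15 green, 5 blue, 5 red
        3: {'blue': 0.0, 'red': 0.5, 'green': 0.5}}   # 10 green, 10 red

obs = ['blue', 'red']
# joint probability of each basket sequence producing {blue, red}
scores = {seq: basket_prob[seq[0]] * emit[seq[0]][obs[0]] *
               basket_prob[seq[1]] * emit[seq[1]][obs[1]]
          for seq in product([1, 2, 3], repeat=2)}

total = sum(scores.values())
best_seq = max(scores, key=scores.get)
print(round(total, 6))              # 0.074375
print(best_seq, scores[best_seq])   # (1, 3) 0.03125
```

Summing over all sequences gives the evaluation answer; taking the argmax gives the alignment answer, exactly the two fundamental computations from earlier.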
Viterbi method Best path ends in state 3, coming previously from state 1.
Composite Models • Training data is at the sentence level, generally not annotated at the sub-word (HMM model) level. • Need to be able to form composite models from a sequence of word or phoneme labels. • (diagram: two five-state left-to-right HMMs, S1–S5 with transitions a12…a45 and skips a13, a24, a35, concatenated so the end state of one model joins the start state of the next)
Viterbi and Token Passing • Viterbi: search for the best sentence • Token passing: propagate tokens through the recognition network to produce a word graph • (diagram: recognition network and resulting word graph)
HMM Notation • Discrete HMM case: λ = (A, B, π), where A = {aij} are the state transition probabilities, B = {bj(k)} are the output probabilities, and π = {πi} are the initial state probabilities.