Fast Inference and Learning in Large-State-Space HMMs
Sajid M. Siddiqi and Andrew W. Moore
The Auton Lab, Carnegie Mellon University
www.autonlab.org
Outline
• HMM Overview
• Reducing quadratic complexity in the number of states
  • The model
  • Algorithms for fast evaluation and inference
  • Algorithms for fast learning
• Results
  • Speed
  • Accuracy
• Conclusion
Hidden Markov Models
[Figure: graphical model with hidden state sequence q0 … q4, each state emitting one observation O0 … O4; the example transitions have probability 1/3]
Transition Model
[Figure: transition diagram over q0 … q4; each outgoing transition in the example has probability 1/3]
Notation: $a_{ij} = P(q_{t+1} = s_j \mid q_t = s_i)$
Each of these probability tables is identical: the transition model is the same at every timestep.
Observation Model
[Figure: each hidden state $q_t$ emits the observation $O_t$]
Notation: $b_i(O_t) = P(O_t \mid q_t = s_i)$
Some Famous HMM Tasks
Question 1: State Estimation — what is $P(q_T = s_i \mid O_1 O_2 \ldots O_T)$?
Question 2: Most Probable Path — given $O_1 O_2 \ldots O_T$, what is the most probable path that I took? (e.g. "Woke up at 8.35, got on bus at 9.46, sat in lecture 10.05-11.22, …")
Question 3: Learning HMMs — given $O_1 O_2 \ldots O_T$, what is the maximum-likelihood HMM that could have produced this string of observations?
[Figure: trellis fragment over hidden activities Eat, Bus, Walk, with transitions $a_{AA}, a_{AB}, a_{BA}, a_{BB}, a_{BC}, a_{CB}, a_{CC}$ and emissions $b_A(O_{t-1}), b_B(O_t), b_C(O_{t+1})$]
Basic Operations in HMMs
For an observation sequence $O = O_1 \ldots O_T$, the three basic HMM operations are (T = # timesteps, i.e. datapoints; N = # states):

Problem | Algorithm | Complexity
Evaluation: computing $P(O \mid \lambda)$ | Forward | $O(TN^2)$
Inference: computing the most probable state path $Q^* = \operatorname{argmax}_Q P(Q \mid O, \lambda)$ | Viterbi | $O(TN^2)$
Learning: computing $\lambda^* = \operatorname{argmax}_\lambda P(O \mid \lambda)$ | Baum-Welch (EM) | $O(TN^2)$ per iteration

This talk: a simple approach to reducing the complexity in N.
Outline — next: reducing quadratic complexity in the number of states (the model)
Reducing Quadratic Complexity in N
Why does it matter?
• Quadratic HMM algorithms hinder HMM computations when N is large
• There are several promising applications for efficient large-state-space HMM algorithms, e.g.
  • topic modeling
  • speech recognition
  • real-time HMM systems, such as activity monitoring
  • … and more
Idea One: Sparse Transition Matrix
• Only K << N non-zero next-state probabilities per row
• Evaluation and inference then cost only O(TNK)! (see the sketch below)
• But it can get very badly confused by "impossible transitions": any transition given zero probability stays impossible
• Cannot learn the sparse structure (once chosen, it cannot change)
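For intuition, here is a minimal sketch (not code from the talk) of why sparsity gives the O(TNK) cost, assuming scipy.sparse and exactly K non-zero successors per state:

```python
import numpy as np
from scipy.sparse import csr_matrix

# With K non-zeros per row of A, one forward step touches only the N*K
# stored entries: O(NK) instead of O(N^2).
rng = np.random.default_rng(0)
N, K = 1000, 5
cols = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)]).ravel()
vals = rng.random((N, K))
vals /= vals.sum(axis=1, keepdims=True)            # each row sums to 1
A = csr_matrix((vals.ravel(), cols, np.arange(0, N * K + 1, K)), shape=(N, N))

alpha_t = np.full(N, 1.0 / N)                      # forward variables at time t
b_next = rng.random(N)                             # stands in for b_j(O_{t+1})
alpha_next = b_next * A.T.dot(alpha_t)             # alpha_{t+1}: O(NK) work
```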
Dense-Mostly-Constant (DMC) Transitions
• K non-constant transition probabilities per row; all remaining entries in a row share a single constant value
• DMC HMMs comprise a richer and more expressive class of models than sparse HMMs
[Figure: a DMC transition matrix with K = 2]
Dense-Mostly-Constant (DMC) Transitions
The transition model for state $s_i$ now consists of:
• K = the number of non-constant values per row
• $NC_i = \{\, j : s_i \to s_j \text{ is a non-constant transition probability} \,\}$
• $c_i$ = the constant transition probability from $s_i$ to all states not in $NC_i$
• $a_{ij}$ = the non-constant transition probability for $s_i \to s_j$, $j \in NC_i$
Example with K = 2: $NC_3 = \{2, 5\}$, $c_3 = 0.05$, $a_{32} = 0.25$, $a_{35} = 0.6$
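A minimal sketch of this row layout (class and field names are illustrative), instantiating the K = 2 example above:

```python
import numpy as np

class DMCRow:
    """One row of a DMC transition matrix: K non-constant entries + one constant."""
    def __init__(self, n_states, const, nonconst):
        self.n = n_states
        self.c = const          # c_i, shared by all destinations outside NC_i
        self.nc = nonconst      # dict {j: a_ij}, len == K

    def prob(self, j):
        # a_ij if j is a non-constant destination, else the constant c_i
        return self.nc.get(j, self.c)

    def dense(self):
        # Expand to a full row (for checking); a valid row sums to 1
        row = np.full(self.n, self.c)
        for j, a in self.nc.items():
            row[j] = a
        return row

# The slide's example with N = 5 states (1-indexed NC_3 = {2, 5} stored 0-indexed):
row3 = DMCRow(5, const=0.05, nonconst={1: 0.25, 4: 0.6})
assert abs(row3.dense().sum() - 1.0) < 1e-12    # 0.25 + 0.6 + 3 * 0.05 = 1.0
```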
Outline — next: algorithms for fast evaluation and inference
Evaluation in Regular HMMs
$P(q_t = s_i \mid O_1, O_2 \ldots O_t) = \dfrac{\alpha_t(i)}{\sum_{j=1}^{N} \alpha_t(j)}$
where $\alpha_t(i) = P(O_1 O_2 \ldots O_t,\, q_t = s_i)$ — called the "forward variables".
Then,
$\alpha_1(i) = \pi_i\, b_i(O_1)$
$\alpha_{t+1}(j) = b_j(O_{t+1}) \sum_{i=1}^{N} \alpha_t(i)\, a_{ij}$
Computing the full $\alpha$ table costs $O(TN^2)$.
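As a reference point for the DMC speedups below, a minimal sketch of this recursion (assumed interface: A is the N×N transition matrix, B[t, i] holds the precomputed $b_i(O_t)$, pi is the initial state distribution; no scaling or log-space tricks, which a real implementation would need):

```python
import numpy as np

def forward(A, B, pi):
    """Forward variables alpha[t, i]; P(O) = alpha[-1].sum()."""
    T, N = B.shape
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[0]                         # alpha_1(i) = pi_i * b_i(O_1)
    for t in range(T - 1):
        # alpha_{t+1}(j) = b_j(O_{t+1}) * sum_i alpha_t(i) * a_ij
        alpha[t + 1] = B[t + 1] * (alpha[t] @ A)   # O(N^2) per timestep
    return alpha
```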
Similarly,
$\beta_t(i) = P(O_{t+1} O_{t+2} \ldots O_T \mid q_t = s_i)$ — called the "backward variables" — with
$\beta_T(i) = 1$
$\beta_t(i) = \sum_{j=1}^{N} a_{ij}\, b_j(O_{t+1})\, \beta_{t+1}(j)$
Also costs $O(TN^2)$.
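A matching sketch of the backward recursion, under the same assumed A and B layout as the forward sketch:

```python
import numpy as np

def backward(A, B):
    """Backward variables beta[t, i]; same O(T N^2) cost as forward."""
    T, N = B.shape
    beta = np.ones((T, N))                       # beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j a_ij * b_j(O_{t+1}) * beta_{t+1}(j)
        beta[t] = A @ (B[t + 1] * beta[t + 1])
    return beta
```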
Fast Evaluation in DMC HMMs
Splitting the forward sum into a constant part and a correction for the non-constant entries:
$\alpha_{t+1}(j) = b_j(O_{t+1}) \left[ \sum_{i=1}^{N} \alpha_t(i)\, c_i \;+\; \sum_{i\,:\, j \in NC_i} \alpha_t(i)\,(a_{ij} - c_i) \right]$
• The first sum is O(N), but only computed once per row of the $\alpha$ table (i.e. once per timestep)
• The correction sum is O(K) for each $\alpha_{t+1}(j)$ entry
• This yields O(TNK) complexity for the evaluation problem (see the sketch below)
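A minimal sketch of one such step, assuming the non-constant entries are indexed by destination: preds[j] lists the pairs (i, a_ij) with $j \in NC_i$, and c[i] holds the constant value for row i (these names are illustrative, not from the paper):

```python
import numpy as np

def dmc_forward_step(alpha_t, b_next, c, preds):
    """One alpha update; b_next[j] stands in for b_j(O_{t+1})."""
    N = len(alpha_t)
    base = float(alpha_t @ c)       # sum_i alpha_t(i) c_i: O(N), once per timestep
    alpha_next = np.empty(N)
    for j in range(N):
        # correction restores the true a_ij for the non-constant predecessors
        corr = sum(alpha_t[i] * (a_ij - c[i]) for i, a_ij in preds[j])
        alpha_next[j] = b_next[j] * (base + corr)   # O(K) amortized per entry
    return alpha_next
```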
Fast Inference in DMC HMMs
$O(N^2)$ recursion in the regular model:
$\delta_{t+1}(j) = b_j(O_{t+1}) \max_{i} \delta_t(i)\, a_{ij}$
$O(NK)$ recursion in the DMC model, splitting the max the same way as the sum:
$\delta_{t+1}(j) = b_j(O_{t+1}) \max\left( \max_{i\,:\, j \notin NC_i} \delta_t(i)\, c_i,\;\; \max_{i\,:\, j \in NC_i} \delta_t(i)\, a_{ij} \right)$
• The constant part is O(N), but only computed once per row of the $\delta$ table
• The non-constant part is O(K) for each $\delta_{t+1}(j)$ entry
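A hedged sketch of the DMC Viterbi step under the same assumed preds/c layout. To keep the constant part correct when the best constant-score source actually has a non-constant transition to j, this version ranks the constant scores once per timestep and lets each destination skip its excluded sources; the sort adds an O(N log N) term per timestep, so this is a simplification rather than the paper's exact bookkeeping:

```python
import numpy as np

def dmc_viterbi_step(delta_t, b_next, c, preds):
    N = len(delta_t)
    const_scores = delta_t * c                   # delta_t(i) c_i for every source
    order = np.argsort(-const_scores)            # best constant source first
    delta_next = np.empty(N)
    for j in range(N):
        excluded = {i for i, _ in preds[j]}      # sources where a_ij applies instead
        best = max((delta_t[i] * a_ij for i, a_ij in preds[j]), default=0.0)
        for i in order:
            if i not in excluded:                # first valid source is the max
                best = max(best, const_scores[i])
                break
        delta_next[j] = b_next[j] * best
    return delta_next
```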
Outline — next: algorithms for fast learning
Learning a DMC HMM
Idea One:
• Ask the user to tell us the DMC structure
• Learn the parameters using EM
Simple! But in general, we don't know the DMC structure.
Learning a DMC HMM
Idea Two: use EM to learn the DMC structure as well
1. Guess a DMC structure (in fact, just start with an all-constant transition model)
2. Find expected transition counts and observation parameters, given the current model and observations
3. Find the maximum-likelihood DMC model given those counts
4. Go to 2
The DMC structure can (and does) change from iteration to iteration — a sketch of the structure-choosing step follows.
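A minimal sketch of step 3, re-choosing the structure from the expected transition counts xi[i, j] produced by the E-step. The rule shown, keeping the K largest counts per row as $NC_i$ and spreading the remaining mass uniformly as $c_i$, is an illustrative choice, not necessarily the paper's exact criterion:

```python
import numpy as np

def dmc_mstep(xi, K):
    """xi[i, j] = expected count of transitions s_i -> s_j; requires K < N."""
    N = xi.shape[0]
    rows = xi / xi.sum(axis=1, keepdims=True)        # dense ML transition estimate
    NC, a, c = [], [], np.empty(N)
    for i in range(N):
        top = np.argsort(-rows[i])[:K]               # new NC_i: K strongest transitions
        NC.append(top)
        a.append(rows[i, top])
        c[i] = (1.0 - rows[i, top].sum()) / (N - K)  # leftover mass, shared equally
    return NC, a, c
```

Because the top-K set is recomputed every iteration, a transition that starts out constant can later become non-constant (and vice versa), which is exactly the "structure can change" property the slide emphasizes.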