Dive into the world of Markov models, explore filtering and smoothing techniques, and uncover the power of the Viterbi algorithm. Learn to predict states over time with hidden Markov models. Discover key concepts in probabilistic reasoning and apply them to practical scenarios.
Recap: Reasoning Over Time
• Stationary Markov models: a chain of states X1 → X2 → X3 → X4
• Hidden Markov models: a hidden chain X1 … X5, where each state emits evidence E1 … E5
(Diagram: two-state weather model with states rain and sun; P(stay in same state) = 0.7, P(switch) = 0.3.)
This slide deck courtesy of Dan Klein at UC Berkeley
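The stationary Markov model above can be sketched in a few lines. This is a minimal illustration, not from the deck itself; the transition numbers (0.7 stay, 0.3 switch) come from the slide's diagram, while the dictionary representation is an assumption of this sketch.

```python
# Two-state weather chain from the slide: P(stay) = 0.7, P(switch) = 0.3.
T = {"rain": {"rain": 0.7, "sun": 0.3},
     "sun":  {"sun": 0.7, "rain": 0.3}}

def predict(belief, steps):
    """Push a distribution over states forward `steps` time steps."""
    for _ in range(steps):
        belief = {s: sum(belief[p] * T[p][s] for p in T) for s in T}
    return belief

# Starting from "definitely sun", the belief relaxes toward the
# chain's stationary distribution <0.5, 0.5>.
b = predict({"rain": 0.0, "sun": 1.0}, 20)
```

Running the prediction forward shows why the model is called stationary: whatever the initial belief, repeated application of the transition model converges to the same fixed distribution.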
Recap: Filtering
• Elapse time: compute P(Xt | e1:t-1)
• Observe: compute P(Xt | e1:t)
Belief = <P(rain), P(sun)>:
• Prior on X1: <0.5, 0.5>
• Observe E1: <0.82, 0.18>
• Elapse time: <0.63, 0.37>
• Observe E2: <0.88, 0.12>
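The elapse-time and observe steps above can be sketched directly. The transition model comes from the slide's diagram; the evidence model values (P(umbrella | rain) = 0.9, P(umbrella | sun) = 0.2) are assumed from the standard umbrella-world example, and they reproduce the belief vectors on the slide.

```python
# Transition model from the slide; evidence model assumed (umbrella world).
T = {"rain": {"rain": 0.7, "sun": 0.3},
     "sun":  {"sun": 0.7, "rain": 0.3}}
E = {"rain": 0.9, "sun": 0.2}  # P(umbrella observed | state)

def elapse(belief):
    """Elapse time: push the belief through the transition model."""
    return {s: sum(belief[p] * T[p][s] for p in T) for s in T}

def observe(belief):
    """Observe: weight by evidence likelihood, then renormalize."""
    unnorm = {s: belief[s] * E[s] for s in belief}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

b = {"rain": 0.5, "sun": 0.5}  # prior on X1
b = observe(b)                 # -> <0.82, 0.18>
b = elapse(b)                  # -> <0.63, 0.37>
b = observe(b)                 # -> <0.88, 0.12>
```

Note the alternation: each time step interleaves one elapse update (which spreads probability out) with one observe update (which sharpens it toward states consistent with the evidence).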
Filtering/Smoothing
• Exercise 15.13, p. 609
• Prior of "Enough Sleep" (ES) = 0.7
• ES at t-1 ⇒ P(ES at t) = 0.8
• ~ES at t-1 ⇒ P(ES at t) = 0.3
• "Red Eyes" (RE): if ES, P(RE) = 0.2; if ~ES, P(RE) = 0.7
• "Sleep in Class" (SC): if ES, P(SC) = 0.1; if ~ES, P(SC) = 0.3
• e1 = not red eyes, not sleeping in class: ~RE, ~SC
• e2 = red eyes, not sleeping in class: RE, ~SC
• e3 = red eyes, sleeping in class: RE, SC
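A sketch of the filtering computation for this exercise, using only the probabilities listed above. It treats the prior as the belief at t = 0 (so each step is one elapse update followed by one observe update); variable names are mine, and the two observations are assumed conditionally independent given ES, as the per-variable CPTs suggest.

```python
prior = 0.7                      # P(ES at t = 0)
trans = {True: 0.8, False: 0.3}  # P(ES_t | ES_{t-1} = key)
p_re = {True: 0.2, False: 0.7}   # P(RedEyes | ES = key)
p_sc = {True: 0.1, False: 0.3}   # P(SleepInClass | ES = key)

def step(p_es, red_eyes, sleep_in_class):
    """One filtering step: elapse time, then condition on the evidence."""
    p = p_es * trans[True] + (1 - p_es) * trans[False]  # elapse
    def lik(es):
        re = p_re[es] if red_eyes else 1 - p_re[es]
        sc = p_sc[es] if sleep_in_class else 1 - p_sc[es]
        return re * sc
    num = p * lik(True)
    return num / (num + (1 - p) * lik(False))           # observe + normalize

p = prior
beliefs = []
for re, sc in [(False, False), (True, False), (True, True)]:  # e1, e2, e3
    p = step(p, re, sc)
    beliefs.append(p)
# beliefs ≈ [0.8643, 0.5010, 0.1045]
```

Under these assumptions, the belief in "enough sleep" rises after e1 (no symptoms), is roughly a coin flip after e2, and collapses after e3 (both symptoms observed).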
Best Explanation Queries
• Query: the most likely sequence of hidden states X1 … X5 given the evidence E1 … E5, i.e. arg max over x1:5 of P(x1:5 | e1:5)
State Path Trellis
• State trellis: graph of states and transitions over time
• Each arc represents a transition between states in consecutive time steps
• Each arc has a weight (transition and evidence probabilities)
• Each path through the trellis is a sequence of states
• The product of the weights along a path is that sequence's probability
• The Forward algorithm can be seen as summing over all paths in this graph; the Viterbi algorithm (next) finds the best path
(Diagram: trellis with sun/rain states at each time step.)
Viterbi Algorithm
(Diagram: the same sun/rain trellis, traversed for the best path.)
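The Viterbi algorithm replaces the Forward algorithm's sum over paths with a max, keeping backpointers to recover the best state sequence. A minimal sketch on the rain/sun trellis, with the evidence model (P(umbrella | rain) = 0.9, P(umbrella | sun) = 0.2) assumed from the standard umbrella world:

```python
STATES = ["rain", "sun"]
T = {"rain": {"rain": 0.7, "sun": 0.3},
     "sun":  {"sun": 0.7, "rain": 0.3}}

def e(state, umbrella):
    """Evidence likelihood P(observation | state), umbrella world (assumed)."""
    p = 0.9 if state == "rain" else 0.2
    return p if umbrella else 1 - p

def viterbi(evidence, prior=None):
    prior = prior or {"rain": 0.5, "sun": 0.5}
    # m[s] = probability of the best path ending in state s.
    m = {s: prior[s] * e(s, evidence[0]) for s in STATES}
    back = []  # one backpointer table per time step
    for obs in evidence[1:]:
        prev, nxt, ptr = m, {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: prev[p] * T[p][s])
            ptr[s] = best
            nxt[s] = prev[best] * T[best][s] * e(s, obs)
        back.append(ptr)
        m = nxt
    # Follow backpointers from the best final state.
    path = [max(STATES, key=lambda s: m[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

For example, `viterbi([True, True, False])` (umbrella, umbrella, no umbrella) returns rain, rain, sun: the final dry observation outweighs the chain's tendency to stay rainy.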