Probabilistic Reasoning Over Time Using Hidden Markov Models Minmin Chen
Contents • Sections 15.1–15.3
Time and Uncertainty Noisy sensor • Agent: a security guard at a secret underground installation • Observation: does the director come in with an umbrella? • State: raining or not • The state is not fully observable and evolves over time
Time and Uncertainty Noisy sensor • Observations: • Measured heart rate • Electrocardiogram (ECG) • Patient's activity • States: • Atrial fibrillation? • Tachycardia? • Bradycardia? • Again, the state is not fully observable and evolves over time
States and Observations • Unobserved state variable: Xt • Observed evidence variable: Et • Example 1: for each day t, the evidence is the umbrella variable Ut and the state is the rain variable Rt • U1, U2, U3, … • R1, R2, R3, … • Example 2: for each recording • Et = {Measured_heart_ratet, ECGt, Activityt} • Xt = {AFt, Tachycardiat, Bradycardiat}
Assumption 1: Stationary Process • The world changes, but the laws governing the change do not • The transition model P(Xt | Xt−1) remains the same for all t
Assumption 2: Markov Process • The current state depends only on a finite history of previous states • First-order Markov process: P(Xt | X0:t−1) = P(Xt | Xt−1) • Specified by the set of states, the transition probability matrix, and the initial distribution
Assumption 3: Restriction to the Parents of Evidence • The evidence variable at time t depends only on the current state: P(Et | X0:t, E1:t−1) = P(Et | Xt)
Hidden Markov Model • Hidden state sequence: … → Rt−1 → Rt → Rt+1 → … • Evidence sequence: each Ut depends only on the corresponding hidden state Rt
Joint Distribution of HMMs • Applying the chain rule and then the Markov and sensor conditional-independence assumptions: P(X0:t, E1:t) = P(X0) ∏i=1..t P(Xi | Xi−1) P(Ei | Xi)
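The factored joint distribution above can be evaluated directly. A minimal Python sketch using the umbrella model from the slides (transition 0.7/0.3, sensor 0.9/0.2, uniform prior); the dictionary names are illustrative:

```python
# Umbrella HMM from the slides: state = rain (True/False).
P0 = {True: 0.5, False: 0.5}                 # prior P(R0)
T = {True: {True: 0.7, False: 0.3},          # T[r_prev][r] = P(R_t | R_{t-1})
     False: {True: 0.3, False: 0.7}}
S = {True: {True: 0.9, False: 0.1},          # S[r][u] = P(U_t | R_t)
     False: {True: 0.2, False: 0.8}}

def joint(states, evidence):
    """P(x_1:t, e_1:t) = sum_{x0} P(x0) P(x1|x0) P(e1|x1) * prod_i P(x_i|x_{i-1}) P(e_i|x_i)."""
    # marginalize out the unobserved initial state X0
    p = sum(P0[x0] * T[x0][states[0]] for x0 in P0) * S[states[0]][evidence[0]]
    for i in range(1, len(states)):
        p *= T[states[i - 1]][states[i]] * S[states[i]][evidence[i]]
    return p
```

For example, `joint([True, True], [True, True])` multiplies 0.5 · 0.9 · 0.7 · 0.9 = 0.2835.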
Example • Day: 1 2 3 4 5 • Umbrella: true true false true true • Rain: true true false true true
How Accurate Are These Assumptions? • It depends on the problem domain • To remedy violations of the assumptions: • increase the order of the Markov process, or • enlarge the set of state variables
Inference in Temporal Models • Filtering: • the posterior distribution over the current state, given all evidence to date • Prediction: • the posterior distribution over a future state, given all evidence to date • Smoothing: • the posterior distribution over a past state, given all evidence to date • Most likely explanation: • the sequence of states most likely to have generated the observations
Filtering & Prediction • Prediction: P(Xt+1 | e1:t) = Σxt P(Xt+1 | xt) P(xt | e1:t), the transition model combined with the posterior at time t • Filtering: P(Xt+1 | e1:t+1) = α P(et+1 | Xt+1) P(Xt+1 | e1:t), the sensor model combined with the prediction
Proof • P(Xt+1 | e1:t+1) = P(Xt+1 | e1:t, et+1) (divide evidence) • = α P(et+1 | Xt+1, e1:t) P(Xt+1 | e1:t) (Bayes rule) • = α P(et+1 | Xt+1) P(Xt+1 | e1:t) (conditional independence) • = α P(et+1 | Xt+1) Σxt P(Xt+1 | xt, e1:t) P(xt | e1:t) (marginal probability, chain rule) • = α P(et+1 | Xt+1) Σxt P(Xt+1 | xt) P(xt | e1:t) (conditional independence) • This recursive update is the forward algorithm
Interpretation & Example • Umbrella example with P(R0) = ⟨0.5, 0.5⟩ and observations u1 = true, u2 = true: • Day 1 prediction: P(R1) = 0.5·⟨0.7, 0.3⟩ + 0.5·⟨0.3, 0.7⟩ = ⟨0.5, 0.5⟩ • Day 1 update: P(R1 | u1) = α ⟨0.9·0.5, 0.2·0.5⟩ = α ⟨0.45, 0.1⟩ = ⟨0.818, 0.182⟩ • Day 2 prediction: P(R2 | u1) = ⟨0.627, 0.373⟩ • Day 2 update: P(R2 | u1, u2) = α ⟨0.9·0.627, 0.2·0.373⟩ = α ⟨0.565, 0.075⟩ = ⟨0.883, 0.117⟩
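The predict-then-update loop traced in the example can be written as a short recursive filter. A sketch under the same umbrella parameters (function and variable names are illustrative, not from the slides):

```python
# Umbrella HMM parameters from the slides.
T = {True: {True: 0.7, False: 0.3},          # T[r_prev][r] = P(R_t | R_{t-1})
     False: {True: 0.3, False: 0.7}}
S = {True: {True: 0.9, False: 0.1},          # S[r][u] = P(U_t | R_t)
     False: {True: 0.2, False: 0.8}}

def forward(evidence):
    """Filtered distribution P(X_t | e_1:t) after consuming the evidence list."""
    f = {True: 0.5, False: 0.5}              # start from the prior P(R0)
    for e in evidence:
        # predict: one-step transition from the previous posterior
        pred = {x: sum(T[xp][x] * f[xp] for xp in f) for x in (True, False)}
        # update: weight by the sensor model, then normalize
        unnorm = {x: S[x][e] * pred[x] for x in pred}
        z = sum(unnorm.values())
        f = {x: v / z for x, v in unnorm.items()}
    return f
```

Running `forward([True])` reproduces ⟨0.818, 0.182⟩, and `forward([True, True])` reproduces ⟨0.883, 0.117⟩ from the example.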
Likelihood of Evidence Sequence • The likelihood of the evidence sequence is P(e1:t) • Running the forward algorithm without normalization computes the message ℓ1:t(Xt) = P(Xt, e1:t); summing out the state gives the likelihood P(e1:t) = Σxt ℓ1:t(xt)
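This is the same recursion as filtering with the normalization step dropped. A sketch under the umbrella parameters (names are illustrative):

```python
# Umbrella HMM parameters from the slides.
T = {True: {True: 0.7, False: 0.3},          # T[r_prev][r] = P(R_t | R_{t-1})
     False: {True: 0.3, False: 0.7}}
S = {True: {True: 0.9, False: 0.1},          # S[r][u] = P(U_t | R_t)
     False: {True: 0.2, False: 0.8}}

def likelihood(evidence):
    """P(e_1:t): unnormalized forward messages, summed out at the end."""
    ell = {True: 0.5, False: 0.5}            # prior P(R0)
    for e in evidence:
        ell = {x: S[x][e] * sum(T[xp][x] * ell[xp] for xp in ell)
               for x in (True, False)}      # l(X_t) = P(X_t, e_1:t)
        # no normalization: the message keeps the evidence probability
    return sum(ell.values())
```

For one observed umbrella, `likelihood([True])` gives 0.45 + 0.1 = 0.55, the normalizer seen on day 1 of the filtering example.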
Smoothing • P(Xk | e1:t) for 1 ≤ k < t • = P(Xk | e1:k, ek+1:t) (divide evidence) • = α P(Xk | e1:k) P(ek+1:t | Xk, e1:k) (Bayes rule) • = α P(Xk | e1:k) P(ek+1:t | Xk) (conditional independence) • = α f1:k × bk+1:t
Intuition • The backward message at time k is built from the backward message at time k+1 and the sensor model: each future observation is explained by weighting the sensor model against the states it could have come from, and the result is propagated one step back through the transition model
Backward • bk+1:t(Xk) = P(ek+1:t | Xk) • = Σxk+1 P(ek+1:t | Xk, xk+1) P(xk+1 | Xk) (marginal probability, chain rule) • = Σxk+1 P(ek+1:t | xk+1) P(xk+1 | Xk) (conditional independence) • = Σxk+1 P(ek+1 | xk+1) P(ek+2:t | xk+1) P(xk+1 | Xk) (conditional independence) • = Σxk+1 P(ek+1 | xk+1) bk+2:t(xk+1) P(xk+1 | Xk) • This recursive update is the backward algorithm
Interpretation & Example • Smoothing P(R1 | u1, u2) in the umbrella example, starting from b3:2 = ⟨1, 1⟩: • Backward message: b2:2(R1) = ⟨0.9·1·0.7 + 0.2·1·0.3, 0.9·1·0.3 + 0.2·1·0.7⟩ = ⟨0.69, 0.41⟩ • Forward message: f1:1 = P(R1 | u1) = ⟨0.818, 0.182⟩ • Smoothed estimate: P(R1 | u1, u2) = α ⟨0.818·0.69, 0.182·0.41⟩ = ⟨0.883, 0.117⟩
Finding the Most Likely Sequence • Given the umbrella observations [true, true, false, true, true], which rain sequence best explains them? • [Lattice of candidate rain-state sequences over the five days; each day's state is true or false]
Finding the Most Likely Sequence • Enumeration • Enumerate all possible state sequences • Compute the joint distribution of each and pick the sequence with the maximum joint probability • Problem: the number of state sequences grows exponentially with the length of the sequence • Smoothing • Calculate the posterior distribution for each time step k • In each step k, pick the state with the maximum posterior probability • Combine these states to form a sequence • Problem: the per-step maxima are only marginally most likely, so stitching them together need not yield the jointly most likely sequence
Viterbi Algorithm • Evidence: true, true, false, true, true • m1:t(Rt = true): .8182 .5155 .0361 .0334 .0210 • m1:t(Rt = false): .1818 .0491 .1237 .0173 .0024 • Backtracking from the larger final value (.0210) recovers the most likely sequence: true, true, false, true, true
Proof • m1:t+1(Xt+1) = maxx1:t P(x1:t, Xt+1 | e1:t, et+1) (divide evidence) • = α maxx1:t P(et+1 | x1:t, Xt+1, e1:t) P(x1:t, Xt+1 | e1:t) (Bayes rule) • = α P(et+1 | Xt+1) maxx1:t P(x1:t, Xt+1 | e1:t) (conditional independence) • = α P(et+1 | Xt+1) maxxt [ P(Xt+1 | xt) maxx1:t−1 P(x1:t−1, xt | e1:t) ] (chain rule, conditional independence) • = α P(et+1 | Xt+1) maxxt [ P(Xt+1 | xt) m1:t(xt) ]
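The max-product recursion above differs from filtering only in replacing the sum over predecessors with a max, plus back-pointers for recovering the sequence. A sketch of Viterbi decoding under the umbrella parameters (names are illustrative):

```python
# Umbrella HMM parameters from the slides.
T = {True: {True: 0.7, False: 0.3},          # T[r_prev][r] = P(R_t | R_{t-1})
     False: {True: 0.3, False: 0.7}}
S = {True: {True: 0.9, False: 0.1},          # S[r][u] = P(U_t | R_t)
     False: {True: 0.2, False: 0.8}}

def viterbi(evidence):
    """Most likely hidden state sequence given the evidence list."""
    states = (True, False)
    # day 1: filter the uniform prior with the first observation
    m = {x: S[x][evidence[0]] * 0.5 for x in states}
    z = sum(m.values())
    m = {x: v / z for x, v in m.items()}
    back = []                                # back-pointers, one dict per step
    for e in evidence[1:]:
        prev, m, ptr = m, {}, {}
        for x in states:
            # best predecessor under the max-product recursion
            best = max(states, key=lambda xp: T[xp][x] * prev[xp])
            ptr[x] = best
            m[x] = S[x][e] * T[best][x] * prev[best]
        back.append(ptr)
    # backtrack from the best final state
    x = max(states, key=lambda s: m[s])
    path = [x]
    for ptr in reversed(back):
        x = ptr[x]
        path.append(x)
    return list(reversed(path))
```

On the slide's evidence `[True, True, False, True, True]` the intermediate messages match the table (.5155, .0361, …) and backtracking returns the rain sequence true, true, false, true, true.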