Hidden Markov Models I Biology 162 Computational Genetics Todd Vision 14 Sep 2004
Hidden Markov Models I • Markov chains • Hidden Markov models • Transition and emission probabilities • Decoding algorithms • Viterbi • Forward • Forward and backward • Parameter estimation • Baum-Welch algorithm
Markov Chain • A particular class of Markov process • Finite set of states • Probability of being in state i at time t+1 depends only on the state at time t (Markov property) • Can be described by • Transition probability matrix • Initial probability distribution a0
Markov chain: diagram of a three-state chain (states 1, 2, 3) with transition probabilities aij labeling the arrows between every pair of states and the self-loops a11, a22, a33.
Transition probability matrix • Square matrix with dimensions equal to the number of states • Describes the probability of going from state i to state j in the next step • Sum of each row must equal 1
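As a minimal sketch (not from the lecture), the three-state chain in the diagram above can be encoded as a transition matrix whose rows sum to 1, together with an initial distribution; the specific numbers are made up for illustration:

```python
import numpy as np

# Hypothetical transition matrix for a 3-state chain: A[i, j] is the
# probability of moving from state i to state j in one step.
A = np.array([
    [0.6, 0.3, 0.1],   # from state 1
    [0.2, 0.5, 0.3],   # from state 2
    [0.4, 0.1, 0.5],   # from state 3
])
assert np.allclose(A.sum(axis=1), 1.0)   # each row must sum to 1

pi0 = np.array([1.0, 0.0, 0.0])          # initial probability distribution

# Distribution over states after one step
print(pi0 @ A)   # [0.6 0.3 0.1]
```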
Multistep transitions • The probability of going from state i to state j in two steps is the sum, over all intermediate states k, of the products of the corresponding one-step probabilities • In matrix form, the two-step transition matrix is the square of the one-step matrix • And so on for n steps: the n-step matrix is the n-th matrix power
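A short numerical check of this, reusing the made-up matrix above:

```python
import numpy as np

A = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.4, 0.1, 0.5],
])

# Two-step transition: sum over all intermediate states k of A[i, k] * A[k, j],
# which is exactly the matrix product A @ A.
A2 = A @ A
print(A2[0, 2])                                   # P(state 3 at t+2 | state 1 at t)
print(sum(A[0, k] * A[k, 2] for k in range(3)))   # same value, written out as the sum

# n-step transitions are the n-th matrix power
A5 = np.linalg.matrix_power(A, 5)
```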
Stationary distribution • A vector of state frequencies that the chain converges to, and that further transitions leave unchanged • Exists if the chain • Is irreducible: each state can eventually be reached from every other • Is aperiodic: the state sequence does not get locked into a fixed cycle
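One way to see the stationary distribution numerically (again with the made-up matrix above): starting from any initial vector, repeated multiplication by the transition matrix converges to a vector the matrix leaves unchanged:

```python
import numpy as np

A = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.4, 0.1, 0.5],
])

# Iterate the chain until the state distribution stops changing.
pi = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    pi = pi @ A
print(pi)                        # the stationary frequencies
print(np.allclose(pi, pi @ A))   # True: pi satisfies pi = pi @ A
```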
Applications • Substitution models • PAM • DNA and codon substitution models • Phylogenetics and molecular evolution • Hidden Markov models
Hidden Markov models: applications • Alignment and homology search • Gene finding • Physical mapping • Genetic linkage mapping • Protein secondary structure prediction
Hidden Markov models • Observed sequence of symbols • Hidden sequence of underlying states • Transition probabilities still govern transitions among states • Emission probabilities govern the likelihood of observing a symbol in a particular state
A coin flip HMM • Two coins • Fair: 50% Heads, 50% Tails • Loaded: 90% Heads, 10% Tails What is the probability for each of these sequences assuming one coin or the other? A: HHTHTHTTHT B: HHHHHTHHHH
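The slide leaves the two calculations as an exercise; a minimal sketch of the arithmetic, assuming independent flips under a single coin, is:

```python
# Probability of a sequence of flips under a single coin with P(heads) = p_heads.
def seq_prob(seq, p_heads):
    prob = 1.0
    for flip in seq:
        prob *= p_heads if flip == "H" else (1.0 - p_heads)
    return prob

for name, seq in [("A", "HHTHTHTTHT"), ("B", "HHHHHTHHHH")]:
    print(name, "fair:", seq_prob(seq, 0.5), "loaded:", seq_prob(seq, 0.9))
# Sequence A comes out more probable under the fair coin,
# sequence B more probable under the loaded coin.
```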
A coin flip HMM • Now imagine the coin is switched with some probability Symbol: HTTHHTHHHTHHHHHTHHTHTTHTTHTTH State: FFFFFFFLLLLLLLLFFFFFFFFFFFFFL HHHHTHHHTHTTHTTHHTTHHTHHTHHHHHHHTTHTT LLLLLLLLFFFFFFFFFFFFFFLLLLLLLLLLFFFFF
The formal model • Two states: F (fair), emitting H 0.5 and T 0.5, and L (loaded), emitting H 0.9 and T 0.1 • Transition probabilities aFF, aFL, aLF, aLL connect the states, where aFF, aLL > aFL, aLF (switching coins is relatively rare)
Probability of a state path • Symbol: T H H H with state path F F L L • Symbol: T H H H with state path L L F F • Generally, the joint probability of a symbol sequence x and a state path π is the product of the initial, transition, and emission probabilities along the path: P(x, π) = a0π1 eπ1(x1) · aπ1π2 eπ2(x2) · … · aπL−1πL eπL(xL)
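A small sketch of this calculation for the two paths above, assuming the parameters used on the Viterbi slide below (aFF = aLL = 0.7, aFL = aLF = 0.3, initial probabilities 0.5/0.5):

```python
# Joint probability P(x, path) of the symbols T H H H with a given state path.
a0 = {"F": 0.5, "L": 0.5}
a = {("F", "F"): 0.7, ("F", "L"): 0.3, ("L", "F"): 0.3, ("L", "L"): 0.7}
e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

def path_prob(symbols, path):
    prob = a0[path[0]] * e[path[0]][symbols[0]]
    for i in range(1, len(symbols)):
        prob *= a[(path[i - 1], path[i])] * e[path[i]][symbols[i]]
    return prob

print(path_prob("THHH", "FFLL"))
print(path_prob("THHH", "LLFF"))
```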
HMMs as sequence generators • An HMM can generate an infinite number of sequences • There is a probability associated with each one • This is unlike regular expressions • With a given sequence • We might want to ask how often that sequence would be generated by a given HMM • The problem is there are many possible state paths even for a single HMM • Forward algorithm • Gives us the summed probability of all state paths
Decoding • How do we infer the “best” state path? • We can observe the sequence of symbols • Assume we also know • Transition probabilities • Emission probabilities • Initial state probabilities • Two ways to answer that question • Viterbi algorithm - finds the single most likely state path • Forward-backward algorithm - finds the probability of each state at each position • These may give different answers
Viterbi with coin example • Let aFF = aLL = 0.7, aFL = aLF = 0.3, a0 = (0.5, 0.5)

        T       H        H        H
B   1   0       0        0        0
F   0   0.25*   0.0875   0.0306   0.0107
L   0   0.05    0.0675*  0.0425*  0.0268*

• p* = F L L L (asterisks mark the cells on the most probable path) • Better to use log probabilities!
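A rough sketch of the Viterbi recursion for this two-state coin HMM (not the lecture's own code); log probabilities are used to avoid underflow on long sequences, as the last bullet suggests:

```python
import math

# Coin HMM parameters from the slide above.
states = ["F", "L"]
a0 = {"F": 0.5, "L": 0.5}
a = {("F", "F"): 0.7, ("F", "L"): 0.3, ("L", "F"): 0.3, ("L", "L"): 0.7}
e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

def viterbi(symbols):
    # v[i][k]: log probability of the best path that ends in state k at position i
    v = [{k: math.log(a0[k]) + math.log(e[k][symbols[0]]) for k in states}]
    back = [{}]
    for i in range(1, len(symbols)):
        v.append({})
        back.append({})
        for k in states:
            prev = max(states, key=lambda j: v[i - 1][j] + math.log(a[(j, k)]))
            v[i][k] = v[i - 1][prev] + math.log(a[(prev, k)]) + math.log(e[k][symbols[i]])
            back[i][k] = prev
    # Traceback from the most probable final state
    path = [max(states, key=lambda k: v[-1][k])]
    for i in range(len(symbols) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return "".join(reversed(path))

print(viterbi("THHH"))  # FLLL, matching p* on the slide above
```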
Forward algorithm • Gives us the summed probability of all paths through the model • Recursion similar to Viterbi but with a twist • Rather than taking the maximum over the possible states k at position i, we take the sum over all of them
Forward with coin example • Let aFF = aLL = 0.7, aFL = aLF = 0.3, a0 = (0.5, 0.5) • eL(H) = 0.9

        T      H       H   H
B   1   0      0       0   0
F   0   0.25   0.095   ?   ?
L   0   0.05   0.099   ?   ?
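A minimal sketch of the forward recursion with the same parameters; the values it prints at the second position correspond to the worked column of the table above:

```python
# Forward algorithm for the coin HMM: summed probability over all state paths.
states = ["F", "L"]
a0 = {"F": 0.5, "L": 0.5}
a = {("F", "F"): 0.7, ("F", "L"): 0.3, ("L", "F"): 0.3, ("L", "L"): 0.7}
e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

def forward(symbols):
    # f[i][k]: probability of emitting symbols[0..i] and being in state k at position i
    f = [{k: a0[k] * e[k][symbols[0]] for k in states}]
    for i in range(1, len(symbols)):
        f.append({
            k: e[k][symbols[i]] * sum(f[i - 1][j] * a[(j, k)] for j in states)
            for k in states
        })
    return f

f = forward("THHH")
print(f[1])                   # forward values at the second position
print(sum(f[-1].values()))    # P(x): total probability of the sequence over all paths
```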
Posterior decoding • We can use the forward-backward algorithm to define a simple state sequence, as in Viterbi • Or we can use it to look at ‘composite states’ • Example: a gene prediction HMM • Model contains states for UTRs, exons, introns, etc. versus noncoding sequence • A composite state for a gene would consist of all the above except for noncoding sequence • We can calculate the probability of finding a gene, independent of the specific match states
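A rough, self-contained sketch of posterior decoding for the coin HMM: the backward recursion is combined with the forward values to give the probability of each state at each position (a composite state, such as "gene", would be handled by summing these posteriors over its member states):

```python
# Posterior decoding for the coin HMM via the forward-backward algorithm.
states = ["F", "L"]
a0 = {"F": 0.5, "L": 0.5}
a = {("F", "F"): 0.7, ("F", "L"): 0.3, ("L", "F"): 0.3, ("L", "L"): 0.7}
e = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.9, "T": 0.1}}

def forward(x):
    f = [{k: a0[k] * e[k][x[0]] for k in states}]
    for i in range(1, len(x)):
        f.append({k: e[k][x[i]] * sum(f[i - 1][j] * a[(j, k)] for j in states) for k in states})
    return f

def backward(x):
    # b[i][k]: probability of emitting x[i+1..] given state k at position i
    b = [{k: 1.0 for k in states}]
    for i in range(len(x) - 2, -1, -1):
        b.insert(0, {k: sum(a[(k, j)] * e[j][x[i + 1]] * b[0][j] for j in states) for k in states})
    return b

def posterior(x):
    f, b = forward(x), backward(x)
    px = sum(f[-1].values())   # total probability of the sequence
    return [{k: f[i][k] * b[i][k] / px for k in states} for i in range(len(x))]

for i, p in enumerate(posterior("THHH")):
    print(i, p)   # P(state = F | x) and P(state = L | x) at each position
```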
Parameter estimation • Design of model (specific to application) • What states are there? • How are they connected? • Assigning values to • Transition probabilities • Emission probabilities
Model training • Assume the states and connectivity are given • We use a training set from which the model will learn the parameters • An example of machine learning • The likelihood is the probability of the data given the model • Calculate the likelihood assuming the n sequences in the training set (j = 1..n) are independent, so the total likelihood is the product of the per-sequence likelihoods
When state sequence is known • Maximum likelihood estimators: akl = Akl / Σl′ Akl′ and ek(b) = Ek(b) / Σb′ Ek(b′), where Akl and Ek(b) are the observed transition and emission counts • Adjusted with pseudocounts to avoid zero estimates when counts are small (see the sketch below)
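A minimal sketch of these estimators on a toy labeled training set (the sequences, labels, and pseudocount value are made up for illustration):

```python
from collections import defaultdict

# ML estimation of HMM parameters when the state path is known,
# with a pseudocount added to every transition and emission count.
def estimate(pairs, states, alphabet, pseudocount=1.0):
    A = defaultdict(lambda: pseudocount)   # transition counts A[k, l]
    E = defaultdict(lambda: pseudocount)   # emission counts E[k, b]
    for symbols, path in pairs:
        for i, (k, b) in enumerate(zip(path, symbols)):
            E[(k, b)] += 1
            if i + 1 < len(path):
                A[(k, path[i + 1])] += 1
    a = {(k, l): A[(k, l)] / sum(A[(k, m)] for m in states) for k in states for l in states}
    e = {(k, b): E[(k, b)] / sum(E[(k, c)] for c in alphabet) for k in states for b in alphabet}
    return a, e

# Toy labeled training data
training = [("HTTHHTHHHT", "FFFFFFFLLL"), ("HHHHTHHHTH", "LLLLLLLLFF")]
a_hat, e_hat = estimate(training, states=["F", "L"], alphabet=["H", "T"])
print(e_hat[("L", "H")])   # estimated probability that the loaded coin shows heads
```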
When state sequence is unknown • Baum-Welch algorithm • An example of the general class of EM (Expectation-Maximization) algorithms • Initialize with a guess at akl and ek(b) • Iterate until convergence • Calculate likely paths with the current parameters • Recalculate the parameters from those paths • The expected counts Akl and Ek(b) are calculated from posterior decoding (i.e., the forward-backward algorithm) at each iteration • Can get stuck on local optima
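A rough sketch of one way to implement this for the two-state coin HMM; a production implementation would work in log space (or rescale), handle multiple training sequences, and test for convergence rather than run a fixed number of iterations. The starting guesses below are illustrative only:

```python
# Baum-Welch (EM) re-estimation for the two-state coin HMM.
states = ["F", "L"]
alphabet = ["H", "T"]

def forward(x, a0, a, e):
    f = [{k: a0[k] * e[k][x[0]] for k in states}]
    for i in range(1, len(x)):
        f.append({k: e[k][x[i]] * sum(f[i - 1][j] * a[(j, k)] for j in states) for k in states})
    return f

def backward(x, a, e):
    b = [{k: 1.0 for k in states}]
    for i in range(len(x) - 2, -1, -1):
        b.insert(0, {k: sum(a[(k, j)] * e[j][x[i + 1]] * b[0][j] for j in states) for k in states})
    return b

def baum_welch(x, a0, a, e, n_iter=20, pseudocount=0.1):
    a, e = dict(a), {k: dict(e[k]) for k in states}
    for _ in range(n_iter):
        f, b = forward(x, a0, a, e), backward(x, a, e)
        px = sum(f[-1].values())
        # E step: expected transition counts A[k, l] and emission counts E[k, s]
        A = {(k, l): pseudocount for k in states for l in states}
        E = {(k, s): pseudocount for k in states for s in alphabet}
        for i in range(len(x)):
            for k in states:
                E[(k, x[i])] += f[i][k] * b[i][k] / px
                if i + 1 < len(x):
                    for l in states:
                        A[(k, l)] += f[i][k] * a[(k, l)] * e[l][x[i + 1]] * b[i + 1][l] / px
        # M step: renormalize expected counts into probabilities
        a = {(k, l): A[(k, l)] / sum(A[(k, m)] for m in states) for k in states for l in states}
        e = {k: {s: E[(k, s)] / sum(E[(k, t)] for t in alphabet) for s in alphabet} for k in states}
    return a, e

# Start from a rough guess and re-estimate from one observed sequence
a0 = {"F": 0.5, "L": 0.5}
a_guess = {("F", "F"): 0.6, ("F", "L"): 0.4, ("L", "F"): 0.4, ("L", "L"): 0.6}
e_guess = {"F": {"H": 0.5, "T": 0.5}, "L": {"H": 0.8, "T": 0.2}}
a_hat, e_hat = baum_welch("HTTHHTHHHTHHHHHTHHTHTTHTTHTTH", a0, a_guess, e_guess)
print(a_hat)
print(e_hat)
```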
Reading assignment • Continue studying: • Durbin et al. (1998) pgs. 46-79 in Biological Sequence Analysis