Time to Equilibrium for Finite State Markov Chain
許元春 (Yuan-Chung Sheu), Department of Applied Mathematics, National Chiao Tung University
Let $S$ be a finite set (the state space) and let $X_0, X_1, X_2, \dots$ be a sequence of $S$-valued random variables (a random process, or stochastic process). Its law is determined by the finite-dimensional distributions $P(X_0 = x_0, X_1 = x_1, \dots, X_n = x_n)$. (Here $x_0, x_1, \dots, x_n \in S$ and $n \ge 0$.)
What is $P(X_0 = x_0, \dots, X_n = x_n)$? Among all possibilities, the following two are the simplest:
• (i.i.d.) $P(X_0 = x_0, \dots, X_n = x_n) = \mu(x_0)\mu(x_1)\cdots\mu(x_n)$, where $\mu$ is a probability measure on $S$. Example: (Black-Scholes-Merton model) $S_t$ = the price of some asset at time $t$; in this model the successive (log-)returns are i.i.d.
• (Markov chain) $P(X_0 = x_0, \dots, X_n = x_n) = \lambda(x_0) K(x_0, x_1) \cdots K(x_{n-1}, x_n)$. Here $K$ is a stochastic matrix (i.e. $K(x, y) \ge 0$ and $\sum_{y \in S} K(x, y) = 1$ for every $x$). In this case, $K(x, y)$ is the transition probability from $x$ to $y$ for the Markov chain.
Example: (Riffle shuffles) the Gilbert–Shannon–Reeds model (Gilbert, Shannon '55; Reeds '81).
A Markov chain with transition kernel $K$ and initial distribution $\lambda$ satisfies $P(X_0 = x_0, X_1 = x_1, \dots, X_n = x_n) = \lambda(x_0) K(x_0, x_1) \cdots K(x_{n-1}, x_n)$. This implies the Markov property $P(X_{n+1} = y \mid X_0 = x_0, \dots, X_{n-1} = x_{n-1}, X_n = x) = K(x, y)$. In particular, we observe $P(X_n = y) = \lambda K^n(y)$. Here $\lambda K^n(y) = \sum_{x \in S} \lambda(x) K^n(x, y)$ and $K^n$ is the $n$-th matrix power of $K$.
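A minimal numerical sketch (not from the original slides) of this construction: it simulates a hypothetical 3-state kernel $K$ started from a deterministic initial distribution $\lambda$ and checks that the empirical distribution of $X_n$ matches $\lambda K^n$. The kernel entries and the helper name `sample_path` are my own choices; numpy is assumed.

```python
import numpy as np

# Hypothetical 3-state kernel K (rows sum to 1) and initial distribution lam.
rng = np.random.default_rng(0)
K = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
lam = np.array([1.0, 0.0, 0.0])        # start deterministically in state 0

def sample_path(K, lam, n):
    """Simulate X_0, ..., X_n and return X_n."""
    x = rng.choice(len(lam), p=lam)
    for _ in range(n):
        x = rng.choice(K.shape[1], p=K[x])
    return x

n, trials = 10, 20000
empirical = np.bincount([sample_path(K, lam, n) for _ in range(trials)],
                        minlength=len(lam)) / trials
exact = lam @ np.linalg.matrix_power(K, n)   # distribution of X_n is lam K^n
print("empirical distribution of X_n:", empirical)
print("lam K^n:                      ", exact)
```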
What is the limiting distribution of $X_n$ given $X_0 = x$? (i.e. what is the limiting behavior of $K^n(x, \cdot)$ as $n \to \infty$?) Example: (Two-state chain)
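To make the two-state example concrete (the parameters $p$ and $q$ below are my own labels, not from the slides): take $K = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}$ with $0 < p, q < 1$. Then the stationary distribution is $\pi = \left(\frac{q}{p+q}, \frac{p}{p+q}\right)$, and a direct check gives $K^n = \Pi + (1 - p - q)^n (I - \Pi)$, where $\Pi$ is the matrix with both rows equal to $\pi$. Hence $K^n(x, \cdot) \to \pi$ geometrically at rate $|1 - p - q|$.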
Invariant/equilibrium/stationary distribution: a probability measure $\pi$ on $S$ with $\pi K = \pi$. Suppose, for some $x \in S$, that $K^n(x, y) \to \pi(y)$ as $n \to \infty$ for all $y \in S$. Then $\pi K = \pi$, i.e. $\pi$ is an invariant distribution.
Ergodic Markov chain: assume $K$ is aperiodic and irreducible. Then $K$ admits a unique invariant distribution $\pi$, and $K^n(x, y) \to \pi(y)$ for all $x, y \in S$. How fast does the distribution of $X_n$ converge to its limiting distribution?
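A small sketch (numpy assumed; the 3-state kernel is the same hypothetical one as above) of how the invariant distribution can be computed in practice: $\pi$ is the left eigenvector of $K$ with eigenvalue 1, normalized to be a probability vector.

```python
import numpy as np

# Same hypothetical 3-state kernel as above.
K = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# pi is the left eigenvector of K with eigenvalue 1 (i.e. pi K = pi),
# normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(K.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()
print("invariant distribution pi:", pi)
print("check pi K:", pi @ K)                          # equals pi
print("K^20:", np.linalg.matrix_power(K, 20))         # every row is close to pi
```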
Distance between two probability measures $\nu$ and $\mu$ on $S$:
• $\|\nu - \mu\|_{TV} = \max_{A \subset S} |\nu(A) - \mu(A)| = \frac{1}{2} \sum_{x \in S} |\nu(x) - \mu(x)|$ (total variation distance)
• $\left\|\frac{\nu}{\mu} - 1\right\|_{\ell^2(\mu)} = \left(\sum_{x \in S} \left|\frac{\nu(x)}{\mu(x)} - 1\right|^2 \mu(x)\right)^{1/2}$ ($\ell^2$ distance)
(Note that $2\|\nu - \mu\|_{TV} \le \left\|\frac{\nu}{\mu} - 1\right\|_{\ell^2(\mu)}$.)
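A short sketch computing both distances between $K^n(x, \cdot)$ and $\pi$ for the same hypothetical 3-state kernel, and checking the inequality $2\|\cdot\|_{TV} \le \|\cdot\|_{\ell^2(\pi)}$ numerically (numpy assumed; the helper names `tv` and `l2_pi` are mine).

```python
import numpy as np

# Same hypothetical 3-state kernel; pi computed as before.
K = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
eigvals, eigvecs = np.linalg.eig(K.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

def tv(nu, mu):
    """Total variation distance (1/2) * sum |nu - mu|."""
    return 0.5 * np.abs(nu - mu).sum()

def l2_pi(nu, pi):
    """|| nu/pi - 1 ||_{l^2(pi)}."""
    return np.sqrt(np.sum((nu / pi - 1.0) ** 2 * pi))

x = 0
for n in range(1, 8):
    nu = np.linalg.matrix_power(K, n)[x]     # K^n(x, .)
    print(n, tv(nu, pi), l2_pi(nu, pi), 2 * tv(nu, pi) <= l2_pi(nu, pi) + 1e-12)
```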
For an ergodic chain, $d(n) = \max_{x \in S} \|K^n(x, \cdot) - \pi\|_{TV}$ is a non-increasing function of $n$, and the related quantity $\bar d(n) = \max_{x, y \in S} \|K^n(x, \cdot) - K^n(y, \cdot)\|_{TV}$ is sub-multiplicative (equivalently, $\log \bar d$ is sub-additive: $\bar d(m + n) \le \bar d(m)\, \bar d(n)$), with $d(n) \le \bar d(n) \le 2 d(n)$. This implies that if $d(n_0) \le \epsilon < 1/2$ for some $n_0$, then $d(n)$ decays geometrically; see the chain of inequalities below.
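Spelling out the implication (a standard one-line argument, not on the original slide): for every $k \ge 1$,
$d(k n_0) \le \bar d(k n_0) \le \bar d(n_0)^k \le \big(2\, d(n_0)\big)^k \le (2\epsilon)^k,$
so the distance to equilibrium is eventually dominated by a geometric sequence.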
We say $K$ is reversible (with respect to $\pi$) if it satisfies the detailed balance condition $\pi(x) K(x, y) = \pi(y) K(y, x)$ for all $x, y \in S$. Assume $K$ is reversible, irreducible and aperiodic. Then $K$ has real eigenvalues $1 = \beta_0 > \beta_1 \ge \cdots \ge \beta_{|S|-1} > -1$, and for any corresponding orthonormal basis of eigenvectors $\phi_0, \phi_1, \dots$ of $\ell^2(\pi)$ with $\phi_0 \equiv 1$, we have
$K^n(x, y) = \pi(y) \sum_i \beta_i^n \phi_i(x) \phi_i(y)$ and $\left\|\frac{K^n(x, \cdot)}{\pi} - 1\right\|_{\ell^2(\pi)}^2 = \sum_{i \ge 1} \beta_i^{2n} \phi_i(x)^2$.
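A sketch (numpy assumed; the birth-death kernel below is a hypothetical example, chosen because birth-death chains are automatically reversible) verifying the spectral decomposition numerically: symmetrizing $K$ as $D^{1/2} K D^{-1/2}$ with $D = \mathrm{diag}(\pi)$ yields a symmetric matrix whose eigenvectors give the $\ell^2(\pi)$-orthonormal basis $\phi_i$.

```python
import numpy as np

# Hypothetical birth-death kernel on {0, 1, 2}: reversible by construction.
K = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
eigvals, eigvecs = np.linalg.eig(K.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

# Detailed balance check: pi(x) K(x,y) == pi(y) K(y,x).
assert np.allclose(pi[:, None] * K, (pi[:, None] * K).T)

# Symmetrize: A = D^{1/2} K D^{-1/2} is symmetric, so it has real eigenvalues
# beta_i and orthonormal eigenvectors psi_i; phi_i = psi_i / sqrt(pi) is then
# an orthonormal basis of l^2(pi) consisting of eigenvectors of K.
d = np.sqrt(pi)
A = (K * d[:, None]) / d[None, :]
beta, psi = np.linalg.eigh(A)
phi = psi / d[:, None]

# Verify K^n(x, y) = pi(y) * sum_i beta_i^n phi_i(x) phi_i(y).
n = 5
lhs = np.linalg.matrix_power(K, n)
rhs = (phi * beta ** n) @ phi.T * pi[None, :]
print(np.allclose(lhs, rhs))                 # True
```

The design choice is standard: rather than diagonalizing the non-symmetric $K$ directly, one diagonalizes its symmetrization, which is numerically more stable and makes the $\ell^2(\pi)$ orthonormality automatic.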
$\lambda$ := the smallest non-zero eigenvalue of $I - K$ = the spectral gap of $K$. Equivalently, $1/\lambda$ is the smallest constant $A$ satisfying the Poincaré inequality $\mathrm{Var}_\pi(f) \le A\, \mathcal{E}(f, f)$, holding for all $f : S \to \mathbb{R}$.
Setting $\mathcal{E}(f, g) = \langle (I - K) f, g \rangle_{\ell^2(\pi)}$ (the Dirichlet form associated with the semigroup $H_t = e^{-t(I - K)}$). Then, by reversibility, $\mathcal{E}(f, f) = \frac{1}{2} \sum_{x, y \in S} (f(x) - f(y))^2 K(x, y) \pi(x)$, and $\frac{d}{dt} \mathrm{Var}_\pi(H_t f) = -2\, \mathcal{E}(H_t f, H_t f)$. Note that $\mathcal{E}(H_t f, H_t f) \ge \lambda\, \mathrm{Var}_\pi(H_t f)$. Hence $\mathrm{Var}_\pi(H_t f) \le e^{-2\lambda t}\, \mathrm{Var}_\pi(f)$.
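A sketch (numpy assumed, same hypothetical reversible kernel as above) computing the spectral gap $\lambda$ and checking the variational characterization behind the Poincaré inequality: the minimum of $\mathcal{E}(f, f) / \mathrm{Var}_\pi(f)$ over non-constant functions $f$ is $\lambda$, which the random search below approximates from above.

```python
import numpy as np

# Same hypothetical reversible birth-death kernel as above.
K = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
eigvals, eigvecs = np.linalg.eig(K.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

# Spectral gap: smallest non-zero eigenvalue of I - K.
d = np.sqrt(pi)
beta = np.linalg.eigvalsh((K * d[:, None]) / d[None, :])
gap = np.min(1.0 - beta[beta < 1.0 - 1e-12])

def dirichlet(f):
    """E(f, f) = (1/2) sum_{x,y} (f(x) - f(y))^2 K(x, y) pi(x)."""
    diff = f[:, None] - f[None, :]
    return 0.5 * np.sum(diff ** 2 * K * pi[:, None])

def var_pi(f):
    m = np.sum(f * pi)
    return np.sum((f - m) ** 2 * pi)

rng = np.random.default_rng(1)
ratios = [dirichlet(f) / var_pi(f)
          for f in rng.normal(size=(10000, 3)) if var_pi(f) > 1e-12]
print("spectral gap:", gap)
print("min E(f,f)/Var_pi(f) over random f:", min(ratios))   # approximately gap
```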
• Theorem: The mixing time $T_{\mathrm{mix}}(\epsilon) = \min\{n : d(n) \le \epsilon\}$ satisfies $T_{\mathrm{mix}}(\epsilon) \le \frac{1}{1 - \beta}\, \log\frac{1}{\epsilon\, \pi_*}$.
• Theorem: $2\|K^n(x, \cdot) - \pi\|_{TV} \le \left\|\frac{K^n(x, \cdot)}{\pi} - 1\right\|_{\ell^2(\pi)} \le \frac{\beta^n}{\sqrt{\pi(x)}}$, where $\beta = \max(\beta_1, |\beta_{|S|-1}|)$ and $\pi_* = \min_{x \in S} \pi(x)$.
Consider the entropy-like quantity $\mathcal{L}(f) = \sum_{x \in S} f(x)^2 \log\!\left(\frac{f(x)^2}{\|f\|_{\ell^2(\pi)}^2}\right) \pi(x)$, where $\|f\|_{\ell^2(\pi)}^2 = \sum_x f(x)^2 \pi(x)$. The log-Sobolev constant $\alpha$ is given by the formula $\alpha = \min\left\{\frac{\mathcal{E}(f, f)}{\mathcal{L}(f)} : \mathcal{L}(f) \ne 0\right\}$. Hence $1/\alpha$ is the smallest constant $C$ satisfying the log-Sobolev inequality $\mathcal{L}(f) \le C\, \mathcal{E}(f, f)$, holding for all functions $f : S \to \mathbb{R}$.
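A heuristic sketch (numpy and scipy assumed; the kernel is the same hypothetical reversible 3-state chain) that estimates $\alpha$ by directly minimizing $\mathcal{E}(f, f) / \mathcal{L}(f)$ with random restarts. Any evaluated $f$ only gives an upper bound on $\alpha$, so this illustrates the definition rather than providing a rigorous computation.

```python
import numpy as np
from scipy.optimize import minimize

# Same hypothetical reversible 3-state kernel and its pi.
K = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
eigvals, eigvecs = np.linalg.eig(K.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()

def dirichlet(f):
    diff = f[:, None] - f[None, :]
    return 0.5 * np.sum(diff ** 2 * K * pi[:, None])

def entropy_L(f):
    """L(f) = sum_x f(x)^2 log(f(x)^2 / ||f||^2_{l^2(pi)}) pi(x)."""
    norm2 = np.sum(f ** 2 * pi)
    return np.sum(f ** 2 * np.log(f ** 2 / norm2 + 1e-300) * pi)

def ratio(f):
    L = entropy_L(f)
    return dirichlet(f) / L if L > 1e-10 else np.inf

rng = np.random.default_rng(2)
best = np.inf
for _ in range(50):                           # random restarts
    res = minimize(ratio, rng.normal(size=3), method="Nelder-Mead")
    best = min(best, res.fun)
print("numerical upper estimate of alpha:", best)
```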
Can one compute or estimate the constant $\alpha$? The present answer is that it seems to be a very difficult problem.
• Lee-Yau (1998), Ann. of Probability: symmetric simple exclusion / random transposition.
• Diaconis-Saloff-Coste (1996), Ann. of Applied Probability: bounds on $\alpha$ for simple random walk on the cycle; the exact value of $\alpha$ for the chain with all rows equal to $\pi$.
• Chen-Sheu (2003), Journal of Functional Analysis: the log-Sobolev constant for simple random walk on the $n$-cycle when $n$ is even.
Let $X$ be a set and $G$ a group. An action of the group $G$ on the set $X$ is a map $G \times X \to X$, $(g, x) \mapsto gx$, compatible with the group operation. The orbit of $x$ is $O_x = \{y \in X : y = gx \text{ for some } g \in G\}$. What is the number of orbits (or patterns)?
Example: ($n$ balls, $k$ boxes, Bose-Einstein distribution). Pólya's theory of counting (see Enumerative Combinatorics, Vol. II, by R. Stanley, Sec. 7.24). The Burnside process (Jerrum and Goldberg): a Markov chain on $X$ whose induced chain on orbits converges to the uniform distribution on orbits; see the sketch below.
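A minimal sketch of the Burnside process for a concrete hypothetical example (the cyclic group of rotations acting on binary strings of length $n$, whose orbits are binary necklaces). The two alternating moves are: pick a group element uniformly among those fixing the current point, then pick a point uniformly among those fixed by that group element. Pure Python; the helper names are mine.

```python
import random
from itertools import product

# Hypothetical example: rotations acting on binary strings of length n.
# Orbits are binary necklaces (there are 14 of them for n = 6).
n = 6
X = list(product([0, 1], repeat=n))
G = list(range(n))                            # rotation by g positions

def act(g, x):
    return x[g:] + x[:g]

def burnside_step(x):
    # 1) pick g uniformly among group elements fixing x
    g = random.choice([g for g in G if act(g, x) == x])
    # 2) pick y uniformly among points fixed by g
    return random.choice([y for y in X if act(g, y) == y])

def orbit_rep(x):
    return min(act(g, x) for g in G)          # canonical representative of the orbit

random.seed(0)
x = X[0]
counts = {}
for _ in range(20000):
    x = burnside_step(x)
    r = orbit_rep(x)
    counts[r] = counts.get(r, 0) + 1
print("orbits visited:", len(counts))         # 14
print("visit frequencies (roughly equal):", sorted(counts.values()))
```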
Diaconis ('03) ($n$ balls, $k$ boxes): an explicit bound on the rate of convergence of the Burnside process in this example, valid for all $n$.
Cut-off phenomenon. Bayer and Diaconis ('92): the total variation distance to uniform for riffle shuffles of 52 cards stays close to 1 for the first several shuffles and then drops abruptly, with the cut-off occurring at about seven shuffles (at $\frac{3}{2}\log_2 n$ shuffles for $n$ cards). What about "neat riffle shuffles"?
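A sketch (pure Python, exact rational arithmetic) of the Bayer–Diaconis computation behind the 52-card table: after $k$ GSR shuffles the probability of a permutation with $r$ rising sequences is $\binom{2^k + n - r}{n} / 2^{kn}$, and the number of such permutations is an Eulerian number, so the total variation distance can be summed exactly. The helper names are mine; the printed values should reproduce the familiar sharp drop near $k = 7$.

```python
from fractions import Fraction
from math import comb, factorial

def eulerian(n):
    """E[k] = number of permutations of {1..n} with exactly k descents."""
    E = [1]                                   # n = 1: one permutation, zero descents
    for m in range(2, n + 1):
        prev = E
        E = [0] * m
        for k in range(m):
            left = (k + 1) * prev[k] if k < len(prev) else 0
            right = (m - k) * prev[k - 1] if k >= 1 else 0
            E[k] = left + right
    return E

def riffle_tv(n, k):
    """Exact TV distance between k GSR riffle shuffles of n cards and uniform."""
    a = 2 ** k
    E = eulerian(n)
    unif = Fraction(1, factorial(n))
    total = Fraction(0)
    for r in range(1, n + 1):                 # r = number of rising sequences
        # comb(...) is 0 when r > a (too many rising sequences for an a-shuffle)
        p = Fraction(comb(a + n - r, n), a ** n)
        total += E[r - 1] * abs(p - unif)     # E[r-1] permutations have r rising sequences
    return total / 2

for k in range(1, 11):
    print(k, float(riffle_tv(52, k)))
```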