Much More About Markov Chains



  1. Much More About Markov Chains
  • Calculating P^n – how do we raise a matrix to the n-th power?
  • Ergodicity in Markov Chains – when does a chain have equilibrium probabilities?
  • Balance equations – calculating equilibrium probabilities without the fuss.
  • The leaky bucket queue – finally, an example which is to do with networks.
  • For more information: Norris, Markov Chains (Chapter 1); Bertsekas, Appendix A and Section 6.3.

  2. How to calculate P^n
  • If P is diagonalisable (say 3×3) then we can find some invertible matrix U such that
    P = U diag(λ1, λ2, λ3) U^(-1), and hence P^n = U diag(λ1^n, λ2^n, λ3^n) U^(-1),
    where the λi are the eigenvalues of P.
  • Therefore p_11 … p_ij(n) = A λ1^n + B λ2^n + C λ3^n for some constants A, B and C, assuming the eigenvalues are distinct.
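
A minimal numerical sketch of this diagonalisation approach; the matrix P below is only an illustrative example, not one taken from the slides:

```python
import numpy as np

# Illustrative 3x3 transition matrix (rows sum to 1); not taken from the slides.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
n = 10

# Diagonalise: P = U diag(eigvals) U^-1, so P^n = U diag(eigvals^n) U^-1.
eigvals, U = np.linalg.eig(P)
Pn_diag = U @ np.diag(eigvals ** n) @ np.linalg.inv(U)

# Cross-check against repeated multiplication.
Pn_direct = np.linalg.matrix_power(P, n)
print(np.allclose(np.real(Pn_diag), Pn_direct))  # True (up to rounding)
```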

  3. General Procedure
  • For an M state chain, compute the eigenvalues λ1, λ2, ..., λM.
  • If the eigenvalues are distinct then p_ij(n) has the general form
    p_ij(n) = A1 λ1^n + A2 λ2^n + ... + AM λM^n.
  • If an eigenvalue λ is repeated once then the general form includes a term (an + b) λ^n.
  • As roots of a polynomial with real coefficients, complex eigenvalues come in conjugate pairs and can be written as sine and cosine pairs.
  • The coefficients of the general form can be found by calculating p_ij(n) by hand for n = 0, ..., M−1 and solving the resulting simultaneous equations.

  4. Example of P^n
  • States are numbered 1, 2 and 3 (the transition diagram is given on the slide); find p_11(n).
  • The eigenvalues are 1, i/2 and −i/2, so p_11(n) has the form
    p_11(n) = a·1^n + b·(i/2)^n + c·(−i/2)^n,
    where, since p_11(n) must be real, the substitution
    p_11(n) = α + (1/2)^n [ β cos(nπ/2) + γ sin(nπ/2) ]
    can be made.
  • From the chain we can calculate that p_11(0) = 1, p_11(1) = 0 and p_11(2) = 0.

  5. Example of P^n (2)
  • We now have three simultaneous equations in α, β and γ:
    n = 0:  α + β = 1
    n = 1:  α + γ/2 = 0
    n = 2:  α − β/4 = 0
  • Solving, we get α = 1/5, β = 4/5 and γ = −2/5, so
    p_11(n) = 1/5 + (1/2)^n [ (4/5) cos(nπ/2) − (2/5) sin(nπ/2) ].
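
As a quick check, the three simultaneous equations can be solved numerically; the coefficient matrix below simply encodes the three conditions above:

```python
import numpy as np

# Rows encode p_11(0) = 1, p_11(1) = 0, p_11(2) = 0 in the unknowns (alpha, beta, gamma).
A = np.array([[1.0,  1.0,  0.0],    # alpha + beta    = 1
              [1.0,  0.0,  0.5],    # alpha + gamma/2 = 0
              [1.0, -0.25, 0.0]])   # alpha - beta/4  = 0
b = np.array([1.0, 0.0, 0.0])

alpha, beta, gamma = np.linalg.solve(A, b)
print(alpha, beta, gamma)   # 0.2, 0.8, -0.4, i.e. 1/5, 4/5, -2/5
```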

  6. Equilibrium Probabilities
  • Recall the distribution vector π of equilibrium probabilities. If π_n is the distribution vector after n steps, π is given by the limit π = lim (n→∞) π_n.
  • This is also the distribution which solves π = πP.
  • When does this limit exist? When is there a unique solution to the equation?
  • This is when the chain is ergodic:
    • Irreducible
    • Recurrent non-null (also called positive recurrent)
    • Aperiodic
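
A small sketch of both characterisations (the limit of π_n and the fixed point π = πP), using an illustrative two-state matrix that is not from the slides:

```python
import numpy as np

# Illustrative ergodic two-state chain.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Equilibrium as a limit: iterate pi_{n+1} = pi_n P from some starting distribution.
pi_n = np.array([1.0, 0.0])
for _ in range(1000):
    pi_n = pi_n @ P
print(pi_n)            # approx [0.8, 0.2]

# Equilibrium as a fixed point: left eigenvector of P for eigenvalue 1, normalised.
eigvals, eigvecs = np.linalg.eig(P.T)
v = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1.0))])
print(v / v.sum())     # approx [0.8, 0.2]
```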

  7. Irreducible
  • A chain is irreducible if any state can be reached from any other.
  • More formally, for all i and j there exists some n ≥ 0 such that p_ij(n) > 0.
  • [Transition diagram with parameters α and β shown on the slide.] For what values of α and β is this chain irreducible?
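
One way to check irreducibility numerically is to test, on the directed graph of non-zero transition probabilities, that every state can reach every other state. A small sketch (the example matrices are illustrative, not the chain from the slide):

```python
import numpy as np

def is_irreducible(P):
    """Check that every state can reach every other state along edges
    with non-zero transition probability (simple search from each state)."""
    P = np.asarray(P)
    M = P.shape[0]
    for start in range(M):
        seen = {start}
        frontier = [start]
        while frontier:
            i = frontier.pop()
            for j in range(M):
                if P[i, j] > 0 and j not in seen:
                    seen.add(j)
                    frontier.append(j)
        if len(seen) != M:
            return False
    return True

print(is_irreducible([[0.5, 0.5], [0.5, 0.5]]))  # True
print(is_irreducible([[1.0, 0.0], [0.5, 0.5]]))  # False: state 0 never reaches state 1
```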

  8. Aperiodic chains
  • A state i is periodic if it can only be returned to at multiples of some period k > 1.
  • Formally, it is periodic if there exists an integer k > 1 such that, for all n with p_ii(n) > 0, we can find an integer j with n = jk.
  • Equivalently, a state is aperiodic if there is a sufficiently large n such that for all m > n: p_ii(m) > 0.
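
A sketch of computing the period of a state directly from this definition, as the gcd of the return times with non-zero probability (checked only up to a finite horizon, which is enough for small examples):

```python
import math
import numpy as np

def period(P, i, horizon=50):
    """gcd of all n <= horizon with p_ii(n) > 0; the state is aperiodic if this is 1."""
    P = np.asarray(P, dtype=float)
    g = 0
    Pn = np.eye(len(P))
    for n in range(1, horizon + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            g = math.gcd(g, n)
    return g

# Deterministic 2-cycle has period 2; adding a self-loop makes it aperiodic.
print(period([[0, 1], [1, 0]], 0))        # 2
print(period([[0.5, 0.5], [1, 0]], 0))    # 1
```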

  9. A useful aperiodicity lemma
  • If P is irreducible and has one aperiodic state i then all states are aperiodic.
  • Proof: By irreducibility, for any states j and k there exist r, s ≥ 0 with p_ji(r), p_ik(s) > 0. Since i is aperiodic, there is an n such that for all m > n:
    p_jk(r + m + s) ≥ p_ji(r) p_ii(m) p_ik(s) > 0.
  • Therefore all the states are aperiodic (consider j = k in the above equation).

  10. Return (Recurrence) Time
  • If a chain is in state i, when will it next return to state i? This is known as the "return time".
  • First we must define the probability that the first return to state i is after n steps: f_i(n).
  • The probability that we ever return is f_i = Σ_n f_i(n).
  • A state where f_i = 1 is called recurrent; a state where f_i < 1 is called transient.
  • The expectation of the return time is the "mean recurrence time" or "mean return time": M_i = Σ_n n f_i(n).
  • M_i = ∞: recurrent null.  M_i < ∞: recurrent non-null.

  11. Return (Recurrence) Time (2)
  • A finite irreducible chain is always recurrent non-null.
  • In an irreducible, aperiodic Markov Chain the limiting probabilities π_j = lim (n→∞) p_ij(n) always exist and are independent of the starting distribution. Either:
    • All states are transient or recurrent null, in which case π_j = 0 for all states and no stationary distribution exists, or
    • All states are recurrent non-null and a unique stationary distribution exists, with π_j = 1/M_j.
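
A simulation sketch of the relationship π_j = 1/M_j, using the same illustrative two-state chain as before: estimate the mean return time to state 0 empirically and compare its reciprocal with the stationary probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ergodic chain (stationary distribution [0.8, 0.2]).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Estimate the mean return time to state 0 by simulation.
returns, steps_since_visit, state = [], 0, 0
for _ in range(200_000):
    state = rng.choice(2, p=P[state])
    steps_since_visit += 1
    if state == 0:
        returns.append(steps_since_visit)
        steps_since_visit = 0

M0 = np.mean(returns)
print(M0, 1 / M0)   # M0 is roughly 1.25, so 1/M0 is roughly 0.8 = pi_0
```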

  12. Ergodicity (summary)
  • A chain which is irreducible, aperiodic and recurrent non-null is ergodic.
  • If a chain is ergodic, then there is a unique invariant distribution which is equivalent to the limit: π_j = lim (n→∞) p_ij(n).
  • In Markov Chain theory, the words invariant, equilibrium and stationary are often used interchangeably.

  13. Invariant Density in Periodic Chains
  • It is worth noting that an irreducible, recurrent non-null chain which is periodic has a solution to the invariant density equation π = πP, but the limit distribution does not exist. Consider the two-state chain which deterministically swaps between its states:
    P = ( 0 1 ; 1 0 )
  • π = (1/2, 1/2) solves π = πP.
  • However, it should be clear that lim (n→∞) p_ij(n) does not exist for this chain (the entries alternate between 0 and 1), though the limit of π_n may exist for specific starting distributions.
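
A two-line check of this, using the alternating chain above:

```python
import numpy as np

P = np.array([[0, 1],
              [1, 0]])

# The invariant equation is satisfied...
pi = np.array([0.5, 0.5])
print(pi @ P)                          # [0.5 0.5]

# ...but P^n never settles down: it alternates between the identity and the swap.
print(np.linalg.matrix_power(P, 10))   # identity matrix
print(np.linalg.matrix_power(P, 11))   # swap matrix
```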

  14. Balance Equations
  • Sometimes it is not practical to calculate the equilibrium probabilities using the limit.
  • If a distribution is invariant then at every iteration the inputs to a state must add up to its probability.
  • The inputs to a state i are the probabilities π_j of each state j which leads into it, multiplied by the transition probability p_ji from j to i.

  15. Balance Equations (2)
  • More formally, if π_i is the equilibrium probability of state i:
    π_i = Σ_j π_j p_ji   (for every state i)
  • And to ensure it is a distribution:
    Σ_i π_i = 1
  • Which, for an n state chain, gives us n + 1 equations for n unknowns (one of the balance equations is redundant).
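
A sketch of solving the balance equations numerically: replace one (redundant) balance equation with the normalisation constraint and solve the resulting linear system. The example matrix is the same illustrative one as before:

```python
import numpy as np

def equilibrium(P):
    """Solve pi = pi P together with sum(pi) = 1 as one linear system."""
    P = np.asarray(P, dtype=float)
    M = P.shape[0]
    # Columns of (P - I) give the balance equations; replace the last
    # (redundant) equation with the normalisation sum_i pi_i = 1.
    A = (P - np.eye(M)).T
    A[-1, :] = 1.0
    b = np.zeros(M)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

print(equilibrium([[0.9, 0.1],
                   [0.4, 0.6]]))   # [0.8 0.2]
```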

  16. Queuing Analysis of the Leaky Bucket
  • A "leaky bucket" is a mechanism for managing buffers to smooth the downstream flow. What is described here is what is sometimes called a "token bucket".
  • A queue holds a stock of "permits" which arrive at a rate r (one every 1/r seconds); up to W permits may be held.
  • A packet cannot leave the queue if there is no permit stored.
  • The idea is that the scheme limits downstream flow but can deal with bursts of traffic.

  17. Modelling the Leaky Bucket
  • Let us assume that the arrival process is a Poisson process with rate λ.
  • Consider how many packets arrive in 1/r seconds. The probability a_k that k packets arrive is:
    a_k = e^(−λ/r) (λ/r)^k / k!
  • [Diagram: a queue of permits (one arrives every 1/r seconds) and a queue of packets (Poisson arrivals) feed the exit queue for packets with permits.]

  18. A Markov Model
  • Model this as a Markov Chain which changes state every 1/r seconds.
  • States 0 ≤ i ≤ W represent no packets waiting and W − i permits available. States W + i (where i ≥ 1) represent 0 permits and i packets waiting.
  • Transition probabilities (see the sketch below): from state i ≥ 1 the chain moves to state i + k − 1 with probability a_k (down one with probability a_0, staying put with probability a_1); from state 0 it stays with probability a_0 + a_1 and moves to state k − 1 with probability a_k for k ≥ 2.
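
A sketch of building this chain numerically, truncating the (in principle infinite) packet queue at some maximum length; leaky_bucket_chain and the parameter values are only illustrative assumptions, not from the slides:

```python
import numpy as np
from math import exp

def leaky_bucket_chain(lam, r, W, max_queue):
    """Transition matrix of the token/leaky bucket chain, with the packet
    queue truncated at max_queue (states 0..W+max_queue)."""
    N = W + max_queue + 1
    # a_k: probability that k Poisson(lam) packets arrive in 1/r seconds,
    # computed recursively as a_k = a_{k-1} * (lam/r) / k.
    a = [exp(-lam / r)]
    for k in range(1, N + 1):
        a.append(a[-1] * (lam / r) / k)

    P = np.zeros((N, N))
    for i in range(N):
        for k, ak in enumerate(a):
            j = max(i + k - 1, 0)            # one permit per step, k packets arrive
            P[i, min(j, N - 1)] += ak        # lump any overflow into the last state
        P[i, N - 1] += max(0.0, 1.0 - sum(a))  # tail mass beyond the computed a_k
    return P

P = leaky_bucket_chain(lam=0.8, r=1.0, W=5, max_queue=40)
print(P.sum(axis=1))                         # every row sums to 1
```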

  19. Solving the Markov Model
  • By solving the balance equations we get, from the equation for state 0 (π_0 = π_0 (a_0 + a_1) + π_1 a_0):
    π_1 = π_0 (1 − a_0 − a_1) / a_0
    and from the equation for state 1:
    π_2 = [ π_1 (1 − a_1) − π_0 a_2 ] / a_0
  • Similarly, we can get an expression for π_3 in terms of π_2, π_1 and π_0. And so on...

  20. Solving the Markov Model (2)
  • Normally we would solve this using the remaining balance equation, the normalisation Σ_i π_i = 1. This is difficult analytically in this case.
  • Instead we note that a permit is generated in every step except when we are in state 0 and no packets arrive (W permits held, none used).
  • This means permits are generated at a rate (1 − π_0 a_0) r.
  • This must be equal to λ, since each packet gets a permit (assuming none are dropped while waiting), which gives π_0 = (1 − λ/r) / a_0.

  21. And Finally
  • The average delay for a packet to get a permit is given by:
    T = (1/r) Σ (j > W) (j − W) π_j
    where (j − W) is the number of iterations taken to get out of the queue from state j, π_j is the proportion of time spent in that state, 1/r is the time taken for each iteration of the chain, and the sum is over those states with a queue.
  • Of course this is not a closed-form expression. To complete this analysis, see Bertsekas, p. 515.
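
To make this concrete, here is a numerical sketch that reuses the illustrative leaky_bucket_chain() and equilibrium() functions from the earlier sketches (all parameter values are assumptions for illustration): it checks π_0 against the rate argument from slide 20 and evaluates the average-delay sum on the truncated chain.

```python
import numpy as np
from math import exp

# Illustrative parameters: on average 0.8 packets per permit interval, W = 5 permits.
lam, r, W, max_queue = 0.8, 1.0, 5, 200

P = leaky_bucket_chain(lam, r, W, max_queue)   # sketch from slide 18
pi = equilibrium(P)                            # sketch from slide 15

# pi_0 from the balance equations vs. the rate argument pi_0 = (1 - lam/r) / a_0.
a0 = exp(-lam / r)
print(pi[0], (1 - lam / r) / a0)               # the two values should agree closely

# Average delay to get a permit: T = (1/r) * sum over j > W of (j - W) * pi_j.
j = np.arange(len(pi))
T = (1 / r) * np.sum((j[W + 1:] - W) * pi[W + 1:])
print(T)
```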
