Lecture 6
• Calculating $P^n$ – how do we raise a matrix to the $n$th power?
• Ergodicity in Markov Chains
  • When does a chain have equilibrium probabilities?
• Balance Equations
  • Calculating equilibrium probabilities without the fuss.
• The leaky bucket queue
  • Finally, an example which is to do with networks.
• For more information:
  • Norris: Markov Chains (Chapter 1)
  • Bertsekas: Appendix A and Section 6.3
How to calculate $P^n$
• If $P$ is diagonalisable (3×3 here), then we can find some invertible matrix $U$ such that:
$$P = U \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} U^{-1}$$
where the $\lambda_i$ are the eigenvalues of $P$.
• Therefore $p_{ij}(n) = A\lambda_1^n + B\lambda_2^n + C\lambda_3^n$, assuming the eigenvalues are distinct.
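As a minimal numerical sketch of this idea (the 3×3 matrix below is an arbitrary stochastic matrix chosen for illustration, not one from the lecture):

```python
import numpy as np

# An arbitrary 3x3 stochastic matrix, chosen only for illustration
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Diagonalise: P = U diag(lambda_i) U^{-1}
lam, U = np.linalg.eig(P)

def P_to_the_n(n):
    # P^n = U diag(lambda_i^n) U^{-1}
    return (U @ np.diag(lam**n) @ np.linalg.inv(U)).real

# Cross-check against direct matrix powering
print(P_to_the_n(10))
print(np.linalg.matrix_power(P, 10))
```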
General Procedure
• For an $M$ state chain, compute the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_M$.
• If the eigenvalues are distinct then $p_{ij}(n)$ has the general form:
$$p_{ij}(n) = a_1\lambda_1^n + a_2\lambda_2^n + \cdots + a_M\lambda_M^n$$
• If an eigenvalue $\lambda$ is repeated once then the general form includes a term $(an+b)\lambda^n$.
• As roots of a polynomial with real coefficients, complex eigenvalues come in conjugate pairs, and the corresponding terms can be rewritten as sine and cosine pairs.
• The coefficients of the general form can be found by calculating $p_{ij}(n)$ by hand for $n = 0, \ldots, M-1$ and solving the resulting simultaneous equations.
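A sketch of this coefficient-fitting step for distinct eigenvalues, again using an illustrative matrix:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
M = len(P)
lam = np.linalg.eigvals(P)   # assumed distinct for this sketch

# p_ij(n) for n = 0..M-1, here for i = j = 0
i, j = 0, 0
rhs = np.array([np.linalg.matrix_power(P, n)[i, j] for n in range(M)],
               dtype=complex)

# Solve sum_k a_k * lam_k^n = p_ij(n) for the coefficients a_k
V = np.vander(lam, M, increasing=True).T    # V[n, k] = lam_k**n
a = np.linalg.solve(V, rhs)

# The general form now reproduces p_ij(n) for any n
n = 7
print((a @ lam**n).real, np.linalg.matrix_power(P, n)[i, j])
```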
Example of $P^n$ (where states are numbered 1, 2 and 3)
• Solving $\det(P - \lambda I) = 0$, where $I$ is the identity matrix, the eigenvalues are $1$, $i/2$ and $-i/2$.
• Therefore $p_{11}(n)$ has the form:
$$p_{11}(n) = \alpha + \beta\left(\tfrac{1}{2}\right)^n \cos\tfrac{n\pi}{2} + \gamma\left(\tfrac{1}{2}\right)^n \sin\tfrac{n\pi}{2}$$
where the substitution $\pm\tfrac{i}{2} = \tfrac{1}{2}e^{\pm i\pi/2}$ can be made since $p_{11}(n)$ must be real.
• We can calculate that $p_{11}(0) = 1$, $p_{11}(1) = 0$ and $p_{11}(2) = 0$.
Example of $P^n$ (2)
• We now have three simultaneous equations in $\alpha$, $\beta$ and $\gamma$:
$$\alpha + \beta = 1, \qquad \alpha + \tfrac{\gamma}{2} = 0, \qquad \alpha - \tfrac{\beta}{4} = 0$$
• Solving, we get $\alpha = 1/5$, $\beta = 4/5$ and $\gamma = -2/5$.
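We can check this closed form numerically (a quick sketch):

```python
import numpy as np

# Closed form from the example: alpha = 1/5, beta = 4/5, gamma = -2/5
def p11(n):
    return (1/5 + (4/5) * (1/2)**n * np.cos(n*np.pi/2)
                - (2/5) * (1/2)**n * np.sin(n*np.pi/2))

# Matches p11(0) = 1, p11(1) = 0, p11(2) = 0 up to floating-point rounding
print([p11(n) for n in (0, 1, 2)])
```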
Equilibrium Probabilities
• Recall $\pi$, the distribution vector of equilibrium probabilities. If $\pi_n$ is the distribution vector after $n$ steps, $\pi$ is given by:
$$\pi = \lim_{n\to\infty} \pi_n = \lim_{n\to\infty} \pi_0 P^n$$
• This is also the distribution which solves:
$$\pi = \pi P$$
• When does this limit exist? When is there a unique solution to the equation?
• This is when the chain is ergodic:
  • Irreducible
  • Recurrent non-null (also called positive recurrent)
  • Aperiodic
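A sketch of the limit in action, iterating $\pi_{n+1} = \pi_n P$ from two different starting distributions (illustrative matrix again):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

for start in ([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]):
    pi = np.array(start)
    for _ in range(100):      # iterate pi_{n+1} = pi_n P
        pi = pi @ P
    print(pi)                 # both converge to the same equilibrium
```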
Irreducible
• A chain is irreducible if any state can be reached from any other.
• More formally, for all $i$ and $j$ there exists some $n$ such that:
$$p_{ij}(n) > 0$$
• [Diagram: a three-state chain (states 0, 1, 2) whose transition probabilities depend on two parameters. For what values of the two parameters is this chain irreducible?]
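One way to check irreducibility mechanically is via reachability in the transition graph (a sketch):

```python
import numpy as np

def is_irreducible(P):
    # Adjacency matrix of the transition graph
    A = (np.asarray(P) > 0).astype(int)
    M = len(A)
    # (I + A)^(M-1) has a positive (i, j) entry iff j is reachable from i
    R = np.linalg.matrix_power(np.eye(M, dtype=int) + A, M - 1)
    return bool((R > 0).all())

# The two-state "flip" chain is irreducible...
print(is_irreducible([[0, 1], [1, 0]]))        # True
# ...but a chain with an absorbing state is not
print(is_irreducible([[1, 0], [0.5, 0.5]]))    # False
```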
Aperiodic chains
• A state $i$ is periodic if it is only returned to at multiples of some time period > 1.
• Formally, it is periodic if there exists an integer $k > 1$ where, for all $j$:
$$p_{ii}(j) = 0 \text{ unless } j \text{ is a multiple of } k$$
• Equivalently, a state is aperiodic if there is a sufficiently large $n$ such that for all $m > n$:
$$p_{ii}(m) > 0$$
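A sketch of computing the period of a state as the gcd of the return-step counts (truncated at a finite horizon):

```python
import math
import numpy as np

def period(P, i, max_n=50):
    # gcd of all n <= max_n with p_ii(n) > 0; state i is aperiodic iff 1
    P = np.asarray(P, dtype=float)
    g, Q = 0, np.eye(len(P))
    for n in range(1, max_n + 1):
        Q = Q @ P
        if Q[i, i] > 1e-12:
            g = math.gcd(g, n)
    return g

print(period([[0, 1], [1, 0]], 0))        # 2: the periodic "flip" chain
print(period([[0.5, 0.5], [1, 0]], 0))    # 1: aperiodic
```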
A useful aperiodicity lemma
• If $P$ is irreducible and has one aperiodic state $i$ then all states are aperiodic.
• Proof: By irreducibility, for any $j$ and $k$ there exist $r, s \ge 0$ with $p_{ji}(r), p_{ik}(s) > 0$. Therefore there is an $n$ such that for all $m > n$:
$$p_{jk}(r + m + s) \ge p_{ji}(r)\,p_{ii}(m)\,p_{ik}(s) > 0$$
• And therefore all the states are aperiodic (consider $j = k$ in the above equation).
Return (Recurrence) Time
• If a chain is in state $i$, when will it next return to state $i$?
• This is known as the “return time”.
• First we must define the probability that the first return to state $i$ is after $n$ steps: $f_i(n)$.
• The probability that we ever return is:
$$f_i = \sum_{n=1}^{\infty} f_i(n)$$
• A state where $f_i = 1$ is recurrent; a state where $f_i < 1$ is called transient.
• The expectation of the return time is the “mean recurrence time” or “mean return time”:
$$M_i = \sum_{n=1}^{\infty} n\,f_i(n)$$
• $M_i = \infty$: recurrent null. $M_i < \infty$: recurrent non-null.
Return (Recurrence) Time (2)
• A finite irreducible chain is always recurrent non-null.
• In an irreducible, aperiodic Markov Chain the limiting probabilities always exist and are independent of the starting distribution. Either:
  • All states are transient or recurrent null, in which case $\pi_j = 0$ for all states and no stationary distribution exists, or
  • All states are recurrent non-null and a unique stationary distribution exists with:
$$\pi_j = \frac{1}{M_j}$$
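A quick simulation sketch of the mean return time, using an illustrative two-state chain where $\pi_0 = 1/2$, so $M_0 = 1/\pi_0 = 2$:

```python
import random

# Illustrative two-state chain; its equilibrium distribution is (1/2, 1/2)
P = [[0.5, 0.5],
     [0.5, 0.5]]

def return_time(i=0):
    # Run the chain from state i until it first comes back to i
    state, steps = i, 0
    while True:
        state = random.choices(range(len(P)), weights=P[state])[0]
        steps += 1
        if state == i:
            return steps

samples = [return_time() for _ in range(100_000)]
print(sum(samples) / len(samples))   # close to M_0 = 1/pi_0 = 2
```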
Ergodicity (summary)
• A chain which is irreducible, aperiodic and recurrent non-null is ergodic.
• If a chain is ergodic, then there is a unique invariant distribution which is equivalent to the limit:
$$\pi_j = \lim_{n\to\infty} p_{ij}(n)$$
• In Markov Chain theory, the phrases invariant, equilibrium and stationary are often used interchangeably.
Invariant Density in Periodic Chains
• It is worth noting that an irreducible, recurrent non-null chain which is periodic has a solution to the invariant density equation, but the limit distribution does not exist. Consider the two-state chain which swaps state at every step:
$$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$
• $\pi = (\tfrac{1}{2}, \tfrac{1}{2})$ solves $\pi = \pi P$.
• However, it should be clear that $\lim_{n\to\infty} \pi_0 P^n$ does not exist in general, though it may for specific starting distributions.
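A sketch showing the oscillation:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.array([0.5, 0.5]) @ P)   # (0.5, 0.5) is invariant

pi = np.array([1.0, 0.0])         # but from this start, no limit exists
for n in range(1, 5):
    pi = pi @ P
    print(n, pi)                  # alternates (0,1), (1,0), (0,1), ...
```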
Balance Equations
• Sometimes it is not practical to calculate the equilibrium probabilities using the limit.
• If a distribution is invariant then, at every iteration, the inputs to a state must add up to its starting probability.
• The inputs to a state $i$ are the probabilities $\pi_j$ of each state $j$ which leads into it, multiplied by the transition probability $p_{ji}$.
Balance Equations (2)
• More formally, if $\pi_i$ is the probability of state $i$:
$$\pi_i = \sum_j \pi_j\,p_{ji}$$
• And to ensure it is a distribution:
$$\sum_i \pi_i = 1$$
• Which, for an $n$ state chain, gives us $n+1$ equations for $n$ unknowns.
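A sketch of solving these equations directly with linear algebra (illustrative matrix; the redundancy among the $n+1$ equations is handled by a least-squares solve):

```python
import numpy as np

def equilibrium(P):
    """Solve pi = pi P together with sum(pi) = 1."""
    M = len(P)
    # pi (P - I) = 0 transposed, stacked with the normalisation row
    A = np.vstack([(P - np.eye(M)).T, np.ones(M)])
    b = np.zeros(M + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)   # M+1 equations, M unknowns
    return pi

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
pi = equilibrium(P)
print(pi, pi @ P)   # pi and pi P agree
```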
Queuing Analysis of the Leaky Bucket
• A “leaky bucket” is a mechanism for managing buffers to smooth the downstream flow.
• What is described here is what is sometimes called a “token bucket”.
• A queue holds a stock of “permits” which arrive at a rate $r$ (one every $1/r$ seconds); up to $W$ permits may be held.
• A packet cannot leave the queue if there is no permit stored.
• The idea is that the scheme limits the downstream flow but can deal with bursts of traffic.
Modelling the Leaky Bucket
• Let us assume that the arrival process is a Poisson process with rate $\lambda$.
• Consider how many packets arrive in $1/r$ seconds. The probability $a_k$ that $k$ packets arrive is:
$$a_k = \frac{(\lambda/r)^k}{k!}\,e^{-\lambda/r}$$
• [Diagram: a queue of permits (arriving every $1/r$ seconds) feeds the buffer exit; a queue of packets (Poisson arrivals) is matched with permits and leaves via the exit queue.]
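A two-line sketch of these probabilities (parameter values are illustrative):

```python
import math

lam, r = 0.5, 1.0   # illustrative packet arrival rate and permit rate
a = lambda k: math.exp(-lam/r) * (lam/r)**k / math.factorial(k)
print([round(a(k), 4) for k in range(4)])   # a_0, a_1, a_2, a_3
```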
A Markov Model
• Model this as a Markov Chain which changes state every $1/r$ seconds.
• States $0 \le i \le W$ represent no packets waiting and $W - i$ permits available. States $W + i$ (where $i \ge 1$) represent 0 permits and $i$ packets waiting.
• Transition probabilities: from state $i$ the chain moves down one with probability $a_0$, stays put with probability $a_1$ (state 0 stays put with $a_0 + a_1$), and moves up $k - 1$ with probability $a_k$ for $k \ge 2$.
• [Diagram: states $0, 1, 2, \ldots, W, W+1, \ldots$ with leftward arrows labelled $a_0$, self-loops labelled $a_1$ ($a_0 + a_1$ at state 0), and forward arrows labelled $a_2$, etc.]
Solving the Markov Model
• By solving the balance equations we get:
$$\pi_0 = (a_0 + a_1)\pi_0 + a_0\pi_1 \quad\Rightarrow\quad \pi_1 = \frac{1 - a_0 - a_1}{a_0}\,\pi_0$$
$$\pi_1 = a_2\pi_0 + a_1\pi_1 + a_0\pi_2 \quad\Rightarrow\quad \pi_2 = \frac{(1 - a_1)\pi_1 - a_2\pi_0}{a_0}$$
• Similarly, we can get an expression for $\pi_3$ in terms of $\pi_2$, $\pi_1$ and $\pi_0$. And so on...
Solving the Markov Model (2)
• Normally we would solve this using the remaining balance equation:
$$\sum_i \pi_i = 1$$
• This is difficult analytically in this case.
• Instead we note that a permit is generated every step except when we are in state 0 and no packets arrive ($W$ permits held, none used).
• This means permits are generated at a rate $(1 - \pi_0 a_0)r$.
• This must be equal to $\lambda$, since each packet gets a permit (assuming none are dropped while waiting), which gives $\pi_0 = (1 - \lambda/r)/a_0$.
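Putting the pieces together numerically (a sketch: the infinite chain is truncated at N states, with the leftover tail probability lumped into the last state; all parameter values are illustrative):

```python
import math
import numpy as np

lam, r, W = 0.5, 1.0, 3    # illustrative parameters
N = 60                     # truncation of the infinite chain

x = lam / r
a = [math.exp(-x) * x**k / math.factorial(k) for k in range(N)]

# Build the transition matrix described above
P = np.zeros((N, N))
for i in range(N):
    if i > 0:
        P[i, i - 1] = a[0]        # one permit used, no arrivals
        P[i, i] = a[1]
    else:
        P[0, 0] = a[0] + a[1]     # full bucket: spare permit discarded
    for k in range(2, min(N - i, N - 1) + 1):
        P[i, i + k - 1] += a[k]
    P[i, N - 1] += 1.0 - P[i].sum()   # lump the truncated tail mass

# Solve the balance equations plus normalisation, as earlier
A = np.vstack([(P - np.eye(N)).T, np.ones(N)])
b = np.zeros(N + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Check: the permit generation rate matches the packet arrival rate
print((1 - pi[0] * a[0]) * r, "should be close to", lam)
```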
And Finally
• The average delay for a packet to get a permit is obtained by summing, over those states with a queue, the number of iterations taken to get out of the queue from state $j$, times the time taken for each iteration of the chain ($1/r$), weighted by the amount of time spent in that state:
$$T = \sum_{j > W} (j - W)\,\frac{1}{r}\,\pi_j$$
• Of course this is not a closed-form expression. To complete this analysis, look at Bertsekas, p. 515.
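Continuing the numerical sketch above (reusing pi, W and r from it, and assuming the delay formula as reconstructed here):

```python
# Average time for an arriving packet to obtain a permit, summed over the
# states with packets queued (chain state j > W means j - W packets waiting)
T = sum((j - W) * (1.0 / r) * pi[j] for j in range(W + 1, len(pi)))
print(T)
```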