Lecture 8, March 20, 2013 (Mid-term exam on March 13). Prof. Bob Li
Chapter 5: Continuous-time Markov Chain
5.1 Formulation
Formulation

// Older knowledge

Definition. A continuous-time Markov chain (or Markov process) is a continuous-time, discrete-valued process {X(t)}_{t≥0} such that, for all s, t ≥ 0 and all states i, j,
P{X(t+s) = j | X(s) = i, X(u) = x(u) for all 0 ≤ u < s}
= P{X(t+s) = j | X(s) = i} // Memoryless transition
= P{X(t) = j | X(0) = i} // Stationary transition, i.e., independent of the time shift s
= P_ij(t), the transition probability function
Alternative formulation

// In view of the five definitions for a Poisson process, it should not
// be surprising that a continuous-time Markov chain allows two.

Alternative definition. A continuous-time Markov chain is defined in two steps.
(3) First, there is a discrete-time Markov chain with the transition probabilities P_ij such that P_ii = 0 for all i.
// This is called the imbedded discrete-time Markov chain.
// It observes the state only at the instants of changing the state.
// State-memoryless property = "don't ask me where I came from"
(4) Then, every state i is associated with a departure rate ν_i so that, whenever the process enters state i, the amount of time till the next transition is Exponential(ν_i) distributed.
// Time-memoryless property = "don't ask me how long I've been here"
// The word "rate" is often associated with exponential time because exponential
// time is memoryless and P{occurrence in Δt time} = ν_i·Δt + o(Δt).
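To make the two-step definition concrete, here is a minimal Python sketch that simulates a continuous-time Markov chain by exactly the recipe of (3) and (4): draw an Exponential(ν_i) holding time, then make an imbedded-chain transition. The 3-state rates and matrix are made-up illustration values, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain: departure rates nu_i and one-stage
# transition probabilities P_ij of the imbedded chain (P_ii = 0).
nu = np.array([1.0, 2.0, 0.5])
P  = np.array([[0.0, 0.7, 0.3],
               [0.5, 0.0, 0.5],
               [0.9, 0.1, 0.0]])

def simulate_ctmc(state, horizon):
    """Simulate {X(t)} up to time `horizon` by the two-step definition."""
    t, path = 0.0, [(0.0, state)]
    while True:
        t += rng.exponential(1.0 / nu[state])   # Exponential(nu_i) holding time
        if t > horizon:
            return path
        state = rng.choice(3, p=P[state])       # imbedded-chain transition
        path.append((t, state))

print(simulate_ctmc(0, horizon=10.0))
```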
Intuition

Approximate a continuous-time Markov chain by a discrete-time one with one transition per nanosecond. Thus the transition probabilities would be:
• p_ij, where i ≠ j, is proportional to the (small) length of a nanosecond.
• Hence p_ii is almost 1.
It makes sense to focus on those nanoseconds when the state changes, which leads to the imbedded Markov chain. Concomitant concepts include:
• From any state i, it takes a geometric time (in nanoseconds) to change into another state. This is approximately exponential at the rate ν_i, which is the departure rate (or, the transition rate) from state i.
• When the process transits out of state i, it will enter state j with the probability P_ij = p_ij / (1 − p_ii). This will be called the one-stage transition probability of the imbedded Markov chain.
The above intuition leads to two lemmas in the sequel.
Transition rate & transition probability

P_ij = one-stage transition probability in the imbedded discrete-time Markov chain
= the probability that, when the process transits out of state i, it will enter state j
// P_ij is not directly related to the transition probability function P_ij(t).
// Note that P_ii = 0, but P_ii(t) is in general positive.

ν_i = departure rate = transition rate from the state i
q_ij = ν_i·P_ij = transition rate from state i to state j
// P{transit to j in a Δt interval | now in state i}
// = P{a transition in the Δt interval; toward j | in state i}
// = (ν_i·Δt)·P_ij + o(Δt) because of independent events
// = q_ij·Δt + o(Δt)
// Clearly, ν_i = Σ_{j≠i} q_ij and P_ij = q_ij/ν_i.
Equivalence

It is only intuitive that (1), (2) ⟺ (3), (4). The opposite implication can be seen from the next two lemmas.

Lemma 6.1(a). lim_{h→0} [1 − P_ii(h)]/h = ν_i // Think of h as a nanosecond

Proof. Let the r.v. T_i denote the duration that the continuous-time Markov chain stays in state i before transiting away; T_i is Exponential(ν_i). Then
1 − P_ii(h) = P{T_i < h} + o(h) // Leaving and returning within h has probability o(h)
= 1 − e^{−ν_i·h} + o(h)
= ν_i·h + o(h) // By Taylor expansion of e^{−ν_i·h}
Dividing by h and letting h → 0 proves the lemma.
Equivalence

Lemma 6.1(b). For j ≠ i, lim_{h→0} P_ij(h)/h = q_ij = the instantaneous transition rate = rate of transiting into state j when in state i
// P{transit into state j in Δt time | in state i} = q_ij·Δt + o(Δt)

Proof.
P_ij(h) = P{T_i < h}·P_ij + o(h) // Two or more transitions within h has probability o(h)
= (ν_i·h + o(h))·P_ij + o(h)
= q_ij·h + o(h)

Corollary. At the state i, the waiting time for a transition to state j is Exponential(q_ij) distributed, except that such a transition may be preempted by a transition to some other state.
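Both lemmas can be checked numerically, assuming the standard matrix form P(t) = e^{Qt} of the transition probability function (justified by the Kolmogorov equations of Section 5.3), where the generator Q has off-diagonal entries q_ij and diagonal entries −ν_i; the Q below is a made-up example.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator Q: off-diagonal entries are q_ij, rows sum to 0,
# so nu_i = -Q[i, i].
Q = np.array([[-1.0,  0.7,  0.3],
              [ 1.0, -2.0,  1.0],
              [ 0.45, 0.05, -0.5]])

for h in (1e-1, 1e-2, 1e-3):
    P_h = expm(Q * h)                            # transition probability function P(h)
    print(h, (1 - P_h[0, 0]) / h, P_h[0, 1] / h) # -> nu_0 = 1.0 and q_01 = 0.7 as h -> 0
```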
Equivalence

The following definition recovers the imbedded discrete-time Markov chain from the continuous-time chain.

Definition. Let S_n be the n-th transition time in the continuous-time Markov chain {X(t), t ≥ 0}. The sequence of states {X(S_n)}_{n≥0} is called the (discrete-time) imbedded Markov chain, in which the transition probabilities are the one-stage transition probabilities P_ij of the continuous-time Markov chain.
// Note that X(S_{n+1}) ≠ X(S_n), since P_ii = 0.
5.2 Birth-death process
Formulation

Definition. A continuous-time Markov chain with states 0, 1, 2, … is called a birth-death (B-D) process when
P_ij ≠ 0 ⟹ j = i+1 or j = i−1

Convention. For a B-D process, abbreviate q_{i,i+1} = λ_i and q_{i,i−1} = μ_i.
Thus, ν_i = λ_i + μ_i.

In the diagram of transition rates for a B-D process, all states line up in a chain.
// Rather than the diagram of transition probabilities
Transition probabilities of B-D process

At the state n, the waiting time T_n for the next birth is exponentially distributed, and so is the waiting time X_n for the next death. // Whichever occurs first preempts the other.
Thus,
P_{n,n+1} = P{T_n < X_n} = λ_n/(λ_n + μ_n) and P_{n,n−1} = μ_n/(λ_n + μ_n)
The waiting time for a transition out of state n is min{T_n, X_n}, which is Exponential(λ_n + μ_n). // Minimum of independent exponentials is exponential.
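A quick Monte Carlo sanity check of both claims, with made-up rates λ_n and μ_n:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_n, mu_n = 2.0, 3.0                 # hypothetical birth/death rates at state n

T = rng.exponential(1/lam_n, 100_000)  # waiting time for the next birth
X = rng.exponential(1/mu_n, 100_000)   # waiting time for the next death

print((T < X).mean(), lam_n/(lam_n + mu_n))       # P_{n,n+1}: both ~ 0.4
print(np.minimum(T, X).mean(), 1/(lam_n + mu_n))  # mean of Exponential(lam+mu): ~ 0.2
```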
Typical queueing systems are B-D processes

Queueing model notation, e.g., M/M/s, M/G/∞, M/M/1/blocking, M/D/1:
• Interarrival time distribution ("1st letter"): Memoryless, General, Deterministic, Uniform, etc.
• Service time distribution ("2nd letter"): Memoryless, General, Deterministic, Uniform, etc.
• Number of servers s, where 1 ≤ s ≤ ∞.
• Queueing discipline (optional).

Assumptions and notation.
• Inter-arrival times are i.i.d. The mean is 1/λ, while the distribution depends on the "1st letter" of the queueing model. An "M" ⟹ Def-C of the Poisson process.
• Service times are i.i.d. The mean is 1/μ, while the distribution depends on the "2nd letter." An "M" means exponential service time.
• All inter-arrival times and service times are independent.
M/M/s as a B-D process

Recall the diagram of transition rates (rather than transition probabilities) for a B-D process. The customer population in an M/M/s queueing system, where 1 ≤ s ≤ ∞, is a B-D process. Let n be the number of customers in the queueing system.
• In the M/M/1 case, λ_n = λ for all n ≥ 0, and μ_n = μ for all n ≥ 1.
• For a general s, λ_n = λ for all n ≥ 0, and μ_n = min(n, s)·μ for all n ≥ 1, as in the sketch below.
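A tiny sketch of these rates (the function name mms_rates is ours):

```python
def mms_rates(n, lam, mu, s):
    """Birth and death rates of the M/M/s population process at state n."""
    birth = lam                # arrivals are Poisson(lam) regardless of n
    death = min(n, s) * mu     # only min(n, s) servers are busy
    return birth, death

print(mms_rates(0, 3.0, 1.0, 2))   # (3.0, 0.0)
print(mms_rates(5, 3.0, 1.0, 2))   # (3.0, 2.0)
```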
Pure birth process

Definition. A birth-death process is called a pure birth process when μ_i = 0 for all i ≥ 0. // Thus, ν_n = λ_n

Example. The Poisson process is the special case of a pure birth process with λ_i = λ for all i ≥ 0.
The Yule process

The Yule process is a pure birth process in which each individual in the population independently gives birth at (memoryless) rate λ. The state of the Markov chain is the population size. From the state n, the rate of transition is λ_n = nλ.

Q. Given the initial population X(0) = 1, what is the distribution of the population X(t) at time t?

Remark. Adopt the abbreviation P_n(t) = P{X(t) = n} as before. The problem is to compute P_n(t). As the distribution of X(t) results from a continuous-time process, it is only natural that the computation involves a differential equation for P_n(t). The simplest form of such an equation expresses P_n′(t) in terms of P_n(t) and P_{n−1}(t), just as in the proof of "Def-E ⟹ Def-D" for a Poisson process in Chapter 4.
The Yule process

Q. Given the initial population X(0) = 1, what is the distribution of the population X(t) at time t?

Solution.
P_n(t+h) // = P{X(t+h) = n}
= P_n(t)·(1 − nλh) + P_{n−1}(t)·(n−1)λh + o(h)
// Either no birth among n individuals, or one birth among n−1 individuals
Hence [P_n(t+h) − P_n(t)]/h = −nλ·P_n(t) + (n−1)λ·P_{n−1}(t) + o(h)/h.
The Yule process

The differential equation we arrive at is a differential-difference equation:
P_n′(t) = −nλ·P_n(t) + (n−1)λ·P_{n−1}(t)
From X(0) = 1, we have the boundary conditions: P_1(0) = 1 and P_n(0) = 0 for n ≥ 2.
Solving P_1′(t) = −λ·P_1(t) and P_1(0) = 1, we get P_1(t) = e^{−λt}.
Next, we solve the said differential-difference equation for P_2(t), P_3(t), P_4(t), P_5(t), … successively by elementary calculus. We then observe the pattern in the solution, which fortunately can be verified by easy induction on n:
P_n(t) = e^{−λt}·(1 − e^{−λt})^{n−1} for all n ≥ 1
// That is, X(t) is Geometric(e^{−λt}) distributed.
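The successive solving can be delegated to sympy: the sketch below solves the differential-difference equation for n = 2, 3, 4 with the stated boundary conditions and confirms each solution against the claimed pattern.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
lam = sp.symbols('lambda', positive=True)

P = {1: sp.exp(-lam*t)}                 # P_1(t) = e^{-lambda t}
for n in range(2, 5):
    f = sp.Function(f'P{n}')
    # P_n'(t) = -n*lam*P_n(t) + (n-1)*lam*P_{n-1}(t), with P_n(0) = 0
    ode = sp.Eq(f(t).diff(t), -n*lam*f(t) + (n - 1)*lam*P[n - 1])
    P[n] = sp.simplify(sp.dsolve(ode, f(t), ics={f(0): 0}).rhs)
    claimed = sp.exp(-lam*t) * (1 - sp.exp(-lam*t))**(n - 1)
    print(n, sp.simplify(P[n] - claimed) == 0)   # True for each n
```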
A general method for solving differential-difference equations

There is a systematic way of solving differential-difference equations: the Laplace transform (see http://www.sosmath.com/diffeq/laplace/basic/basic.html) in t:
s·P̂_n(s) − P_n(0) = −nλ·P̂_n(s) + (n−1)λ·P̂_{n−1}(s)
Solving the recursive equation P̂_n(s) = (n−1)λ·P̂_{n−1}(s)/(s + nλ), starting from P̂_1(s) = 1/(s + λ), and then taking the inverse Laplace transform recovers P_n(t).
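A sympy sketch of this Laplace-transform route: start from the transform of P_1(t), apply the s-domain recursion, then invert. (Heaviside(t) factors, equal to 1 for t > 0, may appear in sympy's output.)

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam = sp.symbols('lambda', positive=True)

Phat = 1 / (s + lam)                  # Laplace transform of P_1(t) = e^{-lambda t}
for n in range(2, 5):
    Phat = (n - 1) * lam / (s + n * lam) * Phat   # recursion in the s-domain
    Pn = sp.inverse_laplace_transform(Phat, s, t)
    print(n, sp.simplify(Pn))
```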
Linear growth model with immigration

A linear growth model with immigration means a B-D process with
λ_n = nλ + θ for all n ≥ 0 and μ_n = nμ for all n ≥ 1
The constant θ is the immigration rate. Let X(t) be the population size at time t. Assume an initial population X(0) = i. We shall calculate E[X(t)]. // The distribution of X(t) is hard to calculate.
As in the problem of the Yule process, we shall first derive a differential equation for E[X(t)]. Given X(t) = n,
E[X(t+h) | X(t) = n] = n + (nλ + θ)·h − nμ·h + o(h)
Therefore,
E[X(t+h) | X(t)] = X(t) + [(λ − μ)·X(t) + θ]·h + o(h)
Conditioning on X(t),
E[X(t+h)] = E[X(t)] + [(λ − μ)·E[X(t)] + θ]·h + o(h)
Adopt the abbreviation of m(t) = E[X(t)]. Then
m′(t) = (λ − μ)·m(t) + θ
Thus, when λ ≠ μ,
m(t) + θ/(λ−μ) = K·e^{(λ−μ)t}
The boundary condition m(0) = i gives K = i + θ/(λ−μ).
m(t) + θ/(λ−μ) = i·e^{(λ−μ)t} + e^{(λ−μ)t}·θ/(λ−μ)
On the other hand, if λ = μ, then m′(t) = θ and m(t) = θt + i.
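A numerical cross-check of the λ ≠ μ case: integrate m′(t) = (λ−μ)·m(t) + θ with scipy and compare against the closed form. The rates below are made-up illustration values.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu, theta, i0 = 2.0, 1.0, 0.5, 3     # hypothetical rates, lam != mu

sol = solve_ivp(lambda t, m: (lam - mu) * m + theta,
                t_span=(0, 2), y0=[i0], rtol=1e-10, atol=1e-12)

t_end = sol.t[-1]
closed = (i0 + theta/(lam - mu)) * np.exp((lam - mu) * t_end) - theta/(lam - mu)
print(sol.y[0, -1], closed)               # the two values should agree
```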
5.3 The transition probability function

The goal of this section is to calculate P_ij(t). The approach is to derive a differential equation for P_ij(t) from the continuous-time C-K equation.
Lecture 9, March 27, 2013
Continuous-time C-K equation

One way to state the C-K equation for a Markov chain is:
P{move from i to j in t+s steps}
= Σ_k P{move from i to k in t steps}·P{move from k to j in s steps}

Lemma 6.2 (Continuous-time Chapman-Kolmogorov equation). For all t, s ≥ 0,
P_ij(t+s) = Σ_k P_ik(t)·P_kj(s)

Proof. Start in state i at time 0. Calculate the probability of the state j at time t+s by conditioning on the state at the time t:
P_ij(t+s) = Σ_k P{X(t+s) = j | X(t) = k, X(0) = i}·P{X(t) = k | X(0) = i}
= Σ_k P_kj(s)·P_ik(t) // Stationary memoryless transition

We can differentiate P_ij(t+s) in the C-K equation with respect to either t or s, resulting in two different formulas for P_ij′(t), called Kolmogorov's Backward Equation and Forward Equation, respectively.
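Numerically, the C-K equation is the semigroup property P(t+s) = P(t)·P(s) of the matrix family P(t) = e^{Qt} (anticipating the Kolmogorov equations below); a sketch with a made-up generator Q:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator Q (rows sum to 0); P(t) = expm(Q t) must satisfy C-K.
Q = np.array([[-1.0,  0.7,  0.3],
              [ 1.0, -2.0,  1.0],
              [ 0.45, 0.05, -0.5]])

t_, s_ = 0.8, 1.3
lhs = expm(Q * (t_ + s_))             # P(t+s)
rhs = expm(Q * t_) @ expm(Q * s_)     # sum_k P_ik(t) P_kj(s), as a matrix product
print(np.allclose(lhs, rhs))          # True
```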
Kolmogorov's Backward Equation

Theorem 6.1 (Kolmogorov's Backward Equation). For t ≥ 0 and for all states i, j,
P_ij′(t) = Σ_{k≠i} q_ik·P_kj(t) − ν_i·P_ij(t)
// For an n-state chain, the Backward Equation
// means a system of n² differential equations for
// the n² functions P_ij(t).

Proof.
P_ij(t+h) = Σ_k P_ik(h)·P_kj(t) // Replace the parameter t in the C-K equation by the infinitesimal amount h
= Σ_{k≠i} (q_ik·h + o(h))·P_kj(t) + (1 − ν_i·h + o(h))·P_ij(t) // By Lemmas 6.1(b) and (a)
Hence [P_ij(t+h) − P_ij(t)]/h = Σ_{k≠i} q_ik·P_kj(t) − ν_i·P_ij(t) + o(h)/h.
Letting h → 0 yields the theorem.
// Interchanging the limit with the summation here can be justified by math.
Backward Equation in the case of B-D

P_ij′(t) = λ_i·P_{i+1,j}(t) + μ_i·P_{i−1,j}(t) − (λ_i + μ_i)·P_ij(t)
// Instantaneous transition rate q_ik = 0 unless k = i ± 1
// ν_i = λ_i + μ_i with the convention of μ_0 = 0

In the special case of a pure birth process, including the Poisson process,
P_ij′(t) = λ_i·P_{i+1,j}(t) − λ_i·P_ij(t)
// ν_i = λ_i in pure birth
Backward Equation in the case of B-D

For an n-state chain, the Backward Equation means a system of n² differential equations for the n² functions P_ij(t). The n² boundary conditions are:
• P_ii(0) = 1 for all i
• P_ij(0) = 0 for i ≠ j
There is no guarantee that the solution for P_ij(t) can be in a closed form. Below we show that a closed form exists for the 2-state chain.
Example of on-off alternation
// State 1 = on, state 0 = off

With just two states, labeled 0 and 1, the process is B-D. Let λ = q_01 be the rate of turning on and μ = q_10 the rate of turning off.
Given the initial state 0, we want to calculate the probability P_00(t) of the off state at time t. So we shall first derive a differential equation for P_00(t). The Backward Equation in this case is reduced to
P_00′(t) = λ·[P_10(t) − P_00(t)], with P_00(0) = 1
P_10′(t) = μ·[P_00(t) − P_10(t)], with P_10(0) = 0
Example of on-off alternation

We now solve the first two differential equations together.
d/dt [μ·P_00(t) + λ·P_10(t)] = μ·P_00′(t) + λ·P_10′(t) = 0
⟹ μ·P_00(t) + λ·P_10(t) = constant
= μ // By the boundary conditions at t = 0
⟹ P_10(t) = (μ/λ)·[1 − P_00(t)]
⟹ P_00′(t) = μ − (λ+μ)·P_00(t)
Using the boundary conditions again, the solution is:
P_00(t) = μ/(λ+μ) + λ/(λ+μ)·e^{−(λ+μ)t} and
P_10(t) = μ/(λ+μ) − μ/(λ+μ)·e^{−(λ+μ)t}
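A sanity check of this closed form against the matrix exponential of the 2-state generator, with made-up λ and μ:

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 2.0, 3.0                    # hypothetical on-rate (0->1) and off-rate (1->0)
Q = np.array([[-lam, lam],
              [ mu, -mu]])

for t_ in (0.1, 0.5, 2.0):
    P_t = expm(Q * t_)
    closed = mu/(lam + mu) + lam/(lam + mu) * np.exp(-(lam + mu) * t_)
    print(P_t[0, 0], closed)          # the two columns should match
```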
Kolmogorov's Forward Equation

Theorem 6.2 (Kolmogorov's Forward Equation). For t ≥ 0 and for all states i, j,
P_ij′(t) = Σ_{k≠j} P_ik(t)·q_kj − ν_j·P_ij(t)
under suitable regularity conditions.
Forward Equation in the B-D case

When j ≥ 1,
P_ij′(t) = λ_{j−1}·P_{i,j−1}(t) + μ_{j+1}·P_{i,j+1}(t) − (λ_j + μ_j)·P_ij(t)
and P_i0′(t) = μ_1·P_i1(t) − λ_0·P_i0(t).

The special case of a pure birth process, including the Poisson process:
When j ≥ 1,
P_ij′(t) = λ_{j−1}·P_{i,j−1}(t) − λ_j·P_ij(t)
and P_i0′(t) = −λ_0·P_i0(t).
// P_ij(t) = 0 when j < i in a pure birth process.
5.4 Limiting probability
Ergodic continuous-time Markov chain

Definition. A continuous-time Markov chain is said to be ergodic when, for every state j, the limit lim_{t→∞} P_ij(t) exists and is independent of the initial state i. In that case, we write P_j = lim_{t→∞} P_ij(t), which is called the limiting probability of the ergodic continuous-time Markov chain. It is analogous to the limiting probability π_j of an ergodic discrete-time Markov chain.
Just as P_ij(t) is not directly related to the one-stage transition probability P_ij, neither is P_j.

Definition. A continuous-time Markov chain is irreducible if the imbedded chain is (i.e., when all states in the imbedded chain communicate with each other). An irreducible continuous-time Markov chain is positive recurrent when, starting in any state, the mean (continuous) time to return is finite.
Facts

• When a continuous-time Markov chain is positive recurrent, so is the imbedded discrete-time Markov chain.
• If a continuous-time Markov chain is irreducible and positive recurrent, then it is ergodic. In that case, the imbedded discrete-time Markov chain has a unique long-run distribution {π_j}, which is the solution to
π_j = Σ_k π_k·P_kj for all j and Σ_j π_j = 1
Flow conservation law of limiting probabilities

Being a probability, the function P_ij(t) is bounded between 0 and 1 for all t. Hence lim_{t→∞} P_ij′(t) = 0.
As t → ∞, the Forward Equation
P_ij′(t) = Σ_{k≠j} P_ik(t)·q_kj − ν_j·P_ij(t) for all j
becomes
ν_j·P_j = Σ_{k≠j} P_k·q_kj for all j.
This equation is interpreted as the flow conservation law because:
• On the left-hand side, ν_j·P_j = rate of emigration from state j.
// Think of population migration among different towns.
// ν_j = rate at which the process leaves state j when in j.
• On the right-hand side, Σ_{k≠j} P_k·q_kj = rate of immigration into state j.
Flow conservation law of limiting probabilities

The flow conservation law ν_j·P_j = Σ_{k≠j} P_k·q_kj, together with the normalization Σ_j P_j = 1, determines P_j for all j. This is in analogy with the determination of the long-run distribution {π_j} of the imbedded Markov chain by
π_j = Σ_{k≠j} π_k·P_kj and Σ_j π_j = 1
π_j means the proportion of transitions into state j, and the average time the continuous-time Markov chain stays in j per visit is 1/ν_j. It is therefore only intuitive that P_j is proportional to π_j/ν_j:
P_j = (π_j/ν_j) / Σ_k (π_k/ν_k)
Flow conservation law of limiting probabilities

Proposition. P_j is proportional to π_j/ν_j and hence P_j = (π_j/ν_j) / Σ_k (π_k/ν_k).
Proof. Use the flow conservation law
P_j·ν_j = Σ_{k≠j} P_k·q_kj // Emigration rate = immigration rate
= Σ_{k≠j} (P_k·ν_k)·P_kj // While q_kj = ν_k·P_kj
Compare with
π_j = Σ_{k≠j} π_k·P_kj // While P_jj = 0
We find that {π_j}_j and {P_j·ν_j}_j are determined by the same set of equations except for normalization. Thus P_j·ν_j is proportional to π_j.
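A numerical illustration of the Proposition with a made-up ergodic generator: recover π of the imbedded chain, normalize π_j/ν_j, and compare with the limiting probabilities obtained directly from the flow conservation law.

```python
import numpy as np

# Hypothetical generator of an ergodic 3-state chain.
Q = np.array([[-1.0,  0.7,  0.3],
              [ 1.0, -2.0,  1.0],
              [ 0.45, 0.05, -0.5]])

nu = -np.diag(Q)                      # departure rates nu_i
P_emb = Q / nu[:, None]               # one-stage transition probabilities ...
np.fill_diagonal(P_emb, 0.0)          # ... with P_ii = 0

# Long-run distribution pi of the imbedded chain: pi = pi P_emb, sum(pi) = 1.
A = np.vstack([(P_emb - np.eye(3)).T, np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0, 0, 0, 1.0]), rcond=None)[0]

P_lim = (pi / nu) / (pi / nu).sum()   # the Proposition: P_j proportional to pi_j/nu_j

# Cross-check against the flow conservation law, i.e., P Q = 0, solved directly.
B = np.vstack([Q.T, np.ones(3)])
P_direct = np.linalg.lstsq(B, np.array([0, 0, 0, 1.0]), rcond=None)[0]
print(np.allclose(P_lim, P_direct))   # True
```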
Limiting probabilities for B-D process

We now calculate explicitly the solution for P_j when the Markov process is a birth-death process. Since the transition diagram places all states on a line, the flow conservation laws
(λ_j + μ_j)·P_j = λ_{j−1}·P_{j−1} + μ_{j+1}·P_{j+1} for all j ≥ 1, and λ_0·P_0 = μ_1·P_1
can be simplified into the equivalent form of
λ_n·P_n = μ_{n+1}·P_{n+1} for all n ≥ 0
// Balanced flow across every cut between states n and n+1
Limiting probabilities for B-D process

Thus,
P_{n+1} = (λ_n/μ_{n+1})·P_n, and hence
P_n = P_0·(λ_0·λ_1⋯λ_{n−1})/(μ_1·μ_2⋯μ_n) for all n ≥ 1
Since Σ_n P_n = 1, // Assuming the system is ergodic
P_0 = [1 + Σ_{n≥1} (λ_0·λ_1⋯λ_{n−1})/(μ_1·μ_2⋯μ_n)]^{−1} // The normalizer
Limiting probabilities for M/M/1 queue, a special B-D

For M/M/1, we have λ_n = λ and μ_n = μ for all n.
Write ρ = λ/μ. Then
P_n = ρ^n·P_0 and P_0 = 1 − ρ when ρ < 1
We arrive at the Geometric_0(1−ρ) distribution:
P_n = ρ^n·(1−ρ) for all n ≥ 0 when ρ < 1
// Repeatedly toss a biased coin with probability ρ for Head.
// There will be Geometric_0(1−ρ) Heads before the first Tail.
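A sketch implementing the product formula for a truncated B-D process and checking the M/M/1 geometric answer; bd_limiting_probs is our own helper name.

```python
import numpy as np

def bd_limiting_probs(lam, mu, N):
    """Truncated B-D limiting distribution via P_n = P_0 * prod lam_i / mu_{i+1}.

    lam, mu: functions giving birth rate lam_n (n >= 0) and death rate mu_n (n >= 1).
    N: truncation level (the tail must be negligible for this to be accurate).
    """
    w = np.ones(N + 1)
    for n in range(1, N + 1):
        w[n] = w[n - 1] * lam(n - 1) / mu(n)
    return w / w.sum()                 # normalization plays the role of P_0

# M/M/1 with rho = 0.5: the result should match the Geometric_0(1 - rho) law.
lam_, mu_ = 1.0, 2.0
P = bd_limiting_probs(lambda n: lam_, lambda n: mu_, N=200)
rho = lam_ / mu_
print(P[:4])
print([(rho**n) * (1 - rho) for n in range(4)])
```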
5.5 Time reversed chain & time reversible chain
Time reversed chain

Proposition. Consider an ergodic Markov chain that has been in operation for a long time. At a large time T we trace the process going back in time. Thus, let Y(t) = X(T−t) for 0 ≤ t ≤ T. Then, the reverse process {Y(t)}_{0≤t≤T} is a continuous-time Markov chain with
• the one-stage transition probabilities Q_ij = π_j·P_ji/π_i and
• the same transition rates ν_j as in the forward-time process.
// Recall the alternative def. of a continuous-time chain by (3), (4).

Proof. First, the imbedded discrete-time chain yields the one-stage transition probabilities Q_ij = π_j·P_ji/π_i of the reversed imbedded chain. Let X(t) = j, where t is very large. It remains to show that the backward process remains in state j for an Exponential(ν_j) time, or, in other words,
the age of the current stay at the state j is Exponential(ν_j) distributed:
P{(age of the current stay at the state j) > s | X(t) = j}
= P{process is in state j throughout [t−s, t] | X(t) = j}
= P{X(t−s) = j}·P{no transition in [t−s, t] | X(t−s) = j} / P{X(t) = j}
= P{X(t−s) = j}·e^{−ν_j·s} / P{X(t) = j}
→ P_j·e^{−ν_j·s} / P_j // As both t → ∞ and t−s → ∞
= e^{−ν_j·s}
Time reversibility

Definition. An ergodic continuous-time Markov chain is time reversible if the process reversed in time has the same probabilistic structure as the original process, that is, they share
• the same one-stage transition probabilities P_ij and
// i.e., the imbedded discrete-time chain is time reversible
• the same transition rates ν_j. // True from the preceding Proposition

Thus, an ergodic continuous-time chain is time reversible
⟺ P_ij = Q_ij for all i, j
⟺ π_i·P_ij = π_j·P_ji for all i, j
⟺ (P_i·ν_i)·P_ij = (P_j·ν_j)·P_ji for all i, j // π_i is proportional to P_i·ν_i
⟺ P_i·q_ij = P_j·q_ji for all i, j // ν_i·P_ij = q_ij
⟺ direct-flow rate from i to j = direct-flow rate from j to i
// i.e., balanced flow between any two states i and j (see the check below)
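The last criterion, P_i·q_ij = P_j·q_ji, is easy to test numerically. The sketch below (is_time_reversible is our own helper) confirms it for a made-up B-D generator, which is reversible by Proposition 6.3 on a later slide, and rejects a one-way 3-state cycle.

```python
import numpy as np

def is_time_reversible(Q, tol=1e-10):
    """Check P_i q_ij = P_j q_ji, where P solves the flow conservation law P Q = 0."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    P = np.linalg.lstsq(A, b, rcond=None)[0]   # limiting probabilities
    F = P[:, None] * Q                         # off-diagonal F[i, j] = P_i q_ij
    return np.allclose(F, F.T, atol=tol)       # balanced flow <=> F symmetric

# A B-D chain on {0, 1, 2} is reversible; a one-way 3-state cycle is not.
Q_bd  = np.array([[-1.0, 1.0, 0.0], [2.0, -3.0, 1.0], [0.0, 2.0, -2.0]])
Q_cyc = np.array([[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0], [1.0, 0.0, -1.0]])
print(is_time_reversible(Q_bd), is_time_reversible(Q_cyc))   # True False
```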
Time reversibility

For a Markov chain in the steady state (i.e., the initial distribution = the long-run distribution), the total rate for the process to exit from any state is equal to the total rate of entering that state. An analogy is the balanced foreign trade of a nation. However, nation i may have a trade surplus against nation j and, at the same time, a trade deficit against nation k. The law of balanced flow for a time-reversible Markov chain is the analogy of a stronger statement: the trade between any two nations is balanced.
Ergodic B-D process is time reversible

Proposition 6.3. An ergodic B-D process is time reversible.
Proof. The transition diagram of transition rates places all states on a line. Hence the imbedded chain is time reversible.
// On a line, successive crossings of the cut between states i and i+1 alternate in
// direction, so the long-run rates of i → i+1 and i+1 → i transitions are equal.
Remark. We can also verify P_i·q_ij = P_j·q_ji directly as follows. For j = i+1, we have q_ij = λ_i and q_ji = μ_j in the birth-death process. Thus
P_i·q_ij = λ_i·P_i = μ_{i+1}·P_{i+1} = P_j·q_ji // By the simplified flow conservation law
Burke's Theorem on M/M/s

// Burke was a medical doctor.
Corollary 6.4 (Burke's Theorem). If λ < sμ, then the stationary output process of M/M/s is a Poisson process with intensity λ.
Proof. Let X(t) denote the number of customers in the system at time t. Recall that an M/M/s (1 ≤ s ≤ ∞) queueing system {X(t)}_t is a B-D process. Because λ < sμ, this M/M/s B-D process can be proven to be an ergodic continuous-time Markov chain, which is time reversible by Proposition 6.3.
The time-reversed process has the same probabilistic structure as the original process. That is, they have the same transition rates ν_j and the same one-stage transition probabilities P_ij. // Imagine M/M/s as a video playing backwards.
Departure times of the stationary forward process
= arrival times of the backward process
= same probabilistic structure as arrival times of the (stationary) forward process // Time reversibility
= a Poisson process with intensity λ
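A simulation sketch of Burke's Theorem for M/M/1 (made-up rates; this only sanity-checks the mean inter-departure time, which should be 1/λ in steady state):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu = 1.0, 2.0                    # hypothetical M/M/1 rates, lam < mu

# Event-driven simulation of M/M/1; record departure times.
t, n, departures = 0.0, 0, []
for _ in range(200_000):
    if n == 0:
        t += rng.exponential(1/lam); n = 1           # next event must be an arrival
    else:
        t += rng.exponential(1/(lam + mu))           # next event: arrival or departure
        if rng.random() < mu/(lam + mu):
            n -= 1; departures.append(t)
        else:
            n += 1

gaps = np.diff(departures[10_000:])   # discard the transient, keep inter-departures
print(gaps.mean(), 1/lam)             # Burke: inter-departure times ~ Exp(lam)
```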