CS433: Modeling and Simulation
Lecture 10: Discrete Time Markov Chains
Dr. Shafique Ahmad Chaudhry
Department of Computer Science
E-mail: hazrat.shafique@gmail.com
Office # GR-02-C, Tel # 2581328
Markov Processes
• Stochastic Process: X(t) is a random variable that varies with time.
• A state of the process is a possible value of X(t).
• Markov Process:
  • The future of a process does not depend on its past, only on its present.
  • A Markov process is a stochastic (random) process in which the probability distribution of the current value is conditionally independent of the series of past values, a characteristic called the Markov property.
  • Markov property: the conditional probability distribution of future states of the process, given the present state and all past states, depends only upon the present state and not on any past states.
• Markov Chain: a discrete-time stochastic process with the Markov property.
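As a concrete illustration, here is a minimal Python sketch of simulating a discrete-time Markov chain; the matrix P and the seed are invented for illustration. Note that each step samples the next state using only the current state's row of P, which is exactly the Markov property.

import numpy as np

# Hypothetical 3-state transition matrix; each row sums to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

rng = np.random.default_rng(seed=0)

def simulate(P, x0, n_steps):
    """Simulate n_steps of a DTMC started at state x0.

    The next state is drawn using only the current state's row:
    the past trajectory plays no role (Markov property)."""
    states = [x0]
    for _ in range(n_steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

print(simulate(P, x0=0, n_steps=10))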
Classification of States: 1
• A path is a sequence of states, where each transition has a positive probability of occurring.
• State j is reachable (or accessible) from state i (i → j) if there is a path from i to j; equivalently, P_ij(n) > 0 for some n ≥ 0, i.e., the probability of going from i to j in n steps is greater than zero.
• States i and j communicate (i ↔ j) if i is reachable from j and j is reachable from i. (Note: a state i always communicates with itself.)
• A set of states C is a communicating class if every pair of states in C communicates with each other, and no state in C communicates with any state not in C.
Classification of States: 1
• A state i is said to be an absorbing state if p_ii = 1.
• A subset S of the state space X is a closed set if no state outside of S is reachable from any state in S (like an absorbing state, but with multiple states); this means p_ij = 0 for every i ∈ S and j ∉ S.
• A closed set S of states is irreducible if any state j ∈ S is reachable from every state i ∈ S.
• A Markov chain is said to be irreducible if the state space X is irreducible.
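These definitions can be checked mechanically from the transition matrix. Below is a small sketch (the matrix is an invented example) that computes the set of states reachable from a given state by breadth-first search over the edges with p_ij > 0, and uses it to test whether a set of states is closed.

import numpy as np
from collections import deque

def reachable_from(P, i):
    """Return the set of states reachable from i (including i itself)."""
    seen = {i}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        for v in np.nonzero(P[u] > 0)[0]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_closed(P, S):
    """S is closed if no state outside S is reachable from inside S."""
    return all(reachable_from(P, i) <= set(S) for i in S)

# Hypothetical chain with an absorbing state 2 (p_22 = 1).
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.0, 1.0]])
print(is_closed(P, {2}))     # True: {2} is closed (absorbing state)
print(is_closed(P, {0, 1}))  # False: state 2 is reachable from state 1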
Example
[Figure: two state-transition diagrams.
Irreducible Markov Chain: states 0, 1, 2 with transitions p00, p01, p10, p12, p21, p22; every state is reachable from every other state.
Reducible Markov Chain: states 0-4 with transitions p00, p01, p10, p12, p14, p22, p23, p32, p33; state 4 is an absorbing state and {2, 3} is a closed irreducible set.]
Classification of States: 2
• State i is a transient state if there exists a state j such that j is reachable from i but i is not reachable from j.
• A state that is not transient is recurrent. There are two types of recurrent states:
  • Positive recurrent: the expected time to return to the state is finite.
  • Null recurrent (less common): the expected time to return to the state is infinite (this requires an infinite number of states).
• A state i is periodic with period k > 1 if k is the largest number such that all paths leading from state i back to state i have a multiple of k transitions. A state is aperiodic if it has period k = 1.
• A state is ergodic if it is positive recurrent and aperiodic.
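The period of a state can be computed numerically as the gcd of the step counts n for which P_ii(n) > 0. A minimal sketch, assuming the cutoff n_max is large enough to expose all possible return lengths (for general float matrices a small tolerance may be preferable to a strict > 0 test):

import numpy as np
from math import gcd
from functools import reduce

def period(P, i, n_max=50):
    """gcd of all n <= n_max with P^n[i, i] > 0 (0 if no return seen)."""
    returns = []
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            returns.append(n)
    return reduce(gcd, returns, 0)

# Deterministic 2-cycle: returns to state 0 only in even numbers of steps.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period(P, 0))  # 2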
Classification of States: 2
Example from the book Introduction to Probability: Lecture Notes, D. Bertsekas and J. Tsitsiklis, Fall 2000.
Transient and Recurrent States
• We define the hitting time T_ij as the random variable that represents the time (number of transitions) to go from state i to state j:
  T_ij = min{ k > 0 : X_k = j, given X_0 = i }
• We define the recurrence time T_ii as the first time that the Markov chain returns to state i:
  T_ii = min{ k > 0 : X_k = i, given X_0 = i }
• The probability that the first recurrence to state i occurs at the nth step is
  f_i(n) = Pr{ T_ii = n }
• The probability of recurrence to state i is
  f_i = Pr{ T_ii < ∞ } = Σ_{n≥1} f_i(n)
Transient and Recurrent States
• The mean recurrence time is
  M_i = E[T_ii] = Σ_{n≥1} n f_i(n)
• A state is recurrent if f_i = 1.
  • If M_i < ∞ then it is said to be positive recurrent.
  • If M_i = ∞ then it is said to be null recurrent.
• A state is transient if f_i < 1.
  • If f_i < 1, then 1 − f_i is the probability of never returning to state i.
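Both quantities can be estimated by simulation: start the chain at state i, run until it returns, and average over many runs. A minimal sketch, assuming an illustrative two-state matrix and a finite cutoff (which slightly underestimates f_i for chains with very long excursions):

import numpy as np

rng = np.random.default_rng(seed=1)

def estimate_recurrence(P, i, n_runs=10_000, cutoff=1_000):
    """Estimate f_i (return probability) and M_i (mean return time)."""
    times = []
    for _ in range(n_runs):
        x = i
        for t in range(1, cutoff + 1):
            x = rng.choice(len(P), p=P[x])
            if x == i:
                times.append(t)
                break
    f_i = len(times) / n_runs                  # fraction of runs that returned
    M_i = np.mean(times) if times else np.inf  # mean time among returning runs
    return f_i, M_i

P = np.array([[0.5, 0.5],
              [0.4, 0.6]])
print(estimate_recurrence(P, 0))  # f_0 close to 1, M_0 close to 2.25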
Transient and Recurrent States
• We define N_i as the number of visits to state i given X_0 = i.
• Theorem: If N_i is the number of visits to state i given X_0 = i, then
  E[N_i] = Σ_{n≥0} P_ii(n) = 1 / (1 − f_i)
  where P_ii(n) is the transition probability from state i to state i after n steps.
• Proof sketch: E[N_i] = E[Σ_{n≥0} 1{X_n = i}] = Σ_{n≥0} P_ii(n); also, each visit to i is followed by another return with probability f_i, so N_i is geometric with success probability 1 − f_i.
Transient and Recurrent States
• The probability of reaching state j for the first time in n steps starting from X_0 = i is
  f_ij(n) = Pr{ T_ij = n }
• The probability of ever reaching j starting from state i is
  f_ij = Pr{ T_ij < ∞ } = Σ_{n≥1} f_ij(n)
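The first-passage probabilities satisfy a simple recursion: a first visit to j in n steps means the first transition goes to some intermediate state k ≠ j, followed by a first passage from k to j in n − 1 steps, i.e., f_ij(1) = p_ij and f_ij(n) = Σ_{k≠j} p_ik f_kj(n−1). A sketch with an illustrative matrix:

import numpy as np

def first_passage(P, n_max):
    """F[n][i, j] = probability of reaching j for the FIRST time in n steps from i."""
    m = len(P)
    F = {1: P.copy()}
    for n in range(2, n_max + 1):
        F[n] = np.zeros((m, m))
        for j in range(m):
            mask = np.arange(m) != j        # exclude paths that hit j early
            F[n][:, j] = P[:, mask] @ F[n - 1][mask, j]
    return F

P = np.array([[0.5, 0.5],
              [0.4, 0.6]])
F = first_passage(P, 20)
print(sum(F[n][0, 1] for n in F))  # f_01: approaches 1 for this chain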
Three Theorems
• If a Markov chain has a finite state space, then at least one of the states is recurrent.
• If state i is recurrent and state j is reachable from state i, then state j is also recurrent.
• If S is a finite closed irreducible set of states, then every state in S is recurrent.
Positive and Null Recurrent States
• Let M_i be the mean recurrence time of state i.
• A state is said to be positive recurrent if M_i < ∞. If M_i = ∞ then the state is said to be null recurrent.
Three Theorems
• If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.
• If S is a closed irreducible set of states, then every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.
• If S is a finite closed irreducible set of states, then every state in S is positive recurrent.
Example
[Figure: the reducible chain from the earlier example. States 0 and 1 (transitions p00, p01, p10, p12, p14) are transient states; states 2 and 3 (transitions p22, p23, p32, p33) form a closed irreducible set of positive recurrent states; state 4 is an absorbing, hence recurrent, state.]
Periodic and Aperiodic States
• Suppose that the structure of the Markov chain is such that state i is visited after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d.
• If no such integer exists (i.e., d = 1), then the state is called aperiodic.
Example
[Figure: a three-state chain with transitions 0 → 1 (prob. 1), 1 → 0 (prob. 0.5), 1 → 2 (prob. 0.5), 2 → 1 (prob. 1); each state is periodic with period d = 2.]
Steady State Analysis
• Recall that the state probability, i.e., the probability of finding the Markov chain at state j after the kth step, is given by
  π_j(k) = Pr{ X_k = j }
• An interesting question is what happens in the "long run":
  π_j = lim_{k→∞} π_j(k)
• This is referred to as the steady state, or equilibrium, or stationary state probability.
• Questions:
  • Do these limits exist?
  • If they exist, do they converge to a legitimate probability distribution, i.e., Σ_j π_j = 1?
  • How do we evaluate π_j for all j?
Multi-step (t-step) Transitions
Example: Tax auditing problem. Assume that whether a taxpayer is audited by the tax department in year n + 1 depends only on whether he was audited in the previous year.
• If he is not audited in year n, he will not be audited with prob. 0.6, and will be audited with prob. 0.4.
• If he is audited in year n, he will be audited with prob. 0.5, and will not be audited with prob. 0.5.
How do we model this problem as a stochastic process?
The Tax Auditing Example
• State space: two states, s0 = 0 (no audit) and s1 = 1 (audit).
• Transition matrix: P gives the probabilities of transition in one step:
      P = | 0.6  0.4 |
          | 0.5  0.5 |
• How do we calculate the probabilities for transitions involving more than one step?
• Notice: p01 = 0.4 is the conditional probability of an audit next year given no audit this year, i.e., p01 = Pr(X_1 = 1 | X_0 = 0).
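In code, the two-step transition probabilities are simply the matrix product P·P. A quick check for the tax example (variable names are illustrative):

import numpy as np

# Tax auditing example: state 0 = no audit, state 1 = audit.
P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

P2 = P @ P  # two-step transition matrix P(2)
print(P2)
# P2[0, 1] = 0.6*0.4 + 0.4*0.5 = 0.44: probability of an audit
# two years from now, given no audit this year.
print(P2[0, 1])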
n-Step Transition Probabilities
• This idea generalizes to an arbitrary number of steps. In matrix form:
  P(2) = P·P = P², P(3) = P(2)·P = P³, or more generally P(n) = P(m)·P(n−m) = P^n
• The ij'th entry of this reduces to the Chapman-Kolmogorov equations:
  P_ij(n) = Σ_k P_ik(m) P_kj(n−m), for any 1 ≤ m ≤ n−1, summing over all states k
• "The probability of going from i to k in m steps, and then going from k to j in the remaining n−m steps, summed over all possible intermediate states k."
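A quick numeric sanity check of the Chapman-Kolmogorov equations for the tax matrix above: P(5) computed directly as the fifth matrix power must equal P(2)·P(3).

import numpy as np

P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

P5 = np.linalg.matrix_power(P, 5)                               # P(5)
P2_P3 = np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3)
print(np.allclose(P5, P2_P3))  # True: P(n) = P(m) P(n-m)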
Steady-State Solutions – n Steps
What happens as n gets large?
Steady State Transition Probability
• Observation: as n gets large, the rows of the matrix P(n) become identical, i.e., they asymptotically approach a steady-state value.
• What does it mean? The probability of being in any future state becomes independent of the initial state as time progresses:
  π_j = lim_{n→∞} Pr{ X_n = j | X_0 = i } = lim_{n→∞} p_ij(n), for all i and j
• These asymptotic values are called the steady-state probabilities.
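This is easy to see numerically for the tax example: after a moderate number of steps, the two rows of P^n agree to several decimal places, so the starting state no longer matters.

import numpy as np

P = np.array([[0.6, 0.4],
              [0.5, 0.5]])

for n in (1, 4, 16):
    print(n, np.linalg.matrix_power(P, n))
# Both rows converge to (5/9, 4/9) ~ (0.5556, 0.4444):
# the steady-state probabilities of "no audit" and "audit".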
Steady State Analysis
• Recall the recursive probability
  π(k+1) = π(k) P
• If a steady state exists, then π(k+1) = π(k) = π, and therefore the steady-state probabilities are given by the solution to the equations
  π = πP and Σ_j π_j = 1
• Even in an irreducible Markov chain, the presence of periodic states prevents the existence of a steady-state probability.
Steady State Analysis
• THEOREM: In an irreducible aperiodic Markov chain consisting of positive recurrent states, a unique stationary state probability vector π exists such that π_j > 0 and
  π_j = lim_{n→∞} p_ij(n) = 1 / M_j
  where M_j is the mean recurrence time of state j.
• The steady-state vector π is determined by solving
  π = πP and Σ_j π_j = 1
• Such a chain is called an ergodic Markov chain.
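In practice π is obtained by solving the linear system π = πP together with the normalization Σ_j π_j = 1, for example by replacing one redundant balance equation with the normalization constraint. A sketch for the tax example:

import numpy as np

P = np.array([[0.6, 0.4],
              [0.5, 0.5]])
m = len(P)

# pi P = pi is equivalent to (P^T - I) pi^T = 0; overwrite the last
# (redundant) equation with the normalization sum(pi) = 1.
A = P.T - np.eye(m)
A[-1, :] = 1.0
b = np.zeros(m)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)  # [5/9, 4/9] ~ [0.5556, 0.4444]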
Comments on Steady-State Results
1. Steady-state probabilities might not exist unless the Markov chain is ergodic.
2. Steady-state predictions are never achieved in actuality, due to a combination of (i) errors in estimating P, (ii) changes in P over time, and (iii) changes in the nature of dependence relationships among the states. Nevertheless, the use of steady-state values is an important diagnostic tool for the decision maker.
Interpretation of Steady-State Conditions
• Just because an ergodic system has steady-state probabilities does not mean that the system "settles down" into any one state: π_j is simply the likelihood of finding the system in state j after a large number of steps.
• The limiting probability π_j that the process is in state j after a large number of steps also equals the long-run proportion of time that the process will be in state j.
• When the Markov chain is finite, irreducible, and periodic, we still have the result that the π_j, j ∈ S, uniquely solve the steady-state equations, but now π_j must be interpreted as the long-run proportion of time that the chain is in state j.
Discrete Birth-Death Example
[Figure: a birth-death chain on states 0, 1, …, i, …; from each state the chain moves up one state with probability 1−p and down one state with probability p, with a self-loop of probability p at state 0.]
• Thus, to find the steady-state vector π we need to solve
  π = πP and Σ_j π_j = 1
Discrete Birth-Death Example
• In other words, balancing the probability flow across the cut between states j and j+1 gives
  (1−p) π_j = p π_{j+1}, for j = 0, 1, 2, …
• Solving these equations we get
  π_1 = ((1−p)/p) π_0
• In general,
  π_j = ((1−p)/p)^j π_0
• Summing all terms we get
  Σ_{j≥0} π_j = π_0 Σ_{j≥0} ((1−p)/p)^j = 1
Discrete Birth-Death Example
• Therefore, letting ρ = (1−p)/p, for all states j we get
  π_j = (1 − ρ) ρ^j, which is a valid distribution only when ρ < 1, i.e., p > 1/2
• If p < 1/2, then all states are transient.
• If p > 1/2, then all states are positive recurrent.
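As a quick numeric check of the geometric form (p = 0.6 is an arbitrary choice with p > 1/2): the probabilities should sum to 1 and satisfy the cut balance equations.

import numpy as np

p = 0.6                    # arbitrary choice with p > 1/2
rho = (1 - p) / p
j = np.arange(50)
pi = (1 - rho) * rho**j    # pi_j = (1 - rho) * rho^j

print(pi[:4])              # [0.3333, 0.2222, 0.1481, ...]
print(pi.sum())            # ~1 (geometric series converges since rho < 1)
# Check the cut balance equations (1-p) pi_j = p pi_{j+1}:
print(np.allclose((1 - p) * pi[:-1], p * pi[1:]))  # True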
Discrete Birth-Death Example
• If p = 1/2, then all states are null recurrent (the chain returns to every state with probability 1, but the expected return time is infinite).
Reducible Markov Chains
[Figure: a reducible chain partitioned into a transient set T and two closed irreducible sets S1 and S2.]
• In steady state, we know that the Markov chain will eventually end up in an irreducible set (where the previous analysis still holds) or in an absorbing state.
• The only question that arises, in case there are two or more irreducible sets, is the probability that it will end up in each set.
Reducible Markov Chains
[Figure: a transient set T containing states i and r, and a closed irreducible set S containing states s1, …, sn.]
• Suppose we start from state i ∈ T. Then there are two ways to go to S:
  • in one step, or
  • go to some r ∈ T after k steps, and then to S.
• Define f_i(k) as the probability of reaching S within k steps, starting from state i.
Reducible Markov Chains
• First consider the one-step transition:
  f_i(1) = Σ_{j∈S} p_ij
• Next consider the general case for k = 2, 3, …: the chain either enters S in the first step, or moves to another transient state r ∈ T and then reaches S within the remaining k−1 steps:
  f_i(k) = Σ_{j∈S} p_ij + Σ_{r∈T} p_ir f_r(k−1)
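Letting k → ∞ gives the absorption probabilities f_i = Σ_{j∈S} p_ij + Σ_{r∈T} p_ir f_r, a linear system over the transient states. A minimal sketch with an invented four-state chain (states 0 and 1 transient; states 2 and 3 absorbing, standing in for two closed irreducible sets):

import numpy as np

# Hypothetical chain: states 0, 1 transient; 2 and 3 absorbing.
P = np.array([[0.2, 0.3, 0.4, 0.1],
              [0.3, 0.1, 0.2, 0.4],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
T = [0, 1]   # transient states
S = [2]      # target set (here: the absorbing state 2)

# Solve f = b + Q f, where Q = P[T, T] and b_i = sum_{j in S} p_ij.
Q = P[np.ix_(T, T)]
b = P[np.ix_(T, S)].sum(axis=1)
f = np.linalg.solve(np.eye(len(T)) - Q, b)
print(f)  # probability of ending in S from each transient state

Running this for S = {2} gives f ≈ [0.667, 0.444]; repeating it with S = {3} yields the complementary probabilities [0.333, 0.556], confirming that the chain is eventually absorbed into one of the two sets.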