Chapter 17 Markov Processes – Part 1
Markov Processes • Markov process models are useful in studying the evolution of systems over repeated trials or sequential time periods or stages. • Examples: • Brand loyalty • Equipment performance • Stock performance
Markov Processes • A Markov process model gives the probability of the system switching from one state to another over a given time period. • Examples: • The probability that a person buying Colgate this period will purchase Crest next period • The probability that a machine that is working properly this period will break down the next period
Markov Processes • A Markov system (or Markov process or Markov chain) is a system that can be in one of several (numbered) states, and can pass from one state to another at each time step according to fixed probabilities. • If a Markov system is in state i, there is a fixed probability, pij, that it moves to state j at the next time step; pij is called a transition probability.
Markov Processes • A Markov system can be illustrated by means of a state transition diagram: a diagram showing all the states and the transition probabilities, i.e., the probabilities of switching from one state to another.
Transition Diagram
[Figure: a state transition diagram with three states (1, 2, 3); the arrows between states are labeled with the transition probabilities .2, .4, .8, .35, .50, .65, and .15.]
What does the diagram mean?
Transition Matrix
• The matrix P whose ijth entry is pij is called the transition matrix associated with the system.
• The entries in each row add up to 1.
• Thus, for instance, a 2 × 2 transition matrix P would be set up as:

                       To
                  state 1   state 2
From   state 1  [   p11       p12   ]
       state 2  [   p21       p22   ]
Diagram & Matrix
[Figure: the three-state transition diagram shown above, placed alongside its From/To transition matrix built from the same probabilities.]
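A transition matrix like the one pictured can be represented directly in code. Below is a minimal Python sketch (using NumPy); the specific 3-state probabilities are illustrative assumptions, since the exact arrow assignments in the diagram are not fully specified here.

import numpy as np

# Illustrative 3-state transition matrix: rows are "from" states,
# columns are "to" states. The values below are assumptions chosen
# only so that each row sums to 1.
P = np.array([
    [0.20, 0.80, 0.00],  # from state 1
    [0.40, 0.50, 0.10],  # from state 2
    [0.35, 0.15, 0.50],  # from state 3
])

# Defining property of a transition matrix: every row sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# p12 = probability of moving from state 1 to state 2 in one step.
print(P[0, 1])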
Vectors & Transition Matrix • A probability vector is a row vector in which the entries are nonnegative and add up to 1. • The entries in a probability vector can represent the probabilities of finding a system in each of the states.
Probability Vector • Example: let π = [π₁ π₂], where π₁, π₂ ≥ 0 and π₁ + π₂ = 1.
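As a concrete check, here is a short Python sketch validating the two defining properties of a probability vector; the vector [0.25, 0.75] is a made-up illustration, not an example from the original slides.

import numpy as np

# A probability vector: nonnegative entries that sum to 1.
# The values 0.25 and 0.75 are illustrative.
pi = np.array([0.25, 0.75])

assert np.all(pi >= 0)            # entries are nonnegative
assert np.isclose(pi.sum(), 1.0)  # entries add up to 1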
State Probabilities • The state probabilities at any stage of the process can be calculated recursively: multiplying the state probabilities at stage n by the transition matrix gives the state probabilities at stage n + 1.
State Probabilities • Example:

π(n) = [π₁(n) π₂(n)]
π(1) = π(0) P
π(2) = π(1) P
π(3) = π(2) P
...
π(n+1) = π(n) P
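The recursion π(n+1) = π(n) P is a single vector-matrix product per stage. A minimal Python sketch is below; it borrows the 2-state refused/accepted matrix from the Henry example later in this chapter and assumes the system starts in state 1 with certainty.

import numpy as np

# 2-state transition matrix (the refused/accepted matrix used
# later in this chapter).
P = np.array([
    [0.35, 0.65],
    [0.20, 0.80],
])

# pi(0): assume the system starts in state 1 with certainty.
pi = np.array([1.0, 0.0])

# Apply pi(n+1) = pi(n) P recursively for the first five stages.
for n in range(5):
    pi = pi @ P
    print(f"pi({n + 1}) = {pi}")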
Steady State Probabilities • The values that the state probabilities approach after a large number of transitions are referred to as steady-state probabilities. • As n gets large, the state probabilities at the (n+1)th period are very close to those at the nth period.
Steady State Probabilities • Knowing this, we can compute the steady-state probabilities without having to carry out a large number of calculations:

[π₁(n+1) π₂(n+1)] = [π₁(n) π₂(n)]  ×  | p11  p12 |
                                      | p21  p22 |

• Setting π₁(n+1) = π₁(n) = π₁ and π₂(n+1) = π₂(n) = π₂ gives the steady-state condition [π₁ π₂] = [π₁ π₂] P.
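The convergence claim can be checked numerically: iterate π(n+1) = π(n) P from different starting vectors and watch every run settle on the same fixed point of π = π P. A short sketch, reusing the same 2-state matrix as above:

import numpy as np

P = np.array([
    [0.35, 0.65],
    [0.20, 0.80],
])

# Two different starting vectors converge to the same steady state,
# i.e. the fixed point of pi = pi P.
for pi in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    for _ in range(50):
        pi = pi @ P
    print(pi)  # both print approximately [0.2353, 0.7647]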
Example • Henry, a persistent salesman, calls North's Hardware Store once a week hoping to speak with the store's buying agent, Shirley. If Shirley does not accept Henry's call this week, the probability she will do the same next week (and not accept his call) is .35. On the other hand, if she accepts Henry's call this week, the probability she will not accept his call next week is .20.
Example: Transition Matrix

                               Next Week's Call
                               Refuses   Accepts
This Week's Call    Refuses      .35       .65
                    Accepts      .20       .80
Example • How many times per year can Henry expect to talk to Shirley? • Answer: To find the expected number of accepted calls per year, find the long-run proportion (probability) of a call being accepted and multiply it by 52 weeks.
Example
Let π₁ = long-run proportion of refused calls
    π₂ = long-run proportion of accepted calls
Then,

[π₁ π₂]  ×  | .35  .65 |  =  [π₁ π₂]
            | .20  .80 |
Example

.35π₁ + .20π₂ = π₁   (1)
.65π₁ + .80π₂ = π₂   (2)
π₁ + π₂ = 1          (3)

Solve for π₁ and π₂.
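Working through the algebra: equation (1) gives .20π₂ = .65π₁, so π₁ = (4/13)π₂; substituting into (3) yields π₂ = 13/17 ≈ .7647 and π₁ = 4/17 ≈ .2353. Henry can therefore expect about .7647 × 52 ≈ 40 accepted calls per year. Equation (2) is redundant given (1) and (3), so the sketch below solves (1) and (3) as a 2 × 2 linear system in Python:

import numpy as np

# Equation (1) rearranged: (0.35 - 1) * pi1 + 0.20 * pi2 = 0
# Equation (3):                     pi1 +         pi2 = 1
A = np.array([
    [0.35 - 1.0, 0.20],
    [1.0, 1.0],
])
b = np.array([0.0, 1.0])

pi1, pi2 = np.linalg.solve(A, b)
print(f"pi1 (refused)  = {pi1:.4f}")                         # approx 0.2353
print(f"pi2 (accepted) = {pi2:.4f}")                         # approx 0.7647
print(f"Expected accepted calls per year = {52 * pi2:.1f}")  # approx 39.8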
The probability of the system being in a particular state after a large number of stages is called a steady-state probability.