Eager Markov Chains Parosh Aziz Abdulla Noomene Ben Henda Richard Mayr Sven Sandberg
Outline • Introduction • Expectation Problem • Algorithm Scheme • Termination Conditions • Subclasses of Markov Chains • Examples • Conclusion
Introduction • Model: Infinite-state Markov chains • Used to model programs with unreliable channels, randomized algorithms… • Interest: Conditional expectations • Expected execution time of a program • Expected resource usage of a program
Introduction Example • Infinite-state Markov chain • Infinite set of states • Target set • Probability distributions (Figure: an example Markov chain with edges labelled by transition probabilities)
Introduction Example • Reward function • Defined over paths reaching the target set (Figure: the same Markov chain annotated with rewards)
Expectation Problem • Instance • A Markov chain • A reward function • Task • Compute/approximate the conditional expectation of the reward function
Expectation Problem • Example (figure): one path reaches the target with probability 0.8 and total reward 4, another with probability 0.1 and total reward -5 • The weighted sum: 0.8*4+0.1*(-5)=2.7 • The reachability probability: 0.8+0.1=0.9 • The conditional expectation: 2.7/0.9=3
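In general (a standard way of writing it, not copied from the slides; Π_F denotes the set of paths that reach the target set F), the conditional expectation is the reward-weighted mass of the target-reaching paths divided by the reachability probability:

```latex
% Conditional expectation of the reward f, given that the target set F is reached.
\[
  E(f \mid \mathrm{reach}\; F)
  \;=\;
  \frac{\sum_{\pi \in \Pi_F} P(\pi)\, f(\pi)}{\sum_{\pi \in \Pi_F} P(\pi)}
  \;=\;
  \frac{0.8 \cdot 4 + 0.1 \cdot (-5)}{0.8 + 0.1}
  \;=\;
  \frac{2.7}{0.9}
  \;=\; 3
  \quad\text{(for the example above).}
\]
```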
Expectation Problem • Remark • The problem has mostly been studied for finite-state Markov chains • Contribution • Algorithm scheme to compute it for infinite-state Markov chains • Sufficient conditions for termination
Algorithm Scheme • Path Exploration • At each iteration n: • Compute paths up to depth n • Consider only those ending in the target set • Update the expectation accordingly
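A minimal sketch of such a path-exploration loop in Python. The interface (a `successors` function returning probability-weighted next states, a `reward` defined on complete target-reaching paths) is an illustrative assumption, not the authors' implementation:

```python
from typing import Callable, Hashable, List, Tuple

State = Hashable

def approximate_expectation(
    initial: State,
    successors: Callable[[State], List[Tuple[State, float]]],  # (next state, probability)
    reward: Callable[[Tuple[State, ...]], float],               # reward of a target-reaching path
    is_target: Callable[[State], bool],
    depth: int,
) -> float:
    """Explore all paths up to `depth`, accumulate probability and weighted reward
    of those that end in the target set, and return the conditional expectation
    (an approximation that improves as `depth` grows)."""
    weighted_sum = 0.0     # sum over target-reaching paths of P(path) * reward(path)
    reach_prob = 0.0       # sum over target-reaching paths of P(path)

    # Frontier of partial paths that have not yet hit the target.
    frontier: List[Tuple[Tuple[State, ...], float]] = [((initial,), 1.0)]
    for _ in range(depth):
        next_frontier: List[Tuple[Tuple[State, ...], float]] = []
        for path, prob in frontier:
            for nxt, p in successors(path[-1]):
                new_path, new_prob = path + (nxt,), prob * p
                if is_target(nxt):
                    weighted_sum += new_prob * reward(new_path)
                    reach_prob += new_prob
                else:
                    next_frontier.append((new_path, new_prob))
        frontier = next_frontier

    return weighted_sum / reach_prob if reach_prob > 0 else 0.0
```

Each iteration adds the contribution of deeper target-reaching paths; without the termination conditions below there is no bound on how much the remaining paths may still change the value.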
Algorithm Scheme • Correctness • The algorithm computes/approximates the correct value • Termination • Not guaranteed: the scheme yields lower bounds but no upper bounds
Termination Conditions • Exponentially bounded reward function • The intuition: a limit on the growth of the reward function • Remark: the limit is reasonable: for example, polynomial functions are exponentially bounded
Termination Conditions (Figure: the absolute value of the reward stays below a bound that grows at most exponentially in the length of the path)
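One common way to state such a growth limit (the constant names are my choice, not taken from the slides): there are constants c and d such that, for every path π of length n,

```latex
% Exponentially bounded reward: the absolute value of the reward of a path
% grows at most exponentially in the length of the path.
\[
  |f(\pi)| \;\le\; c \cdot d^{\,n} .
\]
```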
Termination Conditions • Eager Markov chain • The intuition: long paths contribute less to the expectation value • Remark: Reasonable: for example, PLCS, PVASS, and NTM all induce eager Markov chains
Termination Conditions (Figure: the probability of reaching the target in more than n steps stays below an exponentially decreasing bound)
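Written as a formula (again with constant names of my choosing): the chain is eager with respect to the target set F if there are constants a and c < 1 such that, for every n,

```latex
% Eager Markov chain: the probability of reaching the target set F
% only after more than n steps decays exponentially in n.
\[
  P\bigl(\mathrm{reach}\; F \text{ in more than } n \text{ steps}\bigr)
  \;\le\; a \cdot c^{\,n}.
\]
```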
Subclasses of Markov Chains • Eager Markov chains • Markov chains with a finite eager attractor • Markov chains with the bounded coarseness property (Figure: PLCS, PVASS, and NTM placed among these subclasses)
Finite Eager Attractor • Attractor: almost surely reached from every state • Finite eager attractor: almost surely reached, and unlikely to stay ”too long” outside of it (Figure: illustration with regions labelled A and EA)
Finite Eager Attractor (Figure: from the attractor EA, the probability of returning to it in more than n steps stays below a bound that decreases exponentially in n)
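As a formula (a paraphrase of the figure; the constants are mine except for b, which follows the figure's label): the finite attractor A is eager if there are constants c and b < 1 such that, from every state of A,

```latex
% Finite eager attractor: once the attractor A has been entered, returning to it
% takes more than n steps only with exponentially small probability.
\[
  P\bigl(\text{return to } A \text{ in more than } n \text{ steps}\bigr)
  \;\le\; c \cdot b^{\,n}.
\]
```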
Finite Eager Attractor • Question: does a finite eager attractor imply an eager Markov chain? • Reminder: eager Markov chain: the probability of reaching the target in more than n steps decreases exponentially in n
Finite Eager Attractor (Figure: paths of length n that visit the attractor t times)
Finite Eager Attractor • Proof idea: identify 2 sets of paths • Paths that visit the attractor often without going to the target set • Paths that visit the attractor rarely without going to the target set
Finite Eager Attractor • Paths visiting the attractor rarely: t less than n/c (Figure: their total probability, labelled Pr_n)
Finite Eager Attractor • Paths visiting the attractor often: t greater than n/c (Figure: their total probability, labelled Po_n)
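The two cases combine as follows (a hedged sketch of the argument, not the slides' exact constants): a path that reaches the target only after more than n steps avoids it for its first n steps, and those n steps fall into one of the two cases, each of which has exponentially small probability:

```latex
% Pr_n bounds the rare-visit paths, Po_n the frequent-visit paths; both decay
% exponentially, so their sum gives the eagerness bound for the whole chain.
\[
  P\bigl(\mathrm{reach}\; F \text{ in more than } n \text{ steps}\bigr)
  \;\le\; Pr_n + Po_n
  \;\le\; \alpha \cdot \lambda^{\,n} + \beta \cdot \mu^{\,n},
  \qquad \lambda, \mu < 1 .
\]
```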
Probabilistic Lossy Channel Systems (PLCS) • Motivation: • Finite-state processes communicating through unbounded and unreliable channels • Widely used to model systems with unreliable channels (e.g., link protocols)
PLCS (Figure: Send and Receive steps in an example PLCS with control locations q0-q3, send steps c!a and c!b, a receive step c?b, nop steps, and a channel c holding a sequence of messages a and b)
PLCS (Figure: Loss step: a message is lost from the channel c, shrinking its content)
PLCS • Configuration • Control location • Content of the channel • Example • [q3,”aba”]
PLCS • A PLCS induces a Markov chain: • States: Configurations • Transitions: Loss steps combined with discrete steps
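A rough sketch of one step of the induced chain in Python; the concrete representation (a transition table keyed by control location, a fixed per-message loss probability `p_loss`, loss applied to each message independently before the discrete step) is an illustrative assumption about the semantics, not the paper's exact definition:

```python
import random
from typing import Dict, List, Tuple

# A configuration is a control location plus the channel content (a string of messages).
Config = Tuple[str, str]

# Transitions per control location: (weight, operation, argument, next location).
# Operations: "send" appends a message, "recv" consumes the message at the head,
# "nop" leaves the channel unchanged.
Transitions = Dict[str, List[Tuple[float, str, str, str]]]

def plcs_step(cfg: Config, transitions: Transitions, p_loss: float) -> Config:
    loc, channel = cfg

    # Loss step: every message in the channel is lost independently with probability p_loss.
    channel = "".join(m for m in channel if random.random() >= p_loss)

    # Discrete step: among the transitions enabled in the current configuration,
    # pick one with probability proportional to its weight.
    enabled = [(w, op, arg, nxt) for (w, op, arg, nxt) in transitions.get(loc, [])
               if op != "recv" or channel.startswith(arg)]
    if not enabled:
        return (loc, channel)  # no enabled step: stay put (illustrative choice)

    weights = [w for (w, _, _, _) in enabled]
    w, op, arg, nxt = random.choices(enabled, weights=weights)[0]
    if op == "send":
        channel = channel + arg
    elif op == "recv":
        channel = channel[len(arg):]
    return (nxt, channel)
```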
PLCS • Example: • [q1,”abb”] → [q2,”a”] • By losing one of the messages ”b” and firing the marked step • Probability: P = Ploss * 2/3
PLCS • Result: Each PLCS induces a Markov chain with a finite eager attractor. • Proof hint: When the channel content is large enough, a message is lost with probability greater than ½ in each step, so long channels tend to shrink.
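For intuition, a back-of-the-envelope version of that hint (λ is the per-message loss probability and |w| the current channel length; the independence assumption is mine): the probability that at least one message is lost in a step exceeds ½ once the channel is long enough, since

```latex
% At least one of the |w| messages is lost, assuming each is lost
% independently with probability lambda.
\[
  1 - (1-\lambda)^{|w|} \;>\; \tfrac{1}{2}
  \qquad\text{whenever}\qquad
  |w| \;>\; \frac{\ln 2}{-\ln(1-\lambda)} .
\]
```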
Bounded Coarseness • The probability of reaching the target within K steps is bounded from below by a constant b.
Bounded Coarseness • Question: does bounded coarseness imply an eager Markov chain? • Reminder: eager Markov chain: the probability of reaching the target in more than n steps decreases exponentially in n
Bounded Coarseness (Figure: P_n, the probability of avoiding the target for nK steps; within each block of K steps the target is reached with probability at least b, so P_n ≤ (1-b)·P_{n-1})
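Spelled out (a sketch using the constants K and b from the definition above): avoiding the target for n consecutive blocks of K steps has probability at most (1-b)^n, so

```latex
% Bounded coarseness implies eagerness: reaching the target only after more
% than m steps forces avoiding it for the first floor(m/K) blocks of K steps.
\[
  P\bigl(\mathrm{reach}\; F \text{ in more than } m \text{ steps}\bigr)
  \;\le\; (1-b)^{\lfloor m/K \rfloor},
\]
```

which decreases exponentially in m.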
Probabilistic Vector Addition Systems with States (PVASS) • Motivation: • PVASS are generalizations of Petri nets • Widely used to model parallel processes, mutual exclusion protocols, …
PVASS • Configuration • Control location • Values of the variables x and y • Example: [q1, x=2, y=0] (Figure: example PVASS with control locations q0-q3 and weighted steps ++x, --x, ++y, --y, nop)
PVASS • A PVASS induces a Markov chain: • States: Configurations • Transitions: discrete steps
PVASS • Example: • [q1,1,1] → [q2,1,0] • By taking the marked step • Probability: P = 2/3
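A sketch of one step of the induced chain (the data layout, per-location lists of weighted counter updates, is my assumption for illustration):

```python
import random
from typing import Dict, List, Tuple

# A configuration: control location plus the current counter values.
Config = Tuple[str, Dict[str, int]]

# Transitions per control location: (weight, counter updates, next location).
# Updates like {"x": +1} or {"y": -1}; a decrement is only enabled if it keeps
# the counter non-negative.
Transitions = Dict[str, List[Tuple[float, Dict[str, int], str]]]

def pvass_step(cfg: Config, transitions: Transitions) -> Config:
    loc, counters = cfg
    enabled = [
        (w, upd, nxt)
        for (w, upd, nxt) in transitions.get(loc, [])
        if all(counters.get(var, 0) + delta >= 0 for var, delta in upd.items())
    ]
    if not enabled:
        return cfg  # no enabled step: stay put (illustrative choice)

    # Choose among the enabled steps with probability proportional to their weights.
    weights = [w for (w, _, _) in enabled]
    w, upd, nxt = random.choices(enabled, weights=weights)[0]
    new_counters = dict(counters)
    for var, delta in upd.items():
        new_counters[var] = new_counters.get(var, 0) + delta
    return (nxt, new_counters)
```

Weights 2 and 1 at the same location give step probabilities 2/3 and 1/3, matching the [q1,1,1] → [q2,1,0] example above.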
PVASS • Result: Each PVASS induces a Markov chain which has the bounded coarseness property.
Noisy Turing Machines (NTM) • Motivation: • Turing machines augmented with a noise parameter • Used to model systems operating in a ”hostile” environment
NTM • Fully described by a Turing machine and a noise parameter (Figure: an example machine with states q1-q4, transition labels such as a/b and head moves R/S, and a tape containing # b b a a b a #)
NTM • Discrete step (Figure: the machine performs an ordinary Turing-machine transition, rewriting the scanned cell, here an a becomes a b, and moving the head)
NTM • Noise step (Figure: noise overwrites a tape cell at random, here an a becomes a #)
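A toy version of such a noise step (the exact noise model, each cell independently replaced by a random symbol with probability equal to the noise parameter, is my illustrative assumption):

```python
import random
from typing import List

def noise_step(tape: List[str], alphabet: List[str], noise: float) -> List[str]:
    """Apply noise to a tape: each cell is independently overwritten by a
    uniformly chosen symbol from the alphabet with probability `noise`."""
    return [random.choice(alphabet) if random.random() < noise else cell
            for cell in tape]

# Example: a tape over {a, b, #} with a 10% noise parameter.
print(noise_step(list("#bbaaba#"), ["a", "b", "#"], 0.1))
```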
NTM • Result: Each NTM induces a Markov chain which has the bounded coarseness property.
Conclusion • Summary: • Algorithm scheme for approximating expectations of reward functions • Sufficient conditions to guarantee termination: • Exponentially bounded reward function • Eager Markov chains
Conclusion • Directions for future work • Extending the results to Markov decision processes and stochastic games • Finding more concrete applications
PVASS • Order on configurations: <= • Same control location • Ordered values of the variables • Example: [q0,3,4] <= [q0,3,5]
PVASS • Probability of each step > 1/10 • Boundedly coarse with parameters K and (1/10)^K (Figure: the target set is reached within K iterations)
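Reading off the parameters (my interpretation of the slide): if every step has probability greater than 1/10 and from each configuration some path of at most K steps leads into the target set, then that single path already gives

```latex
% One path of length at most K, each of whose steps has probability > 1/10,
% contributes more than (1/10)^K to the probability of reaching the target.
\[
  P\bigl(\text{reach the target within } K \text{ steps}\bigr)
  \;>\; \left(\tfrac{1}{10}\right)^{K} ,
\]
```

so the chain is boundedly coarse with b = (1/10)^K.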