This presentation introduces the operational semantics, distributed implementation, and encoding process of the probabilistic asynchronous π-calculus. It covers probabilistic automata, scheduling policies, compilation into Java, the encoding of π into πpa, and the dining philosophers problem.
The probabilistic asynchronous π-calculus
Catuscia Palamidessi, INRIA Futurs, France
PPS - Groupe de Travail en Concurrence
Motivations
• Main motivation behind the development of the probabilistic asynchronous π-calculus: to use it as an intermediate language for the fully distributed implementation of the π-calculus with mixed choice
Plan of the talk
• The π-calculus: operational semantics, expressive power
• The probabilistic asynchronous π-calculus: probabilistic automata, operational semantics
• Distributed implementation
• Encoding the π-calculus into the probabilistic asynchronous π-calculus
πpa: the Probabilistic Asynchronous π
Syntax

  g ::= x(y) | τ           prefixes (input, silent action)

  P ::= Σᵢ pᵢ gᵢ.Pᵢ        probabilistic input-guarded choice, with Σᵢ pᵢ = 1
      | x̄y                 output action
      | P | P               parallel composition
      | (x) P               new name
      | rec A P             recursion
      | A                   procedure name
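For instance (an illustrative term, not from the original slides), the following process commits with probability 2/3 to an input on x, echoing the received name back on x, and with probability 1/3 to a silent step followed by an output of w on z:

  2/3 x(y).x̄y + 1/3 τ.z̄w

Note that outputs cannot appear as guards of a choice; only inputs and τ can. This is the asynchrony restriction of the calculus.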
The operational semantics of πpa
• Based on the probabilistic automata of Segala and Lynch
• Distinction between
  • nondeterministic behavior (choice of the scheduler) and
  • probabilistic behavior (choice of the process)
Scheduling policy: the scheduler chooses the group of transitions
Execution: the process chooses probabilistically the transition within the group

[Diagram omitted: a probabilistic automaton whose transitions are labelled with probabilities such as 1/3, 1/2, and 2/3.]
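A minimal Java sketch of this two-level choice (class and field names are mine, not from the slides): the scheduler picks a group of transitions nondeterministically, and the process then resolves that group probabilistically.

```java
import java.util.List;
import java.util.Random;

public class TwoLevelChoice {
    record Transition(String label, double prob, String target) {}

    // Probabilistic choice made by the process, within the group chosen by the scheduler.
    static String fire(List<Transition> group, Random rnd) {
        double r = rnd.nextDouble(), acc = 0.0;
        for (Transition t : group) {
            acc += t.prob();
            if (r < acc) return t.target();
        }
        return group.get(group.size() - 1).target();   // guard against rounding
    }

    public static void main(String[] args) {
        // Two groups; the scheduler's pick between them is the nondeterministic choice.
        List<Transition> g1 = List.of(new Transition("x(y)", 0.5, "P1"),
                                      new Transition("tau",  0.5, "P2"));
        List<Transition> g2 = List.of(new Transition("z(w)", 1.0 / 3, "Q1"),
                                      new Transition("tau",  2.0 / 3, "Q2"));
        List<Transition> chosen = g1;                  // here we fix the scheduler's choice
        System.out.println(fire(chosen, new Random()));
    }
}
```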
The operational semantics of πpa
• Representation of a group of transitions:  P { --gᵢ--> pᵢ Pᵢ }ᵢ
• Rules

  Choice   Σᵢ pᵢ gᵢ.Pᵢ  { --gᵢ--> pᵢ Pᵢ }ᵢ

            P { --gᵢ--> pᵢ Pᵢ }ᵢ
  Par      -----------------------------
            Q | P { --gᵢ--> pᵢ Q | Pᵢ }ᵢ
The operational semantics of πpa
• Rules (continued)

            P { --xᵢ(yᵢ)--> pᵢ Pᵢ }ᵢ        Q { --x̄z--> 1 Q' }
  Com      --------------------------------------------------------------------------------
            P | Q  { --τ--> pᵢ Pᵢ[z/yᵢ] | Q' }_{xᵢ = x}  ∪  { --xᵢ(yᵢ)--> pᵢ Pᵢ | Q }_{xᵢ ≠ x}

            P { --xᵢ(yᵢ)--> pᵢ Pᵢ }ᵢ
  Res      --------------------------------------      qᵢ renormalized
            (x) P  { --xᵢ(yᵢ)--> qᵢ (x) Pᵢ }_{xᵢ ≠ x}
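As an illustration (this concrete instance is not on the original slides), take P = 1/2 x(y).P₁ + 1/2 w(y).P₂ and put it in parallel with the output x̄z, whose residue Q' is the inactive process 0. The Com rule turns the branch on x into a τ-step performing the substitution, while the branch on w is kept as an input:

  P | x̄z  { --τ--> 1/2 P₁[z/y] | 0 }  ∪  { --w(y)--> 1/2 P₂ | x̄z }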
Implementation of πpa
• Compilation into Java:  << >> : πpa → Java
• Distributed:  << P | Q >> = << P >>.start(); << Q >>.start();
• Compositional:  << P op Q >> = << P >> jop << Q >>  for all op (jop is the Java counterpart of op)
• Channels are one-position buffers with test-and-set (synchronized) methods for input and output
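A minimal sketch of such a channel, assuming the class and method names (Channel, send, receive), since the slides only state that channels are one-position buffers with synchronized test-and-set methods:

```java
public class Channel<T> {
    private T slot;                        // the single buffer position

    // "Test-and-set" output: atomically checks the slot and fills it if empty.
    public synchronized boolean send(T value) {
        if (slot != null) return false;    // buffer already occupied
        slot = value;
        notifyAll();                       // wake up any process waiting to receive
        return true;
    }

    // Input: suspends until a value is available, then empties the slot.
    public synchronized T receive() throws InterruptedException {
        while (slot == null) wait();
        T value = slot;
        slot = null;
        return value;
    }
}
```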
Encoding π into πpa
• [[ ]] : π → πpa
• Fully distributed:  [[ P | Q ]] = [[ P ]] | [[ Q ]]
• Preserves the communication structure:  [[ Pσ ]] = [[ P ]]σ
• Compositional:  [[ P op Q ]] = C_op [ [[ P ]] , [[ Q ]] ]
• Correct wrt a notion of probabilistic testing semantics:  P must O  iff  [[ P ]] must [[ O ]] with probability 1
Encoding π into πpa: the idea
• Every mixed choice is translated into a parallel composition of processes corresponding to the branches, plus a lock f
• The input processes compete to acquire both their own lock and the lock of the partner
• The input process which succeeds first establishes the communication; the other alternatives are discarded (a sketch of this competition follows below)

[Diagram omitted: branches Pᵢ, Qᵢ, Rᵢ, Sᵢ of processes P, Q, R, S competing for shared locks f.]

The problem is reduced to a generalized DP problem where each fork (lock) can be adjacent to more than two philosophers. We are interested in progress, i.e. deadlock freedom.
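A minimal Java sketch (with hypothetical names) of that competition: one thread per branch of a translated choice tries to grab its own lock and the partner's lock; the branch that gets both performs the communication, and branches that fail simply give up (in the encoding they would retry or be discarded).

```java
import java.util.concurrent.locks.ReentrantLock;

public class Branch implements Runnable {
    private final ReentrantLock ownLock;
    private final ReentrantLock partnerLock;
    private final Runnable communication;

    public Branch(ReentrantLock ownLock, ReentrantLock partnerLock, Runnable communication) {
        this.ownLock = ownLock;
        this.partnerLock = partnerLock;
        this.communication = communication;
    }

    @Override
    public void run() {
        if (ownLock.tryLock()) {                    // compete for the branch's own lock
            try {
                if (partnerLock.tryLock()) {        // then for the partner's lock
                    try {
                        communication.run();        // this branch wins the competition
                    } finally {
                        partnerLock.unlock();
                    }
                }
            } finally {
                ownLock.unlock();
            }
        }
    }
}
```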
Encoding π into asynchronous π
• The requirements on the encoding imply preservation of symmetry and full distribution
• There are many solutions to the DP problem, but only randomized solutions can be symmetric and fully distributed [Lehmann and Rabin '81]. They also provided a randomized algorithm (for the classic case). Hence we have actually considered a probabilistic extension of the asynchronous π for the implementation.
• Note that the DP problem can be solved in π in a fully distributed, symmetric way [Francez and Rodeh '80] (they actually used CSP). Hence the need for randomization is not a characteristic of our approach: it would arise in any encoding of π into an asynchronous language.
Dining Philosophers: classic case
• Each philosopher needs exactly two forks
• Each fork is shared by exactly two philosophers
The algorithm of Lehmann and Rabin
1. think
2. choose first_fork in {left, right}   %commit
3. if taken(first_fork) then goto 3
4. take(first_fork)
5. if taken(second_fork) then goto 2
6. take(second_fork)
7. eat
8. release(second_fork)
9. release(first_fork)
10. goto 1
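A minimal Java sketch of one philosopher running the loop above (the class names and the Fork helper are mine, not from the slides). As in the classic formulation of Lehmann and Rabin, the sketch releases the first fork before retrying from step 2; taken/take/release are kept separate to mirror the numbered steps.

```java
import java.util.Random;

class Fork {
    private boolean held = false;
    synchronized boolean taken()  { return held; }
    synchronized void take()      { held = true; }
    synchronized void release()   { held = false; }
}

public class Philosopher implements Runnable {
    private final Fork left, right;
    private final Random rnd = new Random();

    public Philosopher(Fork left, Fork right) { this.left = left; this.right = right; }

    @Override
    public void run() {
        while (true) {
            think();                                            // step 1
            while (true) {
                Fork first  = rnd.nextBoolean() ? left : right; // step 2: random commitment
                Fork second = (first == left) ? right : left;
                while (first.taken()) { /* step 3: busy wait */ }
                first.take();                                   // step 4
                if (second.taken()) {                           // step 5: retry from step 2
                    first.release();
                    continue;
                }
                second.take();                                  // step 6
                eat();                                          // step 7
                second.release();                               // step 8
                first.release();                                // step 9
                break;                                          // step 10: back to thinking
            }
        }
    }

    private void think() { /* ... */ }
    private void eat()   { /* ... */ }
}
```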
Problems
• Wrt our encoding goal, the algorithm of Lehmann and Rabin has two problems:
  • It works only for fair schedulers
  • It was conceived for rings. It is not clear, a priori, whether it also works in the general case
• The first problem, however, can be solved by replacing the busy waiting in step 3 with suspension [Herescu, Palamidessi 2002]. This result was also obtained independently by [Duflot, Fribourg, Picaronny 2002].
The algorithm of Lehmann and Rabin, modified so as to avoid the need for fairness
1. think
2. choose first_fork in {left, right}   %commit
3. if taken(first_fork) then wait
4. take(first_fork)
5. if taken(second_fork) then goto 2
6. take(second_fork)
7. eat
8. release(second_fork)
9. release(first_fork)
10. goto 1
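A sketch of the modification in Java (class and method names are mine): the busy wait of step 3 is replaced by suspension on the fork's monitor, so a waiting philosopher is only woken up when the fork is actually released.

```java
public class SuspendingFork {
    private boolean held = false;

    // Steps 3-4 combined: suspend until the fork is free, then take it.
    public synchronized void awaitAndTake() throws InterruptedException {
        while (held) wait();          // suspension instead of busy waiting
        held = true;
    }

    // Non-blocking check-and-take used for the second fork (steps 5-6).
    public synchronized boolean tryTake() {
        if (held) return false;
        held = true;
        return true;
    }

    public synchronized void release() {
        held = false;
        notifyAll();                  // wake philosophers suspended in awaitAndTake()
    }
}
```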
Generalized Dining Philosophers
• Each philosopher still needs two forks, but
• Each fork can now be shared by more than two philosophers
• We need to consider arbitrary graphs
  • arcs ↔ philosophers
  • vertices ↔ forks
• Of course, the problem is interesting only when there is at least one cycle
Restriction on the graph
• Theorem: The algorithm of Lehmann and Rabin is deadlock-free if and only if all cycles are pairwise disconnected
• There are essentially three ways in which two cycles can be connected:
Proof of the theorem
• If part: Each cycle can be considered separately. On each of them the classic algorithm is deadlock-free. Some additional care must be taken for the arcs that are not part of the cycle.
• Only if part: By analysis of the three possible cases. Actually they are all similar; we illustrate the first case.

[Diagram omitted: the first case, with legend entries "committed" and "taken".]
Proof of the theorem
• The initial situation has probability p > 0
• The scheduler forces the processes to loop
• Hence the system has a deadlock (livelock) with probability p
• Note that this scheduler is not fair. However, we can define even a fair scheduler which induces an infinite loop with probability > 0. The idea is to have a scheduler that "gives up" after n attempts when the processes keep choosing the "wrong" fork, but that increases (by f) its "stubbornness" at every round.
• With a suitable choice of n and f, the probability of a loop is p/4
Solution for the generalization
• As we have seen, the algorithm of Lehmann and Rabin does not work on general graphs
• However, it is easy to modify the algorithm so that it works in general
• The idea is to reduce the problem to the case of pairwise disconnected cycles: each fork is initially associated with one token, and each philosopher needs to acquire a token in order to participate in the competition. After this initial phase, the algorithm is the same as Lehmann and Rabin's (see the sketch below).
• Theorem: The competing philosophers determine a graph in which all cycles are pairwise disconnected
• Proof: By case analysis. To have a situation with two connected cycles we would need a node with two tokens.
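A minimal Java sketch (names are hypothetical) of the extra token phase, under the reading that a philosopher must secure the token of one of its own two forks: each fork carries exactly one token, and only philosophers holding a token enter the Lehmann-Rabin competition.

```java
import java.util.concurrent.Semaphore;

class TokenFork {
    final Semaphore token = new Semaphore(1);   // exactly one token per fork
}

public class TokenPhilosopher {
    private final TokenFork left, right;

    public TokenPhilosopher(TokenFork left, TokenFork right) {
        this.left = left;
        this.right = right;
    }

    // Initial phase: only philosophers holding a token compete for the forks.
    public boolean enterCompetition() {
        if (left.token.tryAcquire() || right.token.tryAcquire()) {
            runLehmannRabin();   // the classic algorithm sketched earlier (hypothetical call)
            return true;
        }
        return false;            // no token available: sit out this round
    }

    private void runLehmannRabin() { /* ... */ }
}
```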
Conclusion
• We have provided an encoding of the π-calculus into a probabilistic version of its asynchronous fragment
  • fully distributed
  • compositional
  • correct wrt a notion of testing semantics
• Advantages:
  • high-level solutions to distributed algorithms
  • easier to prove correct (no reasoning about randomization required)