Dive into mobile calculi including the π-calculus, CCS, and related variants for dynamic reconfiguration of networks and for solving distributed problems with mixed-choice solutions.
Mobile Calculi for Distributed Programming. Catuscia Palamidessi, INRIA Futurs, France. Joint work with Mihaela Herescu, IBM, Austin. PPDP / GPCE 2002
Mobile calculi • The π-calculus [Milner, Parrow, Walker '89] • CCS + mobility of links • Dynamic reconfiguration of the communication structure
Mobile Calculi • The asynchronous π-calculus [Honda-Tokoro '92, Boudol '91] • Action Calculi [Milner, early '90s] • The Fusion calculus [Parrow, Victor, early '90s] • Join Calculus [Fournet, Gonthier, Levy, … '96] … Related calculi • Mobile Ambients [Cardelli, Gordon '97] • The Seal calculus [Castagna, Vitek, mid '90s] • Boxed Ambients [Bugliesi, Castagna, Crafa, late '90s] • The spi calculus [Abadi, Gordon, mid '90s] • a calculus for specification and verification of security protocols • based on the π-calculus
The π-calculus • Basic constructs to express parallelism, communication, choice, generation of new names (which can be communicated and in turn used as channels), scope • Scope extrusion: a name can be communicated and its scope extended to include the recipient
[Diagram: scope extrusion of a private name z among processes P, Q, R connected by channels x and y]
Expressive Power of π • link mobility • network reconfiguration • express HO (e.g. the λ-calculus) in a natural way • mixed choice • solutions to distributed problems involving distributed agreement
The expressive power of π • Example of distributed agreement: the leader election problem • A symmetric and fully distributed solution in π:
x.P_wins + y^.P_loses | y.Q_wins + x^.Q_loses --τ--> P_loses | Q_wins
x.P_wins + y^.P_loses | y.Q_wins + x^.Q_loses --τ--> P_wins | Q_loses
[Diagram: P and Q connected by the two channels x and y]
π: the π-calculus (with mixed choice) • Syntax
γ ::= x(y) | x^y | τ        prefixes (input, output, silent)
P ::= Σi γi . Pi            mixed guarded choice
    | P | P                 parallel
    | (x) P                 new name
    | rec A P               recursion
    | A                     procedure name
Operational semantics • Transition system P --α--> Q • Rules

Choice:   Σi γi . Pi --γi--> Pi

Open:     P --x^y--> P'
          ______________________
          (y) P --x^(y)--> P'
Operational semantics • Rules (continued)

Com:      P --x(y)--> P'      Q --x^z--> Q'
          _________________________________
          P | Q --τ--> P'[z/y] | Q'

Close:    P --x(y)--> P'      Q --x^(z)--> Q'
          ___________________________________
          P | Q --τ--> (z) (P'[z/y] | Q')

Par:      P --γ--> P'
          _______________________    fn(Q) and bn(γ) disjoint
          Q | P --γ--> Q | P'
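For readers who prefer standard notation, here is a possible LaTeX rendering of the three rules above, reconstructed under the usual convention that the slides' x^y stands for the output prefix written with an overbar in the literature:

\[
\textsc{Com}\quad
\frac{P \xrightarrow{\,x(y)\,} P' \qquad Q \xrightarrow{\,\bar{x}z\,} Q'}
     {P \mid Q \xrightarrow{\,\tau\,} P'[z/y] \mid Q'}
\qquad
\textsc{Close}\quad
\frac{P \xrightarrow{\,x(y)\,} P' \qquad Q \xrightarrow{\,\bar{x}(z)\,} Q'}
     {P \mid Q \xrightarrow{\,\tau\,} (z)\,\bigl(P'[z/y] \mid Q'\bigr)}
\]

\[
\textsc{Par}\quad
\frac{P \xrightarrow{\,\gamma\,} P'}
     {Q \mid P \xrightarrow{\,\gamma\,} Q \mid P'}
\quad \mathrm{fn}(Q) \cap \mathrm{bn}(\gamma) = \emptyset
\]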
Implementation issues • It is well known that formalisms able to express distributed agreement are difficult to implement in a distributed fashion • For this reason, the field has evolved towards asynchronous variants of π or other asynchronous formalisms • for instance, the asynchronous π-calculus [Honda-Tokoro '92, Boudol '92]
πa: the asynchronous π-calculus • Syntax
γ ::= x(y) | τ              prefixes
P ::= Σi γi . Pi            input guarded choice
    | x^y                   output action
    | P | P                 parallel
    | (x) P                 new name
    | rec A P               recursion
    | A                     procedure name
Operational semantics of πa • Additional rule: Out: x^y --x^y--> 0 • Asynchronous communication: • we can't write a continuation after an output, i.e. no x^y.P, but only x^y | P • so P will proceed without waiting for the actual delivery of the message • Note: the original πa did not contain a choice construct. However, the version presented here was shown expressively equivalent to the original πa by [Nestmann and Pierce '96]
π vs. πa • πa is suitable for distributed implementation, in contrast to π • However, despite the difficulties regarding implementation, the π-calculus is still very appealing, because of its superior expressive power • Examples of problems that can be solved in π and not in πa: • dining philosophers (following [Francez and Rodeh '82]) • the symmetric leader election problem, for any ring of processes • The solution uses name mobility to fully connect the graph, and then mixed choice to break the symmetry. • This problem cannot be solved in πa, nor in CCS [Palamidessi '97]
Towards a fully distributed implementation of π • The results of the previous pages show that a fully distributed implementation of π must necessarily be randomized • A two-step approach:
π --[[ ]]--> probabilistic asynchronous π --<< >>--> distributed machine
Advantages: the correctness proof is easier, since [[ ]] (which is the difficult part of the implementation) is between two similar languages
πpa: the probabilistic asynchronous π-calculus • Syntax
γ ::= x(y) | τ              prefixes
P ::= Σi pi γi . Pi         probabilistic input guarded choice, with Σi pi = 1
    | x^y                   output action
    | P | P                 parallel
    | (x) P                 new name
    | rec A P               recursion
    | A                     procedure name
The operational semantics of πpa • Based on the Probabilistic Automata of Segala and Lynch • Distinction between • nondeterministic behavior (choice of the scheduler) and • probabilistic behavior (choice of the process) • Scheduling policy: the scheduler chooses the group of transitions • Execution: the process chooses probabilistically the transition within the group
[Diagram: a probabilistic automaton whose transitions are grouped, with probabilities such as 1/2, 1/3, 2/3 labeling the transitions within each group]
The operational semantics of πpa • Representation of a group of transitions: P { --γi--> pi Pi }i • Rules

Choice:   Σi pi γi . Pi { --γi--> pi Pi }i

Par:      P { --γi--> pi Pi }i
          ______________________________
          Q | P { --γi--> pi Q | Pi }i
The operational semantics of πpa • Rules (continued)

Com:      P { --xi(yi)--> pi Pi }i      Q { --x^z--> 1 Q' }
          ___________________________________________________________________
          P | Q { --τ--> pi Pi[z/yi] | Q' }xi=x  ∪  { --xi(yi)--> pi Pi | Q }xi≠x

Res:      P { --xi(yi)--> pi Pi }i
          ___________________________________    qi renormalized
          (x) P { --xi(yi)--> qi (x) Pi }xi≠x
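The following Java sketch (not from the slides; all class and method names are hypothetical) illustrates the scheduler/process separation for a single step: the scheduler nondeterministically picks one group of transitions, and the process then samples a transition within the chosen group according to its probabilities.

import java.util.List;
import java.util.Random;

// Hypothetical sketch of one execution step of a Segala/Lynch-style
// probabilistic automaton: the scheduler resolves nondeterminism (picks a
// group), the process resolves probability (samples within that group).
public class PpaStepSketch {

    record Transition(String label, double probability) {}

    interface Scheduler {
        // Nondeterministic choice: any strategy is allowed here.
        List<Transition> chooseGroup(List<List<Transition>> groups);
    }

    static final Random RNG = new Random();

    static Transition step(List<List<Transition>> groups, Scheduler scheduler) {
        List<Transition> group = scheduler.chooseGroup(groups);
        double draw = RNG.nextDouble();
        double acc = 0.0;
        for (Transition t : group) {
            acc += t.probability();
            if (draw < acc) return t;
        }
        return group.get(group.size() - 1); // guard against rounding error
    }

    public static void main(String[] args) {
        // Two groups, e.g. the choices offered by two parallel components.
        List<List<Transition>> groups = List.of(
            List.of(new Transition("x(y)", 0.5), new Transition("z(w)", 0.5)),
            List.of(new Transition("tau", 1.0)));
        // A demonic scheduler that always picks the first group.
        Scheduler first = gs -> gs.get(0);
        System.out.println("chosen: " + step(groups, first).label());
    }
}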
Implementation of πpa • Compilation into Java, << >> : πpa → Java • Distributed: << P | Q >> = << P >>.start(); << Q >>.start(); • Compositional: << P op Q >> = << P >> jop << Q >> for all op • Channels are one-position buffers with test-and-set (synchronized) methods for input and output • The probabilistic input guarded construct is implemented as a while loop in which the channels to be tried are selected according to their probability. The loop repeats until an input is successful
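As a hedged illustration of the two implementation choices just described (one-position buffer channels with non-blocking synchronized input/output, and a retry loop for the probabilistic input guard), here is a minimal Java sketch; names such as OnePlaceChannel, trySend and probInputChoice are illustrative and not taken from the actual PPDP / GPCE 2002 implementation.

import java.util.List;
import java.util.Random;

public class PpaRuntimeSketch {

    // A channel as a one-position buffer; input and output are
    // test-and-set style synchronized methods that never block.
    // Messages are channel names, represented here simply as Strings.
    static class OnePlaceChannel {
        private String slot; // null means the buffer is empty

        synchronized boolean trySend(String message) {
            if (slot != null) return false; // buffer full, previous output not yet consumed
            slot = message;
            return true;
        }

        synchronized String tryReceive() {
            if (slot == null) return null;  // nothing to consume
            String message = slot;
            slot = null;
            return message;
        }
    }

    static final Random RNG = new Random();

    // Probabilistic input-guarded choice: repeatedly pick one of the guarded
    // channels according to the given probabilities and attempt an input;
    // the loop ends as soon as some input succeeds.
    static int probInputChoice(List<OnePlaceChannel> channels, double[] probs)
            throws InterruptedException {
        while (true) {
            double draw = RNG.nextDouble();
            double acc = 0.0;
            int chosen = channels.size() - 1;
            for (int i = 0; i < probs.length; i++) {
                acc += probs[i];
                if (draw < acc) { chosen = i; break; }
            }
            String received = channels.get(chosen).tryReceive();
            if (received != null) return chosen; // branch `chosen` fires, the others are discarded
            Thread.sleep(1); // back off briefly; the slides only say the loop repeats until success
        }
    }
}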
Encoding π into πpa • [[ ]] : π → πpa • Fully distributed: [[ P | Q ]] = [[ P ]] | [[ Q ]] • Uniform: [[ P σ ]] = [[ P ]] σ • Correct wrt a notion of probabilistic testing semantics: P must O iff [[ P ]] must [[ O ]] with prob 1
Encoding π into πpa • Idea: • Every mixed choice is translated into a parallel composition of processes corresponding to the branches, plus a lock f • The input processes compete to acquire both their own lock and the lock of the partner • The input process which succeeds first establishes the communication. The other alternatives are discarded
[Diagram: choices P, Q, R, S with branches Pi, Qi, Ri, Si, each choice guarded by a lock f]
The problem is reduced to a generalized dining philosophers problem where each fork (lock) can be adjacent to more than two philosophers. Further, we can reduce the generalized DP to the classic case, and then apply the algorithm of Lehmann and Rabin
Dining Philosophers: classic case Each fork is shared by exactly two philosophers
Dining Philosophers, classic case • The requirements on the encoding π → πpa imply symmetry and full distribution • There are many solutions to the DP problem, but in order to be symmetric and fully distributed a solution necessarily has to be randomized. Proved by [Lehmann and Rabin '81] - They also provided a randomized algorithm (for the classic case) • Note that the DP problem can be solved in π in a fully distributed, symmetric way. Hence the need for randomization is not a characteristic of our approach: it would arise in any encoding of π into an asynchronous language.
The algorithm of Lehmann and Rabin
1. think
2. choose first_fork in {left, right}   %commit
3. if taken(first_fork) then goto 3
4. take(first_fork)
5. if taken(second_fork) then goto 2
6. take(second_fork)
7. eat
8. release(second_fork)
9. release(first_fork)
10. goto 1
Dining Philosophers: generalized case • Each fork can be shared by more than two philosophers • Reduction to the classic case: each fork is initially associated with a token. Each philosopher needs to acquire a token in order to participate in the competition. The competing philosophers determine a set of subgraphs in which each subgraph contains at most one cycle
Generalized philosophers • Another problem we had to face: the solution of Lehmann and Rabin works only for fair schedulers, while πpa does not provide any guarantee of fairness • Fortunately, it turns out that fairness is required only in order to avoid a busy-waiting livelock at instruction 3. If we replace busy-waiting with suspension, then the algorithm works for any scheduler • This result was also obtained independently by Fribourg et al., TCS 2002
The algorithm of Lehmann and Rabin, modified so as to avoid the need for fairness. The only change with respect to the original algorithm is instruction 3, where busy waiting is replaced by suspension:
1. think
2. choose first_fork in {left, right}   %commit
3. if taken(first_fork) then wait
4. take(first_fork)
5. if taken(second_fork) then goto 2
6. take(second_fork)
7. eat
8. release(second_fork)
9. release(first_fork)
10. goto 1
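A minimal Java sketch of the modified algorithm (illustrative only, not the authors' encoding into πpa): each fork is a monitor, instruction 3 suspends on wait()/notifyAll() instead of busy-waiting, and, as in Lehmann and Rabin's original formulation, the first fork is put down before re-choosing when the second fork turns out to be taken.

import java.util.Random;

public class LehmannRabinSketch {

    // A fork as a monitor: acquire() suspends (instruction 3 of the modified
    // algorithm) instead of busy-waiting; tryAcquire() is the one-shot attempt
    // used for the second fork (instruction 5).
    static class Fork {
        private boolean taken = false;

        synchronized void acquire() throws InterruptedException {
            while (taken) wait();      // suspension instead of "goto 3"
            taken = true;
        }

        synchronized boolean tryAcquire() {
            if (taken) return false;
            taken = true;
            return true;
        }

        synchronized void release() {
            taken = false;
            notifyAll();               // wake up philosophers suspended in acquire()
        }
    }

    static class Philosopher implements Runnable {
        private final Fork left, right;
        private final Random rng = new Random();

        Philosopher(Fork left, Fork right) { this.left = left; this.right = right; }

        public void run() {
            try {
                while (true) {
                    // think ...
                    Fork first, second;
                    while (true) {
                        boolean pickLeft = rng.nextBoolean();  // %commit: random choice of the first fork
                        first = pickLeft ? left : right;
                        second = pickLeft ? right : left;
                        first.acquire();                       // suspend until the first fork is free
                        if (second.tryAcquire()) break;        // second fork free: go eat
                        first.release();                       // otherwise put it down and re-choose
                    }
                    // eat ...
                    second.release();
                    first.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}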
Conclusion • We have provided an encoding of the π-calculus into its asynchronous fragment, enriched with probabilities • fully distributed • compositional • correct wrt a notion of testing semantics • Advantages: • high-level solutions to distributed algorithms • easier to prove correct (no reasoning about randomization required)
Future work: application of πpa to security protocols • Propis: a small language based on πpa to express and verify security protocols and their properties, like • Secrecy • messages, keys, etc. remain secret • Authentication • guarantees about the parties involved in the protocol • Non-repudiation • evidence of the involvement of the other party • Anonymity • protecting the identity of agents wrt particular events • Formal tools for automatic verification
Features of PROPIS • PRObabilistic PI for Security • πpa enriched with cryptographic primitives similar to those of the spi-calculus [Abadi and Gordon] • The probability features will make it possible to analyse security protocols at a finer (cryptographic) level, i.e. beyond the Dolev-Yao assumptions: in our approach an attacker can guess a key. The point is to prove that the probability that it actually guesses the right key is negligible. • The probability features will also make it possible to express protocols that require randomization.
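As a hedged illustration of the "negligible probability" point (an assumption-laden example, not from the slides): if the key is chosen uniformly among the 2^n keys of length n and the attacker makes at most q independent guesses, then by the union bound

\[
\Pr[\text{the attacker guesses the key}] \;\le\; \frac{q}{2^{n}},
\]

which is negligible in n whenever q is polynomially bounded.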
Example: The dining cryptographers • An example of achieving anonymity
[Diagram: the Master connected to Crypt(0), Crypt(1), Crypt(2); the Master tells each Crypt(i) either pays_i or notpays_i]
The dining cryptographers • The Problem: • Three cryptographers share a meal • The meal is paid either by the organization (master) or by one of them. The master decides who pays • Each of the cryptographers is informed by the master whether or not he is paying • GOAL: • The cryptographers would like to know whether the meal is being paid by the master or by one of them, but without knowing who is paying (if it is one of them).
The dining cryptographers: Solution • Solution: Each cryptographer tosses a coin (probabilistic choice). Each coin is placed between two cryptographers. • The result of each coin toss is visible to the adjacent cryptographers, and only to them. • Each cryptographer examines the two adjacent coins • If he is not paying, he announces "agree" if the results are the same, and "disagree" otherwise. • If he is paying, he says the opposite • Claim 1: if the number of "disagree" announcements is even, then the master is paying. Otherwise, one of them is paying. • Claim 2: In the latter case, if the coins are fair, the non-paying cryptographers will not be able to deduce who exactly is paying
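A small Java simulation of the protocol described above (illustrative; class and variable names are not from the slides). Each announcement is computed as "the two adjacent coins differ" XOR "I am paying", and the parity check of Claim 1 recovers whether the master pays.

import java.util.Random;

public class DiningCryptographersSketch {

    public static void main(String[] args) {
        Random rng = new Random();

        // payer = -1 means the master pays; otherwise payer is 0, 1 or 2.
        int payer = rng.nextInt(4) - 1;

        // One fair coin between each pair of adjacent cryptographers.
        boolean[] coin = { rng.nextBoolean(), rng.nextBoolean(), rng.nextBoolean() };

        // Cryptographer i sees coin[i] and coin[(i + 1) % 3].
        // Announcement: "disagree" iff (the coins differ) XOR (I am paying).
        boolean[] disagree = new boolean[3];
        int disagreeCount = 0;
        for (int i = 0; i < 3; i++) {
            boolean coinsDiffer = coin[i] ^ coin[(i + 1) % 3];
            disagree[i] = coinsDiffer ^ (payer == i);
            if (disagree[i]) disagreeCount++;
        }

        // Claim 1: an even number of "disagree" announcements means the master pays.
        boolean masterPays = (disagreeCount % 2 == 0);
        System.out.println("payer = " + (payer < 0 ? "master" : "Crypt(" + payer + ")"));
        System.out.println("announcements (disagree?) = "
                + disagree[0] + ", " + disagree[1] + ", " + disagree[2]);
        System.out.println("deduced: " + (masterPays ? "master pays" : "one of the cryptographers pays"));
    }
}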
The dining cryptographers: Solution
[Diagram: the Master and Crypt(0), Crypt(1), Crypt(2) connected by pays_i / notpays_i channels; coins Coin(0), Coin(1), Coin(2) placed between adjacent cryptographers, read through look channels (e.g. look20), with announcements made on out channels (e.g. out1)]