Chapter 12 Message Ordering
Causal Ordering • A single message should not be overtaken by a chain of messages • Stronger than FIFO ordering • Example (figure): an execution that is FIFO but not causally ordered
Causal and FIFO ordering • FIFO: • Any two messages from a process Pi to Pj are received in the order they were sent: if s1 and s2 are sends from Pi to Pj with s1 before s2, then the corresponding receives satisfy r1 before r2 • Causal: • If r1, r2 are the receive events on some process and s1, s2 are the corresponding send events, then s1 happened-before s2 implies r1 occurs before r2
Algorithm for causal ordering • Maintain a matrix M[1..N,1..N] at each process • When Pi sends a message to Pj • M[i,j] = M[i,j] + 1 • Piggyback M with the message • When Pi receives a message with matrix W from Pj • If W[j,i] ≤ M[j,i] + 1 and W[k,i] ≤ M[k,i] for all k ≠ j, then deliver the message, else block it • On delivery, M = max(M, W)
Algorithm for causal ordering • The entry M[k,j] at process i records the number of messages sent by process k to process j, as known by process i • If process i receives a message from j with the matrix W, then • If W[k,i] > M[k,i] for some k, then j knows of a message that k has sent to i which i has not yet received. Hence process i blocks the message from j.
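A minimal sketch of the matrix-based delivery test described above (this is not the book's Figure 12.4 code; the class and method names CausalOrder, prepareSend, okToDeliver are illustrative):

```java
// Sketch of the matrix-based causal delivery test described above.
// M[k][j] counts messages sent by process k to process j, as known locally.
public class CausalOrder {
    final int myId, n;
    final int[][] M;

    CausalOrder(int myId, int n) {
        this.myId = myId;
        this.n = n;
        this.M = new int[n][n];
    }

    // Before sending to dest: record the send and piggyback a copy of M.
    int[][] prepareSend(int dest) {
        M[myId][dest]++;
        int[][] copy = new int[n][n];
        for (int k = 0; k < n; k++) copy[k] = M[k].clone();
        return copy;
    }

    // A message from src carrying matrix W is deliverable iff it is the next
    // message expected from src, and src knows of no message addressed to me
    // that I have not yet received from some other process.
    boolean okToDeliver(int src, int[][] W) {
        if (W[src][myId] > M[src][myId] + 1) return false;
        for (int k = 0; k < n; k++)
            if (k != src && W[k][myId] > M[k][myId]) return false;
        return true;
    }

    // On delivery, merge the piggybacked knowledge: M = max(M, W).
    void onDeliver(int[][] W) {
        for (int k = 0; k < n; k++)
            for (int j = 0; j < n; j++)
                M[k][j] = Math.max(M[k][j], W[k][j]);
    }
}
```

A blocked message is simply held back and re-checked with okToDeliver whenever another message is delivered and M grows.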
Applications • Causal chat – Figure 12.5 • Uses Causal Linker – Figure 12.4 • Message pattern: P0 → P1, P1 → P2, P0 → P2 • If P0 sends a message to P1 and P2, and P1 sends a reply to both P0 and P2, then the causal linker guarantees that P1's reply cannot be delivered to P2 before P0's original query
Synchronous Ordering • Equivalent to a computation in which all messages are logically instantaneous • Stronger than Causal and FIFO ordering • Formally, let E be the set of all external events (sends and receives). Then, a computation is synchronous iff there exists a mapping T from E to the set of natural numbers such that T(s) = T(r) for every matching send s and receive r, and T(e) < T(f) whenever e happened-before f, for all e, f in E
Examples • Figures: a non-synchronous computation and a synchronous computation
Synchronous order: Algorithm • The algorithm cannot be totally symmetric (consider two processes that wish to simultaneously send messages to each other) • Use process numbers to order all processes • Use control messages to enforce synchronous ordering
Synchronous order: Algorithm • Messages: • Big: sent by a bigger process to a smaller process • Small: sent by a smaller process to a bigger process • All processes are initially active • An active process can send a big message • After sending, it turns passive until an ack is received • A passive process cannot send or receive any message (except, of course, the ack)
Synchronous order: Algorithm • Small messages: • Request permission from the bigger process before sending • Permission can be granted only by an active process; the bigger process turns passive after granting permission • Once the small message is received, the bigger process can turn active again
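A rough sketch of the big/small control-message protocol just described, assuming reliable FIFO channels; the message kinds and method names are illustrative, and deferral of messages that arrive while a process is passive is omitted:

```java
// Sketch of the big/small protocol: processes are ordered by id ("bigger" = larger id).
enum Kind { BIG, ACK, REQUEST, PERMISSION, SMALL }

public class SynchOrderProcess {
    final int myId;
    boolean active = true;      // a passive process must not initiate sends
    Object pendingData;         // data waiting for PERMISSION from a bigger process

    SynchOrderProcess(int myId) { this.myId = myId; }

    // Sending to a smaller process: send the big message, then wait passively for the ack.
    void sendToSmaller(int dest, Object data) {
        send(dest, Kind.BIG, data);
        active = false;                         // passive until ACK arrives
    }

    // Sending to a bigger process: ask for permission first; the SMALL message
    // itself is sent only after PERMISSION is received.
    void sendToBigger(int dest, Object data) {
        pendingData = data;
        send(dest, Kind.REQUEST, null);
    }

    void onReceive(int src, Kind kind, Object data) {
        switch (kind) {
            case BIG:        deliver(data); send(src, Kind.ACK, null); break;
            case ACK:        active = true; break;           // sender may resume
            case REQUEST:    if (active) {                    // grant and go passive
                                 send(src, Kind.PERMISSION, null);
                                 active = false;
                             }                                // else defer (queue omitted)
                             break;
            case PERMISSION: send(src, Kind.SMALL, pendingData); pendingData = null; break;
            case SMALL:      deliver(data); active = true; break;
        }
    }

    void send(int dest, Kind kind, Object data) { /* channel plumbing omitted */ }
    void deliver(Object data)                   { /* application delivery omitted */ }
}
```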
Total order for multicast messages • If a process Pi multicasts messages x and y to processes Pj, Pk, … then all the processes Pj, Pk, … deliver the messages in the same order (either x then y, or y then x) • Observe that this does not imply causal or even FIFO ordering • Algorithms: • Similar to the mutual exclusion problem • Assume FIFO channels
Centralized and Lamport Algorithms • Assume FIFO channels • Broadcast a message (analogous to requestCS in the mutex algorithms) • Centralized: the coordinator multicasts the message instead of granting the lock • Lamport: • The broadcast is stored in a queue by all processes and a timestamped ack is sent back • A process can deliver (act on) the message with timestamp t at the head of its queue once it has received a message with a timestamp greater than t from every other process (just like entering the CS in Lamport's mutex algorithm)
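A small sketch of the Lamport-style delivery rule above (deliver the head of the timestamped queue once every other process has been heard from with a larger timestamp); FIFO channels are assumed, and the class and field names are illustrative:

```java
import java.util.*;

// Sketch of the Lamport-style total-order delivery rule described above.
public class LamportTotalOrder {
    static class Request implements Comparable<Request> {
        final int timestamp, senderId; final Object msg;
        Request(int ts, int id, Object m) { timestamp = ts; senderId = id; msg = m; }
        public int compareTo(Request o) {          // break timestamp ties by process id
            return timestamp != o.timestamp
                ? Integer.compare(timestamp, o.timestamp)
                : Integer.compare(senderId, o.senderId);
        }
    }

    final int n;                                    // number of processes
    final PriorityQueue<Request> queue = new PriorityQueue<>();
    final int[] lastSeen;                           // largest timestamp seen from each process

    LamportTotalOrder(int n) {
        this.n = n;
        this.lastSeen = new int[n];
        Arrays.fill(lastSeen, -1);
    }

    // Every broadcast request and every ack updates lastSeen for its sender;
    // the local process records its own broadcasts and acks the same way.
    void onMessage(int from, int timestamp, Request reqOrNull) {
        lastSeen[from] = Math.max(lastSeen[from], timestamp);
        if (reqOrNull != null) queue.add(reqOrNull);
    }

    // The head of the queue can be delivered once every other process has been
    // heard from with a larger timestamp, so no earlier request can still arrive.
    Request tryDeliver() {
        Request head = queue.peek();
        if (head == null) return null;
        for (int p = 0; p < n; p++)
            if (p != head.senderId && lastSeen[p] <= head.timestamp) return null;
        return queue.poll();
    }
}
```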
Skeen’s Algorithm • Lamport’s algorithm is wasteful if messages are multicast to only some of the processes (the other processes simply ignore the messages) • Skeen’s algorithm uses a number of messages proportional to the number of recipients of the message
Skeen’s Algorithm • The initiator sends a timestamped message to all the destination processes • On receiving the message, a destination marks it as undeliverable and sends back the value of its logical clock as the proposed timestamp • The initiator takes the max of all proposals as the final timestamp and sends it to all destinations • On receiving the final timestamp of a message, a destination marks it as deliverable • A deliverable message is delivered when it has the smallest timestamp in the message queue
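A condensed sketch of the two-phase timestamp agreement at a receiving process, under the assumptions above; the names PendingMsg, onInitial, and onFinal are illustrative, and tie-breaking of equal timestamps by process id is omitted:

```java
import java.util.*;

// Sketch of Skeen's algorithm at a destination process, as described above.
public class SkeenReceiver {
    static class PendingMsg {
        final Object data;
        int timestamp;
        boolean deliverable = false;
        PendingMsg(Object data, int ts) { this.data = data; this.timestamp = ts; }
    }

    int logicalClock = 0;
    final Map<Long, PendingMsg> pending = new HashMap<>();   // keyed by message id

    // Phase 1: the initial multicast arrives; queue it as undeliverable and
    // return a proposed timestamp to the initiator.
    int onInitial(long msgId, Object data) {
        logicalClock++;
        pending.put(msgId, new PendingMsg(data, logicalClock));
        return logicalClock;                                  // proposed timestamp
    }

    // Phase 2: the initiator sends the max of all proposals as the final timestamp.
    void onFinal(long msgId, int finalTimestamp) {
        PendingMsg m = pending.get(msgId);
        m.timestamp = finalTimestamp;
        m.deliverable = true;
        logicalClock = Math.max(logicalClock, finalTimestamp);
        deliverAllReady();
    }

    // Deliver deliverable messages only while they have the smallest timestamp
    // among all pending messages (deliverable or not).
    void deliverAllReady() {
        while (true) {
            PendingMsg min = null; long minId = -1;
            for (Map.Entry<Long, PendingMsg> e : pending.entrySet())
                if (min == null || e.getValue().timestamp < min.timestamp) {
                    min = e.getValue(); minId = e.getKey();
                }
            if (min == null || !min.deliverable) return;
            pending.remove(minId);
            deliver(min.data);
        }
    }

    void deliver(Object data) { /* application delivery omitted */ }
}
```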
Example (figure): processes 0, 1 and 2 • Process 0 multicasts msg to 1 and 2 • On receiving it, 1 and 2 mark it undeliverable and send propose messages with values 2 and 4 respectively • If 1 receives another message from a lower-priority process (say with id 3), it ignores that message until it has received final from 0 • Process 0 takes the max of the proposed timestamps and sends out final 4 to processes 1 and 2 • Processes 1 and 2 mark msg as deliverable and deliver it when it has the smallest timestamp in the queue
Application • Replicated State Machine • Provide a fault-tolerant service using multiple servers • All servers should process all requests in the same order • Use total ordering of messages
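A toy sketch of this idea, assuming a total-order delivery layer like the ones above; the TotalOrderMulticast interface and the counter example are illustrative, not part of the chapter:

```java
// Toy sketch of a replicated state machine on top of total-order multicast.
// Every replica applies the same requests in the same delivery order, so all
// replicas end up in the same state.
interface TotalOrderMulticast {
    void broadcast(String request);
    String deliverNext();            // blocks until the next totally ordered request
}

public class ReplicatedCounter {
    private long value = 0;          // the replicated state
    private final TotalOrderMulticast channel;

    ReplicatedCounter(TotalOrderMulticast channel) { this.channel = channel; }

    // Clients submit requests through the total-order layer, never to one replica directly.
    void submit(String request) { channel.broadcast(request); }

    // Each replica runs this loop and applies requests in delivery order.
    void run() {
        while (true) {
            String req = channel.deliverNext();
            if (req.equals("inc")) value++;
            else if (req.equals("dec")) value--;
        }
    }
}
```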