Lecture 10 Mutual Exclusion in distributed systems PART 2 SE-9048 Concurrency & Distributed System Huma Ayub (huma.ayub@uettaxila.edu.pk) Assistant Professor Software Engineering Department
Contention-based algorithms • Lamport Algorithm • Ricart-Agrawala Algorithm (both are timestamp-prioritized schemes)
Lamport Algorithm Lamport developed a distributed mutual exclusion algorithm as an illustration of his clock synchronization scheme. The algorithm is fair in the sense that requests for the CS are executed in the order of their timestamps, where time is determined by logical clocks.
Lamport Algorithm When a site processes a request for the CS, it updates its local clock and assigns the request a timestamp. The algorithm executes CS requests in increasing order of timestamps. Every site Si keeps a queue, request_queuei, which contains mutual exclusion requests ordered by their timestamps. This algorithm requires the communication channels to deliver messages in FIFO order.
Lamport’s Mutual Exclusion Requesting the critical section: • When a site Si wants to enter the CS, it broadcasts a REQUEST(tsi, i) message to all other sites and places the request on request_queuei. ((tsi, i) denotes the timestamp of the request.) • When a site Sj receives the REQUEST(tsi, i) message from site Si, it places site Si’s request on request_queuej and returns a timestamped REPLY message to Si.
Lamport’s Mutual Exclusion Executing the critical section: Site Si enters the CS when the following two conditions hold: L1: Si has received a message with timestamp larger than (tsi, i) from all other sites. L2: Si’s request is at the top of request_queuei. Releasing the critical section: • Site Si, upon exiting the CS, removes its request from the top of its request queue and broadcasts a timestamped RELEASE message to all other sites. • When a site Sj receives a RELEASE message from site Si, it removes Si’s request from its request queue.
Lamport’s Mutual Exclusion When a site removes a request from its request queue, its own request may come to the top of the queue, enabling it to enter the CS. Clearly, when a site receives a REQUEST, REPLY or RELEASE message, it updates its clock using the timestamp in the message.
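The request / enter / release steps above can be sketched as a small single-threaded Python simulation. This is an illustrative sketch, not code from the lecture: message passing is modeled as direct method calls (which trivially gives the FIFO delivery the algorithm needs), and all class and method names are invented here.

```python
import heapq

class Site:
    """One site in Lamport's mutual exclusion algorithm (illustrative sketch)."""

    def __init__(self, sid, network):
        self.sid = sid
        self.clock = 0          # Lamport logical clock
        self.queue = []         # request_queue_i: a heap ordered by (timestamp, site id)
        self.replies = set()    # sites that have replied since our last request
        self.network = network  # list of all sites; list index = site id

    def tick(self, ts=0):
        # Logical-clock update on every send and receive
        self.clock = max(self.clock, ts) + 1

    def request_cs(self):
        self.tick()
        req = (self.clock, self.sid)
        heapq.heappush(self.queue, req)      # place own request on the queue
        self.replies = set()
        for s in self.network:
            if s is not self:
                s.on_request(req)            # broadcast REQUEST(ts_i, i)
        return req

    def on_request(self, req):
        self.tick(req[0])
        heapq.heappush(self.queue, req)      # queue the sender's request
        self.tick()
        self.network[req[1]].on_reply(self.clock, self.sid)  # timestamped REPLY

    def on_reply(self, ts, sender):
        self.tick(ts)
        self.replies.add(sender)

    def can_enter(self, req):
        # L1: a (necessarily larger-timestamped) reply from every other site
        # L2: own request at the top of the local queue
        others = {s.sid for s in self.network if s is not self}
        return others <= self.replies and self.queue[0] == req

    def release_cs(self):
        req = heapq.heappop(self.queue)      # remove own request from the top
        self.tick()
        for s in self.network:
            if s is not self:
                s.on_release(self.clock, req)  # broadcast timestamped RELEASE

    def on_release(self, ts, req):
        self.tick(ts)
        self.queue.remove(req)
        heapq.heapify(self.queue)
```

With three sites, if S0 requests first it can enter at once, while a later request from S1 must wait until S0's RELEASE removes the head of S1's queue.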
Lamport’s Mutual Exclusion Correctness Theorem 1: Lamport’s algorithm achieves mutual exclusion. Proof: The proof is by contradiction. Suppose two sites Si and Sj are executing the CS concurrently. For this to happen, conditions L1 and L2 must hold at both sites concurrently. This implies that at some instant in time, say t, both Si and Sj have their own requests at the top of their request_queues and condition L1 holds at both of them. Without loss of generality, assume that Si’s request has a smaller timestamp than the request of Sj. From condition L1 and the FIFO property of the communication channels, it is clear that at instant t the request of Si must be present in request_queuej while Sj is executing its CS. This implies that Sj’s own request is at the top of its own request_queue while a request with a smaller timestamp, Si’s request, is present in request_queuej – a contradiction! Hence, Lamport’s algorithm achieves mutual exclusion.
Lamport’s Mutual Exclusion Theorem 2: Lamport’s algorithm is fair. Proof: A distributed mutual exclusion algorithm is fair if requests for the CS are executed in the order of their timestamps. The proof is by contradiction. Suppose a site Si’s request has a smaller timestamp than the request of another site Sj, yet Sj is able to execute the CS before Si. For Sj to execute the CS, it has to satisfy conditions L1 and L2. This implies that at some instant in time, say t, Sj has its own request at the top of its queue and has also received a message with timestamp larger than the timestamp of its request from all other sites. But the request_queue at a site is ordered by timestamp, and by our assumption Si has the lower timestamp, so Si’s request must be placed ahead of Sj’s request in request_queuej. This is a contradiction. Hence Lamport’s algorithm is a fair mutual exclusion algorithm.
Ricart-Agrawala Algorithm • Unlike Lamport’s algorithm, the Ricart-Agrawala algorithm does not require FIFO communication channels. Requesting Site: • A requesting site Pi sends a REQUEST(ts, i) message to all other sites. Receiving Site: • Upon receiving a REQUEST(ts, i) message, the receiving site Pj immediately sends a timestamped REPLY(ts, j) message if and only if: • Pj is not requesting or executing the critical section, OR • Pj is requesting the critical section but its own request carries a larger timestamp than Pi’s request • Otherwise, Pj defers the REPLY message.
Description of the Algorithm • Executing the critical section: • Site Si enters the CS after it has received a REPLY message from every site it sent a REQUEST message to. • Releasing the critical section: • When site Si exits the CS, it sends all the deferred REPLY messages.
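The request / defer / release rules above can likewise be sketched in Python. As before this is an illustrative single-threaded simulation with invented names: message delivery is a direct method call, and deferral is just a list of site ids whose REPLY is held back until release.

```python
class RASite:
    """One site in the Ricart-Agrawala algorithm (illustrative sketch)."""

    def __init__(self, sid, network):
        self.sid = sid
        self.clock = 0
        self.network = network      # list of all sites; list index = site id
        self.requesting = False
        self.my_request = None      # (timestamp, site id) of our pending request
        self.replies = set()
        self.deferred = []          # sites whose REPLY we are holding back

    def request_cs(self):
        self.clock += 1
        self.requesting = True
        self.my_request = (self.clock, self.sid)
        self.replies = set()
        for s in self.network:
            if s is not self:
                s.on_request(self.my_request)   # broadcast REQUEST(ts, i)

    def on_request(self, req):
        self.clock = max(self.clock, req[0]) + 1
        # Reply at once unless our own outstanding request has priority
        if self.requesting and self.my_request < req:
            self.deferred.append(req[1])
        else:
            self.network[req[1]].on_reply(self.sid)

    def on_reply(self, sender):
        self.replies.add(sender)

    def can_enter(self):
        # Enter only after a REPLY from every other site
        return self.replies == {s.sid for s in self.network if s is not self}

    def release_cs(self):
        self.requesting = False
        for sid in self.deferred:   # send all deferred REPLY messages
            self.network[sid].on_reply(self.sid)
        self.deferred = []
```

Note that there is no RELEASE broadcast here: the deferred REPLY itself plays that role, which is where the message saving over Lamport's algorithm comes from.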
Performance • For each CS execution, the Ricart-Agrawala algorithm requires (N − 1) REQUEST messages and (N − 1) REPLY messages. • Thus, it requires 2(N − 1) messages per CS execution, compared with 3(N − 1) for Lamport’s algorithm. The synchronization delay of the algorithm is T, the average message delay.
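The message counts are simple to spell out; the helper names below are invented for this sketch (the 3(N − 1) figure for Lamport's algorithm counts its REQUEST, REPLY, and RELEASE rounds):

```python
def ra_messages(n):
    """Ricart-Agrawala: (N-1) REQUESTs + (N-1) REPLYs per CS execution."""
    return 2 * (n - 1)

def lamport_messages(n):
    """Lamport: (N-1) REQUESTs + (N-1) REPLYs + (N-1) RELEASEs."""
    return 3 * (n - 1)
```

For example, with N = 5 sites Ricart-Agrawala needs 8 messages per CS execution where Lamport's algorithm needs 12.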
Ricart and Agrawala algorithm Contd.. • Mutual exclusion: Site Pi enters its critical section only after receiving all reply messages. • Progress: Upon exiting the critical section, Pi sends all deferred reply messages.
Ricart and Agrawala algorithm Contd.. Disadvantage of the Ricart and Agrawala Algorithm: • Failure of a node may result in starvation, since a crashed site never sends its REPLY. • Solution: This problem can be solved by detecting failure of nodes after some timeout.[1]
Lamport’s Mutual Exclusion Difference from Ricart-Agrawala: • Every site responds … always – there is no holding back of replies • A site decides to enter based on whether its own request is the earliest in its local queue
Dealing With Failure – Additional Messages • When a request comes in, the receiver always sends a reply (Yes or No) • If a reply doesn’t come in a reasonable amount of time, repeat the request. • Continue until a reply arrives or until the sender decides the processor is dead. • Lots of message traffic, n bottlenecks instead of one, must keep track of all group members, etc.
Dealing with Failure - A Voting Scheme Modification • But it really isn’t necessary to get 100% permission • When a processor receives a Request it sends a reply (vote) only if it hasn't "voted" for some other process. In other words, you can only send out one OK Reply at a time. • There should be at most N votes in the system at any one time.
Voting Solution to Mutual Exclusion [Figure: P1 broadcasts a request with timestamp (3, 1) and P2 broadcasts one with timestamp (2, 2); both requests reach P3.] Process 3 receives two requests; it “votes” for P2 because it received P2’s request first.
A Further Problem • Voting improves fault tolerance but what about deadlock? • Suppose a system has 10 processes • Also assume that three Requests are generated at about the same time and that two get 3 votes each, one gets 4 votes, so no process has a majority.
Modifying the Voting Solution to Prevent Deadlock • A processor can change its vote if it receives a Request with an earlier timestamp than the request it originally voted for. Additional messages: Retract (withdraw a vote) and Relinquish (surrender a vote). • If Pi receives a Retract message from Pk and has not yet entered its critical section, it sends Pk a Relinquish message and no longer counts that vote.
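The voter's side of this scheme can be sketched as follows. This is an illustrative simplification with invented names: requests are (timestamp, requester) pairs, and the Retract/Relinquish handshake is collapsed into the return value (a real implementation must wait for the Relinquish before recasting the vote).

```python
class Voter:
    """A voter in the majority-voting scheme with vote retraction (sketch)."""

    def __init__(self):
        self.voted_for = None       # (timestamp, requester) we currently back

    def on_request(self, req):
        """Return "yes" to cast our vote, ("retract", old) to switch to an
        earlier-timestamped request, or "no" if our current vote stands."""
        if self.voted_for is None:
            self.voted_for = req            # first request: cast our one vote
            return "yes"
        if req < self.voted_for:            # earlier timestamp wins
            old = self.voted_for
            self.voted_for = req
            return ("retract", old)
        return "no"                         # keep backing the earlier request

    def on_release(self):
        self.voted_for = None               # vote returns when the CS is released
```

Because each voter backs at most one request at a time, at most N votes are outstanding, and retraction ensures the earliest request can still gather a majority.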
Token-based Mutual Exclusion[1] • Although contention-based distributed mutual exclusion algorithms can have attractive properties, their messaging overhead is high. An alternative to contention-based algorithms is to use an explicit control token, possession of which grants access to the critical section. • Token-based algorithms use three typical topologies: ring, tree, and broadcast structures.
Token Ring algorithm • For this algorithm, we assume that there is a group of processes with no inherent ordering, but that some ordering can be imposed on the group. For example, we can identify each process by its machine address and process ID to obtain an ordering. Using this imposed ordering, a logical ring is constructed in software. Each process is assigned a position in the ring, and each process must know which process is next to it in the ring.
Algorithm Description • At initialization, process 0 holds the token. • The token is passed around the ring. • If a process needs to access a shared resource, it waits for the token to arrive. • It executes the critical section and releases the resource. • It then passes the token to the next process. • If a process receives the token and does not need the critical section, it hands the token straight to the next process.
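One circulation of the token can be sketched in a few lines of Python; the function and parameter names are invented for this sketch, and `wants_cs` stands in for "the process currently needs its critical section":

```python
def token_ring(n, wants_cs, start=0):
    """Simulate one full circulation of the token around a ring of n
    processes; returns the order in which processes entered the CS."""
    entered = []
    holder = start                       # at initialization, `start` holds the token
    for _ in range(n):
        if holder in wants_cs:
            entered.append(holder)       # enter the CS, then release it
        holder = (holder + 1) % n        # pass the token to the next process
    return entered
```

The return value makes the starvation-freedom argument visible: every waiting process enters exactly once per circulation, in ring order.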
Analysis • Mutual exclusion is guaranteed. • Starvation is impossible, since no process will get the token again until every other process has had a chance. • Deadlock is not possible, because the only resource being contended for is the token. • The problem: a lost token.
Lost Tokens • What does it mean if a processor waits a long time for the token? • Another processor may be holding it • It’s lost • No way to tell the difference; in the first case continue to wait; in the second case, regenerate the token.
Crashed Processors • This is usually easier to detect – you can require an acknowledgement when the token is passed; if the acknowledgement is not received within a bounded time: • Reconfigure the ring without the crashed processor • Pass the token to the new “next” processor
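The reconfigure-and-pass step can be sketched as below. The names are invented for this sketch, and the `alive` predicate is a hypothetical stand-in for "an acknowledgement arrived within the bounded time":

```python
def pass_token(ring, holder, alive):
    """Pass the token from `holder` to the next live process in the ring,
    skipping any process that fails the acknowledgement check."""
    n = len(ring)
    i = (ring.index(holder) + 1) % n
    for _ in range(n):
        if alive(ring[i]):
            return ring[i]               # acknowledged: this is the new holder
        i = (i + 1) % n                  # no ack in time: skip to the process after it
    raise RuntimeError("no live process to pass the token to")
```

Skipping a crashed process this way reconfigures the ring implicitly; a real system would also remove the crashed process from every site's view of the ring.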
Next week • Tree-based algorithms • Broadcast-based algorithms • Leader election