CSC 8320 4.5 Distributed Mutual Exclusion Presenter: Weiling Li Instructor: Dr. Zhang
Overview • Part I: Basic Knowledge [1,2,3] * Contention-based Mutual Exclusion * Token-based Mutual Exclusion • Part II: Current Research * Hybrid Distributed Mutual Exclusion [4] * Use Multicast in Distributed Mutual Exclusion Algorithms for Grid File Systems [5] • Part III: Future Research * The Weak Mutual Exclusion Problem [6] • References • Q&A
Preview: three major communication scenarios • One-way communication * Usually does not need synchronization. • Client/Server communication * Multiple clients make service requests to a shared server. * Coordination: there is no explicit interaction among client processes. * Interprocess communication: processes often need to exchange information to reach some conclusion about the system or some agreement among the cooperating processes. • Peer communication * There is no shared object or centralized controller.
I. Distributed Mutual Exclusion [1] • Mutual exclusion ensures that concurrent processes serialize their accesses to shared resources or data. • A distributed mutual exclusion algorithm achieves mutual exclusion using only peer communication. The problem can be solved with either a contention-based or a token-based approach.
Contd.. • Contention-based Mutual Exclusion • Timestamp Prioritized Schemes • Voting Schemes • Token-based Mutual Exclusion • Ring Structure • Tree Structure • Broadcast Structure
Contention-based Mutual Exclusion [1] • A contention-based approach means that each process freely and equally competes for the right to use the shared resource according to a request resolution criterion. • The fairest criteria grant the request either to the process that asked first (Timestamp Prioritized schemes) or to the process that gathered the most votes from the other processes (Voting schemes).
Timestamp Prioritized Schemes • 1) Lamport's algorithm [1] • A process P[i] sends a timestamped REQUEST (carrying its ID and logical clock value) to all other processes and places the request in its own timestamp-ordered request queue. • A process P[j] that receives the request places it in its own request queue and sends back a timestamped REPLY. • P[i] enters its critical section once its request is at the head of its queue and it has received a reply (or any later-timestamped message) from every other process. • When P[i] exits its critical section, it sends RELEASE messages to all other processes, which then remove its request from their queues. The total message count is 3*(N-1) per entry, where N is the number of cooperating processes.
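For concreteness, a minimal single-process sketch of this scheme follows. The `send(dest, msg)` transport, the peer list, and the message tuples are hypothetical placeholders rather than part of the cited description; blocking and failure handling are omitted.

```python
# Sketch of Lamport's mutual exclusion: REQUEST/REPLY/RELEASE plus a
# timestamp-ordered queue. send(dest, msg) is a hypothetical transport.
import heapq

class LamportMutex:
    def __init__(self, my_id, peers, send):
        self.id, self.peers, self.send = my_id, peers, send
        self.clock = 0                 # Lamport logical clock
        self.queue = []                # heap of (timestamp, pid) pending requests
        self.replied = set()           # peers that answered our current request

    def request_cs(self):
        self.clock += 1
        self.replied = set()
        heapq.heappush(self.queue, (self.clock, self.id))
        for p in self.peers:           # REQUEST to all others: N-1 messages
            self.send(p, ("REQUEST", self.clock, self.id))

    def on_message(self, kind, ts, sender):
        self.clock = max(self.clock, ts) + 1
        if kind == "REQUEST":
            heapq.heappush(self.queue, (ts, sender))
            self.send(sender, ("REPLY", self.clock, self.id))   # N-1 replies
        elif kind == "REPLY":
            self.replied.add(sender)
        elif kind == "RELEASE":        # drop the sender's request from the queue
            self.queue = [e for e in self.queue if e[1] != sender]
            heapq.heapify(self.queue)

    def may_enter(self):
        # Enter only when our request heads the queue and everyone has replied.
        return (self.replied == set(self.peers)
                and self.queue and self.queue[0][1] == self.id)

    def release_cs(self):
        self.queue = [e for e in self.queue if e[1] != self.id]
        heapq.heapify(self.queue)
        self.clock += 1
        for p in self.peers:           # RELEASE to all others: N-1 messages
            self.send(p, ("RELEASE", self.clock, self.id))
```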
Contd.. • 2) Ricart and Agrawala's algorithm [2] Requesting site: • A requesting site Pi sends a request(ts,i) message to all sites. Receiving site: • Upon receiving a request(ts,i) message, site Pj immediately sends a timestamped reply(ts,j) message if and only if: • Pj is neither requesting nor executing the critical section, OR • Pj is requesting the critical section but its own request carries a higher timestamp than Pi's. • Otherwise, Pj defers the reply message. Improvement: the number of network messages is 2*(N-1) per entry. Disadvantage: a failed node can cause a requesting process to starve forever; this can be mitigated by detecting node failures with a timeout.
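A sketch of this reply-or-defer rule, again assuming a hypothetical `send()` transport; only the requesting and receiving logic of one site is shown, and the counting of received replies is omitted for brevity.

```python
# Sketch of a Ricart-Agrawala site: reply immediately unless our own pending
# request has priority (earlier timestamp, ties broken by id), else defer.
class RicartAgrawalaSite:
    def __init__(self, my_id, send):
        self.id, self.send = my_id, send
        self.clock = 0
        self.requesting = False        # have we broadcast our own request?
        self.in_cs = False
        self.my_ts = None              # timestamp of our pending request
        self.deferred = []             # requestors we will answer on exit

    def request_cs(self, peers):
        self.clock += 1
        self.requesting, self.my_ts = True, self.clock
        for p in peers:                # 1 request per peer: N-1 messages
            self.send(p, ("REQUEST", self.my_ts, self.id))

    def on_request(self, ts, sender):
        self.clock = max(self.clock, ts) + 1
        # Defer if we are in the CS, or we are requesting with higher priority.
        busy = self.in_cs or (self.requesting
                              and (self.my_ts, self.id) < (ts, sender))
        if busy:
            self.deferred.append(sender)
        else:
            self.send(sender, ("REPLY", self.clock, self.id))

    def exit_cs(self):
        self.in_cs = self.requesting = False
        for sender in self.deferred:   # answer every deferred request
            self.send(sender, ("REPLY", self.clock, self.id))
        self.deferred.clear()
```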
Voting schemes [1] Requestor: • Send a REQUEST to all other processes. • Enter the critical section once REPLYs (votes) from a majority have been received. • Broadcast RELEASE upon exit from the critical section. Other processes: • REPLY to a request if no REPLY is currently outstanding; otherwise, hold the request in a queue. • Once a REPLY has been sent, do not send another REPLY until the RELEASE is received. O(N) messages per request; deadlock is possible when competing requests split the votes.
Voting Schemes (Improved) • One possible solution: a voter retrieves its vote by sending an INQUIRE message; if the requestor holding the vote is not yet executing in the critical section, it must return the vote with a RELINQUISH message so the voter can regrant it, as sketched below.
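A compact sketch of a voter's behaviour under this scheme, including the INQUIRE/RELINQUISH recall from the improvement. Queue ordering here is plain FIFO (a real scheme would order by priority or timestamp), and `send()` is a hypothetical transport.

```python
# Sketch of a voter: grant one vote at a time, queue other requests, and try
# to recall the vote with INQUIRE when another request arrives.
from collections import deque

class Voter:
    def __init__(self, my_id, send):
        self.id, self.send = my_id, send
        self.voted_for = None          # requestor currently holding our vote
        self.pending = deque()         # queued requestors (FIFO simplification)

    def on_request(self, requestor):
        if self.voted_for is None:
            self.voted_for = requestor
            self.send(requestor, "REPLY")
        else:
            self.pending.append(requestor)
            # Improved scheme: ask the current holder to give the vote back.
            self.send(self.voted_for, "INQUIRE")

    def on_relinquish(self, holder):
        # The holder was not yet in its critical section and returned our vote;
        # requeue it and regrant the vote to the earliest pending requestor.
        self.pending.append(holder)
        self.voted_for = None
        self._grant_next()

    def on_release(self, holder):
        self.voted_for = None
        self._grant_next()

    def _grant_next(self):
        if self.pending:
            self.voted_for = self.pending.popleft()
            self.send(self.voted_for, "REPLY")
```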
Token-based Mutual Exclusion [1] • Although contention-based distributed mutual exclusion algorithms can have attractive properties, their messaging overhead is high. An alternative is to use an explicit control token, possession of which grants access to the critical section. • Token-based algorithms use three typical topologies: ring, tree, and broadcast structures.
Ring Structure [1] • Consider a bus network (e.g., Ethernet) with no inherent ordering of the processes. • In software, a logical ring is constructed in which each process is assigned a position in the ring. The positions may be allocated in numerical order of network addresses or by some other means; the ordering itself does not matter. All that matters is that each process knows which process is next in line after itself.
Contd.. • Advantages: - Simple, deadlock-free, and fair. - The token can also carry state information. • Disadvantages: - The token circulates even in the absence of any request (unnecessary traffic). - Long path (O(N)): the wait for the token may be high.
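A toy illustration of the circulating token: each node knows only its successor, and holding the token grants the critical section. The names and structure are illustrative, not taken from the reference.

```python
# Sketch of token circulation on a logical ring. send(dest, msg) is a
# hypothetical transport; critical-section work is left as a stub.
class RingNode:
    def __init__(self, my_id, next_id, send):
        self.id, self.next_id, self.send = my_id, next_id, send
        self.has_token = False
        self.wants_cs = False

    def on_token(self):
        self.has_token = True
        if self.wants_cs:
            self.enter_cs()            # use the critical section ...
            self.wants_cs = False
        self.pass_token()              # ... then keep the token moving

    def pass_token(self):
        self.has_token = False
        self.send(self.next_id, "TOKEN")

    def enter_cs(self):
        pass                           # critical-section work goes here
```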
Tree Structure [1] • A problem with the ring structure is that the idle token is passed along the ring even when no process is competing for it. • Raymond's algorithm: each process explicitly requests the token, and the token is moved only when a process knows of a pending request.
Tree Structure (Raymond's Algorithm) [3] • The processes are organized in a logical tree, each node pointing to its parent. Each node also maintains a FIFO queue of token-requesting neighbors and a variable Tokenholder, initialized to false everywhere except at the first token holder (the token generator). • Entry condition: if not Tokenholder: if the request queue is empty, request the token from the parent; put itself in the request queue; block until Tokenholder becomes true.
Contd.. • Exit section: if the request queue is not empty: parent = dequeue(request queue); send the token to parent; set Tokenholder to false; if the request queue is still not empty, also request the token from the new parent. • Upon receipt of a request: if Tokenholder: if in the critical section, put the requestor in the queue; otherwise: parent = requestor; Tokenholder = false; send the token to parent. If not Tokenholder: if the queue is empty, send a request to the parent; then put the requestor in the queue.
Contd.. • Upon receipt of the token: parent = dequeue(request queue); if self is the parent: Tokenholder = true; else: send the token to the parent; if the queue is not empty, also request the token from the parent. A compact code sketch of these rules follows.
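The sketch below condenses the slide pseudocode into one class; `send()` is a hypothetical transport, and blocking of the requesting caller is left out.

```python
# Sketch of Raymond's tree-based algorithm: each node keeps a pointer towards
# the token (parent), a FIFO of requesting neighbours, and a Tokenholder flag.
from collections import deque

class RaymondNode:
    def __init__(self, my_id, parent, send, holder=False):
        self.id, self.parent, self.send = my_id, parent, send
        self.token_holder = holder     # true only at the current token holder
        self.in_cs = False
        self.queue = deque()           # FIFO of requesting neighbours (or self)

    def request_cs(self):              # entry section
        if self.token_holder:
            self.in_cs = True
            return
        if not self.queue:             # first pending request at this node
            self.send(self.parent, ("REQUEST", self.id))
        self.queue.append(self.id)
        # the caller then blocks until on_token() makes this node the holder

    def exit_cs(self):                 # exit section
        self.in_cs = False
        if self.queue:
            self._forward_token()

    def on_request(self, requestor):   # upon receipt of a REQUEST
        if self.token_holder:
            if self.in_cs:
                self.queue.append(requestor)
            else:
                self.parent = requestor
                self.token_holder = False
                self.send(self.parent, ("TOKEN",))
        else:
            if not self.queue:
                self.send(self.parent, ("REQUEST", self.id))
            self.queue.append(requestor)

    def on_token(self):                # upon receipt of the token
        self._forward_token()

    def _forward_token(self):
        head = self.queue.popleft()
        if head == self.id:            # our own request is first: enter the CS
            self.token_holder, self.in_cs = True, True
        else:
            self.parent = head
            self.token_holder = False
            self.send(self.parent, ("TOKEN",))
            if self.queue:             # more pending requests behind it
                self.send(self.parent, ("REQUEST", self.id))
```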
Broadcast Structure [1] • Imposing a logical topology like a ring or tree is efficient but also complex, because the topology has to be implemented and maintained. • However, a token can carry global information that is useful for process coordination. The control token for mutual exclusion is centralized and can be used to serialize requests to critical sections (Suzuki/Kasami's broadcast algorithm).
Broadcast structure (Suzuki/Kasami's algorithm) Data structure: the token contains * Token vector T(.) – the number of completed critical-section entries for every process. * Request queue Q(.) – a queue of requesting processes. Every process i maintains * seq_no – how many times i has requested the critical section. * Si(.) – the highest sequence number that i has heard of from every process.
Contd.. Entry section (process i): • Broadcast a REQUEST message stamped with seq_no. • Enter the critical section after receiving the token. Exit section (process i): • Update the token vector T by setting T(i) to Si(i). • For every process k not in the request queue Q that has a pending request (Si(k) > T(k)), append k to Q. • If Q is non-empty, remove the first entry from Q and send the token to that process.
Contd.. Processing a REQUEST (process j): • Set Sj(k) to max(Sj(k), seq_no) after receiving a REQUEST from process k. • If j holds an idle token, send it to k. Disadvantage: requires a broadcast per request, so the message overhead is high.
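Putting the entry, exit, and request-processing rules together, the per-site bookkeeping might look as follows. `send()` and `broadcast()` are hypothetical transport helpers, and the token is represented as a plain dictionary.

```python
# Sketch of Suzuki/Kasami bookkeeping: S[] holds the highest sequence number
# heard from every process; the token carries T[] (completed entries) and a
# queue Q of waiting processes.
from collections import deque

class SuzukiKasamiSite:
    def __init__(self, my_id, n, send, broadcast, has_token=False):
        self.id, self.n = my_id, n
        self.send, self.broadcast = send, broadcast
        self.S = [0] * n               # highest seq_no heard from each process
        self.seq_no = 0                # how many times we requested the CS
        self.token = ({"T": [0] * n, "Q": deque()} if has_token else None)
        self.in_cs = False

    def request_cs(self):              # entry section
        if self.token is None:
            self.seq_no += 1
            self.S[self.id] = self.seq_no
            self.broadcast(("REQUEST", self.id, self.seq_no))
        # enter the critical section once the token arrives (on_token)

    def on_request(self, k, seq_no):   # processing a REQUEST at this site
        self.S[k] = max(self.S[k], seq_no)
        if self.token is not None and not self.in_cs:
            if self.S[k] > self.token["T"][k]:   # k has an unserved request
                self._send_token(k)

    def on_token(self, token):
        self.token = token
        self.in_cs = True              # enter the critical section

    def exit_cs(self):                 # exit section
        self.in_cs = False
        T, Q = self.token["T"], self.token["Q"]
        T[self.id] = self.S[self.id]   # record our completed request
        for k in range(self.n):        # append every process with a pending request
            if k not in Q and self.S[k] > T[k]:
                Q.append(k)
        if Q:
            self._send_token(Q.popleft())

    def _send_token(self, k):
        token, self.token = self.token, None
        self.send(k, ("TOKEN", token))
```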
II. Current Research: • 1) Hybrid Distributed Mutual Exclusion [4] • Providing deadlock-free distributed mutual exclusion algorithms is often difficult and usually involves passing many messages. • A hybrid distributed mutual exclusion algorithm is proposed to address this.
Contd.. (Hybrid Distributed Mutual Exclusion) • The algorithm assumes that the logical structure of the interconnecting network is a wraparound two-dimensional array. The algorithm is proved to be deadlock-free, and the number of messages passed per request is shown to be at least 1 and at most 4√N, which is better than many other algorithms.
2) Use Multicast in Distributed Mutual Exclusion Algorithms for Grid File Systems [5] • In a file system, critical sections correspond to read or write operations on useful data (i.e., files) as well as on the metadata of the system. In a grid environment, processes correspond to grid nodes, and their synchronization is ensured by sending messages over the network. • The paper proposes a method based on multicast, which in some cases decreases the access time to critical sections. The solution was implemented within a token- and tree-based distributed mutual exclusion algorithm and integrated into grid middleware.
Contd.. (Multicast) • Multicast is used to ask a site's neighboring nodes for the identity of the current root node, in the hope of shortening the path needed to reach the root.
III. Future Research: Weak Mutual Exclusion [6] • The Weak Mutual Exclusion (WME) problem: analogously to classical Distributed Mutual Exclusion (DME), WME serializes accesses to a shared resource. • Unlike DME, however, the WME abstraction regulates access to a replicated shared resource whose copies are maintained locally by every participating process. Also, in WME, processes suspected to have crashed may be ejected from the critical section. • It is proved that, unlike DME, WME is solvable in a partially synchronous model, i.e., a system where the bounds on communication latency and on relative process speeds are not known in advance, or are known but hold only after an unknown time.
Contd.. • Benefits: • - Performance: the ability of the WME abstraction to serialize the sequences of operations issued by each user within the critical section can also benefit the performance of typical building blocks used by replication schemes. • - Simplicity: the WME abstraction exposes a simple lock-like interface that is familiar even to developers with no experience in distributed programming.
References • [1] Randy Chow and Theodore Johnson, Distributed Operating Systems & Algorithms. Addison-Wesley, 1997. • [2] http://en.wikipedia.org/wiki/Ricart-Agrawala_algorithm • [3] Kerry Raymond, "A tree-based algorithm for distributed mutual exclusion," ACM Transactions on Computer Systems (TOCS), vol. 7, no. 1, February 1989. • [4] S. Paydar, M. Naghibzadeh, and A. Yavari, "A hybrid distributed mutual exclusion algorithm," International Conference on Emerging Technologies (ICET '06), 13-14 Nov. 2006, pp. 263-270. • [5] A. Ortiz, F. Thiebolt, J. Jorda, and A. M'zoughi, "How to use multicast in distributed mutual exclusion algorithms for grid file systems," International Conference on High Performance Computing & Simulation (HPCS '09), 21-24 June 2009, pp. 122-130. • [6] P. Romano, L. Rodrigues, and N. Carvalho, "The Weak Mutual Exclusion problem," IEEE International Symposium on Parallel & Distributed Processing (IPDPS 2009), 23-29 May 2009, pp. 1-12.
Q&A • Thank you!