An experimental comparison of lock-based distributed mutual exclusion algorithms Victor Lee, Kent State University, Department of Computer Science
Distributed Mutual Exclusion Algorithms • Ricart-Agrawala • REQUEST, REPLY • Lock needed from every process • Maekawa • REQUEST, GRANT, FAIL, INQUIRE, YIELD, RELEASE • Lock needed from a quorum, where • A process is a member of its own quorum • Each quorum intersects with every other quorum
Performance Measures • Maekawa’s quorum and arbitration scheme • Intent is to reduce the number of messages needed per CS entry • Messages per CS: • Ricart-Agrawala: 2(N − 1), i.e., one REQUEST and one REPLY exchanged with each of the other N − 1 processes • Maekawa: K√N, where • Quorum size ≈ √N • K(min) = 3 (REQUEST + GRANT + RELEASE) • As load increases, K increases because FAIL, INQUIRE, and YIELD messages come into use
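As a rough check of these formulas with the largest configuration used below (N = 40, billiard quorum size Q = 9): Ricart-Agrawala needs 2(40 − 1) = 78 messages per CS entry, while Maekawa at the minimum K of 3 needs about 3 × 9 = 27.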
Experiment Proposal • Measure how the average number of messages per CS entry varies with: • The number of processes N, with the load held constant • The load L, with the number of processes held constant • Compare Ricart-Agrawala to Maekawa
Quorums and Limitations on N • Need an algorithm to construct quorums: • Billiard quorum algorithm: N = (Q² − 1)/2, where Q must be odd: • Q = 3, N = 4 • Q = 5, N = 12 • Q = 7, N = 24 • Q = 9, N = 40 • Experiments use these four values of N. Reference: Agrawal et al., Billiard quorums on the grid, Information Processing Letters 64 (1997) 9–16.
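A small sketch showing how the admissible process counts follow from the billiard-quorum size formula (the class and variable names are illustrative, not from the project code):

    // Enumerate the process counts N admitted by the billiard-quorum
    // construction: N = (Q^2 - 1) / 2 for odd quorum sizes Q.
    public class BilliardSizes {
        public static void main(String[] args) {
            for (int q = 3; q <= 9; q += 2) {   // the four odd quorum sizes used in the experiments
                int n = (q * q - 1) / 2;        // N = (Q^2 - 1) / 2
                System.out.println("Q = " + q + "  ->  N = " + n);
            }
        }
    }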
Defining and Controlling Load • Load is the number of processes that are contending for the CS. • Ranges from 1 to N • Experiments use L = 1, ½N, and N • The simulator initializes a run by selecting L random processes to contend for the CS. • When a process exits the CS, the actual load drops below L, so • The simulator selects another random process to contend for the CS, restoring the load to L
Experimental Results: Msg/CS vs. N Ricart-Agrawala: Expected results; all trials were identical Test Conditions: • Low Load (L = 1) • CS = 100 entries/exits • Contending processes are chosen randomly
Experimental Results: Msg/CS vs. L Ricart-Agrawala: Expected results, no variation Test Conditions: • N = 40 • CS = 100 entries/exits • Contending processes are chosen randomly
Discussion of Results • Ricart-Agrawala results were exactly as expected • Maekawa trends were as expected, but • K was slightly higher than expected for low load (expected 3.0) • K did not increase as much as expected with load (expected ~6)
Explanation of Behavior of K • Test logs show that FAIL, INQUIRE, and YIELD in fact occur even when only one process contends. • Example: • A process P1 that exits the CS sends RELEASE to all its quorum members. • Another process P2 may now send out REQUESTs • A third process P3 might receive P2’s REQUEST before receiving P1’s RELEASE. • K does not reach 6 because even at maximum load, not every REQUEST is followed by FAIL, INQUIRE, YIELD, GRANT, and RELEASE
Future Investigations • Investigate the frequency distribution of FAIL, INQUIRE, and YIELD in Maekawa’s algorithm. • Example: In a 100-CS run with 1098 messages, what percentage of the messages were of each type? • CS latency increases as load increases, and Maekawa’s algorithm permits out-of-timestamp order CS entry. Can we make predictions about the expected latency?
Implementation Notes: Object-Oriented Components
Common base: Process (abstract), TMessage (sender, recvr, timestamp, body), Channel (linked list of Messages)
ME layer: MEProcess (abstract send(), receive()), MEChannel (queuing and delivery), MESimulator (basic execution cycle)
Ricart-Agrawala: RAProcess (RA algorithm), RASimulator (RA topology)
Maekawa: MaekProcess (Maekawa algorithm), MaekSimulator (Maekawa topology)
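A minimal Java skeleton of how these layers could fit together (the class names follow the slide; the fields and empty method bodies are illustrative assumptions, not the original code):

    // Common base: generic message-passing machinery
    abstract class Process { int id; }
    class TMessage { int sender, recvr, timestamp; String body; }
    class Channel { java.util.LinkedList<TMessage> messages = new java.util.LinkedList<>(); }

    // Mutual-exclusion (ME) layer: abstract send/receive plus the driving simulator
    abstract class MEProcess extends Process {
        abstract void send(TMessage m);
        abstract void receive(TMessage m);
    }
    class MEChannel extends Channel { /* queuing and delivery */ }
    class MESimulator { /* basic execution cycle */ }

    // Algorithm-specific subclasses
    class RAProcess extends MEProcess {          // Ricart-Agrawala algorithm
        void send(TMessage m) { }
        void receive(TMessage m) { }
    }
    class RASimulator extends MESimulator { }    // RA topology
    class MaekProcess extends MEProcess {        // Maekawa algorithm
        void send(TMessage m) { }
        void receive(TMessage m) { }
    }
    class MaekSimulator extends MESimulator { }  // Maekawa topology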
Implementation Notes: The Channel • The initial implementation of the Channel was a linked list (Project 1 observed global in-order delivery) • Ricart-Agrawala supports out-of-order delivery • In Project 2, send() inserts a message at a random location in the list. Receive always removes the first item in the list. • Maekawa requires in-order delivery • I didn’t notice this during Project 2, only in Project 3… • Problem: I use one global channel to represent the many local process-to-process channels. Ordering must be observed locally but not necessarily globally • Don’t want to create N² or N·Q separate channel objects!
Implementation Notes: The Channel • Solution: Replace the single list with N senderLists. • Messages in one list are from a single sender but may be addressed to any other process. • For send(), if Maekawa: • search from the end of the list for the first occurrence of a message with the same receiver as the new message. • Insert the new message anywhere between the end and this point. • If Ricart-Agrawala, messages can still be inserted anywhere in the list • For receive(): • Remove from a random senderList. If that list is empty, try again.
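A sketch of this per-sender scheme (a hedged illustration; only the insertion and removal rules come from the slide, while the class, field, and method names are assumptions):

    import java.util.*;

    // TMessage as in the component sketch above: fields sender, recvr, timestamp, body.
    class SenderListChannel {
        private final List<List<TMessage>> senderLists;   // one list per sending process
        private final Random rng = new Random();

        SenderListChannel(int n) {
            senderLists = new ArrayList<>();
            for (int i = 0; i < n; i++) senderLists.add(new ArrayList<>());
        }

        // Maekawa: preserve per-receiver order within each sender's list.
        void sendMaekawa(TMessage m) {
            List<TMessage> list = senderLists.get(m.sender);
            int low = 0;
            for (int i = list.size() - 1; i >= 0; i--) {           // last message to the same receiver
                if (list.get(i).recvr == m.recvr) { low = i + 1; break; }
            }
            list.add(low + rng.nextInt(list.size() - low + 1), m); // insert anywhere after that point
        }

        // Ricart-Agrawala: order does not matter, insert at any position.
        void sendRA(TMessage m) {
            List<TMessage> list = senderLists.get(m.sender);
            list.add(rng.nextInt(list.size() + 1), m);
        }

        // Remove the head of a random non-empty sender list.
        // (The simulator checks that the channel is non-empty before calling this.)
        TMessage receive() {
            while (true) {
                List<TMessage> list = senderLists.get(rng.nextInt(senderLists.size()));
                if (!list.isEmpty()) return list.remove(0);
            }
        }
    }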
Implementation Notes: Simulator Cycle & Load Management
initializeCSreq(N, L): execute CSReq() L times
CSReq(): move one randomly chosen process P from readyList to contendList; tell P to RequestCS
EnteringCS(Pid): inCS = true; CSid = Pid
CSExit(): inCS = false; numCSexits++; move process CSid from contendList to readyList; tell process CSid to exit
Implementation Notes: Simulator Cycle & Load Management

    contendList = {}          // procs in contention
    readyList = {all P's}     // procs not in contention
    messageCount = 0
    initializeCSreq(N, L);
    while (numCSexits < maxCSexits && Channel.notEmpty) {
        if (inCS && ProbabilityOfExiting) { CSExit(); }
        if (contendList.size < L) { CSReq(); }
        if (Channel.notEmpty) {
            Message m = Channel.remove();
            messageCount++;
            Deliver(m);
        }
    }
Implementation Notes: Tracking Locks and Fails in Maekawa • Fails are just as important as Locks • A process must track not only that it has received a FAIL, but also which processes sent the FAILs • Fail is a state variable, affecting how a process responds to INQUIRE • A FAIL gets “erased” when the sender of the FAIL later sends a GRANT. If a requestor receives FAIL from two different processes, it remains in the Fail state until it receives GRANT from both of those processes. • Thus, we need a Fail array just like the Lock array. • Fail state = any Fail[i] is set; Lock state = all Lock[i] are set
Implementation Notes: Tracking Locks and Fails in Maekawa • A requesting Maekawa process must track which quorum members have granted it a lock. • When all locks are received, it may enter the CS • Maekawa discusses the need for a lock list or array: conceptually, an associative array of size Q whose index is a quorum member’s process ID. • When would a process give up its own lock? • If it receives a preceding (earlier-timestamped) request from another process • When would it get it back? • When its own request reaches the top of its request queue
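A minimal sketch of this bookkeeping (the per-quorum-member Lock and Fail arrays are from the slides; the class and method names are illustrative assumptions):

    import java.util.*;

    class QuorumState {
        private final Map<Integer, Boolean> lock = new HashMap<>();  // quorum member id -> GRANT received?
        private final Map<Integer, Boolean> fail = new HashMap<>();  // quorum member id -> outstanding FAIL?

        QuorumState(Collection<Integer> quorum) {
            for (int member : quorum) { lock.put(member, false); fail.put(member, false); }
        }

        void onGrant(int from) { lock.put(from, true); fail.put(from, false); }  // a later GRANT erases that sender's FAIL
        void onFail(int from)  { fail.put(from, true); }

        boolean inLockState() { return !lock.containsValue(false); }  // all Lock[i] set: may enter the CS
        boolean inFailState() { return fail.containsValue(true); }    // any Fail[i] set: affects the response to INQUIRE
    }

On receiving INQUIRE, the requestor would consult inFailState() to decide whether to YIELD; once inLockState() becomes true, it may enter the CS.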