Performance of Fair Distributed Mutual Exclusion Algorithms Kandarp Jani Ajay D. Kshemkalyani University of Illinois at Chicago
Presentation Plan • Introduction to fair distributed mutual exclusion • Previous fair algorithms • Lamport ‘78: 3(n-1) msgs/CS • Ricart-Agrawala (RA) ‘81: 2(n-1) msgs/CS • Lodha-Kshemkalyani (LK) TPDS ‘00: [n, 2n-1] msgs/CS • Simulation experiments studying the improvement of LK over RA • Conclusion: LK has fewer messages and lower waiting time, without compromising message size or other metrics
Model and Metrics • Asynchronous distributed message-passing system • d: time for a message hop • Metrics for mutual exclusion algorithms • number of messages/CS • response time: Ω(2d + css) • synchronization delay: Ω(d) • waiting time: Ω(2d) • throughput: 1 / response time • fairness • message size: O(1) • Single request outstanding at a time
Fair Mutual Exclusion • Popular definition of fairness: requests must be answered in the order of their causality-based scalar clock values • If clk(Req1) < clk(Req2): • Req1 has higher priority • If clk(Req1) = clk(Req2): • Use requestors’ PIDs as tie-breaker (i.e., define a lexicographic order) • Only Lamport, RA, and LK algorithms are fair
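The lexicographic priority order above can be sketched in a few lines. This is an illustrative example, not code from the paper: a request is modeled as a (scalar clock, PID) pair, and Python's built-in tuple comparison gives exactly the lexicographic rule on the slide.

```python
def higher_priority(req1, req2):
    """Return True if req1 has higher priority than req2.

    A request is a (scalar_clock, pid) tuple. Tuple comparison in
    Python is lexicographic: lower clock wins, and equal clocks are
    broken by the lower PID, matching the fairness rule above.
    """
    return req1 < req2

# Lower clock wins outright.
assert higher_priority((5, 1), (7, 2))
# Equal clocks: the requester with the smaller PID wins.
assert higher_priority((5, 1), (5, 2))
```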
LK Algorithm (messages) • REQUEST, REPLY, FLUSH messages • REQUEST: contains timestamp of request • REPLY, FLUSH: contain timestamp of last completed CS access by sender of message • Local Request Queue (LRQ): at each process, LRQ tracks concurrent requests
LK Algorithm (concurrency set) • Message overhead: 2n - |Cset| msgs/CS • (n-1) REQUEST messages • (n - |Cset|) REPLY messages • 1 FLUSH message
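The message count on this slide can be verified with a small arithmetic sketch (an illustration of the formula, not the authors' code): (n-1) REQUESTs plus (n - |Cset|) REPLYs plus 1 FLUSH sums to 2n - |Cset|, so the overhead ranges from n (all requests concurrent) down to 2n - 1 (no concurrency).

```python
def lk_messages(n, cset_size):
    """Messages per CS entry in LK, per the slide's breakdown."""
    requests = n - 1           # one REQUEST to every other process
    replies = n - cset_size    # REPLY only from non-concurrent processes
    flush = 1                  # one FLUSH sent after exiting the CS
    return requests + replies + flush  # = 2n - |Cset|

# n = 10, no concurrency (|Cset| = 1): 2n - 1 = 19 messages.
assert lk_messages(10, 1) == 19
# n = 10, all requests concurrent (|Cset| = n): n = 10 messages.
assert lk_messages(10, 10) == 10
```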
LK algorithm (REQUEST) • Multiple uses of REQUEST • to seek permission to enter CS • if requesting concurrently, the REQUEST acts as a REPLY from the lower-priority requester (i) to the higher-priority requester (j) • j remembers i’s request in its LRQ • i remembers j’s request in its LRQ • After j finishes CS, i will eventually get logical permission from j via a chain of REPLY and FLUSH messages
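The dual role of REQUEST can be sketched as below. This is an assumed illustration of the rule on this slide (the function name and LRQ representation are hypothetical, not from the paper): on receiving a concurrent REQUEST, a process records it in its LRQ, and if the incoming request has lower priority than its own pending request, the REQUEST doubles as an implicit REPLY.

```python
def on_request(my_req, incoming_req, lrq):
    """Handle an incoming concurrent REQUEST.

    my_req, incoming_req: (clock, pid) tuples; my_req is None if we
    have no pending request. lrq is the local request queue (a list).
    Returns True if the incoming REQUEST also serves as an implicit
    REPLY to our own higher-priority pending request.
    """
    lrq.append(incoming_req)  # both sides remember concurrent requests
    # Lexicographic (clock, pid) comparison: smaller = higher priority.
    return my_req is not None and my_req < incoming_req

# Our request (3, 1) outranks the incoming (5, 2): implicit REPLY.
lrq = []
assert on_request((3, 1), (5, 2), lrq)
assert lrq == [(5, 2)]
```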
LK algorithm (REPLY) • REPLY message has timestamp of last completed CS request of sender of REPLY • Multiple uses of REPLY • Sender gives individual permission • Sender gives collective permission on behalf of all processes with higher priority requests • It acts as multiple logical reply messages
LK algorithm (FLUSH) • FLUSH sent after exiting CS, to the concurrently requesting process with the next highest priority • FLUSH timestamped with the timestamp of the just-completed CS request of the sender of FLUSH • Multiple uses of FLUSH • Sender gives individual permission • Sender gives collective permission on behalf of all processes with higher priority requests • It acts as multiple logical reply messages
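The CS-exit step described above can be sketched as follows. This is a hedged illustration, not the paper's pseudocode: entries in the LRQ are assumed to be (clock, pid) tuples, and `send_flush` is a hypothetical callback. On exit, the process removes its own request and sends one FLUSH, carrying the just-completed request's timestamp, to the highest-priority remaining concurrent request.

```python
def exit_cs(my_request, lrq, send_flush):
    """Exit the CS: dequeue our request and FLUSH the next requester.

    my_request: our just-completed (clock, pid) request.
    lrq: local request queue of concurrent (clock, pid) requests.
    send_flush(dest, timestamp): hypothetical send primitive.
    """
    lrq.remove(my_request)
    if lrq:                  # any concurrent requests still pending?
        nxt = min(lrq)       # next highest priority = smallest (clock, pid)
        # FLUSH carries the timestamp of the just-completed CS request.
        send_flush(nxt, timestamp=my_request)

# Usage: process with request (2, 1) exits; (4, 2) gets the FLUSH.
sent = []
lrq = [(4, 2), (6, 3), (2, 1)]
exit_cs((2, 1), lrq, lambda dest, timestamp: sent.append((dest, timestamp)))
assert sent == [((4, 2), (2, 1))]
```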
Simulation Parameters (on OPNET) • Input parameters: • Number of processes: n (10-40) • Inter-request time: exponentially distributed, mean λ (0.1 ms to 10 s) • Critical section sitting time: exponentially distributed, mean CSS (0.1 μs to 10 ms) • Propagation delay: D, implicitly modeled in CSS • Output parameters: • Normalized message complexity: M • Waiting time: T
Experiments • Experiment 1: • M = f(λ), for multiple settings of (n, CSS) • Experiment 2: • M = f(n), for multiple settings of (CSS, λ) • Experiment 3: • T = f(n), for multiple settings of (CSS, λ) • compared LK and RA algorithms
Conclusions • LK is the best known fair mutex algorithm • LK outperforms Ricart-Agrawala in terms of • Number of messages/CS • Waiting time/CS without compromising message size or any other metric • Studied behaviour of LK using simulations under a wide range of conditions