Presented by Evan Yang Munin
Overview of Munin
• Distributed shared memory (DSM) system
• Unique features
  • Multiple consistency protocols
  • Release consistency
• Annotate data items according to how they are shared
• Implemented on the V kernel
Review
• Distributed shared memory (DSM) systems provide an abstraction for sharing data between processes that do not share physical memory
• Spares the programmer the concerns of message passing
• The central problem is scalability
Consistency: other models
• Sequential
• Causal
• Processor
• Pipelined RAM
• Entry (H)
• Scope (H)
• Weak (H)
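As a concrete illustration of the strongest model in this list, the following sketch (the names `T1`, `T2`, and `interleavings` are mine, not from the slides) enumerates every program-order-preserving interleaving of two tiny threads. Sequential consistency requires each execution to look like one such interleaving, which is why the outcome (r1, r2) = (0, 0) is impossible:

```python
# Illustrative sketch: sequential consistency requires every execution to
# look like a single interleaving of the threads' operations that
# preserves each thread's program order.
# Thread 1: x = 1; r1 = y      Thread 2: y = 1; r2 = x

T1 = [("w", "x", None), ("r", "y", "r1")]
T2 = [("w", "y", None), ("r", "x", "r2")]

def interleavings(a, b):
    """Yield all merges of a and b that keep each list's internal order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

outcomes = set()
for schedule in interleavings(T1, T2):
    mem = {"x": 0, "y": 0}   # both variables start at 0
    regs = {}
    for kind, var, reg in schedule:
        if kind == "w":
            mem[var] = 1
        else:
            regs[reg] = mem[var]
    outcomes.add((regs["r1"], regs["r2"]))

print(sorted(outcomes))  # (0, 0) never appears under sequential consistency
```

Weaker models such as release consistency do permit (0, 0) between synchronization points, which is what makes them cheaper to implement.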
Consistency: Munin
• Release consistency
  • Weaker than sequential consistency
  • Cheaper to implement
• Each shared-memory access is either a synchronization access or an ordinary access
• Synchronization accesses are either acquires or releases
• Sequential vs. release consistency
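The acquire/release split above can be sketched as follows. This is a minimal simulation of my own, not Munin's actual implementation: ordinary writes touch only the writer's local copy, a release pushes buffered writes to shared memory, and an acquire pulls the latest shared values:

```python
# Minimal sketch (hypothetical, not Munin's code) of release consistency:
# ordinary accesses hit a processor-local copy; writes become visible to
# other processors only after a release/acquire pair.

class RCProcessor:
    def __init__(self, shared):
        self.shared = shared        # the "home" copy, common to all processors
        self.local = dict(shared)   # this processor's cached copy
        self.dirty = {}             # writes buffered until the next release

    def write(self, var, value):    # ordinary access: local copy only
        self.local[var] = value
        self.dirty[var] = value

    def read(self, var):            # ordinary access: local copy only
        return self.local[var]

    def release(self):              # push buffered writes to shared memory
        self.shared.update(self.dirty)
        self.dirty.clear()

    def acquire(self):              # pull the latest shared values
        self.local.update(self.shared)

shared = {"x": 0}
p1, p2 = RCProcessor(shared), RCProcessor(shared)

p1.write("x", 42)
print(p2.read("x"))   # still 0: the write has not been released yet

p1.release()
p2.acquire()
print(p2.read("x"))   # 42: visible after the release/acquire pair
```

Between synchronization points the two copies may legally diverge; that slack is exactly what makes the model cheaper than sequential consistency.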
Multiple consistency protocols
• Annotate shared data by its expected access pattern
• Choose the consistency protocol suited to that pattern
• Why? No single consistency protocol is best suited for all parallel programs
Basics of Munin Programming
• CreateThread(), DestroyThread()
• user_init() – specifies the number of threads and processors
• Each shared object corresponds to a single shared variable
• CreateLock(), AcquireLock(), ReleaseLock(), CreateBarrier(), WaitAtBarrier()
• Delayed Update Queue (DUQ)
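The Delayed Update Queue can be illustrated with a small sketch of my own (the class and names are hypothetical, not Munin's API): rather than sending a message per write, modified items are queued and flushed in a batch at the next synchronization point, and repeated writes to the same item collapse into one pending update:

```python
# Illustrative sketch (hypothetical) of a Delayed Update Queue: writes are
# buffered and propagated in a batch at the next synchronization point,
# with repeated writes to the same item collapsed into one update.

class DelayedUpdateQueue:
    def __init__(self):
        self.pending = {}           # item -> latest value
        self.messages_sent = 0

    def record_write(self, item, value):
        self.pending[item] = value  # overwrites: earlier value is never sent

    def flush(self, remote_memory):  # called at a release or barrier
        for item, value in self.pending.items():
            remote_memory[item] = value
            self.messages_sent += 1
        self.pending.clear()

remote = {}
duq = DelayedUpdateQueue()
for i in range(100):                # 100 writes to the same item
    duq.record_write("grid[0]", i)
duq.record_write("grid[1]", 7)
duq.flush(remote)

print(remote)                # {'grid[0]': 99, 'grid[1]': 7}
print(duq.messages_sent)     # 2 messages instead of 101
```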
Munin Protocol Parameters
• I – invalidate or update?
• R – replicas allowed?
• D – delayed operations allowed?
• FO – fixed owner?
• M – multiple writers allowed?
• S – stable sharing pattern?
• Fl – flush changes to owner?
• W – writable?
Annotations in Munin
• Read-only
• Migratory
• Write-shared
• Producer-consumer
• Reduction
• Result
• Conventional
• ChangeAnnotation()
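Each annotation selects a setting of the protocol parameters. The sketch below shows one way such a mapping could be represented; the parameter values for the three annotations shown are illustrative guesses on my part, not the exact table from the Munin paper:

```python
# Sketch of mapping annotations to protocol-parameter settings
# (I, R, D, FO, M, S, Fl, W).  The concrete values below are illustrative,
# not Munin's published table.
from dataclasses import dataclass

@dataclass(frozen=True)
class Protocol:
    invalidate: bool        # I  - invalidate (True) or update (False)?
    replicas: bool          # R  - replicas allowed?
    delayed: bool           # D  - delayed operations allowed?
    fixed_owner: bool       # FO - fixed owner?
    multiple_writers: bool  # M  - multiple writers allowed?
    stable: bool            # S  - stable sharing pattern?
    flush_to_owner: bool    # Fl - flush changes to owner?
    writable: bool          # W  - writable?

PROTOCOLS = {
    # guessed settings for illustration only
    "read-only":    Protocol(False, True,  False, False, False, True, False, False),
    "migratory":    Protocol(True,  False, False, False, False, True, False, True),
    "write-shared": Protocol(False, True,  True,  False, True,  True, False, True),
}

def check_write(annotation):
    """Reject writes to objects whose protocol marks them non-writable."""
    if not PROTOCOLS[annotation].writable:
        raise PermissionError(f"{annotation} objects are read-only")

check_write("migratory")                # allowed: migratory data is writable
print(PROTOCOLS["read-only"].writable)  # False
```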
Implementation and Performance of Munin
• Munin vs. message passing
• Two programs: Matrix Multiply and Successive Over-Relaxation (SOR)
• The message-passing versions were hand-coded
• Same hardware, identical computations
• Assess the overhead of each approach
Matrix Multiply
• Multiply two 400x400 matrices
• Munin performs within 10% of the message-passing version
• By reducing the number of access misses, Munin comes within 2% of the message-passing version
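A row-partitioned parallel matrix multiply, sketched below in plain Python (this is my own illustration, not the study's code), shows why this benchmark shares so little: each worker writes a disjoint block of rows of C = A x B and only reads A and B:

```python
# Sketch (not the study's code) of a row-partitioned matrix multiply:
# each worker computes a disjoint block of rows of C = A x B, so B is
# shared read-only and no two workers write the same row of C.

def matmul_rows(A, B, row_lo, row_hi):
    """Compute rows [row_lo, row_hi) of the product A x B."""
    n = len(B[0])
    k = len(B)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(n)]
            for i in range(row_lo, row_hi)]

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
B = [[1, 0], [0, 1]]      # identity, so the product equals A

workers = 2
rows = len(A)
C = []
for w in range(workers):  # in Munin these would be concurrent threads
    lo = w * rows // workers
    hi = (w + 1) * rows // workers
    C.extend(matmul_rows(A, B, lo, hi))

print(C)   # [[1, 2], [3, 4], [5, 6], [7, 8]]
```

With so few true data conflicts, most remaining cost is access misses, which is why reducing them closes the gap with message passing.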
Successive Over-Relaxation
• Used to model natural phenomena (e.g., determining the temperature gradient over a square area)
• Divide the area into sections and compute iteratively
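The iteration itself can be sketched as follows (my own minimal version, not the study's code, assuming a Laplace-style temperature problem): each interior grid point is repeatedly nudged toward the average of its four neighbours, with an over-relaxation factor omega to speed convergence:

```python
# Sketch (not the study's code) of Successive Over-Relaxation on a grid:
# each interior point moves toward the average of its four neighbours,
# over-relaxed by omega (omega = 1 reduces to plain Gauss-Seidel).

def sor_step(grid, omega=1.5):
    n, m = len(grid), len(grid[0])
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            avg = (grid[i-1][j] + grid[i+1][j] +
                   grid[i][j-1] + grid[i][j+1]) / 4.0
            grid[i][j] += omega * (avg - grid[i][j])

# 5x5 grid: hot (1.0) top edge, cold (0.0) on the other boundaries
grid = [[1.0] * 5] + [[0.0] * 5 for _ in range(4)]
for _ in range(200):
    sor_step(grid)

print(round(grid[2][2], 3))   # the interior settles between the boundary values
```

Dividing the grid into sections, as the slide describes, gives each processor a band of rows; only the band edges are shared between neighbours, a classic producer-consumer pattern.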
Summary
• Munin is approximately as efficient as message passing
• What little is lost in efficiency is gained in decreased program complexity
Critique of Munin Study
• Could have compared and contrasted with other consistency models
• Munin was only compared against hand-coded message passing
• The study did not test how Munin scales
• The researchers did say they plan another study in which Munin is implemented on a high-speed network of supercomputer workstations