Scalable Reader Writer Synchronization
John M. Mellor-Crummey, Michael L. Scott
Outline
• Abstract
• Introduction
• Simple Reader-Writer Spin Locks
  • Reader Preference Lock
  • Fair Lock
• Locks with Local-Only Spinning
  • Fair Lock
  • Reader Preference Lock
  • Writer Preference Lock
• Empirical Results & Conclusions
• Summary
Abstract – readers & writers
• All processes request synchronized access to the same memory section.
• Multiple readers can access the memory section at the same time.
• Only one writer can access the memory section at a time.

[diagram: writers and readers contending for a shared memory section]
Abstract (continued)
• Mutual exclusion locks are implemented using busy waiting.
• Busy-wait locks cause memory and network contention, which degrades performance.
• The problem: the busy wait is global (every process spins on the same variable / memory location), creating a global bottleneck instead of a local one.
• This global bottleneck prevents an efficient, larger-scale (scalable) implementation of mutex synchronization.
The purpose of the paper
The paper presents readers/writers locks that exploit local-spin busy waiting, in order to reduce memory and network contention.
• Global – every process busy waits (spins) on the same memory location.
• Local – each process busy waits (spins) on a different memory location.
Definitions
• Fair lock
  • readers wait for earlier writers
  • writers wait for any earlier process (reader or writer)
  • no starvation
• Reader preference lock
  • writers wait as long as there are read requests
  • possible writer starvation
  • minimizes the delay for readers and maximizes throughput
• Writer preference lock
  • readers wait as long as there are writers waiting
  • possible reader starvation
  • prevents the system from using outdated information
The MCS lock
The MCS (Mellor-Crummey and Scott) lock is a queue-based local-spin lock.
The MCS lock – acquire lock
[diagram: the arriving process swaps its new node into the lock's tail pointer; if a predecessor exists, it links itself behind the predecessor and spins on its own node]

The MCS lock – release lock
[diagram: the releasing process hands the lock to the node queued after it, or resets the tail pointer to nil if it has no successor]
The MCS lock – release lock
The spin is local, since each process spins (busy waits) on its own node.
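The acquire/release steps above can be sketched in Python. This is a minimal illustration, not code from the paper: a small internal mutex stands in for the hardware fetch_and_store and compare_and_swap instructions the algorithm assumes, while the busy wait itself still happens on each thread's own node.

```python
import threading

class Node:
    """Queue node owned by one thread; the owner spins only on `locked`."""
    __slots__ = ("locked", "next")
    def __init__(self):
        self.locked = False
        self.next = None

class MCSLock:
    def __init__(self):
        self.tail = None                    # last node in the queue (None = free)
        self._atomic = threading.Lock()     # models fetch_and_store / CAS

    def _fetch_and_store(self, node):
        with self._atomic:
            prev, self.tail = self.tail, node
            return prev

    def _compare_and_swap(self, expected, new):
        with self._atomic:
            if self.tail is expected:
                self.tail = new
                return True
            return False

    def acquire(self, node):
        node.next = None
        pred = self._fetch_and_store(node)  # append our node to the queue
        if pred is not None:                # lock is held: wait our turn
            node.locked = True
            pred.next = node                # let the predecessor find us
            while node.locked:              # spin on our OWN node only
                pass

    def release(self, node):
        if node.next is None:
            # no visible successor: try to mark the lock free
            if self._compare_and_swap(node, None):
                return
            while node.next is None:        # a successor is mid-enqueue
                pass
        node.next.locked = False            # hand the lock over
```

Each thread passes its own Node, so all spinning happens on thread-local state; only the enqueue/dequeue steps touch the shared tail pointer.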
Simple Reader-Writer Locks
This section presents centralized (not local) algorithms for busy-wait reader-writer locks.

WRITER:
  start_write(lock)
  writing_critical_section
  end_write(lock)

READER:
  start_read(lock)
  reading_critical_section
  end_read(lock)
Reader Preference Lock
A reader preference lock is used in several cases:
• when there are many write requests, and readers must be preferred to prevent their starvation.
• when the throughput of the system is more important than how up to date the information is.
Reader Preference Lock
The lock is a single 32-bit word:
• The lowest bit indicates whether a writer is writing.
• The upper 31 bits count the interested and currently reading processes.
• When a reader arrives it increments the counter, then waits until the writer bit is cleared.
• Writers wait until the whole word is 0.

[diagram: bit 0 – writer flag; bits 1–31 – reader counter]
Reader Preference Lock
A writer can write only when no reader is interested or reading, and no writer is writing: start writing sets the writer flag when the whole word is 0; end writing clears it.
Notice that everything is done on the same 32-bit memory location.
Reader Preference Lock
Readers always get in front of the line, before any writer other than the one already writing: start reading increments the reader counter and waits for the writer flag to clear; end reading decrements the counter.
Again, notice that everything is done on the same 32-bit memory location.
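The single-word protocol can be sketched as follows. This is an illustrative Python model, not the paper's code: a mutex stands in for atomic fetch_and_add / compare_and_swap on the 32-bit word, and the constant names simply mirror the bit layout above.

```python
import threading

WAFLAG = 0x1     # bit 0: a writer is active
RC_INCR = 0x2    # adding this bumps the reader count kept in the upper bits

class ReaderPrefLock:
    def __init__(self):
        self.word = 0                      # the single lock word
        self._atomic = threading.Lock()    # models fetch_and_add / CAS

    def _fetch_and_add(self, delta):
        with self._atomic:
            self.word += delta

    def _compare_and_swap(self, expected, new):
        with self._atomic:
            if self.word == expected:
                self.word = new
                return True
            return False

    def start_read(self):
        self._fetch_and_add(RC_INCR)       # announce interest first
        while self.word & WAFLAG:          # wait only for the active writer
            pass

    def end_read(self):
        self._fetch_and_add(-RC_INCR)

    def start_write(self):
        # succeeds only when no reader is interested/reading and no writer writes
        while not self._compare_and_swap(0, WAFLAG):
            pass

    def end_write(self):
        self._fetch_and_add(-WAFLAG)
```

Because a reader bumps the count before checking the writer flag, any writer that has not yet grabbed the word must now wait for it: readers cut in line, which is exactly the preference (and the starvation risk) described above.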
Fair Lock
A fair lock is used when the system must maintain a balance between keeping the information up to date and still being reactive (responding to data requests within a reasonable amount of time).
Fair Lock
• The lock keeps two pairs of counters:
  • completed readers/writers: those who finished reading/writing
  • current readers/writers: those who finished plus outstanding requests
• Each process takes a ticket (prev) for waiting in line:
  • a writer's ticket = current readers + current writers (it waits for every earlier process)
  • a reader's ticket = current writers only (it can read together with the rest of the readers)
Fair Lock
[diagram: a sequence of frames stepping readers and writers through the counters – each arrival snapshots its ticket from the current counts, and each completion increments the completed counts until waiting tickets are matched]
Again, notice that everything is done on the same centralized memory location – 3 counters.
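The counter-and-ticket scheme can be sketched in Python. This is an illustrative model, not the paper's code: one mutex stands in for the atomic snapshot-and-increment on the shared counters. A writer's ticket snapshots both request counts; a reader's ticket snapshots only the writer count.

```python
import threading

class FairTicketRWLock:
    """Centralized fair reader-writer lock using request/completion counters."""
    def __init__(self):
        self.rr = 0    # current readers: read requests issued so far
        self.wr = 0    # current writers: write requests issued so far
        self.rc = 0    # completed readers
        self.wc = 0    # completed writers
        self._atomic = threading.Lock()   # models the atomic counter updates

    def start_read(self):
        with self._atomic:
            ticket = self.wr          # wait only for earlier writers
            self.rr += 1
        while self.wc != ticket:      # spin until they have all completed
            pass

    def end_read(self):
        with self._atomic:
            self.rc += 1

    def start_write(self):
        with self._atomic:
            r_ticket = self.rr        # wait for every earlier reader...
            w_ticket = self.wr        # ...and every earlier writer
            self.wr += 1
        while self.rc != r_ticket or self.wc != w_ticket:
            pass

    def end_write(self):
        with self._atomic:
            self.wc += 1
```

A reader that arrives after a waiting writer picks up a ticket that already includes that writer, so it cannot jump ahead of it: first-come, first-served, with concurrent reads allowed between writes.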
Spin On A Global Location
• The last 2 algorithms busy wait by spinning on the same memory location.
• When many processes spin on the same location, it creates a hot spot in the system.
• Interference from still-spinning processes increases the time needed to release the lock by those who have finished waiting.
• Interference from still-spinning processes also degrades the performance of processes accessing the same memory area (not just the same exact location).
Locks with Local-Only Spinning
• This is the main section of the paper: it contains implementations of reader/writer locks that busy wait on local locations (not all on the same location).
• Why not just use the previously mentioned MCS algorithm?
  • too much serialization for the readers – they can read at the same time
  • too long a code path for this purpose – it can be done more efficiently
Fair Lock (local spinning only)
• Writing can be done when all previous read and write requests have been satisfied.
• Reading can be done when all previous write requests have been satisfied.
• Like the MCS algorithm, a queue is used.
• A reader can begin reading when its predecessor is an active reader or when the previous writer has finished.
• A writer can write when its predecessor is done and there are no active readers.
Fair Lock (local spinning only)

node fields:
• type : reader/writer
• next : pointer
• blocked : boolean (free/blocked)
• successor_type : reader/writer

lock fields:
• tail : pointer to a node (initially nil)
• reader_count : counter (initially 0)
• next_writer : pointer to a node (initially nil)
Fair Lock (local spinning only)
[diagram frames: writers and readers arriving at and leaving the queue]
• A writer swaps its new node into lock.tail. With no predecessor and reader_count = 0 it proceeds immediately; otherwise it registers itself (as next_writer, or behind its predecessor) and spins on its own node's blocked flag.
• A reader swaps its new node into lock.tail. If its predecessor is an active reader, or there is none, it increments reader_count and proceeds; otherwise it spins on its own node until it is unblocked, and then unblocks any reader queued behind it.
• On release, a writer unblocks its successor; a reader decrements reader_count, and the last reader unblocks the waiting next_writer.
• The busy wait is always on the process's own node, so all spinning is local.
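The queue discipline above can be sketched in Python. This is a coarse illustrative model rather than the paper's algorithm verbatim: one mutex stands in for the atomic instructions that manipulate tail, reader_count and next_writer, but each thread still busy waits only on its own node's blocked flag, which is the point of the design.

```python
import threading

class Node:
    def __init__(self, kind):
        self.kind = kind            # 'R' for a reader, 'W' for a writer
        self.next = None
        self.blocked = True

class LocalSpinFairLock:
    def __init__(self):
        self.tail = None            # last node in the queue (None = empty)
        self.reader_count = 0       # readers currently reading
        self.next_writer = None     # writer waiting for the readers to drain
        self._atomic = threading.Lock()   # coarsely models the atomic ops

    def start_read(self, me):
        with self._atomic:
            me.next, me.blocked = None, True
            pred, self.tail = self.tail, me
            if pred is None or (pred.kind == 'R' and not pred.blocked):
                self.reader_count += 1     # join the active readers
                me.blocked = False
            else:
                pred.next = me             # queue behind a writer/blocked reader
        while me.blocked:                  # local spin: our own node only
            pass
        with self._atomic:                 # cascade: wake a reader behind us
            nxt = me.next
            if nxt is not None and nxt.kind == 'R' and nxt.blocked:
                self.reader_count += 1
                nxt.blocked = False

    def end_read(self, me):
        with self._atomic:
            if me.next is None and self.tail is me:
                self.tail = None
            elif me.next is not None and me.next.kind == 'W':
                self.next_writer = me.next # writer waits for the last reader
            self.reader_count -= 1
            if self.reader_count == 0 and self.next_writer is not None:
                w, self.next_writer = self.next_writer, None
                w.blocked = False

    def start_write(self, me):
        with self._atomic:
            me.next, me.blocked = None, True
            pred, self.tail = self.tail, me
            if pred is None:
                if self.reader_count == 0:
                    me.blocked = False     # lock is completely free
                else:
                    self.next_writer = me  # the last reader will wake us
            else:
                pred.next = me             # our predecessor will wake us
        while me.blocked:                  # local spin: our own node only
            pass

    def end_write(self, me):
        with self._atomic:
            if me.next is None and self.tail is me:
                self.tail = None
            elif me.next is not None:
                if me.next.kind == 'R':
                    self.reader_count += 1
                me.next.blocked = False
```

Fairness comes from the queue order: a reader arriving behind a waiting writer blocks until that writer hands over, and a writer blocks until its predecessor is done and reader_count drains to zero.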