Learn about lock implementation in thread synchronization, utilizing atomic operations and interrupt disable-enable techniques. Understand the need for locks in concurrency control. Explore different lock implementation strategies.
CPS110: Implementing locks Landon Cox February 2, 2009
Recap and looking ahead
• The layers: Applications / Threads, synchronization primitives / OS / Atomic load-store, interrupt disable-enable, atomic test&set / Hardware
• If you don't have atomic operations, you can't make any.
Using interrupt disable-enable • Disable-enable on a uni-processor • Assume atomic (can use atomic load/store) • How do threads get switched out (2 ways)? • Internal events (yield, I/O request) • External events (interrupts, e.g. timers) • Easy to prevent internal events • Use disable-enable to prevent external events
Why use locks?
• If we have disable-enable, why do we need locks?
• A program could bracket critical sections with disable-enable, but:
• It might never give control back to the thread library (e.g. a buggy or malicious program could run "disable interrupts; while (1) {}" and hang the machine)
• You can't have multiple locks (over-constrains concurrency)
• It's not sufficient on a multiprocessor
• Project 1: disable interrupts only in the thread library
Lock implementation #1
• Disable interrupts, busy-waiting

  lock () {
    disable interrupts
    while (value != FREE) {
      enable interrupts
      disable interrupts
    }
    value = BUSY
    enable interrupts
  }

  unlock () {
    disable interrupts
    value = FREE
    enable interrupts
  }

Why is it ok for lock code to disable interrupts? It's in the trusted kernel (we have to trust something).
Lock implementation #1 (code as above)
Do we need to disable interrupts in unlock? Only if "value = FREE" takes multiple instructions (disabling is the safer default).
Lock implementation #1 (code as above)
Why enable-then-disable in the lock loop body? Otherwise no other thread can run, including any thread that would call unlock.
Using read-modify-write instructions • Disabling interrupts • Ok for uni-processor, breaks on multi-processor • Why? • Could use atomic load-store to make a lock • Inefficient, lots of busy-waiting • Hardware people to the rescue!
Using read-modify-write instructions • Most modern processor architectures • Provide an atomic read-modify-write instruction • Atomically • Read value from memory into register • Write new value to memory • Implementation details • Lock memory location at the memory controller
Too much milk, Solution 2
• What key part of the lock code had to be atomic?
• Must atomically: check that the lock is free, and grab it
• This block is not atomic:

  if (noMilk) {
    if (noNote) {
      leave note;
      buy milk;
      remove note;
    }
  }
Test&set on most architectures
• Slightly different on x86 (Exchange: atomically swaps a value between a register and memory)
• Test: returns the old value
• Set: sets the location to 1

  test&set (X) {
    tmp = X
    X = 1
    return (tmp)
  }
Lock implementation #2
• Use test&set
• Initially, value = 0

  lock () {
    while (test&set(value) == 1) {
    }
  }

  unlock () {
    value = 0
  }

What happens if value = 1? What happens if value = 0?
Course administration • Project 1 • Due in a little less than two weeks • Should be finishing up 1d in the next few days • Do not put off starting 1t!! • Hints • Read through entire spec first • Use STL for data structures (will probably make life easier) • OO might be overkill • Global variables are fine • In thread library, only use swapcontext, never uc_link
Locks and busy-waiting • All implementations have used busy-waiting • Wastes CPU cycles • To reduce busy-waiting, integrate • Lock implementation • Thread dispatcher data structures
Lock implementation #3
• Interrupt disable, no busy-waiting

  lock () {
    disable interrupts
    if (value == FREE) {
      value = BUSY                  // lock acquire
    } else {
      add thread to queue of threads waiting for lock
      switch to next ready thread   // don't add current thread to ready queue
    }
    enable interrupts
  }

  unlock () {
    disable interrupts
    value = FREE
    if anyone on queue of threads waiting for lock {
      take waiting thread off queue, put on ready queue
      value = BUSY
    }
    enable interrupts
  }
Lock implementation #3 (code as above)
This is called a "hand-off" lock.
Who gets the lock after someone calls unlock?
Lock implementation #3 (code as above)
Who might get the lock if it weren't handed off directly (e.g. if value weren't set BUSY in unlock)?
Lock implementation #3 (code as above)
What ordering of lock acquisition does the hand-off lock guarantee? What about a "fumble" lock, one that just frees the lock and lets whoever runs next grab it?
Lock implementation #3 (code as above)
What does "switch to next ready thread" mean? Are we saving the PC?
Lock implementation #3 (else branch written out)

  lock () {
    disable interrupts
    if (value == FREE) {
      value = BUSY                  // lock acquire
    } else {
      lockqueue.push(&current_thread->ucontext);
      swapcontext(&current_thread->ucontext, &new_thread->ucontext);
    }
    enable interrupts
  }

No, we're not saving the PC by hand; we just push a pointer to the TCB/context, and swapcontext saves the registers.
Lock implementation #3 (code as above)
Why a separate wait queue per lock? What happens to acquisition ordering if waiters go on the single ready queue?
Lock implementation #3 (code as above)
Project 1 note: you must guarantee FIFO ordering of lock acquisition.
Lock implementation #3 (broken variant: enable interrupts moved before the queue add; unlock unchanged)

  lock () {
    disable interrupts
    if (value == FREE) {
      value = BUSY                  // lock acquire
    } else {
      enable interrupts
      add thread to queue of threads waiting for lock
      switch to next ready thread   // don't add to ready queue
    }
    enable interrupts
  }

When and how could this fail?
Lock implementation #3 (broken variant: enable interrupts moved between the queue add and the switch)

  lock () {
    disable interrupts
    if (value == FREE) {
      value = BUSY                  // lock acquire
    } else {
      add thread to queue of threads waiting for lock
      enable interrupts             // (1)
      switch to next ready thread   // (3)
    }
    enable interrupts
  }

  unlock () {                       // (2) another thread runs unlock between (1) and (3)
    disable interrupts
    value = FREE
    if anyone on queue of threads waiting for lock {
      take waiting thread off queue, put on ready queue
      value = BUSY
    }
    enable interrupts
  }

When and how could this fail?
Lock implementation #3 (code as above)
Putting the locking thread on the lock wait queue and switching must be atomic: switch must be called with interrupts off.
How is switch returned to? • Review from last time • Think of switch as three phases • Save current location (SP, PC) • Point processor to another stack (SP’) • Jump to another instruction (PC’) • Only way to get back to a switch • Have another thread call switch
Lock implementation #3 (code as above)
What is lock() assuming about the state of interrupts after switch returns?
Interrupts and returning to switch • Lock() can assume that switch • Is always called with interrupts disabled • On return from switch • Previous thread must have disabled interrupts • Next thread to run • Becomes responsible for re-enabling interrupts • Invariants: threads promise to • Disable interrupts before switch is called • Re-enable interrupts after returning from switch
Thread A / Thread B (each thread disables interrupts before switch; the next thread to run re-enables them)

  A: yield () {
  A:   disable interrupts
  A:   …
  A:   switch
  B: back from switch
  B: …
  B: enable interrupts
  B: } // exit thread library function
  B: <user code>
  B: lock () {            // lock is held, so B will block
  B:   disable interrupts
  B:   …
  B:   switch
  A: back from switch
  A: …
  A: enable interrupts
  A: } // exit yield
  A: <user code>
  A: unlock ()            // moves B to ready queue
  A: yield () {
  A:   disable interrupts
  A:   …
  A:   switch
  B: back from switch
  B: …
  B: enable interrupts
  B: } // exit lock
  B: <user code>
Lock implementation #4
• Test&set, minimal busy-waiting

  lock () {
    while (test&set (guard)) {}     // like interrupt disable
    if (value == FREE) {
      value = BUSY
    } else {
      put thread on queue of threads waiting for lock
      switch to another thread      // don't add to ready queue
    }
    guard = 0                       // like interrupt enable
  }

  unlock () {
    while (test&set (guard)) {}     // like interrupt disable
    value = FREE
    if anyone on queue of threads waiting for lock {
      take waiting thread off queue, put on ready queue
      value = BUSY
    }
    guard = 0                       // like interrupt enable
  }
Lock implementation #4 (code as above)
Why is this better than implementation #2? Before, we busy-waited the entire time the lock was held; now we only busy-wait while another thread is inside lock() or unlock(), and those critical sections are short.
Lock implementation #4 (code as above)
What is the switch invariant here? Threads promise to call switch with guard set to 1, and the next thread to run sets guard back to 0.
Summary of implementing locks • Synchronization code needs atomicity • Three options • Atomic load-store • Lots of busy-waiting • Interrupt disable-enable • No busy-waiting • Breaks on a multi-processor machine • Atomic test-set • Minimal busy-waiting • Works on multi-processor machines