
CPS110: Implementing locks

Learn about lock implementation in thread synchronization, utilizing atomic operations and interrupt disable-enable techniques. Understand the need for locks in concurrency control. Explore different lock implementation strategies.


Presentation Transcript


  1. CPS110: Implementing locks Landon Cox February 2, 2009

  2. Recap and looking ahead
     • Layering: Applications sit on the OS, which sits on the hardware
     • The OS exposes threads and synchronization primitives to applications
     • The hardware exposes atomic load-store, interrupt disable-enable, and atomic test&set to the OS
     • If you don’t have atomic operations, you can’t make any.

  3. Using interrupt disable-enable
     • Disable-enable on a uni-processor
     • Assume disable and enable are themselves atomic (can use atomic load/store)
     • How do threads get switched out (2 ways)?
       • Internal events (yield, I/O request)
       • External events (interrupts, e.g. timers)
     • Easy to prevent internal events
     • Use disable-enable to prevent external events

  4. Why use locks?
     • If we have disable-enable, why do we need locks?
       • A program could bracket critical sections with disable-enable
     • Might not be able to give control back to the thread library
       (e.g. a buggy or malicious program could run “disable interrupts; while (1) {}” and hang the machine)
     • Can’t have multiple locks (over-constrains concurrency)
     • Not sufficient on a multiprocessor
     • Project 1: disable interrupts only in the thread library

  5. Lock implementation #1
     • Disable interrupts, busy-waiting

     lock () {
       disable interrupts
       while (value != FREE) {
         enable interrupts
         disable interrupts
       }
       value = BUSY
       enable interrupts
     }

     unlock () {
       disable interrupts
       value = FREE
       enable interrupts
     }

     Q: Why is it ok for lock code to disable interrupts?
     A: It’s in the trusted kernel (we have to trust something).

  6. Lock implementation #1 (same code as slide 5)
     Q: Do we need to disable interrupts in unlock?
     A: Only if “value = FREE” is multiple instructions (disabling is the safer choice).

  7. Lock implementation #1 (same code as slide 5)
     Q: Why enable-disable in the lock loop body?
     A: Otherwise, no one else will run (including unlockers).

  8. Using read-modify-write instructions
     • Disabling interrupts
       • Ok for a uni-processor, breaks on a multi-processor
       • Why? Disabling interrupts on one CPU does not stop the other CPUs from touching shared memory.
     • Could use atomic load-store to build a lock
       • Inefficient, lots of busy-waiting
     • Hardware people to the rescue!

  9. Using read-modify-write instructions
     • Most modern processor architectures provide an atomic read-modify-write instruction
     • Atomically:
       • Read a value from memory into a register
       • Write a new value to that memory location
     • Implementation detail: lock the memory location at the memory controller

  10. Too much milk, Solution 2

     if (noMilk) {
       if (noNote) {
         leave note;
         buy milk;
         remove note;
       }
     }

     Q: What key part of lock code had to be atomic?
     A: The block above is not atomic. We must atomically
        • check if the lock is free
        • grab it
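Why the check and the grab must be one indivisible step can be demonstrated with a short C experiment (the helper name `race_winners` is mine, not from the slides): when many threads race to claim a flag with an atomic exchange, exactly one of them sees it as free.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_int flag;     /* 0 = free, 1 = taken */
static atomic_int winners;  /* how many threads saw the flag as free */

static void *try_claim(void *arg) {
    (void)arg;
    /* atomic_exchange returns the old value and stores 1 in one
       indivisible step: the "check" and the "grab" cannot be separated. */
    if (atomic_exchange(&flag, 1) == 0)
        atomic_fetch_add(&winners, 1);
    return NULL;
}

/* Hypothetical driver: race n threads (n <= 64), return the winner count. */
int race_winners(int n) {
    pthread_t t[64];
    atomic_store(&flag, 0);
    atomic_store(&winners, 0);
    for (int i = 0; i < n; i++)
        pthread_create(&t[i], NULL, try_claim, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(t[i], NULL);
    return atomic_load(&winners);
}
```

With Solution 2's separate check (`if (noNote)`) and grab (`leave note`), more than one thread can pass the check before any of them grabs; the atomic version makes that impossible.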

  11. Test&set on most architectures
     • Slightly different on x86 (the atomic Exchange instruction)

     test&set (X) {
       tmp = X
       X = 1
       return (tmp)
     }

     • Set: sets the location to 1
     • Test: returns the old value
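In C11, `atomic_exchange` gives exactly the test&set behavior described above when the stored value is 1; a minimal sketch (the function name `test_and_set` is mine):

```c
#include <stdatomic.h>

/* test&set as on the slide: atomically store 1 and return the old value.
   "Test" = the return value; "set" = the location is left holding 1. */
int test_and_set(atomic_int *x) {
    return atomic_exchange(x, 1);
}
```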

  12. Lock implementation #2
     • Use test&set; initially, value = 0

     lock () {
       while (test&set(value) == 1) {
       }
     }

     unlock () {
       value = 0
     }

     Q: What happens if value = 0? (test&set returns 0 and sets value to 1: the caller acquires the lock)
     Q: What happens if value = 1? (test&set returns 1 and the caller keeps spinning)
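Implementation #2 is an ordinary spinlock. A runnable sketch using C11 atomics and pthreads (helper names such as `spin_lock` and `run_spin_demo` are mine): four threads increment a shared counter under the lock, so the final count comes out exact.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_int value;   /* 0 = free, 1 = held (the slide's "value") */
static long counter;       /* shared state protected by the lock */

/* lock(): spin until test&set returns the old value 0. */
static void spin_lock(void)   { while (atomic_exchange(&value, 1) == 1) {} }
/* unlock(): store 0, releasing the lock. */
static void spin_unlock(void) { atomic_store(&value, 0); }

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++) {
        spin_lock();
        counter++;         /* not atomic by itself; the lock makes it safe */
        spin_unlock();
    }
    return NULL;
}

/* Hypothetical driver: 4 threads x 10000 increments = 40000. */
long run_spin_demo(void) {
    pthread_t t[4];
    counter = 0;
    atomic_store(&value, 0);
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return counter;
}
```

Without the lock around `counter++`, the final count would usually fall short of 40000 because increments from different threads overwrite each other.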

  13. Course administration • Project 1 • Due in a little less than two weeks • Should be finishing up 1d in the next few days • Do not put off starting 1t!! • Hints • Read through entire spec first • Use STL for data structures (will probably make life easier) • OO might be overkill • Global variables are fine • In thread library, only use swapcontext, never uc_link

  14. Locks and busy-waiting
     • All implementations so far have used busy-waiting
       • Wastes CPU cycles
     • To reduce busy-waiting, integrate the lock implementation with the thread dispatcher’s data structures

  15. Lock implementation #3
     • Interrupt disable, no busy-waiting

     lock () {
       disable interrupts
       if (value == FREE) {
         value = BUSY  // lock acquire
       } else {
         add thread to queue of threads waiting for lock
         switch to next ready thread  // don’t add to ready queue
       }
       enable interrupts
     }

     unlock () {
       disable interrupts
       value = FREE
       if anyone on queue of threads waiting for lock {
         take waiting thread off queue, put on ready queue
         value = BUSY
       }
       enable interrupts
     }

  16. Lock implementation #3 (same code as slide 15)
     • This is called a “hand-off” lock.
     Q: Who gets the lock after someone calls unlock?

  17. Lock implementation #3 (same code as slide 15)
     Q: Who might get the lock if it weren’t handed off directly
        (e.g. if value weren’t set BUSY in unlock)?

  18. Lock implementation #3 (same code as slide 15)
     Q: What kind of lock-acquisition ordering guarantees does the hand-off lock provide?
        What about a “fumble” lock (one that simply sets value = FREE and lets ready threads race)?

  19. Lock implementation #3 (same code as slide 15)
     Q: What does “add thread to queue of threads waiting for lock” mean? Are we saving the PC?

  20. Lock implementation #3
     • In the thread library, the else branch of lock() looks like:

     lockqueue.push(&current_thread->ucontext);
     swapcontext(&current_thread->ucontext, &new_thread->ucontext);

     A: No, we’re not saving the PC by hand; we just add a pointer to the TCB/context,
        and swapcontext saves the registers.

  21. Lock implementation #3 (same code as slide 15)
     Q: Why a separate queue for the lock?
        What happens to acquisition ordering with one queue?

  22. Lock implementation #3 (same code as slide 15)
     • Project 1 note: you must guarantee FIFO ordering of lock acquisition.

  23. Lock implementation #3 (a broken variant)

     lock () {
       disable interrupts
       if (value == FREE) {
         value = BUSY  // lock acquire
       } else {
         enable interrupts            // <-- moved before the queue operation
         add thread to queue of threads waiting for lock
         switch to next ready thread  // don’t add to ready queue
       }
       enable interrupts
     }

     (unlock unchanged from slide 15)

     Q: When and how could this fail?

  24. Lock implementation #3 (another broken variant)

     lock () {
       disable interrupts
       if (value == FREE) {
         value = BUSY  // lock acquire
       } else {
         add thread to queue of threads waiting for lock
         enable interrupts              // (1)
         switch to next ready thread    // (3) don’t add to ready queue
       }
       enable interrupts
     }

     unlock () {                        // (2) can run between (1) and (3)
       disable interrupts
       value = FREE
       if anyone on queue of threads waiting for lock {
         take waiting thread off queue, put on ready queue
         value = BUSY
       }
       enable interrupts
     }

     Q: When and how could this fail?

  25. Lock implementation #3 (same code as slide 15)
     A: Putting the locking thread on the lock wait queue and switching must be atomic.
        Must call switch with interrupts off.

  26. How is switch returned to?
     • Review from last time: think of switch as three phases
       • Save the current location (SP, PC)
       • Point the processor at another stack (SP’)
       • Jump to another instruction (PC’)
     • Only way to get back to a switch: have another thread call switch

  27. Lock implementation #3 (same code as slide 15)
     Q: What is lock() assuming about the state of interrupts after switch returns?

  28. Interrupts and returning to switch
     • lock() can assume that switch is always called with interrupts disabled
     • On return from switch, the previous thread must have disabled interrupts
     • The next thread to run becomes responsible for re-enabling interrupts
     • Invariants: threads promise to
       • Disable interrupts before switch is called
       • Re-enable interrupts after returning from switch

  29. Thread A / Thread B interleaving

     Thread B: yield () {
                 disable interrupts
                 …
                 switch              // switches to A
     Thread A:   back from switch
                 …
                 enable interrupts
               } // exit thread library function
               <user code>
               lock () {
                 disable interrupts
                 …
                 switch              // lock is busy, switches to B
     Thread B:   back from switch
                 …
                 enable interrupts
               } // exit yield
               <user code>
               unlock ()             // moves A to ready queue
               yield () {
                 disable interrupts
                 …
                 switch              // switches to A
     Thread A:   back from switch
                 …
                 enable interrupts
               } // exit lock
               <user code>

  30. Lock implementation #4
     • Test&set, minimal busy-waiting

     lock () {
       while (test&set(guard)) {}  // like interrupt disable
       if (value == FREE) {
         value = BUSY
       } else {
         put thread on queue of threads waiting for lock
         switch to another thread  // don’t add to ready queue
       }
       guard = 0  // like interrupt enable
     }

     unlock () {
       while (test&set(guard)) {}  // like interrupt disable
       value = FREE
       if anyone on queue of threads waiting for lock {
         take waiting thread off queue, put on ready queue
         value = BUSY
       }
       guard = 0  // like interrupt enable
     }

  31. Lock implementation #4 (same code as slide 30)
     Q: Why is this better than implementation #2?
     A: Before, we busy-waited the whole time the lock was held.
        Now we only busy-wait while another thread is inside lock or unlock.

  32. Lock implementation #4 (same code as slide 30)
     Q: What is the switch invariant?
     A: Threads promise to call switch with guard set to 1.
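A user-space sketch of implementation #4 (all names here are mine). Since user code cannot switch threads directly, "switch to another thread" is simulated by spinning on a per-thread flag that unlock sets when it hands the lock over; in a real thread library the blocked thread would be switched out instead. Note that the guard is held only for the few instructions that touch value and the wait queue.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define FREE 0
#define BUSY 1
#define MAXWAIT 64

typedef struct {
    atomic_int guard;             /* protects value, waiters, head, tail */
    int value;                    /* FREE or BUSY */
    atomic_int *waiters[MAXWAIT]; /* FIFO ring of per-thread wakeup flags */
    int head, tail;
} handoff_lock;

static handoff_lock hl;           /* zero-initialized: guard 0, value FREE */
static long ho_counter;

static void ho_lock(atomic_int *me) {
    while (atomic_exchange(&hl.guard, 1)) {}  /* like interrupt disable */
    if (hl.value == FREE) {
        hl.value = BUSY;                      /* lock acquired */
        atomic_store(&hl.guard, 0);
    } else {
        atomic_store(me, 0);
        hl.waiters[hl.tail++ % MAXWAIT] = me; /* join the wait queue */
        atomic_store(&hl.guard, 0);           /* like interrupt enable */
        while (!atomic_load(me)) {}           /* stand-in for "switch": wait for hand-off */
    }
}

static void ho_unlock(void) {
    while (atomic_exchange(&hl.guard, 1)) {}  /* like interrupt disable */
    if (hl.head != hl.tail) {
        atomic_int *next = hl.waiters[hl.head++ % MAXWAIT];
        atomic_store(next, 1);                /* hand off: value stays BUSY */
    } else {
        hl.value = FREE;
    }
    atomic_store(&hl.guard, 0);               /* like interrupt enable */
}

static void *ho_worker(void *arg) {
    (void)arg;
    atomic_int me;                            /* this thread's wakeup flag */
    for (int i = 0; i < 10000; i++) {
        ho_lock(&me);
        ho_counter++;                         /* protected by the hand-off lock */
        ho_unlock();
    }
    return NULL;
}

/* Hypothetical driver: 4 threads x 10000 increments = 40000. */
long run_handoff_demo(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, ho_worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return ho_counter;
}
```

Because unlock leaves value BUSY when it wakes a waiter, the lock is handed directly to the head of the FIFO queue, matching the slide's hand-off behavior.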

  33. Summary of implementing locks
     • Synchronization code needs atomicity
     • Three options:
       • Atomic load-store: lots of busy-waiting
       • Interrupt disable-enable: no busy-waiting, but breaks on a multi-processor machine
       • Atomic test&set: minimal busy-waiting, works on multi-processor machines
