Solve the classic barbershop synchronization problem using semaphores. Review virtual/physical interfaces, atomic operations, and hardware and software synchronization. Manage customer and barber threads with maximum capacities for the room, sofa, and chairs.
CPS110: Threads review and wrap up Landon Cox February 12, 2008
Virtual/physical interfaces: applications sit above the OS, which sits above the hardware. Applications use software atomic operations provided by the OS; the OS builds them from hardware atomic operations provided by the hardware.
Classic synchronization problem: the barbershop. The customer room holds at most 9 customers; the sofa holds at most 4, with standing room for the rest.
Barbershop

Shared state:
    lock; roomCV; sofaCV; chairCV; customerCV; cutCV;
    numInRoom = 0; numOnSofa = 0; numInChair = 0;

Customer () {
    lock.acquire ();
    // Enter room
    while (numInRoom == 9)
        roomCV.wait (lock);
    numInRoom++;
    // Sit on sofa
    while (numOnSofa == 4)
        sofaCV.wait (lock);
    numOnSofa++;
    // Sit in chair
    while (numInChair == 3)
        chairCV.wait (lock);
    numInChair++;
    numOnSofa--;
    sofaCV.signal (lock);
    // Wake up barber
    customerCV.signal (lock);
    // Wait for cut to finish
    cutCV.wait (lock);
    // Leave the shop
    numInChair--;
    chairCV.signal (lock);
    numInRoom--;
    roomCV.signal (lock);
    lock.release ();
}
Barbershop (continued): Customer () as above, plus the barber:

Barber () {
    lock.acquire ();
    while (1) {
        while (numInChair == 0)
            customerCV.wait (lock);
        cutHair ();
        cutCV.signal (lock);
    }
    lock.release ();
}
Barbershop with semaphores

Semaphore room = 9, sofa = 4, chair = 3, customer = 0, cut = 0;

Customer () {
    room.down ();       // enter room
    sofa.down ();       // sit on sofa
    chair.down ();      // sit in chair
    sofa.up ();
    customer.up ();
    cut.down ();        // wait for cut to finish
    chair.up ();        // leave chair
    room.up ();         // leave room
}

Barber () {
    while (1) {
        customer.down ();
        // cut hair
        cut.up ();
    }
}

Is anything weird here?
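For concreteness, here is a compilable sketch of the semaphore version above, assuming C++20 std::counting_semaphore (acquire plays the role of down, release the role of up). The thread counts, sleep duration, and main driver are illustrative additions, not part of the lecture code.

    #include <chrono>
    #include <cstdio>
    #include <semaphore>
    #include <thread>
    #include <vector>

    std::counting_semaphore<9> room(9);      // at most 9 customers in the shop
    std::counting_semaphore<4> sofa(4);      // at most 4 customers on the sofa
    std::counting_semaphore<3> chair(3);     // at most 3 customers in barber chairs
    std::counting_semaphore<>  customer(0);  // customers waiting for the barber
    std::counting_semaphore<>  cut(0);       // finished haircuts

    void Customer(int id) {
        room.acquire();        // enter room      (room.down)
        sofa.acquire();        // sit on sofa     (sofa.down)
        chair.acquire();       // sit in chair    (chair.down)
        sofa.release();        // free a sofa spot
        customer.release();    // wake up the barber
        cut.acquire();         // wait for a cut to finish
        chair.release();       // leave chair
        room.release();        // leave room
        std::printf("customer %d leaves\n", id);
    }

    void Barber() {
        while (true) {
            customer.acquire();                                          // wait for a customer
            std::this_thread::sleep_for(std::chrono::milliseconds(10));  // cutHair()
            cut.release();                                               // haircut done
        }
    }

    int main() {
        std::thread barber(Barber);
        std::vector<std::thread> customers;
        for (int i = 0; i < 20; i++) customers.emplace_back(Customer, i);
        for (auto& c : customers) c.join();
        barber.detach();   // the barber loops forever in this sketch
    }

One candidate answer to the slide's question: nothing ties a particular cut.release() to a particular waiting customer, so the customer who wakes up is not necessarily the one whose hair was just cut.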
Course administration
• Project 1
  • Due in ~1.5 weeks
  • Should be done with disk scheduler
  • Should be knee-deep in thread library
• Extra office hours
  • Will post/announce on Thursday
• Any questions?
Course administration
• Looking ahead
  • Wrapping up synchronization today
  • Address spaces up next
• Mid-term exam
  • In two weeks (February 26)
• Rest of lecture: mini-review of threads
Program with two threads. [Figure: the address space, from 0 up to high memory, holds the program code, the common runtime library, data, and one stack per thread (x and y). The running thread has its registers (R0..Rn, PC, SP) loaded on the CPU; the other thread is "on deck" and ready to run.]
Thread context switch. [Figure: to switch out the running thread, (1) save its registers; to switch in the next thread, (2) load its registers. The address space (code, library, data, and both stacks) stays in memory untouched.]
Portrait of a thread

    Thread* t = new Thread(name);
    t->Fork(MyFunc, arg);
    currentThread->Sleep();
    currentThread->Yield();

The thread object (thread control block) holds the thread's name/status, etc., its machine state, and its stack (char stack[StackSize]). The stack top is at the high end; a "fencepost" value (0xdeadbeef) marks the low, unused region at the other end.
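As a rough sketch of what such a thread control block might look like in C++ (field names, sizes, and the register layout here are illustrative guesses, not the course library's actual definitions):

    #include <cstddef>

    const std::size_t StackSize = 16 * 1024;
    const unsigned int Fencepost = 0xdeadbeef;   // written at the low end of the stack

    struct MachineState {
        void* pc;               // saved program counter / return address
        void* sp;               // saved stack pointer
        void* calleeRegs[8];    // saved callee-saved registers
    };

    struct Thread {
        const char*  name;              // name/status, etc.
        int          status;            // running, ready, or blocked
        MachineState machineState;      // registers saved across a context switch
        char         stack[StackSize];  // private stack; the top is at the high end
    };

Writing Fencepost into the low, unused words of stack[] lets the library check later whether the thread has overflowed its stack.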
/*
 * Save context of the calling thread (old), restore registers of
 * the next thread to run (new), and return in context of new.
 */
switch/MIPS (old, new) {
    old->stackTop = SP;
    save RA in old->MachineState[PC];
    save callee registers in old->MachineState;

    restore callee registers from new->MachineState;
    RA = new->MachineState[PC];
    SP = new->stackTop;

    return (to RA);
}
Ex: SwitchContext on MIPS (the code above, annotated):
• Save the current stack pointer and the caller's return address in the old thread object.
• Caller-saved registers (if needed) are already saved on the thread's stack, and are restored automatically on return.
• Switch off of the old stack and back onto the new stack.
• Return to the procedure that called switch in the new thread.
Thread states and transitions:
• running -> ready: Thread::Yield (voluntary or involuntary)
• running -> blocked: Thread::Sleep (voluntary)
• blocked -> ready: "wakeup"
• ready -> running: Scheduler::Run
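A sketch of how a thread library might record these transitions; the enum and function names below are invented for illustration and are not the lecture's actual interfaces.

    enum class ThreadState { Running, Ready, Blocked };

    struct TCB { ThreadState state; /* stack, saved registers, ... */ };

    // Thread::Yield: running -> ready (voluntary or involuntary);
    // the scheduler will pick another ready thread to run.
    void yield(TCB* t)  { t->state = ThreadState::Ready;   /* enqueue on ready list  */ }

    // Thread::Sleep: running -> blocked (voluntary), waiting for some event.
    void sleep(TCB* t)  { t->state = ThreadState::Blocked; /* enqueue on a wait list */ }

    // "wakeup": blocked -> ready when the awaited event occurs.
    void wakeup(TCB* t) { t->state = ThreadState::Ready;   /* move to ready list     */ }

    // Scheduler::Run: ready -> running, by switching to the thread's context.
    void run(TCB* t)    { t->state = ThreadState::Running; }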
Threads vs. Processes
• A process is an abstraction
  • "Independent executing program"
  • Includes at least one "thread of control"
  • Also has a private address space (VAS)
• Requires OS kernel support
  • To be covered in upcoming lectures
Threads vs. Processes
• Threads may share an address space
  • Have "context" just like vanilla processes
  • Exist within some process VAS
  • Processes may be "multithreaded"
• Key difference
  • thread context switch vs. process context switch
  • Project 2: manage process context switches
Play analogy: what is a process? Each performance of a play is a separate process: the threads are the actors, and the address space is the stage they perform on. Another performance is another process, with its own address space and its own threads.
Locks
• Ensure mutual exclusion in critical sections.
• A lock is an object, a data item in memory.
• Threads pair calls to Acquire and Release.
  • Acquire before entering a critical section.
  • Release after leaving a critical section.
  • Between Acquire/Release, the lock is held.
  • Acquire doesn't return until the previous holder releases.
  • Waiting locks can spin (a spinlock) or block (a mutex).
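To make the spin-versus-block distinction concrete, here is a minimal spinlock sketch built on std::atomic_flag; a blocking mutex would put the waiter to sleep where this code busy-waits. This is an illustrative sketch, not the course lock implementation.

    #include <atomic>

    struct SpinLock {
        std::atomic_flag held = ATOMIC_FLAG_INIT;

        // Acquire: test-and-set until the previous holder releases.
        void acquire() {
            while (held.test_and_set(std::memory_order_acquire)) {
                // busy-wait (spin); a mutex would block the thread here instead
            }
        }

        // Release: clear the flag so a waiter's test-and-set can succeed.
        void release() {
            held.clear(std::memory_order_release);
        }
    };

    SpinLock lock;
    int sharedCounter = 0;

    void increment() {
        lock.acquire();     // Acquire before entering the critical section
        ++sharedCounter;    // critical section
        lock.release();     // Release after leaving it
    }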
Portrait of a lock in motion. Who can explain this figure? [Figure: interleaved Acquire (A) and Release (R) events on one lock over time.]
Dining philosophers. Who can explain this figure? [Figure: threads X and Y each need both locks 1 and 2, but acquire them in opposite orders (A1 then A2 vs. A2 then A1, releasing with R2/R1). If each grabs its first lock, neither can ever get the second: deadlock.]
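One standard way out of the deadlock the figure illustrates is to have both threads acquire the two locks in a single agreed order (or to use std::scoped_lock, which locks multiple mutexes without deadlocking). A minimal sketch, with names invented for illustration:

    #include <mutex>

    std::mutex fork1, fork2;   // the two contended resources (locks 1 and 2)

    void philosopherX() {
        std::scoped_lock both(fork1, fork2);   // acquires both, deadlock-free
        // eat: critical section using both resources
    }

    void philosopherY() {
        std::scoped_lock both(fork1, fork2);   // same mechanism and order as X
        // eat
    }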
Kernel-supported threads
• Most newer OS kernels have kernel-supported threads.
  • Thread model and scheduling defined by the OS
  • NT, advanced Unix, etc.
  • Linux: threads are "lightweight processes"
• The kernel scheduler (not a library) decides which thread to run next.
• New kernel system calls, e.g.: thread_fork, thread_exit, thread_block, thread_alert, etc.
• Threads can block independently in kernel system calls.
• Threads must enter the kernel to block: no blocking in user space.
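A small illustration of the last two bullets, assuming std::thread maps to a kernel-supported thread (as it does on Linux): one thread blocks inside a kernel sleep call while the other keeps running.

    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        std::thread blocker([] {
            // Blocks in the kernel; only this thread is descheduled.
            std::this_thread::sleep_for(std::chrono::seconds(1));
            std::printf("blocker woke up\n");
        });
        std::thread worker([] {
            for (int i = 0; i < 5; i++) std::printf("worker still running (%d)\n", i);
        });
        blocker.join();
        worker.join();
    }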
User-level threads
• Can also implement user-level threads in a library.
  • No special support needed from the kernel (use any Unix)
  • Thread creation and context switch are fast (no syscall)
  • Defines its own thread model and scheduling policies
• Kernel only sees a single process
• Project 1

The library's scheduler loops over the readyList:

    while (1) {
        t = get next ready thread;
        scheduler->Run(t);
    }
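A self-contained sketch of the idea, using the POSIX ucontext routines to build cooperative user-level threads with a readyList and the scheduler loop from the slide. The names (uthread_yield, makeThread, worker) and the stack size are invented for illustration; this is not the Project 1 library.

    #include <cstddef>
    #include <cstdio>
    #include <deque>
    #include <ucontext.h>

    const std::size_t kStackSize = 64 * 1024;
    std::deque<ucontext_t*> readyList;    // user threads ready to run
    ucontext_t schedCtx;                  // the scheduler loop's own context
    ucontext_t* current = nullptr;        // the user thread now running

    // Yield: save my context, put myself back on the ready list,
    // and resume the scheduler. No system call is involved.
    void uthread_yield() {
        readyList.push_back(current);
        swapcontext(current, &schedCtx);
    }

    void worker(int id) {
        for (int step = 0; step < 3; step++) {
            std::printf("thread %d, step %d\n", id, step);
            uthread_yield();
        }
    }   // on return, uc_link sends control back to the scheduler

    ucontext_t* makeThread(int id) {
        ucontext_t* t = new ucontext_t;
        getcontext(t);                                // initialize the context
        t->uc_stack.ss_sp = new char[kStackSize];     // private stack
        t->uc_stack.ss_size = kStackSize;
        t->uc_link = &schedCtx;                       // where to go when worker returns
        makecontext(t, (void (*)()) worker, 1, id);   // start in worker(id)
        return t;
    }

    int main() {
        readyList.push_back(makeThread(1));
        readyList.push_back(makeThread(2));
        while (!readyList.empty()) {                  // "t = get next ready thread"
            current = readyList.front();
            readyList.pop_front();
            swapcontext(&schedCtx, current);          // "scheduler->Run(t)"
        }
    }

The kernel never sees these switches; it schedules only the single process running main.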
Kernel threads
[Figure: several threads, each with its own PC, SP, and registers; the scheduler sits in kernel mode, below the user/kernel boundary.]
What is "kernel mode"? The mode the OS executes in, with heightened privileges. Will cover later.
User threads
[Figure: the same threads, each with its own PC, SP, and registers, but now a user-level scheduler runs in user mode inside the process; the kernel-mode scheduler below sees only a single process.]
Kernel vs. user tradeoffs
• Which do you expect to perform better?
  • User-level threads should perform better
  • Synchronization functions are just a function call
  • Attractive in low-contention scenarios
  • Don't want to trap into the kernel on every lock acquire
• What is the danger of user-level threads?
  • If a user thread blocks in the kernel, the entire process blocks
  • May be hard to take advantage of multiple CPUs
Possible solution
• Asynchronous process-kernel interactions
  • The process never blocks in the kernel
  • The library puts the thread to sleep before entering the kernel
  • When the I/O (etc.) is ready, the library handles the interrupt
  • It then restarts the thread with the correct syscall results
• Still not ideal for multi-core
  • Idea: create one kernel thread per core
  • Let the user library manage multiple kernel threads
Performance comparison
• Comparing user-level threads, kernel threads, and processes.
• A procedure call takes about 7 microseconds; a kernel trap takes about 19 microseconds.
• Maybe the kernel trap is not so significant.
What's next
• We now understand threads
  • Each has its own private stack, registers, and TCB
• Next, we'll try to understand processes
  • Address spaces differentiate processes
  • Address spaces are tied to memory, not the CPU
  • Can think of an address space as a private namespace