CS252: Systems Programming
Ninghui Li
Based on Slides by Prof. Gustavo Rodriguez-Rivera
Topic 12: Condition Variable, Read/Write Lock, and Deadlock
Pseudo-Code Implementing Semaphore Using Mutex Lock

sem_post(sem_t *sem) {
  lock(sem->mutex);
  sem->count++;
  if (sem->count <= 0) {
    wake up a thread;
  }
  unlock(sem->mutex);
}

sem_wait(sem_t *sem) {
  lock(sem->mutex);
  sem->count--;
  if (sem->count < 0) {
    unlock(sem->mutex);
    wait();
  } else {
    unlock(sem->mutex);
  }
}

Assume that wait() causes a thread to be blocked. What could go wrong? How to fix it?
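To see what could go wrong, here is one problematic interleaving (a sketch; it assumes wait() and the wakeup are independent of the mutex, as in the pseudo-code above):

  Thread A: sem_wait            Thread B: sem_post
  ------------------            ------------------
  lock(sem->mutex);
  sem->count--;                 // count == -1
  unlock(sem->mutex);
                                lock(sem->mutex);
                                sem->count++;      // count == 0
                                wake up a thread;  // nobody is waiting yet
                                unlock(sem->mutex);
  wait();                       // blocks forever: the wakeup was lost

The fix, discussed next, is to make releasing the mutex and starting to wait a single atomic step.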
Condition Variable
• What we need is the ability to wait on a condition while simultaneously giving up the mutex lock.
• Condition Variable (CV):
  • A thread can wait on a CV; it will be blocked until another thread calls signal on the CV.
  • A condition variable is always used in conjunction with a mutex lock.
  • The thread calling wait should hold the lock; the wait call releases the lock as the thread goes to sleep.
Using Condition Variable
• Declaration:
  • #include <pthread.h>
  • pthread_cond_t cv;
• Initialization:
  • pthread_cond_init(&cv, pthread_condattr_t *attr);
• Wait on the condition variable:
  • int pthread_cond_wait(pthread_cond_t *cv, pthread_mutex_t *mutex);
  • The calling thread should hold mutex; the mutex is released atomically as the thread starts waiting on cv.
  • Upon successful return, the thread has re-acquired the mutex; however, waking up and re-acquiring the lock is not one atomic step.
Using Condition Variable
• Waking up waiting threads:
  • int pthread_cond_signal(pthread_cond_t *cv);
  • Unblocks one thread waiting on cv.
  • int pthread_cond_broadcast(pthread_cond_t *cv);
  • Unblocks all threads waiting on cv.
  • These two functions can be called with or without holding the mutex that the waiting threads passed to their wait calls, but it is usually better to call them while holding the mutex.
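A minimal usage sketch (not from the slides; the names done, mutex, and cv are placeholders): a waiter re-checks its condition in a while loop, because returning from pthread_cond_wait does not by itself guarantee that the condition is true.

  #include <pthread.h>

  pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
  pthread_cond_t  cv    = PTHREAD_COND_INITIALIZER;
  int done = 0;                         // the condition we are waiting for

  void *waiter(void *arg) {
    pthread_mutex_lock(&mutex);
    while (!done)                       // re-check the condition after every wakeup
      pthread_cond_wait(&cv, &mutex);   // releases mutex while blocked, re-acquires it on return
    pthread_mutex_unlock(&mutex);
    return NULL;
  }

  void *signaler(void *arg) {
    pthread_mutex_lock(&mutex);
    done = 1;                           // change the condition while holding the mutex
    pthread_cond_signal(&cv);           // wake one waiting thread
    pthread_mutex_unlock(&mutex);
    return NULL;
  }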
What is a Condition Variable?
• Each condition variable is a queue of blocked threads.
• The cond_wait(cv, mutex) call adds the calling thread to cv's queue while releasing mutex.
• The call returns after the thread has been unblocked (by another thread calling cond_signal) and has re-obtained the mutex.
• The cond_signal(cv) call removes one thread from the queue and unblocks it.
Implementing Semaphore using Mutex and Cond Var

struct semaphore {
  pthread_cond_t cond;
  pthread_mutex_t mutex;
  int count;
};
typedef struct semaphore semaphore_t;

int semaphore_wait(semaphore_t *sem)
{
  int res = pthread_mutex_lock(&(sem->mutex));
  if (res != 0) return res;
  sem->count--;
  if (sem->count < 0)
    res = pthread_cond_wait(&(sem->cond), &(sem->mutex));
  pthread_mutex_unlock(&(sem->mutex));
  return res;
}
Implementing Semaphore using Mutex and Cond Var

int semaphore_post(semaphore_t *sem)
{
  int res = pthread_mutex_lock(&(sem->mutex));
  if (res != 0) return res;
  sem->count++;
  if (sem->count <= 0) {
    res = pthread_cond_signal(&(sem->cond));
  }
  pthread_mutex_unlock(&(sem->mutex));
  return res;
}
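The slides show wait and post only; a matching initialization function might look like this (a sketch; the name semaphore_init and its signature are assumptions, not from the slides):

  int semaphore_init(semaphore_t *sem, int initial_count)
  {
    sem->count = initial_count;                        // e.g., 1 for a binary semaphore
    int res = pthread_mutex_init(&(sem->mutex), NULL);
    if (res != 0) return res;
    return pthread_cond_init(&(sem->cond), NULL);
  }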
Usage of Semaphore: Bounded Buffer
• Implement a queue that has two functions:
  • enqueue() - adds one item to the queue; it blocks if the queue is full.
  • dequeue() - removes one item from the queue; it blocks if the queue is empty.
• Strategy:
  • Use an _emptySem semaphore that dequeue() will use to wait until there are items in the queue.
  • Use a _fullSem semaphore that enqueue() will use to wait until there is space in the queue.
Bounded Buffer

#include <pthread.h>
#include <semaphore.h>

enum { MaxSize = 10 };

class BoundedBuffer {
  int _queue[MaxSize];
  int _head;
  int _tail;
  pthread_mutex_t _mutex;
  sem_t _emptySem;
  sem_t _fullSem;
public:
  BoundedBuffer();
  void enqueue(int val);
  int dequeue();
};

BoundedBuffer::BoundedBuffer()
{
  _head = 0;
  _tail = 0;
  pthread_mutex_init(&_mutex, NULL);
  sem_init(&_emptySem, 0, 0);        // initially no items
  sem_init(&_fullSem, 0, MaxSize);   // initially MaxSize free slots
}
Bounded Buffer

void BoundedBuffer::enqueue(int val)
{
  sem_wait(&_fullSem);               // wait for a free slot
  pthread_mutex_lock(&_mutex);
  _queue[_tail] = val;
  _tail = (_tail + 1) % MaxSize;
  pthread_mutex_unlock(&_mutex);
  sem_post(&_emptySem);              // one more item available
}

int BoundedBuffer::dequeue()
{
  sem_wait(&_emptySem);              // wait for an item
  pthread_mutex_lock(&_mutex);
  int val = _queue[_head];
  _head = (_head + 1) % MaxSize;
  pthread_mutex_unlock(&_mutex);
  sem_post(&_fullSem);               // one more free slot
  return val;
}
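A small usage sketch (illustrative only; the producer/consumer functions and the item count are not from the slides): one thread enqueues items while another dequeues them, blocking automatically when the buffer is full or empty.

  #include <pthread.h>
  #include <cstdio>

  BoundedBuffer buffer;                   // shared instance of the class above

  void *producer(void *arg) {
    for (int i = 0; i < 100; i++)
      buffer.enqueue(i);                  // blocks when the buffer already holds MaxSize items
    return NULL;
  }

  void *consumer(void *arg) {
    for (int i = 0; i < 100; i++)
      printf("%d\n", buffer.dequeue());   // blocks when the buffer is empty
    return NULL;
  }

  int main() {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
  }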
Bounded Buffer

Assume the queue is empty.

T1: v = dequeue()
    sem_wait(&_emptySem);  _emptySem.count == -1  -> T1 waits
T2: v = dequeue()
    sem_wait(&_emptySem);  _emptySem.count == -2  -> T2 waits
T3: enqueue(6)
    sem_wait(&_fullSem)
    put item in queue
    sem_post(&_emptySem);  _emptySem.count == -1  -> wakeup T1
T1: continues; gets item from queue
Bounded Buffer

Assume the queue is empty.

T1:  enqueue(1)
     sem_wait(&_fullSem);  _fullSem.count == 9
     put item in queue
T2:  enqueue(2)
     sem_wait(&_fullSem);  _fullSem.count == 8
     put item in queue
...
T10: enqueue(10)
     sem_wait(&_fullSem);  _fullSem.count == 0
     put item in queue
Bounded Buffer

T11: enqueue(11)
     sem_wait(&_fullSem);   _fullSem.count == -1  -> T11 waits
T12: val = dequeue()
     sem_wait(&_emptySem);  _emptySem.count == 9
     get item from queue
     sem_post(&_fullSem);   _fullSem.count == 0   -> wakeup T11
Bounded Buffer Notes
• The counter for _emptySem represents the number of items in the queue.
• The counter for _fullSem represents the number of free spaces in the queue.
• The mutex lock is still necessary, since sem_wait(&_emptySem) or sem_wait(&_fullSem) may allow more than one thread into the critical section at the same time.
Read/Write Locks

They are locks for data structures that can be read by multiple threads simultaneously (multiple readers) but that can be modified by only one thread at a time.

Example uses: databases, lookup tables, dictionaries, etc., where lookups are more frequent than modifications.
Read/Write Locks

Multiple readers may read the data structure simultaneously. Only one writer may modify it, and it needs to exclude the readers.

Interface:
  readLock()    – Lock for reading. Wait if there are writers holding the lock.
  readUnlock()  – Unlock for reading.
  writeLock()   – Lock for writing. Wait if there are readers or writers holding the lock.
  writeUnlock() – Unlock for writing.
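A usage sketch around a shared lookup table (illustrative only; the table and the lookup/update functions are placeholders, and RWLock is the class implemented on the following slides):

  #include <map>

  RWLock tableLock;                    // read/write lock protecting the table
  std::map<int, int> table;            // shared lookup table (illustrative)

  int lookup(int key) {                // many readers may run this concurrently
    tableLock.readLock();
    int value = table.count(key) ? table.at(key) : -1;
    tableLock.readUnlock();
    return value;
  }

  void update(int key, int value) {    // a writer excludes readers and other writers
    tableLock.writeLock();
    table[key] = value;
    tableLock.writeUnlock();
  }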
Read/Write Locks

Threads (RL = readLock, RU = readUnlock, WL = writeLock, WU = writeUnlock):

R1: RL
R2: RL
R3: RL
W1: WL  -> waits (readers hold the lock)
R1: RU
R2: RU
R3: RU
W1: continues (last reader released the lock)
R4: RL  -> waits (writer holds the lock)
W1: WU
R4: continues
Read/Write Locks Implementation

class RWLock {
  int _nreaders;
  sem_t _semAccess;        // controls access for readers/writers
  pthread_mutex_t _mutex;
public:
  RWLock();
  void readLock();
  void writeLock();
  void readUnlock();
  void writeUnlock();
};

RWLock::RWLock()
{
  _nreaders = 0;
  sem_init(&_semAccess, 0, 1);       // binary semaphore, initially available
  pthread_mutex_init(&_mutex, NULL);
}
Read/Write Locks Implementation

void RWLock::readLock()
{
  pthread_mutex_lock(&_mutex);
  _nreaders++;
  if (_nreaders == 1) {
    // This is the first reader:
    // acquire _semAccess to exclude writers.
    sem_wait(&_semAccess);
  }
  pthread_mutex_unlock(&_mutex);
}

void RWLock::readUnlock()
{
  pthread_mutex_lock(&_mutex);
  _nreaders--;
  if (_nreaders == 0) {
    // This is the last reader:
    // allow one writer to proceed, if any.
    sem_post(&_semAccess);
  }
  pthread_mutex_unlock(&_mutex);
}
Read/Write Locks Implementation

void RWLock::writeLock()
{
  sem_wait(&_semAccess);
}

void RWLock::writeUnlock()
{
  sem_post(&_semAccess);
}
Read/Write Locks Example

R1: readLock
    nreaders++ (1)
    if (nreaders == 1) sem_wait  -> continues
R2: readLock
    nreaders++ (2)
R3: readLock
    nreaders++ (3)
W1: writeLock
    sem_wait  -> blocks
Read/Write Locks Example

W2: writeLock
    sem_wait  -> blocks
R1: readUnlock()
    nreaders-- (2)
R2: readUnlock()
    nreaders-- (1)
R3: readUnlock()
    nreaders-- (0)
    if (nreaders == 0) sem_post  -> W1 continues
W1: writeUnlock
    sem_post  -> W2 continues
Read/Write Locks Example

(W2 is holding the lock in write mode)

R1: readLock
    mutex_lock
    nreaders++ (1)
    if (nreaders == 1) sem_wait  -> blocks (still holding _mutex)
R2: readLock
    mutex_lock  -> blocks
W2: writeUnlock
    sem_post  -> R1 continues
R1: mutex_unlock  -> R2 continues
Notes on Read/Write Locks
• Fairness in locking: first come, first served.
  • Mutexes and semaphores are fair: the thread that has been waiting the longest is the first one to wake up.
  • Spin locks do not guarantee fairness; the thread that has waited the longest may not be the one that gets the lock.
  • This should not be an issue in the situations where spin locks are appropriate, namely low contention and short lock-holding times.
• This implementation of read/write locks suffers from "starvation" of writers: a writer may never be able to write if the number of readers is always greater than 0.
Write Lock Starvation (Overlapping readers)

(RL = readLock, RU = readUnlock, WL = writeLock, WU = writeUnlock)

R1: RL
R2: RL
R3: RL
W1: WL  -> waits
R1: RU        R4: RL
R2: RU        R1: RL
R3: RU        R2: RL
...

New readers keep arriving before the last reader unlocks, so nreaders never drops to 0 and W1 never acquires the write lock.
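One way to avoid writer starvation (a sketch, not the course's implementation) is to use a mutex with two condition variables and a count of waiting writers: arriving readers wait whenever a writer is active or waiting, so writers are preferred.

  #include <pthread.h>

  class WriterPreferredRWLock {
    pthread_mutex_t _mutex;
    pthread_cond_t  _readersOK;     // signaled when readers may proceed
    pthread_cond_t  _writersOK;     // signaled when a writer may proceed
    int _nreaders;                  // active readers
    int _nwriters;                  // active writers (0 or 1)
    int _waitingWriters;            // writers waiting for the lock
  public:
    WriterPreferredRWLock() : _nreaders(0), _nwriters(0), _waitingWriters(0) {
      pthread_mutex_init(&_mutex, NULL);
      pthread_cond_init(&_readersOK, NULL);
      pthread_cond_init(&_writersOK, NULL);
    }
    void readLock() {
      pthread_mutex_lock(&_mutex);
      // Readers yield to any active or waiting writer.
      while (_nwriters > 0 || _waitingWriters > 0)
        pthread_cond_wait(&_readersOK, &_mutex);
      _nreaders++;
      pthread_mutex_unlock(&_mutex);
    }
    void readUnlock() {
      pthread_mutex_lock(&_mutex);
      if (--_nreaders == 0)
        pthread_cond_signal(&_writersOK);     // last reader lets a writer in
      pthread_mutex_unlock(&_mutex);
    }
    void writeLock() {
      pthread_mutex_lock(&_mutex);
      _waitingWriters++;
      while (_nreaders > 0 || _nwriters > 0)
        pthread_cond_wait(&_writersOK, &_mutex);
      _waitingWriters--;
      _nwriters = 1;
      pthread_mutex_unlock(&_mutex);
    }
    void writeUnlock() {
      pthread_mutex_lock(&_mutex);
      _nwriters = 0;
      if (_waitingWriters > 0)
        pthread_cond_signal(&_writersOK);     // prefer the next waiting writer
      else
        pthread_cond_broadcast(&_readersOK);  // otherwise release all waiting readers
      pthread_mutex_unlock(&_mutex);
    }
  };

With this design a stream of overlapping readers can no longer starve a writer: as soon as a writer starts waiting, new readers block until all writers are done.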
Deadlock and Starvation
• Deadlock
  • Happens when one or more threads have to block forever (or until the process is terminated) because they are waiting for a resource that will never become available.
  • Once a deadlock happens, the process has to be killed; therefore we have to prevent deadlocks in the first place.
• Starvation
  • Not as serious as a deadlock. Starvation happens when a thread may have to wait for a long time before a resource becomes available.
  • Example: the read/write locks above.
Example of a Deadlock

Assume two bank accounts protected with two mutexes.

int balance1 = 100;
int balance2 = 20;
mutex_t m1, m2;

Transfer1_to_2(int amount)
{
  mutex_lock(&m1);
  mutex_lock(&m2);
  balance1 -= amount;
  balance2 += amount;
  mutex_unlock(&m1);
  mutex_unlock(&m2);
}

Transfer2_to_1(int amount)
{
  mutex_lock(&m2);
  mutex_lock(&m1);
  balance2 -= amount;
  balance1 += amount;
  mutex_unlock(&m2);
  mutex_unlock(&m1);
}
Example of a Deadlock

Thread 1                            Thread 2
--------                            --------
Transfer1_to_2(int amount) {
  mutex_lock(&m1);
  <context switch>
                                    Transfer2_to_1(int amount) {
                                      mutex_lock(&m2);
                                      mutex_lock(&m1);
                                      <blocks waiting for m1>
  mutex_lock(&m2);
  <blocks waiting for m2>
Example of a Deadlock

Once a deadlock happens, the process becomes unresponsive and you have to kill it. Before killing it, get as much information as possible, since this event is usually difficult to reproduce. Use gdb to attach the debugger to the process and see where the deadlock happens.

gdb progname <pid>
gdb> info threads            // Lists all threads
gdb> thread <thread number>  // Switch to a thread
gdb> where                   // Prints the stack trace

Do this for every thread. Then you can kill the process.
Deadlock
• A deadlock happens when there is a combination of instructions in time that causes resources and threads to wait for each other.
• You may need to run your programs for a long time and stress-test them in order to find possible deadlocks.
• You can also increase the probability of triggering a deadlock by running your program on a multi-processor (multi-core) machine.
• We need to prevent deadlocks from happening in the first place.
Graph Representation of Deadlocks

[Figure: two small graphs. A directed edge from thread T1 to mutex M1 means "thread T1 is waiting for mutex M1"; a directed edge from mutex M1 to thread T1 means "thread T1 is holding mutex M1".]
Deadlock Representation

[Figure: T1 holds M1 and waits for M2, while T2 holds M2 and waits for M1.]

Deadlock = Cycle in the graph.
Larger Deadlock

[Figure: a larger wait-for graph in which four threads (T1–T4) and four mutexes (M1–M4) form a cycle.]
Deadlock Prevention

A deadlock is represented as a cycle in the graph. To prevent deadlocks, we assign an order to the locks: m1, m2, m3, ... Notice in the previous graph that the cycle follows the ordering of the mutexes except at one point.
Deadlock Prevention
• Deadlock Prevention:
  • When locking several mutexes mi, always lock the mutexes with lower index i before the ones with higher index.
  • If m1 and m3 have to be locked, lock m1 before locking m3.
  • This prevents deadlocks because no thread ever locks a higher-numbered mutex before a lower-numbered one, which breaks the cycle.
Lock Ordering => Deadlock Prevention
• Claim:
  • By following the lock ordering, deadlocks are prevented.
• Proof by contradiction:
  • Assume that the ordering was followed but we still have a cycle.
  • Following the cycle in the directed graph, we find mi before mj. Most of the time i < j, but because it is a cycle, at some point we must find i > j.
  • This means that a thread locked mi before mj where i > j, so it did not follow the ordering. This contradicts our assumption.
  • Therefore, lock ordering prevents deadlocks.
Preventing a Deadlock

Rearranging the bank code to prevent the deadlock, we make sure that the mutexes are locked in order.

int balance1 = 100;
int balance2 = 20;
mutex_t m1, m2;

Transfer1_to_2(int amount)
{
  mutex_lock(&m1);
  mutex_lock(&m2);
  balance1 -= amount;
  balance2 += amount;
  mutex_unlock(&m1);
  mutex_unlock(&m2);
}

Transfer2_to_1(int amount)
{
  mutex_lock(&m1);
  mutex_lock(&m2);
  balance2 -= amount;
  balance1 += amount;
  mutex_unlock(&m2);
  mutex_unlock(&m1);
}
Preventing a Deadlock

We can rewrite the Transfer functions more generically as:

int balance[MAXACCOUNTS];
mutex_t mutex[MAXACCOUNTS];

Transfer_i_to_j(int i, int j, int amount)
{
  if (i < j) {
    mutex_lock(&mutex[i]);
    mutex_lock(&mutex[j]);
  } else {
    mutex_lock(&mutex[j]);
    mutex_lock(&mutex[i]);
  }
  balance[i] -= amount;
  balance[j] += amount;
  mutex_unlock(&mutex[i]);
  mutex_unlock(&mutex[j]);
}
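When locks have no natural global numbering, a common variant of the same idea (an assumption here, not from the slides) is to order the two mutexes by their addresses before locking:

  #include <pthread.h>
  #include <functional>   // std::less provides a total order over pointers

  // Lock two distinct mutexes in a globally consistent order (lower address first).
  void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
  {
    if (std::less<pthread_mutex_t *>()(a, b)) {
      pthread_mutex_lock(a);
      pthread_mutex_lock(b);
    } else {
      pthread_mutex_lock(b);
      pthread_mutex_lock(a);
    }
  }

If every thread that needs both mutexes acquires them through lock_pair, no cycle can form, just as with the index-based ordering above.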
Ordering of Unlocking

Since mutex_unlock does not force threads to wait, the ordering of unlocking does not matter.
Review Questions
What are condition variables? What is the behavior of wait/signal on a CV?
How to implement semaphores using a CV and a mutex?
How to implement a bounded buffer using semaphores?
What is a deadlock? How to prevent deadlocks by enforcing a global ordering of locks? Why does this prevent deadlocks?
Review Questions
What are read/write locks? What is the behavior of read/write lock/unlock?
How to implement R/W locks using a semaphore?
Why can the implementation given in the slides cause writer starvation?
How to implement a read/write lock where writers are preferred (i.e., when a writer is waiting, no reader can gain the read lock and must wait until all writers are finished)?