A Critical Section

while (true) {
    non-critical section#1;
    entry protocol;
    critical section;
    exit protocol;
    non-critical section#2;
}
Mutual Exclusion
• Mutual exclusion = at most one worker in its critical section at a time
• Avoid deadlock: two or more workers must never lock one another out
• Avoid useless delay: never wait to enter the critical section when no one else is in theirs
• Avoid starvation: a worker that wants in will eventually get to its critical section
Synchronization Tools
• Basic (unadorned) shared variables
• Spin locks
  • atomic test & set primitives
• Semaphores
  • wait & signal primitives
• Barriers & Monitors
  • built using locks and/or semaphores
Shared Variable: 1st Attempt

int turn = 1;

[thread#1] {
    while (true) {
        while (turn == 2) /* do nothing */;
        critical section;
        turn = 2;
        non-critical section#1;
    }
}

[thread#2] {
    while (true) {
        while (turn == 1) /* do nothing */;
        critical section;
        turn = 1;
        non-critical section#2;
    }
}
First Attempt Evaluation
• Mutual exclusion?
  • yup. only one gets in at a time.
• Deadlock?
  • nope. with a single turn variable the two can never block each other forever
• Useless delay?
  • absolutely. they are forced to alternate in lock-step, even if one never wants back in
• Starvation?
  • yup. one could die holding the “key” to the lock, stranding the other
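The lock-step behaviour is easy to see in a runnable sketch (Python threads stand in for workers; `trace` and `ROUNDS` are our names, and CPython's GIL makes the plain loads and stores behave like the pseudocode). Whatever the scheduling, the trace must strictly alternate, which is exactly the useless delay and starvation risk noted above:

```python
import threading

turn = 1              # the single shared variable from Attempt 1
trace = []            # records who entered the critical section, in order
ROUNDS = 3

def worker(me, other):
    global turn
    for _ in range(ROUNDS):
        while turn != me:      # entry protocol: spin until it is our turn
            pass
        trace.append(me)       # critical section
        turn = other           # exit protocol: hand the turn over

t1 = threading.Thread(target=worker, args=(1, 2))
t2 = threading.Thread(target=worker, args=(2, 1))
t1.start(); t2.start()
t1.join(); t2.join()
print(trace)   # strict alternation: [1, 2, 1, 2, 1, 2]
```

The alternation is forced by the algorithm, not by luck of scheduling: each worker can only pass the turn back, never take two turns in a row.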
Shared Variable: 2nd Attempt

int one = 1, two = 1;

[thread#1] {
    while (true) {
        while (two == 0) /* do nothing */;
        one = 0;
        critical section;
        one = 1;
        non-critical section#1;
    }
}

[thread#2] {
    while (true) {
        while (one == 0) /* do nothing */;
        two = 0;
        critical section;
        two = 1;
        non-critical section#2;
    }
}
Second Attempt Evaluation
• Mutual exclusion?
  • nope. both can get in at the same time!
• Consider this history
  • Phase 1: each checks the other’s variable (both still 1)
  • Phase 2: each sets its own variable to 0
  • Phase 3: each enters its critical section
Shared Variable: 3rd Attempt

int one = 1, two = 1;

[thread#1] {
    while (true) {
        one = 0;
        while (two == 0) /* do nothing */;
        critical section;
        one = 1;
        non-critical section#1;
    }
}

[thread#2] {
    while (true) {
        two = 0;
        while (one == 0) /* do nothing */;
        critical section;
        two = 1;
        non-critical section#2;
    }
}
Third Attempt Evaluation
• Mutual exclusion?
  • yup. only one gets in at a time.
• Deadlock?
  • yup. each states its intent to enter, and then neither may ever get in
• Consider this history
  • Phase 1: each sets its variable of intent
  • Phase 2: each checks the other’s variable forever
Shared Variable: 4th Attempt

int one = 1, two = 1;

[thread#1] {
    while (true) {
        one = 0;
        while (two == 0) {
            one = 1;
            /* do nothing for a few moments */
            one = 0;
        }
        critical section;
        one = 1;
        non-critical section#1;
    }
}

[thread#2] {
    while (true) {
        two = 0;
        while (one == 0) {
            two = 1;
            /* do nothing for a few moments */
            two = 0;
        }
        critical section;
        two = 1;
        non-critical section#2;
    }
}
Fourth Attempt Evaluation
• Mutual exclusion?
  • yup. only one gets in at a time.
• Deadlock?
  • nope. the inner while loop backs off, provided we establish a random delay pattern
• Useless delay?
  • unlikely, but possible.
  • each could keep deferring to the other for several rounds (livelock)
So, Is There A Solution? • Tie-Breaker (Peterson’s) Algorithm • Ticket Algorithm • Bakery Algorithm • Dekker’s Algorithm • Something other than shared variables
Critical Section Method#1 • The Tie-Breaker (Peterson’s) algorithm • each worker has a globally visible variable • another variable tracks “who entered last” • Problem • Quite cumbersome for many workers • For busy scenarios, looks a lot like Round Robin methods for ordering/scheduling
Tie-Breaker Algorithm

bool in1 = false, in2 = false;
int last = 1;

[thread#1] {
    while (true) {
        non-critical section;
        in1 = true;
        last = 1;
        while (in2 and last == 1);
        critical section;
        in1 = false;
        non-critical section;
    }
}

[thread#2] {
    while (true) {
        non-critical section;
        in2 = true;
        last = 2;
        while (in1 and last == 2);
        critical section;
        in2 = false;
        non-critical section;
    }
}

• note the order of the writes: each worker must raise its “in” flag before writing last — writing last first lets both workers enter at once
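The tie-breaker can be exercised directly. A minimal sketch in Python (`in_flag`, `last`, and `N` are our names), leaning on CPython's GIL to make plain loads and stores sequentially consistent — real hardware would need memory fences here. Note that each worker raises its flag before writing `last`; doing those two writes in the other order can let both workers in at once:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # frequent thread switches to stress the protocol

in_flag = [False, False]      # in_flag[i]: worker i wants to enter
last = 0                      # who moved last (the one that must wait on a tie)
count = 0                     # shared counter protected by the tie-breaker
N = 1000

def worker(i):
    global last, count
    other = 1 - i
    for _ in range(N):
        in_flag[i] = True                       # announce intent first...
        last = i                                # ...then volunteer to lose ties
        while in_flag[other] and last == i:
            pass                                # spin while other is in/entering
        count += 1                              # critical section (non-atomic +=)
        in_flag[i] = False                      # exit protocol

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 2000: no increments lost
```

The `count += 1` is deliberately a plain, non-atomic read-modify-write; it only comes out right because the tie-breaker really does provide mutual exclusion.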
Critical Section Method#2 • The Ticket algorithm • one variable for active ticket number • each worker has separate ticket number • centralized variable to determine who is next • Problems • Really need atomic changes to some variables • Arithmetic overflow very likely in practice
Ticket Algorithm

int number = 1, next = 1, turn1 = 0, turn2 = 0;

[thread#1] {
    while (true) {
        non-critical section;
        [turn1 = number++;]      /* brackets = must be atomic (fetch-and-add) */
        while (turn1 != next);
        critical section;
        [next++;]
        non-critical section;
    }
}

[thread#2] {
    while (true) {
        non-critical section;
        [turn2 = number++;]
        while (turn2 != next);
        critical section;
        [next++;]
        non-critical section;
    }
}
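A runnable sketch of the ticket idea, assuming CPython: there, `next()` on an `itertools.count` object is atomic, so it can stand in for the fetch-and-add the slide says the algorithm really needs, and Python's unbounded integers sidestep the overflow worry (thread and iteration counts are our choices):

```python
import itertools
import sys
import threading

sys.setswitchinterval(1e-4)

dispenser = itertools.count(1)   # next() atomically returns 1, 2, 3, ...
next_serving = 1                 # the ticket number currently being served
count = 0
N = 500
N_THREADS = 3

def worker():
    global next_serving, count
    for _ in range(N):
        my_ticket = next(dispenser)        # atomically draw a ticket
        while next_serving != my_ticket:   # spin until our number is called
            pass
        count += 1                         # critical section
        next_serving += 1                  # only the ticket holder writes this

threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 1500
```

Only the current ticket holder ever writes `next_serving`, so that update needs no extra protection.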
Critical Section Method#3 • The Bakery algorithm • like Ticket, but decision to go “active” not centralized • workers compare tickets amongst themselves • advantage: no special locking instructions required • Problem • High communication overhead with many workers
Bakery Algorithm

int turn[1..n] = all zeros;

[thread#i] {                     /* every thread#i, 1 <= i <= n, runs the same code */
    while (true) {
        non-critical section;
        turn[i] = max(all turns) + 1;
        while (turn[i] > max(other turns));
        while (i > (all indexes with tying tickets));
        critical section;
        turn[i] = 0;
        non-critical section;
    }
}
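The bakery idea can be sketched runnably in Python. This version follows Lamport's original formulation, which adds a `choosing` flag so a worker caught mid-way through picking a number cannot be overtaken (the slide's simplified pseudocode omits it); the thread count and `ROUNDS` are our choices, and the GIL again stands in for sequentially consistent memory:

```python
import sys
import threading

sys.setswitchinterval(1e-4)

N_THREADS = 3
ROUNDS = 200
choosing = [False] * N_THREADS   # Lamport's flag: "still picking a number"
number = [0] * N_THREADS         # 0 means "not interested"
count = 0

def worker(i):
    global count
    for _ in range(ROUNDS):
        choosing[i] = True
        number[i] = 1 + max(number)          # ticket above all visible tickets
        choosing[i] = False
        for j in range(N_THREADS):
            while choosing[j]:               # let j finish picking its number
                pass
            # defer to j while j holds a smaller ticket (ties broken by index)
            while number[j] != 0 and (number[j], j) < (number[i], i):
                pass
        count += 1                           # critical section
        number[i] = 0                        # exit: hand the ticket back

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 600
```

The tuple comparison `(number[j], j) < (number[i], i)` is the "compare tickets amongst themselves" step: smaller ticket wins, and the lower thread index breaks ties.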
Critical Section Method#4 • Dekker’s Algorithm • It meets all four criteria satisfactorily • Perhaps best and most general of methods so far • Problem • Like all the others: very complex for > 2 workers
Dekker’s Algorithm

bool in1 = false, in2 = false;
int turn = 1;

[thread#1] {
    while (true) {
        non-critical section;
        in1 = true;
        while (in2) {
            if (turn == 2) {
                in1 = false;
                while (turn == 2);
                in1 = true;
            }
        }
        critical section;
        in1 = false;
        turn = 2;
        non-critical section;
    }
}

[thread#2] {
    while (true) {
        non-critical section;
        in2 = true;
        while (in1) {
            if (turn == 1) {
                in2 = false;
                while (turn == 1);
                in2 = true;
            }
        }
        critical section;
        in2 = false;
        turn = 1;
        non-critical section;
    }
}
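A runnable Python sketch of Dekker's back-off structure (`wants`, `turn`, and `N` are our names; as before, CPython's GIL supplies the sequentially consistent memory the algorithm assumes). The key move is that a worker whose turn it is not lowers its flag, waits out the other's turn, then raises it again:

```python
import sys
import threading

sys.setswitchinterval(1e-4)

wants = [False, False]   # wants[i]: thread i wants the critical section
turn = 0                 # who may insist when both want in
count = 0
N = 1000

def worker(i):
    global turn, count
    other = 1 - i
    for _ in range(N):
        wants[i] = True
        while wants[other]:
            if turn == other:            # not our turn: back off politely
                wants[i] = False
                while turn == other:
                    pass                 # wait until priority comes back to us
                wants[i] = True
        count += 1                       # critical section
        turn = other                     # hand priority to the other thread
        wants[i] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 2000
```

Handing `turn` over on every exit is what rules out starvation: a waiting worker is guaranteed priority next.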
Critical Section Method#5: Locks

Lock mylock = false;

[thread#1] {
    while (true) {
        non-critical section;
        [while (mylock == true);
         mylock = true;]
        critical section;
        [mylock = false;]
        non-critical section;
    }
}

[thread#2] {
    while (true) {
        non-critical section;
        [while (mylock == true);
         mylock = true;]
        critical section;
        [mylock = false;]
        non-critical section;
    }
}

• [while (mylock == true); mylock = true;] must be done as an atomic “Test and Set”
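Python exposes no raw test-and-set instruction, but `threading.Lock.acquire(blocking=False)` behaves like one: it atomically tries to grab the lock and reports whether it succeeded. That lets the slide's spin lock be sketched as below (the explicit spin only mirrors the slide; real Python code would just call a blocking `acquire()`):

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # frequent switches to stress the lock

tas = threading.Lock()        # acquire(blocking=False) acts as test-and-set
count = 0
N = 2000

def worker():
    global count
    for _ in range(N):
        while not tas.acquire(blocking=False):
            pass              # "test" failed: someone else holds the lock
        count += 1            # critical section
        tas.release()         # mylock = false

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 4000
```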
Locks and Semaphores
• Locks typically generate busy waiting
  • wasted CPU cycles on a multiprocessor
  • on a uniprocessor, the spinner burns its quantum until swapped out
  • use when expecting “short” waiting periods
• Semaphores typically generate sleeping
  • blocked workers are put to sleep
  • use when expecting “long” waiting periods
Semaphore Basics
• A semaphore is a nonnegative integer
  • initial value = # allowed in critical section
• Two basic semaphore operators
  • wait(s) can be interpreted a couple of ways:
    [ await (s > 0) s = s - 1; ]   or   [ while (s <= 0); s = s - 1; ]
  • signal(s) means [ s = s + 1; ]
  • wait and signal are often called P and V
Critical Section via Semaphores

sem fred = 1;

thread [i = 1 to n] {
    while (true) {
        P(fred);
        critical section;
        V(fred);
        noncritical section;
    }
}
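The same n-worker loop, sketched with Python's `threading.Semaphore` (thread and iteration counts are our choices): `acquire` and `release` play the roles of P and V, and blocked workers sleep instead of spinning:

```python
import threading

fred = threading.Semaphore(1)   # initial value 1 = one worker in at a time
count = 0
N = 5000
N_THREADS = 4

def worker():
    global count
    for _ in range(N):
        fred.acquire()   # P(fred): decrement, sleeping while the value is 0
        count += 1       # critical section
        fred.release()   # V(fred): increment, waking a sleeper if any

threads = [threading.Thread(target=worker) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 20000
```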
Critical Sections versus Barriers • Critical sections are all about “who gets in” • Barriers are all about “when to go in” or “when to continue” • Critical sections typically protect data • Barriers typically protect sequence • an example: data parallel search for largest • no one has to wait to get individual job done • however, need to synchronize after job done
Implementing Barriers • Shared counter • each worker increments counter when done • all workers done when counter reaches a limit • Per-worker flag • each worker has flag to set when done • all workers done when all flags set to done • Semaphores • still need counters or flags, but • each worker sleeps until all workers done
Barriers via Counters

int arrivals = 0;

[thread#1] {
    process before sync point;
    [arrivals++;]
    while (arrivals < # workers);
    process after sync point;
}

[thread#2] {
    process before sync point;
    [arrivals++;]
    while (arrivals < # workers);
    process after sync point;
}
Barriers via Per-Worker Flags

int arrive1 = 0, arrive2 = 0;

[thread#1] {
    process before sync point;
    arrive1++;
    while (arrive1 + arrive2 < 2);
    process after sync point;
}

[thread#2] {
    process before sync point;
    arrive2++;
    while (arrive1 + arrive2 < 2);
    process after sync point;
}
Barriers via Semaphores

sem arrive1 = 0, arrive2 = 0;

[thread#1] {
    process before sync point;
    V(arrive1);
    P(arrive2);
    process after sync point;
}

[thread#2] {
    process before sync point;
    V(arrive2);
    P(arrive1);
    process after sync point;
}
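A runnable sketch of the two-worker semaphore barrier (the `log` list and `note` helper are our additions, there only to check the ordering): each worker signals its own arrival, then sleeps on the other worker's semaphore, so no "after" step can ever precede either "before" step:

```python
import threading

arrive1 = threading.Semaphore(0)   # worker 1 signals this on arrival
arrive2 = threading.Semaphore(0)   # worker 2 signals this on arrival
log = []
log_lock = threading.Lock()        # keeps the trace list consistent

def note(msg):
    with log_lock:
        log.append(msg)

def worker1():
    note("1:before")
    arrive1.release()   # V(arrive1): announce arrival at the barrier
    arrive2.acquire()   # P(arrive2): sleep until worker 2 has arrived
    note("1:after")

def worker2():
    note("2:before")
    arrive2.release()
    arrive1.acquire()
    note("2:after")

t1 = threading.Thread(target=worker1)
t2 = threading.Thread(target=worker2)
t1.start(); t2.start()
t1.join(); t2.join()
# both "before" entries must precede both "after" entries
assert max(log.index("1:before"), log.index("2:before")) < \
       min(log.index("1:after"), log.index("2:after"))
```

Because the semaphores block rather than spin, this is the sleeping style of synchronization the earlier slide recommends for longer waits.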
Monitors • Basically an OO approach to synchronizing • Coordination details encapsulated • Monitor is essentially a class definition • Contains sync variables (e.g. a semaphore) • Contains sync methods • Rest of program uses monitor as black box • Monitor not new tool; new design, old tools
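As a sketch of the idea — `BoundedCounter` is our illustration, not a standard class — a Python monitor can bundle a lock and a condition variable behind ordinary method calls, so callers never touch the synchronization machinery directly:

```python
import threading

class BoundedCounter:
    """A tiny monitor: shared state plus its sync tools, behind methods.

    Callers just invoke increment()/decrement(); the lock and condition
    variable are private details of the monitor (the "old tools" wrapped
    in a "new design").
    """
    def __init__(self, limit):
        self._value = 0
        self._limit = limit
        self._lock = threading.Lock()
        self._room = threading.Condition(self._lock)  # signalled when value < limit

    def increment(self):
        with self._lock:                      # monitor entry: one caller at a time
            while self._value >= self._limit:
                self._room.wait()             # sleep until there is room
            self._value += 1

    def decrement(self):
        with self._lock:
            self._value -= 1
            self._room.notify()               # wake one waiting incrementer

    def value(self):
        with self._lock:
            return self._value

c = BoundedCounter(limit=2)
c.increment()
c.increment()
c.decrement()
print(c.value())   # 1
```

Note the `while` (not `if`) around `wait()`: a woken caller re-checks the condition before proceeding, which is the usual discipline for condition variables.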