Shared Memory Coordination
• We will be looking at process coordination using shared memory and busy waiting.
• So we don't send messages but instead read and write shared variables.
• When we need to wait, we loop and don't context switch.
• This can be wasteful of resources if we must wait a long time.
Shared Memory Coordination
• Context switching primitives normally use busy waiting in their implementation.
• Mutual Exclusion
• Consider adding one to a shared variable V.
• When compiled, on many machines this becomes three instructions:

    load  r1 V
    add   r1 r1+1
    store r1 V
Mutual Exclusion
• Assume V is initially 10 and one process begins the 3-instruction sequence.
• After the first instruction there is a context switch to another process.
• Registers are of course saved.
• The new process does all three instructions.
• Context switch back.
• Registers are of course restored.
• The first process finishes.
• V has been incremented twice but has only reached 11.
Mutual Exclusion
• The problem is that the 3-instruction sequence must be atomic, i.e. it cannot be interleaved with another execution of these instructions.
• That is, one execution excludes the possibility of another. So they must exclude each other, i.e. we must have mutual exclusion.
• This was a race condition.
• These are hard bugs to find since they are non-deterministic.
• In general more than two processes can be involved.
Mutual Exclusion
• The portion of code that requires mutual exclusion is often called a critical section.
• One approach is to prevent context switching.
• We can do this for the kernel of a uniprocessor:
• Mask interrupts.
• Not feasible for user-mode processes.
• Not feasible for multiprocessors.
Mutual Exclusion
• The Critical Section Problem is to implement:

    loop
        trying-part
        critical-section
        releasing-part
        non-critical-section
    end loop

• So that when many processes execute this you never have more than one in the critical section.
• That is, you must write the trying-part and the releasing-part.
Mutual Exclusion
• Trivial solution: let the releasing-part be simply "halt".
• This shows we need to specify the problem better.
• Additional requirement:
• Assume that if a process begins execution of its critical section and no other process enters the critical section, then the first process will eventually exit the critical section.
Mutual Exclusion
• Then the requirement is: "If a process is executing its trying part, then some process will eventually enter the critical section."
• Software-only solutions to the CS problem:
• We assume the existence of atomic loads and stores.
• Only up to word length (i.e. not a whole page).
• We start with the case of two processes.
• It is easy if we want the tasks to alternate in the CS and we know which one goes first.
Mutual Exclusion

    shared int turn = 1

    Process 1               Process 2
    loop                    loop
        while (turn = 2)        while (turn = 1)
        CS                      CS
        turn = 2                turn = 1
        NCS                     NCS
Mutual Exclusion
• But always alternating does not satisfy the additional requirement above.
• Let the NCS for process 1 be an infinite loop (or a halt).
• We will reach a point where process 2 is in its trying part but turn = 1, and turn will not change.
• So some process has entered its trying part but neither process will enter the CS.
Mutual Exclusion
• The first solution that worked was discovered by a mathematician named Dekker.
• Now we will use turn only to resolve disputes.
Dekker’s Algorithm

    /* Variables are global and shared; turn is initially 1. */
    int p1wants = 0, p2wants = 0, turn = 1;

    for (;;) {      // process 1 - an infinite loop to show it
                    // enters the CS more than once
        p1wants = 1;
        while (p2wants == 1) {
            if (turn == 2) {
                p1wants = 0;
                while (turn == 2) { /* empty loop */ }
                p1wants = 1;
            }
        }
        critical_section();
        turn = 2;
        p1wants = 0;
        noncritical_section();
    }
Dekker’s Algorithm

    /* Same global shared variables as process 1; turn is initially 1. */
    for (;;) {      // process 2 - an infinite loop to show it
                    // enters the CS more than once
        p2wants = 1;
        while (p1wants == 1) {
            if (turn == 1) {
                p2wants = 0;
                while (turn == 1) { /* empty loop */ }
                p2wants = 1;
            }
        }
        critical_section();
        turn = 1;
        p2wants = 0;
        noncritical_section();
    }
Mutual Exclusion
• The winner-to-be just loops waiting for the loser to give up and then goes into the CS.
• The loser-to-be:
• Gives up.
• Waits to see that the winner has finished.
• Starts over (knowing it will win).
• Dijkstra extended Dekker's solution to more than 2 processes.
• Others improved the fairness of Dijkstra's algorithm.
Mutual Exclusion
• These complicated methods remained the simplest known until 1981, when Peterson found a much simpler method.
• Keep Dekker's idea of using turn only to resolve disputes, but drop the complicated then-branch of the if.
Peterson’s Algorithm

    /* Variables are global and shared. */
    for (;;) {      // process 1 - an infinite loop to show it
                    // enters the CS more than once
        p1wants = 1;
        turn = 2;
        while (p2wants && turn == 2) { /* empty loop */ }
        critical_section();
        p1wants = 0;
        noncritical_section();
    }
Peterson’s Algorithm

    /* Variables are global and shared. */
    for (;;) {      // process 2 - an infinite loop to show it
                    // enters the CS more than once
        p2wants = 1;
        turn = 1;
        while (p1wants && turn == 1) { /* empty loop */ }
        critical_section();
        p2wants = 0;
        noncritical_section();
    }
Semaphores
• Trying and releasing are often called entry and exit, wait and signal, down and up, or P and V (the latter are from Dutch words, since Dijkstra is Dutch).
• Let’s try to formalize the entry and exit parts.
• To get mutual exclusion we need to ensure that no more than one task can pass through P until a V has occurred.
• The idea is to keep trying to walk through the gate, and when you succeed, atomically close the gate behind you so that no one else can enter.
Semaphores
• Definition (not an implementation):
• Let S be an enumerated type with values closed and open (like a gate).

    P(S) is
        while S = closed
        S ← closed

• The failed test and the assignment are a single atomic action.
Semaphores

    P(S) is
      label:
        {[                  -- begin atomic part
        if S = open
            S ← closed
        else
        ]}                  -- end atomic part
            goto label

    V(S) is
        S ← open
Semaphores
• Note that this P and V (not yet implemented) can be used to solve the critical section problem very easily.
• The entry part is P(S).
• The exit part is V(S).
• Note that Dekker and Peterson do not give us a P and V, since each process has a unique entry and a unique exit.
• S is called a (binary) semaphore.
Semaphores
• To implement binary semaphores we need some help from our hardware friends.

    TestAndSet(X) is        -- X is a Boolean, passed in-out
        oldx ← X
        X ← true
        return oldx

• Note that the name is a good one: this function tests the value of X and sets it (i.e. sets it true; reset would set it false).
Semaphores
• Now P and V for binary semaphores are trivial.
• S is a Boolean variable (false is open, true is closed).

    P(S) is
        while (TestAndSet(S))   -- empty loop

    V(S) is
        S ← false

• This works fine no matter how many processes are involved.
Counting Semaphores
• Now we want to permit a bounded number of processes into what might be called a semi-critical section.

    loop
        P(S)
        SCS     // at most k processes can be here simultaneously
        V(S)
        NCS

• A semaphore S with this property is called a counting semaphore.
Counting Semaphores
• If k = 1 we get a binary semaphore, so the counting semaphore generalizes the binary semaphore.
• How can we implement a counting semaphore given binary semaphores?
• S is a nonnegative integer.
• Initialize S to k, the maximum number allowed in the SCS.
• Use k = 1 to get a binary semaphore (hence the name binary).
• We only ask for:
• A limit of k in the SCS (the analogue of mutual exclusion).
• Progress: if a process enters P and fewer than k are in the SCS, some process will enter the SCS.
Counting Semaphores
• We do not ask for fairness, and we don't assume it (for the binary semaphores) either.

    binary semaphore q      // initially open
    binary semaphore r      // initially closed
    integer NS              // keeps the value of S; may go negative

    P(S) is                 V(S) is
        P(q)                    P(q)
        NS--                    NS++
        if NS < 0               if NS <= 0
            V(q)                    V(r)
            P(r)                else
        V(q)                        V(q)
Mutual Exclusion
• Now try to do mutual exclusion without shared memory.
• Centralized approach:
• Pick a process as coordinator (a mutual-exclusion server).
• To get access to the critical section, send a message to the coordinator and await a reply.
• When you leave the CS, send a message to the coordinator.
• When the coordinator gets a message requesting the CS, it:
• Replies if the CS is free.
• Otherwise enters the requester's name into the waiting queue.
Mutual Exclusion
• When the coordinator gets a message announcing departure from the CS, it:
• Removes the head entry from the list of waiters and replies to it.
• This is the simplest solution and perhaps the best.
• Distributed solution:
• When you want to get into the CS:
• Send a request message to everyone (except yourself).
• Include a timestamp (logical clock!).
• Wait until you receive OK from everyone.
• When you receive a request ...
Mutual Exclusion
• If you are not in the CS and don't want to be, say OK.
• If you are in the CS, put the requester's name on a list.
• If you are not in the CS but want to be:
• If your timestamp is lower, put the requester's name on the list.
• If your timestamp is higher, send OK.
• When you leave the CS, send OK to all on your list.
• Token-passing solution:
• Form a logical ring.
• Pass a token around the ring.
• When you have the token you can enter the CS (hold on to the token until you exit).
Mutual Exclusion
• Comparison:
• Centralized is best.
• Distributed is of theoretical interest.
• Token passing is good if the hardware is ring based (e.g. a token-ring LAN).