1. CS 571 - Lecture 3: Process Synchronization & Deadlock. Ch. 6 (6.1-6.7), 7
Spring 2008
2. GMU CS 571 Process Synchronization Race Conditions
The Critical Section Problem
Synchronization Hardware
Classical Problems of Synchronization
Semaphores
Monitors
Deadlock
3. GMU CS 571 Concurrent Access to Shared Data Suppose that two processes A and B have access to a shared variable Balance.
PROCESS A: PROCESS B:
Balance = Balance - 100 Balance = Balance + 200
Further, assume that Process A and Process B are executing concurrently in a time-shared, multi-programmed system.
4. GMU CS 571 Concurrent Access to Shared Data The statement Balance = Balance - 100 is implemented by several machine-level instructions, such as:
A1. lw $t0, BALANCE
A2. sub $t0,$t0,100
A3. sw $t0, BALANCE
Similarly, Balance = Balance + 200 can be implemented by the following:
B1. lw $t1, BALANCE
B2. add $t1,$t1,200
B3. sw $t1, BALANCE
5. GMU CS 571 Race Conditions Scenario 1:
A1. lw $t0, BALANCE
A2. sub $t0,$t0,100
A3. sw $t0, BALANCE
Context Switch
B1. lw $t1, BALANCE
B2. add $t1,$t1,200
B3. sw $t1, BALANCE
Balance is increased by 100: here each update completes before the other begins, so the net result is correct. A context switch in the middle of A's load-modify-store (between A1 and A3) would instead let B read the stale value, losing one of the updates.
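The lost-update interleaving can be reproduced by simulating the machine-level steps by hand. A minimal single-threaded Python sketch (the context switch is simulated, not real; variables t0/t1 play the role of the registers):

```python
# Simulate the machine-level interleaving of Processes A and B.
balance = 1000

t0 = balance        # A1: lw  $t0, BALANCE
t1 = balance        # B1: lw  $t1, BALANCE  (context switch before A finishes)
t0 = t0 - 100       # A2: sub $t0, $t0, 100
balance = t0        # A3: sw  $t0, BALANCE  -> balance is now 900
t1 = t1 + 200       # B2: add $t1, $t1, 200 (uses the stale value 1000)
balance = t1        # B3: sw  $t1, BALANCE  -> balance is now 1200

# A's withdrawal has been lost: the correct result would be 1100.
print(balance)      # 1200
```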
6. GMU CS 571 Race Conditions Situations where multiple processes are writing or reading some shared data and the final result depends on who runs precisely when are called race conditions.
A serious problem for most concurrent systems using shared variables! Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
We must make sure that some high-level code sections are executed atomically.
An atomic operation completes in its entirety, without interruption by any other potentially conflicting process.
7. GMU CS 571 The Critical-Section Problem n processes all competing to use some shared data
Each process has a code segment, called critical section (critical region), in which the shared data is accessed.
Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
The execution of the critical sections by the processes must be mutually exclusive in time.
8. GMU CS 571 Mutual Exclusion
9. GMU CS 571 Solving Critical-Section Problem Any solution to the problem must satisfy three conditions.
Mutual Exclusion:
No two processes may be simultaneously inside the same critical section.
Bounded Waiting:
No process should have to wait forever to enter a critical section.
Progress:
No process executing a code segment unrelated to a given critical section can block another process trying to enter the same critical section.
Arbitrary Speed:
In addition, no assumption can be made about the relative speed of different processes (though all processes have a non-zero speed).
10. GMU CS 571 Attempts to solve mutual exclusion
do {
   entry section
   critical section
   exit section
   remainder section
} while (1);
General structure as above
Two processes P1 and P2
Processes may share common variables
Assume each statement is executed atomically (is this realistic?)
11. GMU CS 571 Algorithm 1 Shared variables:
int turn; initially turn = 0
turn == i ⇒ Pi can enter its critical section
Process Pi
do {
while (turn != i) ;
critical section
turn = j;
remainder section
} while (1);
Satisfies mutual exclusion, but not progress
12. GMU CS 571 Algorithm 2 Shared variables:
boolean flag[2]; initially flag[0] = flag[1] = false
flag[i] = true ⇒ Pi wants to enter its critical region
Process Pi
do {
flag[i]=true;
while (flag[j]) ;
critical section
flag[i] = false;
remainder section
} while (1);
Satisfies mutual exclusion, but not progress
13. GMU CS 571 Algorithm 3: Peterson's solution Combines the shared variables of the previous attempts
Process Pi
do {
flag[i]=true;
turn = j;
while (flag[j] and turn == j) ;
critical section
flag[i] = false;
remainder section
} while (1);
Meets our requirements (assuming each individual statement is executed atomically)
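Peterson's algorithm can be exercised directly in CPython, whose interpreter lock gives the sequentially consistent memory behavior the algorithm assumes (on real hardware with a weaker memory model, explicit fences would be needed). A sketch with two threads sharing a counter; the switch interval is lowered so the busy-wait stays cheap:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often; keeps the spin loops short

flag = [False, False]
turn = 0
counter = 0
ITERS = 1000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(ITERS):
        flag[i] = True              # announce intent to enter
        turn = j                    # yield the tie-break to the other process
        while flag[j] and turn == j:
            pass                    # busy wait
        counter += 1                # critical section (not atomic by itself)
        flag[i] = False             # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 2000: no increment was lost
```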
14. GMU CS 571 Synchronization Hardware Many machines provide special hardware instructions that help to achieve mutual exclusion
The TestAndSet (TAS) instruction tests and modifies the content of a memory word atomically
TAS R1,LOCK reads the contents of the memory word LOCK into register R1, and stores a nonzero value (e.g. 1) at the memory word LOCK (atomically)
Assume LOCK = 0
calling TAS R1, LOCK will set R1 to 0, and set LOCK to 1.
Assume LOCK = 1
calling TAS R1, LOCK will set R1 to 1, and set LOCK to 1.
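The two cases above can be sketched in Python with a function that is assumed atomic; a real TAS is a single hardware instruction and could not be written in pure Python:

```python
LOCK = 0   # shared memory word

def test_and_set():
    """Assumed atomic: return the old value of LOCK and set LOCK to 1."""
    global LOCK
    old = LOCK
    LOCK = 1
    return old

# LOCK = 0: TAS returns 0 (the lock was free, we now hold it), LOCK becomes 1
assert test_and_set() == 0 and LOCK == 1
# LOCK = 1: TAS returns 1 (someone else holds it), LOCK stays 1
assert test_and_set() == 1 and LOCK == 1
```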
15. GMU CS 571 Mutual Exclusion with Test-and-Set Initially, shared memory word LOCK = 0.
Process Pi
do {
entry_section:
TAS R1, LOCK
CMP R1, #0 /* was LOCK = 0? */
JNE entry_section
critical section
MOVE LOCK, #0 /* exit section */
remainder section
} while(1);
16. GMU CS 571 Classical Problems of Synchronization Producer-Consumer Problem
Readers-Writers Problem
Dining-Philosophers Problem
We will develop solutions using semaphores and monitors as synchronization tools (no busy waiting).
17. GMU CS 571 Producer/Consumer Problem
18. GMU CS 571 Readers-Writers Problem A data object (e.g. a file) is to be shared among several concurrent processes.
A writer process must have exclusive access to the data object.
Multiple reader processes may access the shared data simultaneously without a problem
Several variations on this general problem
19. GMU CS 571 Dining-Philosophers Problem
20. GMU CS 571 Synchronization Synchronization
Most synchronization can be regarded as either:
Mutual exclusion (making sure that only one process is executing a CRITICAL SECTION [touching a variable or data structure, for example] at a time), or as
CONDITION SYNCHRONIZATION, which means making sure that a given process does not proceed until some condition holds (e.g. that a variable contains a given value)
The sample problems will illustrate this
21. GMU CS 571 Semaphores Language level synchronization construct introduced by E.W. Dijkstra (1965)
Motivation: Avoid busy waiting by blocking a process execution until some condition is satisfied.
Each semaphore has an integer value and a queue.
Two operations are defined on a semaphore variable s:
wait(s) (also called P(s) or down(s))
signal(s) (also called V(s) or up(s))
We will assume that these are the only user-visible operations on a semaphore.
Semaphores are typically available in thread implementations.
22. GMU CS 571 Semaphore Operations Conceptually a semaphore has an integer value greater than or equal to 0.
wait(s):: wait/block until s.value > 0; s.value-- ; /* Executed atomically! */
A process executing the wait operation on a semaphore with value 0 is blocked (put on a queue) until the semaphore's value becomes greater than 0.
No busy waiting
signal(s):: s.value++; /* Executed atomically! */
23. GMU CS 571 Semaphore Operations (cont.) If multiple processes are blocked on the same semaphore s, only one of them will be awakened when another process performs a signal(s) operation. Who will have priority?
Binary semaphores only have value 0 or 1.
Counting semaphores can have any non-negative value. [We will see some of these later]
24. GMU CS 571 Critical Section Problem with Semaphores Shared data:
semaphore mutex; /* initially mutex = 1 */
Process Pi:
do {
   wait(mutex);
   critical section
   signal(mutex);
   remainder section
} while (1);
25. GMU CS 571 Re-visiting the Simultaneous Balance Update Problem
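The balance update from the earlier slides, now protected by a mutex semaphore, can be sketched with Python's threading.Semaphore (the thread and function names are illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # initially 1
balance = 1000

def transact(amount, times):
    global balance
    for _ in range(times):
        mutex.acquire()          # wait(mutex): entry section
        balance += amount        # critical section: load, modify, store
        mutex.release()          # signal(mutex): exit section

a = threading.Thread(target=transact, args=(-100, 1000))   # Process A
b = threading.Thread(target=transact, args=(+200, 1000))   # Process B
a.start(); b.start(); a.join(); b.join()
print(balance)   # 1000 - 100*1000 + 200*1000 = 101000, no update lost
```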
26. GMU CS 571 Semaphore as a General Synchronization Tool Suppose we need to execute B in Pj only after A executed in Pi
Use semaphore flag initialized to 0
Code:
Pi:              Pj:
  ...              ...
  A                wait(flag)
  signal(flag)     B
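An ordering constraint like this maps directly onto a Python semaphore initialized to 0 (names illustrative; the sleep only makes the demonstration start Pj first):

```python
import threading
import time

flag = threading.Semaphore(0)   # initialized to 0
log = []

def pi():
    log.append('A')             # statement A
    flag.release()              # signal(flag)

def pj():
    flag.acquire()              # wait(flag): blocks until Pi signals
    log.append('B')             # statement B runs only after A

tj = threading.Thread(target=pj)
tj.start()
time.sleep(0.05)                # let Pj reach wait() first
ti = threading.Thread(target=pi)
ti.start()
ti.join(); tj.join()
print(log)                      # ['A', 'B'] regardless of scheduling
```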
27. GMU CS 571 Returning to Producer/Consumer
28. GMU CS 571 Producer-Consumer Problem (Cont.) Make sure that:
The producer and the consumer do not access the buffer area and related variables at the same time.
No item is made available to the consumer if all the buffer slots are empty.
No slot in the buffer is made available to the producer if all the buffer slots are full.
29. GMU CS 571 Producer-Consumer Problem Shared data:
semaphore full, empty, mutex;
Initially:
full = 0 /* The number of full buffers */
empty = n /* The number of empty buffers */
mutex = 1 /* Semaphore controlling the access to the buffer pool */
30. GMU CS 571 Producer Process
do {
produce an item in p
wait(empty);
wait(mutex);
add p to buffer
signal(mutex);
signal(full);
} while (1);
31. GMU CS 571 Consumer Process
do {
wait(full);
wait(mutex);
remove an item from buffer to c
signal(mutex);
signal(empty);
consume the item in c
} while (1);
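The producer and consumer loops above can be run as real threads with Python semaphores; a bounded deque stands in for the buffer pool (buffer size and item counts are illustrative):

```python
import threading
from collections import deque

N = 5                                 # buffer slots
buf = deque()
empty = threading.Semaphore(N)        # initially n empty slots
full = threading.Semaphore(0)         # initially no full slots
mutex = threading.Semaphore(1)        # protects the buffer pool
consumed = []

def producer(items):
    for item in items:
        empty.acquire()               # wait(empty)
        mutex.acquire()               # wait(mutex)
        buf.append(item)              # add item to buffer
        mutex.release()               # signal(mutex)
        full.release()                # signal(full)

def consumer(count):
    for _ in range(count):
        full.acquire()                # wait(full)
        mutex.acquire()               # wait(mutex)
        consumed.append(buf.popleft())
        mutex.release()               # signal(mutex)
        empty.release()               # signal(empty)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(20)))    # True: FIFO order, nothing lost
```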
32. GMU CS 571 Readers-Writers Problem A data object (e.g. a file) is to be shared among several concurrent processes.
A writer process must have exclusive access to the data object.
Multiple reader processes may access the shared data simultaneously without a problem
Shared data:
semaphore mutex, wrt;
int readcount;
Initially: mutex = 1, readcount = 0, wrt = 1
33. GMU CS 571 Readers-Writers Problem: Writer Process wait(wrt);
writing is performed
signal(wrt);
34. GMU CS 571 Readers-Writers Problem: Reader Process
wait(mutex);
readcount++;
if (readcount == 1)
wait(wrt);
signal(mutex);
reading is performed
wait(mutex);
readcount--;
if (readcount == 0)
signal(wrt);
signal(mutex);
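The reader and writer code above translates almost line for line into Python semaphores. A sketch of the first readers-writers solution (the class and data names are illustrative; the smoke test is sequential, so it checks the protocol, not concurrency):

```python
import threading

class SharedFile:
    """First readers-writers solution, following the slides."""
    def __init__(self):
        self.mutex = threading.Semaphore(1)  # protects readcount
        self.wrt = threading.Semaphore(1)    # writer exclusion
        self.readcount = 0
        self.data = 0

    def read(self):
        self.mutex.acquire()
        self.readcount += 1
        if self.readcount == 1:
            self.wrt.acquire()       # first reader locks out writers
        self.mutex.release()
        value = self.data            # reading is performed
        self.mutex.acquire()
        self.readcount -= 1
        if self.readcount == 0:
            self.wrt.release()       # last reader lets writers back in
        self.mutex.release()
        return value

    def write(self, value):
        self.wrt.acquire()           # exclusive access
        self.data = value            # writing is performed
        self.wrt.release()

f = SharedFile()
f.write(42)
print(f.read())   # 42
```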
35. GMU CS 571 Dining-Philosophers Problem Shared data
semaphore chopstick[5];
Initially all semaphore values are 1
36. GMU CS 571 Dining-Philosophers Problem Philosopher i:
do {
wait(chopstick[i]);
wait(chopstick[(i+1) % 5]);
eat
signal(chopstick[i]);
signal(chopstick[(i+1) % 5]);
think
} while (1);
This guarantees no two neighbors eat at the same time, but it can deadlock: if all five philosophers pick up their left chopstick simultaneously, each waits forever for the right one.
37. GMU CS 571 Semaphores It is generally assumed that semaphores are fair, in the sense that processes complete semaphore operations in the same order they start them
Problems with semaphores
They're pretty low-level.
When using them for mutual exclusion, for example (the most common usage), it's easy to forget a wait or a release, especially when they don't occur in strictly matched pairs.
Their use is scattered.
If you want to change how processes synchronize access to a data structure, you have to find all the places in the code where they touch that structure, which is difficult and error-prone.
Order Matters
What could happen if we switch the two wait instructions in the consumer (or producer)? If the consumer acquires mutex first and then blocks on wait(full) with an empty buffer, the producer can never acquire mutex to insert an item: deadlock.
38. GMU CS 571 Deadlock and Starvation Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
Let S and Q be two semaphores initialized to 1
P0                  P1
wait(S);            wait(Q);
wait(Q);            wait(S);
...                 ...
signal(S);          signal(Q);
signal(Q);          signal(S);
Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
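The P0/P1 deadlock above can be demonstrated with Python semaphores. Timeouts stand in for the indefinite wait so the demo terminates; the sleeps are an assumption that makes the bad interleaving happen deterministically:

```python
import threading
import time

S = threading.Semaphore(1)
Q = threading.Semaphore(1)
got = {}

def p0():
    S.acquire()                          # wait(S)
    time.sleep(0.2)                      # ensure P1 has grabbed Q by now
    got['p0'] = Q.acquire(timeout=0.5)   # wait(Q): would block forever

def p1():
    Q.acquire()                          # wait(Q)
    time.sleep(0.2)                      # ensure P0 has grabbed S by now
    got['p1'] = S.acquire(timeout=0.5)   # wait(S): would block forever

t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1)
t0.start(); t1.start(); t0.join(); t1.join()
print(got)   # {'p0': False, 'p1': False}: neither second wait can succeed
```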
39. GMU CS 571 High Level Synchronization Mechanisms
Several high-level mechanisms that are easier to use have been proposed.
Monitors
Critical Regions
Read/Write locks
We will study monitors (Java and Pthreads provide synchronization mechanisms based on monitors)
They were suggested by Dijkstra, developed more thoroughly by Brinch Hansen, and formalized nicely by Tony Hoare in the early 1970s
Several parallel programming languages have incorporated some version of monitors as their fundamental synchronization mechanism
40. GMU CS 571 Monitors A monitor is a shared object with operations, internal state, and a number of condition queues. Only one operation of a given monitor may be active at a given point in time
A process that calls a busy monitor is delayed until the monitor is free
On behalf of its calling process, any operation may suspend itself by waiting on a condition
An operation may also signal a condition, in which case one of the waiting processes is resumed, usually the one that waited first
41. GMU CS 571 Monitors Mutual Exclusion (competition) Synchronization with Monitors:
Access to the shared data in the monitor is limited by the implementation to a single process at a time; therefore, mutually exclusive access is inherent in the semantic definition of the monitor
Multiple calls are queued
Condition (cooperation) Synchronization with Monitors:
delay takes a queue type parameter; it puts the process that calls it in the specified queue and removes its exclusive access rights to the monitor's data structure
continue takes a queue type parameter; it disconnects the caller from the monitor, thus freeing the monitor for use by another process. It also takes a process from the parameter queue (if the queue isn't empty) and starts it
42. GMU CS 571 Shared Data with Monitors Monitor SharedData
{ int balance;
void updateBalance(int amount);
int getBalance();
void init(int startValue) { balance = startValue; }
void updateBalance(int amount){ balance += amount; }
int getBalance(){ return balance; }
}
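Monitor SharedData can be sketched in Python with a single lock that every operation takes, so mutual exclusion is inherent in the object, as in the monitor semantics (the snake_case names are just Python convention):

```python
import threading

class SharedData:
    """Monitor sketch: one lock serializes every operation."""
    def __init__(self, start_value):
        self._monitor = threading.Lock()
        self._balance = start_value

    def update_balance(self, amount):
        with self._monitor:          # only one operation active at a time
            self._balance += amount

    def get_balance(self):
        with self._monitor:
            return self._balance

d = SharedData(0)
ts = [threading.Thread(target=lambda: [d.update_balance(1) for _ in range(10000)])
      for _ in range(2)]
for t in ts: t.start()
for t in ts: t.join()
print(d.get_balance())   # 20000: the updates never interleave mid-operation
```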
43. GMU CS 571 Monitors To allow a process to wait within the monitor, a condition variable must be declared, as
condition x, y;
Condition variable can only be used with the operations wait and signal.
The operation
x.wait();
means that the process invoking this operation is suspended until another process invokes
x.signal();
The x.signal operation resumes exactly one suspended process on condition variable x. If no process is suspended on condition variable x, then the signal operation has no effect.
Wait and signal operations of the monitors are not the same as semaphore wait and signal operations!
44. GMU CS 571 Monitor with Condition Variables When a process P signals to wake up the process Q that was waiting on a condition, potentially both of them can be active.
However, monitor rules require that at most one process can be active within the monitor. Who will go first?
Signal-and-wait: P waits until Q leaves the monitor (or, until Q waits for another condition).
Signal-and-continue: Q waits until P leaves the monitor (or, until P waits for another condition).
Signal-and-leave: P has to leave the monitor after signaling (Concurrent Pascal)
The design decision is different for different programming languages
45. GMU CS 571 Monitor with Condition Variables
46. GMU CS 571 Producer-Consumer Problem with Monitors Monitor Producer-consumer
{
condition full, empty;
int count;
void insert(int item); //the following slide
int remove(); //the following slide
void init() {
count = 0;
}
}
47. GMU CS 571 Producer-Consumer Problem with Monitors (Cont.)
void insert(int item) {
   if (count == N) full.wait();
   insert_item(item); // Add the new item
   count++;
   if (count == 1) empty.signal();
}
int remove() {
   int m;
   if (count == 0) empty.wait();
   m = remove_item(); // Retrieve one item
   count--;
   if (count == N-1) full.signal();
   return m;
}
48. GMU CS 571 Producer-Consumer Problem with Monitors (Cont.)
void producer() { // Producer process
   while (1) {
      item = Produce_Item();
      Producer-consumer.insert(item);
   }
}
void consumer() { // Consumer process
   while (1) {
      item = Producer-consumer.remove();
      consume_item(item);
   }
}
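A runnable version of the monitor uses a lock plus two condition variables. Note one deliberate change: the slides use if before waiting, which is safe under Hoare (signal-and-wait) semantics; Python conditions are signal-and-continue, so the waits below must be while loops (the class and variable names are illustrative):

```python
import threading

class BoundedBuffer:
    """Producer-consumer monitor sketch with condition variables."""
    def __init__(self, n):
        self.n = n
        self.buf = []
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)    # "full" in the slides
        self.not_empty = threading.Condition(self.lock)   # "empty" in the slides

    def insert(self, item):
        with self.lock:
            while len(self.buf) == self.n:   # while, not if: signal-and-continue
                self.not_full.wait()
            self.buf.append(item)
            self.not_empty.notify()

    def remove(self):
        with self.lock:
            while not self.buf:
                self.not_empty.wait()
            item = self.buf.pop(0)
            self.not_full.notify()
            return item

bb = BoundedBuffer(3)
out = []
producer = threading.Thread(target=lambda: [bb.insert(i) for i in range(50)])
consumer = threading.Thread(target=lambda: [out.append(bb.remove()) for _ in range(50)])
producer.start(); consumer.start(); producer.join(); consumer.join()
print(out == list(range(50)))   # True
```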
49. GMU CS 571 Monitors Evaluation of monitors:
Strong support for mutual exclusion synchronization
Support for condition synchronization is very similar to that of semaphores, so it has the same problems
Building a correct monitor requires that one think about the "monitor invariant": a predicate that captures the notion "the state of the monitor is consistent."
It needs to be true initially, and at monitor exit
It also needs to be true at every wait statement
Monitors can be implemented in terms of semaphores ⇒ semaphores can do anything monitors can.
The inverse is also true; it is trivial to build a semaphore from monitors
50. GMU CS 571 Dining-Philosophers Problem with Monitors monitor dp
{
enum {thinking, hungry, eating} state[5];
condition self[5];
void pickup(int i) // following slides
void putdown(int i) // following slides
void test(int i) // following slides
void init() {
for (int i = 0; i < 5; i++)
state[i] = thinking;
}
}
51. GMU CS 571 void pickup(int i) {
state[i] = hungry;
test(i);
if (state[i] != eating)
self[i].wait();
}
void putdown(int i) {
state[i] = thinking;
// test left and right neighbors, wake them up if possible
test((i+4) % 5);
test((i+1) % 5);
} Solving Dining-Philosophers Problem with Monitors
52. GMU CS 571 void test(int i) {
if ( (state[(i + 4) % 5] != eating) &&
(state[i] == hungry) &&
(state[(i + 1) % 5] != eating)) {
state[i] = eating;
self[i].signal();
}
}
Solving Dining-Philosophers Problem with Monitors
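The pickup/putdown/test structure above can be sketched in Python with one lock and a condition variable per philosopher. As with the producer-consumer monitor, Python's signal-and-continue semantics mean the wait in pickup must be a while loop rather than the slides' if:

```python
import threading
import time

class DiningPhilosophers:
    """Monitor sketch for the dining philosophers."""
    THINKING, HUNGRY, EATING = range(3)

    def __init__(self, n=5):
        self.n = n
        self.lock = threading.Lock()
        self.state = [self.THINKING] * n
        self.self_cv = [threading.Condition(self.lock) for _ in range(n)]

    def pickup(self, i):
        with self.lock:
            self.state[i] = self.HUNGRY
            self._test(i)
            while self.state[i] != self.EATING:   # while: signal-and-continue
                self.self_cv[i].wait()

    def putdown(self, i):
        with self.lock:
            self.state[i] = self.THINKING
            self._test((i + 4) % self.n)          # left neighbor
            self._test((i + 1) % self.n)          # right neighbor

    def _test(self, i):
        if (self.state[(i + 4) % self.n] != self.EATING
                and self.state[i] == self.HUNGRY
                and self.state[(i + 1) % self.n] != self.EATING):
            self.state[i] = self.EATING
            self.self_cv[i].notify()

dp = DiningPhilosophers()
dp.pickup(0)                                   # philosopher 0 starts eating
t = threading.Thread(target=dp.pickup, args=(1,))
t.start()                                      # philosopher 1 blocks: 0 is eating
time.sleep(0.1)
dp.putdown(0)                                  # test(1) wakes philosopher 1
t.join()
print(dp.state[1] == DiningPhilosophers.EATING)   # True
```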
53. GMU CS 571 Dining Philosophers problem (Cont.) Philosopher1 arrives and starts eating
Philosopher2 arrives; he is suspended
Philosopher3 arrives and starts eating
Philosopher1 puts down the chopsticks, wakes up Philosopher2 (suspended once again)
Philosopher1 re-arrives, and starts eating
Philosopher3 puts down the chopsticks, wakes up Philosopher2 (suspended once again)
Philosopher3 re-arrives, and starts eating
The monitor solution is deadlock-free, but Philosopher2 can starve if its neighbors keep alternating like this.
54. GMU CS 571 Deadlock In a multi-programmed environment, processes/threads compete for (exclusive) use of a finite set of resources
55. Bridge Crossing Example Traffic only in one direction.
Each section of a bridge can be viewed as a resource.
If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback).
Several cars may have to be backed up if a deadlock occurs.
Starvation is possible.
56. GMU CS 571 Necessary Conditions for Deadlock Mutual exclusion: at least one resource must be held in a non-sharable mode
Hold and wait: a process holds at least one resource while waiting to acquire additional resources that are being held by other processes
No preemption: a resource is only released voluntarily by the process holding it
Circular wait: a set of processes {P0, ..., Pn} such that Pi is waiting for a resource held by Pi+1, and Pn is waiting for a resource held by P0
57. GMU CS 571 Resource-Allocation Graphs (figure legend)
Process node
Resource type with 4 instances
Request edge Pi → Rj: Pi requests an instance of Rj
Assignment edge Rj → Pi: Pi is holding an instance of Rj
58. GMU CS 571 Resource Allocation Graph
59. GMU CS 571 Graph With A Cycle But No Deadlock
60. GMU CS 571 Handling Deadlock Deadlock prevention: ensure that at least one of the necessary conditions cannot occur
Deadlock avoidance: analysis determines whether a new request could lead toward a deadlock situation
Deadlock detection: detect and recover from any deadlocks that occur
61. GMU CS 571 Deadlock Prevention Prevent deadlocks by ensuring one of the required conditions cannot occur
Mutual exclusion: this condition typically cannot be removed for non-sharable resources
Hold and wait: elimination requires that processes request and acquire all resources in a single atomic action
No preemption: preempt resources from a waiting process when another process needs a resource it currently holds
Circular wait: require processes to acquire resources in some fixed order.
62. Deadlock Avoidance When a process requests an available resource, system must decide if immediate allocation leaves the system in a safe state.
System is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i. If a system is in a safe state ⇒ no deadlocks.
If a system is in an unsafe state ⇒ possibility of deadlock.
Avoidance ⇒ ensure that the system will never enter an unsafe state.
63. GMU CS 571 Example: Deadlock Avoidance Overall System has 12 tape drives and 3 processes:
Safe state: <P1, P0, P2>
P1 only needs 2 of the 3 remaining drives
P0 needs 5 (3 remaining + 2 held by P1)
P2 needs 7 (3 remaining + 4 held by P0)
64. GMU CS 571 Example (contd) Overall System has 12 tape drives and 3 processes.
Now P2 requests 1 more drive. Are we still in a safe state if we grant this request?
Unsafe state: only P1 can be granted its request, and once it terminates there are still not enough drives for P0 and P2 ⇒ potential deadlock
Avoidance algorithm would make P2 wait
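The tape-drive reasoning on the last two slides can be checked mechanically with a safety test for a single resource type. A sketch using the numbers from the slides (P0 holds 5 of max 10, P1 holds 2 of max 4, P2 holds 2 of max 9, 3 drives free):

```python
def is_safe(available, allocation, need):
    """Safety check for a single resource type: greedily run any process
    whose remaining need fits, reclaiming its drives when it finishes."""
    work = available
    finish = [False] * len(allocation)
    order = []
    moved = True
    while moved:
        moved = False
        for i in range(len(allocation)):
            if not finish[i] and need[i] <= work:
                work += allocation[i]   # Pi runs to completion, returns drives
                finish[i] = True
                order.append(i)
                moved = True
    return all(finish), order

allocation = [5, 2, 2]                    # drives held by P0, P1, P2
need = [10 - 5, 4 - 2, 9 - 2]             # max claim minus allocation
print(is_safe(3, allocation, need))       # (True, [1, 0, 2]): the <P1, P0, P2> order
# After granting P2 one more drive, only 2 remain free:
print(is_safe(2, [5, 2, 3], [5, 2, 6]))   # (False, [1]): only P1 can finish
```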
65. Avoidance algorithms Single instance of a resource type. Use a resource-allocation graph
Multiple instances of a resource type. Use the banker's algorithm
66. Resource-Allocation Graph Scheme Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line.
Claim edge converts to request edge when a process requests a resource.
Request edge converted to an assignment edge when the resource is allocated to the process.
When a resource is released by a process, assignment edge reconverts to a claim edge.
Resources must be claimed a priori in the system.
67. Resource-Allocation Graph
68. Unsafe State In Resource-Allocation Graph
69. Banker's Algorithm Multiple instances of each resource type.
Each process must claim its maximum use a priori.
When a process requests a resource, it may have to wait.
When a process gets all its resources, it must return them in a finite amount of time.
70. GMU CS 571 Bankers Algorithm Overall System has m=3 different resource types (A=10, B=5, C=7) and n=5 processes:
In a safe state??
71. Data Structures for the Banker's Algorithm Available: Vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need[i,j] = Max[i,j] - Allocation[i,j].
72. GMU CS 571 Bankers Algorithm (contd) Overall System has 3 different resource types (A=10, B=5, C=7) and 5 processes:
safe state: <P1,P3,P4,P2,P0>
73. GMU CS 571 Bankers Algorithm (contd) <P1,P3,P4,P2,P0>
All of P1s requests can be granted from available
74. GMU CS 571 Bankers Algorithm (contd) <P1,P3,P4,P2,P0>
Once P1 ends, P3s requests could be granted
75. GMU CS 571 Bankers Algorithm (contd) <P1,P3,P4,P2,P0>
Once P3 ends, P4s requests could be granted
76. GMU CS 571 Bankers Algorithm (contd) <P1,P3,P4,P2,P0>
Once P4 ends, P2s requests could be granted
77. GMU CS 571 Bankers Algorithm (contd) <P1,P3,P4,P2,P0>
Once P2 ends, P0s requests could be granted
78. Safety Algorithm (Sec. 7.5.3.1) 1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an i such that both:
(a) Finish[i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
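The safety algorithm translates directly into code. The matrices below are an assumption: they are the standard textbook example consistent with the totals (A=10, B=5, C=7) and the safe sequence quoted on the earlier slides, since the tables themselves did not survive in this copy:

```python
def is_safe(available, allocation, max_need):
    """Safety algorithm over m resource types and n processes."""
    n, m = len(allocation), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)                    # Work = Available
    finish = [False] * n
    order = []
    moved = True
    while moved:                              # repeat step 2 until stuck
        moved = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi terminates, releases all
                finish[i] = True
                order.append(i)
                moved = True
    return all(finish), order

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]

safe, order = is_safe(available, allocation, max_need)
print(safe, order)   # True [1, 3, 4, 0, 2]; <P1,P3,P4,P2,P0> is another valid order
```

Safe sequences are generally not unique; any order the algorithm finds certifies the state as safe.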
79. GMU CS 571 Bankers Algorithm (contd)
Starting from previous state, suppose P1 requests (1,0,2). Should this be granted?
<P1>
80. GMU CS 571 Bankers Algorithm (contd)
P1 requests (1,0,2). Should this be granted?
<P1,P3>
81. GMU CS 571 Bankers Algorithm (contd)
P1 requests (1,0,2). Should this be granted?
<P1,P3,P0>
82. GMU CS 571 Bankers Algorithm (contd)
P1 requests (1,0,2). Should this be granted?
<P1,P3,P0,P2>
83. GMU CS 571 Bankers Algorithm (contd)
P1 requests (1,0,2). Should this be granted?
<P1,P3,P0,P2,P4>
84. GMU CS 571 Bankers Algorithm (contd)
Now, P4 requests (3,3,0). Should this be granted?
No: the request exceeds the currently available resources.
85. GMU CS 571 Bankers Algorithm (contd)
Now, P0 requests (0,2,0). Should this be granted?
86. Resource-Request Algorithm for Process Pi (Sec 7.5.3.2) Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
If safe ⇒ the resources are allocated to Pi.
If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
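The three request checks walked through on the preceding slides can be sketched as one function. The matrices are again the assumed textbook example; the post-grant state used for P4's and P0's requests is the state after P1's (1,0,2) was granted:

```python
def request_resources(i, request, available, allocation, max_need):
    """Banker's resource-request check. Returns 'error', 'wait',
    'unsafe', or 'granted'."""
    m, n = len(available), len(allocation)
    need = [[max_need[p][j] - allocation[p][j] for j in range(m)] for p in range(n)]
    if any(request[j] > need[i][j] for j in range(m)):
        return "error"                       # exceeded maximum claim
    if any(request[j] > available[j] for j in range(m)):
        return "wait"                        # resources not available
    # Pretend to allocate, then run the safety algorithm on the new state.
    work = [available[j] - request[j] for j in range(m)]
    alloc = [row[:] for row in allocation]
    for j in range(m):
        alloc[i][j] += request[j]
    need2 = [[max_need[p][j] - alloc[p][j] for j in range(m)] for p in range(n)]
    finish = [False] * n
    moved = True
    while moved:
        moved = False
        for p in range(n):
            if not finish[p] and all(need2[p][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += alloc[p][j]
                finish[p] = True
                moved = True
    return "granted" if all(finish) else "unsafe"

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]

r1 = request_resources(1, [1, 0, 2], available, allocation, max_need)
# State after granting P1's request:
available2  = [2, 3, 0]
allocation2 = [[0, 1, 0], [3, 0, 2], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
r2 = request_resources(4, [3, 3, 0], available2, allocation2, max_need)
r3 = request_resources(0, [0, 2, 0], available2, allocation2, max_need)
print(r1, r2, r3)   # granted wait unsafe
```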
87. GMU CS 571 Deadlock Detection and Recovery Allow system to enter deadlock state
Detection algorithm
Single Instance (look for cycles in wait-for graph)
Multiple instance (related to the banker's algorithm). The algorithm requires on the order of O(m × n²) operations to detect whether the system is in a deadlocked state.
Recovery scheme
88. Resource-Allocation Graph and Wait-for Graph
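For the single-instance case, detection reduces to finding a cycle in the wait-for graph. A DFS sketch (the graph encoding as a dict of sets is illustrative):

```python
def has_cycle(wait_for):
    """DFS cycle detection on a wait-for graph.
    wait_for maps each process to the set of processes it waits for."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            c = color.get(q, WHITE)
            if c == GRAY:              # back edge: q is on the current path
                return True
            if c == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color.get(p, WHITE) == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2, P2 for P3, P3 for P1: deadlock
print(has_cycle({1: {2}, 2: {3}, 3: {1}}))      # True
print(has_cycle({1: {2}, 2: {3}, 3: set()}))    # False
```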
89. Detection-Algorithm Usage When, and how often, to invoke depends on:
How often a deadlock is likely to occur?
How many processes will need to be rolled back?
one for each disjoint cycle
If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and so we would not be able to tell which of the many deadlocked processes caused the deadlock.
90. Recovery from Deadlock: Process Termination Abort all deadlocked processes.
Abort one process at a time until the deadlock cycle is eliminated.
In which order should we choose to abort?
Priority of the process.
How long process has computed, and how much longer to completion.
Resources the process has used.
Resources process needs to complete.
How many processes will need to be terminated.
Is process interactive or batch?
91. Recovery from Deadlock: Resource Preemption Selecting a victim: minimize cost.
Rollback: return to some safe state, restart the process from that state.
Starvation: the same process may always be picked as victim; include the number of rollbacks in the cost factor.