Learn about basic synchronization principles and concurrency in interacting processes, including the challenges and solutions for managing shared resources. Explore the concept of critical sections and race conditions, and discover various techniques for synchronizing multiple threads.
Cooperating Processes
• Independent process
    • Cannot affect or be affected by the other processes in the system
    • Does not share any data with other processes
• Interacting process
    • Can affect or be affected by the other processes
    • Shares data with other processes
• We focus on interacting processes that communicate through physically or logically shared memory
Concurrency • Threads/processes which logically appear to be executing in parallel • Value of concurrency – speed & economics • But few widely-accepted concurrent programming languages (Java is an exception) • Few concurrent programming paradigms • Each problem requires careful consideration • There is no common model • OS tools to support concurrency tend to be “low level”
Synchronization
• Need to ensure that interacting processes/threads begin to execute a designated block of code at the same logical time
• May cause problems when using shared resources
    • Deadlock
    • Critical sections
    • Nondeterminacy (no assurance that repeating a parallel program will produce the same results)
Synchronization Example • Want to synchronize a set of threads • Parent thread creates N child threads and signals the child threads when they are supposed to halt • Each child thread attempts to synchronize with the parent at the end of each iteration • A child proceeds with its computation if the parent has not issued the signal to synchronize
Synchronizing Multiple Threads with a Shared Variable
• [Flowchart: the parent initializes, creates the worker threads with CreateThread(…), waits runTime seconds, then sets runFlag = FALSE and terminates; each worker thread loops doing its work while runFlag is TRUE and exits when it becomes FALSE]
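The flowchart can be realized with POSIX threads. A minimal sketch, assuming the parent lets the workers run for a fixed runTime; the names N, runFlag, and worker are invented for this illustration:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define N 4                       /* number of child threads (assumed) */

    static volatile int runFlag = 1;  /* shared flag polled by the children; a production
                                         program would use an atomic or a lock here */

    static void *worker(void *arg) {
        long id = (long)arg, iterations = 0;
        while (runFlag)               /* proceed while the parent has not signaled a halt */
            iterations++;             /* <one iteration of thread work> */
        printf("thread %ld halting after %ld iterations\n", id, iterations);
        return NULL;
    }

    int main(void) {
        pthread_t tid[N];
        int runTime = 2;              /* seconds the children are allowed to run (assumed) */
        for (long i = 0; i < N; i++)  /* Initialize / CreateThread(...) */
            pthread_create(&tid[i], NULL, worker, (void *)i);
        sleep(runTime);               /* Wait runTime seconds */
        runFlag = 0;                  /* runFlag = FALSE: signal the children to terminate */
        for (int i = 0; i < N; i++)
            pthread_join(tid[i], NULL);
        return 0;                     /* Exit */
    }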
The Critical-Section Problem • n processes all competing to use some shared data • Each process has a code segment, called its critical section, in which the shared data is accessed • Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section; this property is called mutual exclusion
The Critical-Section Problem – cont.
• Structure of process Pi:
    repeat
        entry section
        critical section
        exit section
        remainder section
    until false;
Updating A Shared Variable

shared double balance;

Code for p1:
    ...
    balance = balance + amount;
    ...

Code for p2:
    ...
    balance = balance - amount;
    ...

• [Diagram: p1 executes balance += amount and p2 executes balance -= amount against the same shared variable balance]
Updating A Shared Variable – cont.
• Interleaved execution:
    p1: load  R1, balance
    p1: load  R2, amount
    -- timer interrupt: switch to p2 --
    p2: load  R1, balance
    p2: load  R2, amount
    p2: sub   R1, R2
    p2: store R1, balance
    -- timer interrupt: switch to p1 --
    p1: add   R1, R2
    p1: store R1, balance
• p1's final store is computed from the stale copy of balance it loaded before the interrupt, so p2's update is lost
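A minimal C sketch of this lost update: two threads repeatedly add and subtract the same amount from an unprotected shared balance, so the final value is usually not 0 (the iteration count is an arbitrary choice for illustration):

    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    static double balance = 0.0;          /* shared, deliberately unprotected */
    static const double amount = 1.0;

    static void *p1(void *arg) {          /* repeatedly: balance = balance + amount */
        for (int i = 0; i < ITERATIONS; i++)
            balance = balance + amount;
        return NULL;
    }

    static void *p2(void *arg) {          /* repeatedly: balance = balance - amount */
        for (int i = 0; i < ITERATIONS; i++)
            balance = balance - amount;
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Because of interleavings like the one above, updates are lost */
        printf("final balance = %.0f (expected 0)\n", balance);
        return 0;
    }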
Race Condition • When several processes access and manipulate the same data concurrently, there is a race among the processes • The outcome of the execution depends on the particular order in which the accesses take place • This situation is called a race condition
Critical Sections • Mutual exclusion: Only one process can be in the critical section at a time • There is a race to execute critical sections • The sections may be defined by different code in different processes, so they cannot easily be detected with static analysis • Without mutual exclusion, the results of multiple executions are not determinate • Need an OS mechanism so the programmer can resolve races
Requirements for Critical-Section Solutions
• Mutual Exclusion
    • If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
• Progress
    • If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
• Bounded Waiting
    • A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Some Possible Solutions • Disable interrupts • Software solution – locks • Transactions • FORK(), JOIN(), and QUIT() [Chapter 2] • Terminate processes with QUIT() to synchronize • Create processes whenever critical section is complete • … something new …
Solution through Disabling Interrupts • In a uniprocessor system, the race condition arises because an interrupt can occur partway through an update of shared data • The scheduler can be called in the interrupt handler, resulting in a context switch and data inconsistency
Solution through Disabling Interrupts – cont.
• Process pi:
    repeat
        disableInterrupts();    // entry section
        critical section
        enableInterrupts();     // exit section
        remainder section
    until false;
Solution through Disabling Interrupts – cont.

shared double amount, balance;   /* shared variables */

Program for p1:
    disableInterrupts();
    balance = balance + amount;
    enableInterrupts();

Program for p2:
    disableInterrupts();
    balance = balance - amount;
    enableInterrupts();
Solution through Disabling Interrupts – cont. • This solution may affect the behavior of the I/O system • Interrupts can be disabled for an arbitrarily long time • The interrupts can be disabled permanently if the program contains an infinite loop in its critical section • Only want to prevent p1 and p2 from interfering with each other – this blocks all pi! • Try using a software approach – a shared lock variable • But even this has problems
Software Attempt to Solve the Problem
• Only 2 processes, P0 and P1
• General structure of process Pi (the other process is Pj):
    repeat
        entry section
        critical section
        exit section
        remainder section
    until false;
• Processes may share some common variables to synchronize their actions.
Algorithm 1
• Shared variables:
    var turn: (0..1);  initially turn = 0
    turn = i ⇒ Pi can enter its critical section
• Process Pi:
    repeat
        while turn ≠ i do no-op;
        critical section
        turn := 1 - i;
        remainder section
    until false;
• Satisfies mutual exclusion, but not progress
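A sketch of Algorithm 1 for two POSIX threads; the pseudocode leaves the memory model implicit, so this version (an assumption, not part of the slides) uses a sequentially consistent C11 atomic for turn:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic int turn = 0;         /* turn == i  =>  thread i may enter its CS */
    static long counter = 0;             /* shared data touched only in the critical section */

    static void *proc(void *arg) {
        int i = (int)(long)arg;          /* this thread's index: 0 or 1 */
        for (int k = 0; k < 100000; k++) {
            while (atomic_load(&turn) != i)
                ;                        /* entry section: busy-wait for my turn */
            counter++;                   /* critical section */
            atomic_store(&turn, 1 - i);  /* exit section: hand the turn to the other thread */
            /* remainder section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, proc, (void *)0L);
        pthread_create(&t1, NULL, proc, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }

The strict alternation is also why progress fails: if one thread stops entering its critical section, the other spins forever waiting for a turn that is never handed back.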
Algorithm 2
• Shared variables:
    var flag: array [0..1] of boolean;  initially flag[0] = flag[1] = false
    flag[i] = true ⇒ Pi ready to enter its critical section
• Process Pi (note that j represents the other process):
    repeat
        flag[i] := true;
        while flag[j] do no-op;
        critical section
        flag[i] := false;
        remainder section
    until false;
• Satisfies mutual exclusion, but not the progress requirement.
Algorithm 2 – version 2
• Shared variables:
    var flag: array [0..1] of boolean;  initially flag[0] = flag[1] = false
    flag[i] = true ⇒ Pi ready to enter its critical section
• Process Pi:
    repeat
        while flag[j] do no-op;
        flag[i] := true;
        critical section
        flag[i] := false;
        remainder section
    until false;
• Does not satisfy mutual exclusion
Algorithm 3
• Combined shared variables of algorithms 1 and 2.
• Process Pi (again, j represents the other process):
    repeat
        flag[i] := true;
        turn := j;
        while (flag[j] and turn = j) do no-op;
        critical section
        flag[i] := false;
        remainder section
    until false;
• Meets all three requirements; solves the critical-section problem for two processes.
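Algorithm 3 is Peterson's algorithm. A C11 sketch for two threads (not from the slides); on modern hardware the flag and turn accesses must not be reordered, so sequentially consistent atomics are used:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic int flag[2];          /* flag[i] = 1 => Pi wants to enter its CS */
    static _Atomic int turn = 0;         /* index of the thread that must defer */
    static long counter = 0;             /* shared data */

    static void *proc(void *arg) {
        int i = (int)(long)arg, j = 1 - i;
        for (int k = 0; k < 100000; k++) {
            atomic_store(&flag[i], 1);   /* announce intent to enter */
            atomic_store(&turn, j);      /* give priority to the other thread */
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                        /* busy-wait */
            counter++;                   /* critical section */
            atomic_store(&flag[i], 0);   /* exit section */
            /* remainder section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, proc, (void *)0L);
        pthread_create(&t1, NULL, proc, (void *)1L);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }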
Algorithm 4
• Shared variable – a lock:
    boolean lock;  initially lock = FALSE
    lock is FALSE ⇒ a process can enter its critical section
• Process P1 and Process P2 (both run the same code):
    repeat
        while (lock) do no-op;
        lock = TRUE;
        critical section
        lock = FALSE;
        remainder section
    until false;
Algorithm 4 – cont.

shared bool lock = FALSE;
shared double balance;

Code for p1:
    /* Acquire the lock */
    while (lock) ;
    lock = TRUE;
    /* Critical section */
    balance += amount;
    /* Release lock */
    lock = FALSE;

Code for p2:
    /* Acquire the lock */
    while (lock) ;
    lock = TRUE;
    /* Critical section */
    balance -= amount;
    /* Release lock */
    lock = FALSE;
Algorithm 4 – cont.
• [Timeline: p1 sets lock = TRUE and is then preempted; at each timer interrupt p2 is dispatched but remains blocked at the while loop until p1 eventually runs again and sets lock = FALSE]
• Creates a busy-wait condition
• If p1 is interrupted while in its critical section (preempted), then p2 will spend its entire timeslice executing the while statement
Algorithm 4 – cont. • Another problem • What if a process is interrupted immediately after executing the while, but before setting the lock? • Both processes could be in their critical sections at the same time!
Worse yet … another race condition …
• Is it possible to solve the problem?
• Unsafe "Solution":

shared boolean lock = FALSE;
shared double balance;

Code for p1:
    /* Acquire the lock */
    while (lock) ;
    lock = TRUE;
    /* Execute critical section */
    balance = balance + amount;
    /* Release lock */
    lock = FALSE;

Code for p2:
    /* Acquire the lock */
    while (lock) ;
    lock = TRUE;
    /* Execute critical section */
    balance = balance - amount;
    /* Release lock */
    lock = FALSE;
Atomic Lock Manipulation

enter(lock) {
    disableInterrupts();
    /* Loop until lock is FALSE */
    while (lock) {
        /* Let interrupts occur */
        enableInterrupts();
        disableInterrupts();
    }
    lock = TRUE;
    enableInterrupts();
}

exit(lock) {
    disableInterrupts();
    lock = FALSE;
    enableInterrupts();
}

• Bound the amount of time that interrupts are disabled
• Can include other code to check that it is OK to assign a lock
• … but this is still overkill …
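User code cannot disable interrupts, so as an aside (not in the original slides) the same enter()/exit() interface can be built on a hardware test-and-set operation, here via C11 atomic_flag:

    #include <stdatomic.h>

    typedef atomic_flag lock_t;            /* clear = FALSE, set = TRUE */

    void enter_lock(lock_t *lock) {
        /* atomically read the old value and set the flag;
           spin until the old value was clear (FALSE) */
        while (atomic_flag_test_and_set(lock))
            ;                              /* busy-wait */
    }

    void exit_lock(lock_t *lock) {
        atomic_flag_clear(lock);           /* lock = FALSE */
    }

A lock is declared as lock_t lock = ATOMIC_FLAG_INIT; because the read-and-set is indivisible, the window between testing and setting the lock disappears without touching the interrupt system.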
Deadlock • Another more subtle problem • Two or more processes/threads get into a state where each is controlling a resource needed by the other
Deadlock (2)

shared boolean lock1 = FALSE;
shared boolean lock2 = FALSE;
shared list L;

Code for p1:
    ...
    /* Enter CS to delete elt */
    enter(lock1);
    <delete element>;
    <intermediate computation>;
    /* Enter CS to update len */
    enter(lock2);
    <update length>;
    /* Exit both CS */
    exit(lock1);
    exit(lock2);
    ...

Code for p2:
    ...
    /* Enter CS to update len */
    enter(lock2);
    <update length>;
    <intermediate computation>;
    /* Enter CS to add elt */
    enter(lock1);
    <add element>;
    /* Exit both CS */
    exit(lock2);
    exit(lock1);
    ...
Processing Two Components

shared boolean lock1 = FALSE;
shared boolean lock2 = FALSE;
shared list L;

Code for p1:
    ...
    /* Enter CS to delete elt */
    enter(lock1);
    <delete element>;
    /* Exit CS */
    exit(lock1);
    <intermediate computation>;
    /* Enter CS to update len */
    enter(lock2);
    <update length>;
    /* Exit CS */
    exit(lock2);
    ...

Code for p2:
    ...
    /* Enter CS to update len */
    enter(lock2);
    <update length>;
    /* Exit CS */
    exit(lock2);
    <intermediate computation>;
    /* Enter CS to add elt */
    enter(lock1);
    <add element>;
    /* Exit CS */
    exit(lock1);
    ...
Transactions • A transaction is a list of operations • When the system begins to execute the list, it must execute all of them without interruption or • It must not execute any at all • Example: List manipulator • Add or delete an element from a list • Adjust the list descriptor, e.g., length • Too heavyweight – need something simpler
Classic Solution • Recall the mechanisms for creating and destroying processes • FORK(), JOIN(), and QUIT() • May be used to synchronize concurrent processes • Adding a synchronization operator is a more modern approach
Classic Synchronization (2)
• [Figure: (a) Create/Destroy – the initial process FORKs child processes (A, B, C) and uses JOIN and QUIT to synchronize with them; (b) Synchronization – the same computation where a process simply waits at a Synchronize point instead of being destroyed and re-created]
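The FORK()/JOIN()/QUIT() operations in the figure are the textbook abstractions; as a rough POSIX analogue (my mapping, not the slides'), a parent synchronizes with a child by waiting for the child to terminate:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* FORK: create process B */
        if (pid == 0) {
            printf("B: computing\n");    /* child B does its part of the work */
            _exit(0);                    /* QUIT: B terminates */
        }
        printf("A: computing\n");        /* parent A continues concurrently */
        waitpid(pid, NULL, 0);           /* JOIN: A proceeds only after B has quit */
        printf("A: B has finished; continuing\n");
        return 0;
    }

Synchronizing this way means a process must QUIT and be FORKed again for every rendezvous, which is exactly the overhead the next slide points out.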
Classic Solution – cont. • Creating and destroying processes tend to be costly operations • Considerable manipulation of process descriptors, protection mechanisms, and memory management mechanisms • A synchronization operation can be thought of as a resource request, and can be implemented much more efficiently • Contemporary OSes use dedicated synchronization mechanisms – semaphores
Dijkstra Semaphore
• Invented in the 1960s – Edsger Dijkstra
• Conceptual OS mechanism, with no specific implementation defined (could be enter()/exit())
• Basis of all contemporary OS synchronization mechanisms
• A semaphore, s, is a nonnegative integer variable that can only be changed or tested by these two indivisible (atomic) functions:
    V(s): [s = s + 1]
    P(s): [while (s == 0) {wait}; s = s - 1]
Semaphore • Dijkstra was Dutch, so • P: short for proberen, “to test” • V: short for verhogen, “to increment” • P is the wait operation • V is the signal operation • Note that some texts use P and V; others use wait and signal
Semaphores • Two types of semaphores • Binary semaphore – can only be either 0 or 1 • Can also be viewed as true or false • Counting semaphore – can be any nonnegative integer
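POSIX exposes semaphores directly: sem_wait() is P and sem_post() is V. A minimal counting-semaphore sketch in which a pool of 3 identical resources is shared by 5 threads (the pool size and thread count are made up for illustration):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t pool;                   /* counting semaphore: number of free resources */

    static void *user(void *arg) {
        long id = (long)arg;
        sem_wait(&pool);                 /* P: block until a resource is available */
        printf("thread %ld acquired a resource\n", id);
        sleep(1);                        /* use the resource */
        printf("thread %ld released a resource\n", id);
        sem_post(&pool);                 /* V: return the resource */
        return NULL;
    }

    int main(void) {
        pthread_t t[5];
        sem_init(&pool, 0, 3);           /* 3 resources => at most 3 threads inside at once */
        for (long i = 0; i < 5; i++)
            pthread_create(&t[i], NULL, user, (void *)i);
        for (int i = 0; i < 5; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&pool);
        return 0;
    }

Initializing the semaphore to 1 instead of 3 yields a binary semaphore, which is how the canonical problem below is solved.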
Solution Constraints • Processes p0 & p1 enter critical sections • Mutual exclusion: Only one process at a time in the critical section (CS) • Only processes competing for a CS are involved in resolving who enters the CS • Once a process attempts to enter its CS, it cannot be postponed indefinitely • After requesting entry, only a bounded number of other processes may enter before the requesting process
Notation
• Let fork(proc, N, arg1, arg2, …, argN) be a command to create a process and have it execute using the given N arguments
• Canonical problem:

proc_0() {
    while (TRUE) {
        <compute section>;
        <critical section>;
    }
}

proc_1() {
    while (TRUE) {
        <compute section>;
        <critical section>;
    }
}

<shared global declarations>
<initial processing>
fork(proc_0, 0);
fork(proc_1, 0);
Solution Assumptions • Memory reads/writes are indivisible (simultaneous attempts result in some arbitrary order of access) • There is no priority among the processes • Relative speeds of the processes/processors are unknown • Processes are cyclic and sequential
Solving the Canonical Problem

semaphore mutex = 1;

proc_0() {
    while (TRUE) {
        <compute section>;
        P(mutex);
        <critical section>;
        V(mutex);
    }
}

proc_1() {
    while (TRUE) {
        <compute section>;
        P(mutex);
        <critical section>;
        V(mutex);
    }
}

fork(proc_0, 0);
fork(proc_1, 0);
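The same solution as a runnable pthread sketch, with P/V mapped to POSIX sem_wait/sem_post and a shared counter standing in for the <critical section> work (the counter and the bounded iteration count are illustrative choices, not part of the slides):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;                  /* semaphore mutex = 1 */
    static long shared_counter = 0;      /* touched only inside the critical section */

    static void *proc(void *arg) {       /* body of both proc_0 and proc_1 */
        for (int i = 0; i < 100000; i++) {
            /* <compute section> */
            sem_wait(&mutex);            /* P(mutex) */
            shared_counter++;            /* <critical section> */
            sem_post(&mutex);            /* V(mutex) */
        }
        return NULL;
    }

    int main(void) {
        pthread_t p0, p1;
        sem_init(&mutex, 0, 1);          /* binary semaphore, initially 1 */
        pthread_create(&p0, NULL, proc, NULL);   /* fork(proc_0, 0) */
        pthread_create(&p1, NULL, proc, NULL);   /* fork(proc_1, 0) */
        pthread_join(p0, NULL);
        pthread_join(p1, NULL);
        printf("shared_counter = %ld (expected 200000)\n", shared_counter);
        sem_destroy(&mutex);
        return 0;
    }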