
Process Synchronization Solutions & Mechanisms

Exploring critical-section problem solutions & hardware mechanisms for synchronized process execution. Includes Peterson’s Solution, semaphores, and examples. Discusses multitasking and race conditions.




Presentation Transcript


  1. Chapter 5: Process Synchronization

  2. Module 5: Process Synchronization • Background • The Critical-Section Problem • Peterson’s Solution • Synchronization Hardware • Semaphores • Classic Problems of Synchronization • Synchronization Examples

  3. Objectives • To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data • To present both software and hardware solutions of the critical-section problem

  4. Background • Multitasking: multiple cooperating processes running concurrently • Concurrent access to shared data can lead to data inconsistency • The OS needs to maintain data consistency • Requires mechanisms to ensure the orderly execution of cooperating processes • We saw one in Chapter 3: the producer-consumer bounded buffer • We’ll modify it to keep track of the number of items currently in the buffer • #define BUFFER_SIZE 10 • typedef struct { • . . . • } item; • item buffer[BUFFER_SIZE]; • int in = 0; • int out = 0; • int counter = 0;

  5. Background Producer: while (true) { while (counter == BUFFER_SIZE) ; // do nothing -- no free buffers // produce an item buffer[in] = item; in = (in + 1) % BUFFER_SIZE; counter++; } Consumer: while (true) { while (counter == 0) ; // do nothing -- nothing to consume // remove an item from the buffer item = buffer[out]; out = (out + 1) % BUFFER_SIZE; counter--; return item; }

  6. Background • Producer’s counter++ could be implemented as MOV AX, [counter]; INC AX; MOV [counter], AX • Consumer’s counter-- could be implemented as MOV AX, [counter]; DEC AX; MOV [counter], AX • Concurrent execution with counter = 5, each process preempted after its first two instructions: Producer: MOV AX, [counter] (reads 5) → Producer: INC AX (register now 6) → Consumer: MOV AX, [counter] (reads 5) → Consumer: DEC AX (register now 4) → Producer: MOV [counter], AX (counter = 6) → Consumer: MOV [counter], AX (counter = 4) • The final value is 4 (or 6 under a different interleaving) instead of the correct 5
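A minimal, runnable sketch (not from the original slides) that reproduces this race on a POSIX system: two threads perform the slide's increments and decrements with no synchronization, so the final counter is usually not the expected value. The thread names and iteration count are illustrative choices.

  /* race.c -- compile with: gcc race.c -pthread */
  #include <pthread.h>
  #include <stdio.h>

  #define ITERATIONS 1000000

  int counter = 0;                       /* shared, unprotected */

  void *producer(void *arg) {
      (void)arg;
      for (int i = 0; i < ITERATIONS; i++)
          counter++;                     /* load, increment, store -- not atomic */
      return NULL;
  }

  void *consumer(void *arg) {
      (void)arg;
      for (int i = 0; i < ITERATIONS; i++)
          counter--;                     /* interleaves with the producer's updates */
      return NULL;
  }

  int main(void) {
      pthread_t p, c;
      pthread_create(&p, NULL, producer, NULL);
      pthread_create(&c, NULL, consumer, NULL);
      pthread_join(p, NULL);
      pthread_join(c, NULL);
      printf("counter = %d (expected 0)\n", counter);   /* usually nonzero */
      return 0;
  }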

  7. Background • Race condition: when several processes manipulate the same data, and the outcome depends on the particular (and often unpredictable) execution order • Unavoidable consequence of multitasking and multithreading with shared data and resources • Made worse by multicore systems, where several threads of a process are literally running at the same time using the same global data • Solution: process synchronization, finding ways to coordinate multiple cooperating processes so that they do not interfere with each other

  8. Critical Section • A process’ critical section is the segment of code in which it modifies common variables • Solving the race condition requires ensuring that no two processes can execute their critical sections at once • Designing a protocol to do this is the critical-section problem • Basic idea: • Before running the critical section, request permission and wait for it in an entry section • After finishing the critical section, release permission in an exit section • The rest of the program after the exit section is the remainder section • do { • entry section • critical section • exit section • remainder section • } while (true)

  9. Critical Section • Mutual Exclusion • If a process is executing in its critical section, then no other process can be executing in its critical section • Progress • If no process is executing in its critical section and some processes wish to enter theirs • The selection of the next process to enter its critical section cannot be postponed indefinitely • Bounded Waiting • When a process requests to enter its critical section • There is a bound on the number of times other processes are allowed to enter their critical sections before that request is granted

  10. Peterson’s Solution • Software solution • Two processes, P0 and P1 • If one is Pi, the other is Pj • The two processes share two variables: • int turn; • Indicates whose turn it is to enter the critical section. • bool flag[2]; • Indicates if a process is ready to enter the critical section • flag[i] = true means that process Pi is ready do { flag[i] = TRUE; turn = j; while (flag[j] && turn == j) ; //critical section flag[i] = FALSE; //remainder section } while (TRUE);
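The slide's algorithm assumes that the loads and stores to flag and turn happen exactly in program order; modern compilers and CPUs may reorder them, so a faithful user-space version needs sequentially consistent atomics. A minimal sketch (not from the slides) using C11 <stdatomic.h> and POSIX threads; the names enter_region/leave_region and the shared counter are illustrative.

  /* peterson.c -- Peterson's solution with C11 atomics; compile with -pthread */
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <pthread.h>
  #include <stdio.h>

  atomic_bool flag[2];                 /* flag[i] = true: Pi is ready           */
  atomic_int  turn;                    /* whose turn it is to enter             */
  long shared_counter = 0;             /* data protected by the protocol        */

  void enter_region(int i) {
      int j = 1 - i;
      atomic_store(&flag[i], true);    /* I am ready                            */
      atomic_store(&turn, j);          /* politely give the other the turn      */
      while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
          ;                            /* busy-wait                             */
  }

  void leave_region(int i) {
      atomic_store(&flag[i], false);
  }

  void *worker(void *arg) {
      int i = *(int *)arg;
      for (int k = 0; k < 100000; k++) {
          enter_region(i);
          shared_counter++;            /* critical section                      */
          leave_region(i);
      }
      return NULL;
  }

  int main(void) {
      pthread_t t0, t1;
      int id0 = 0, id1 = 1;
      pthread_create(&t0, NULL, worker, &id0);
      pthread_create(&t1, NULL, worker, &id1);
      pthread_join(t0, NULL);
      pthread_join(t1, NULL);
      printf("shared_counter = %ld (expected 200000)\n", shared_counter);
      return 0;
  }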

  11. Peterson’s Solution • Mutual Exclusion • The while condition guarantees that a process can only enter its critical section if the other’s flag is false or it is its turn • If both processes were in their critical sections, then both exited the while loop with both flags true, which would mean turn was both 0 and 1 at once (a contradiction) • Progress • Bounded Waiting • Pi can only be prevented from entering its critical section by the while condition • If Pj is not ready to run its critical section, its flag is false and Pi can enter • If Pj is already running its critical section, Pi stays in the while loop until Pj is done and sets its flag to false • If Pi and Pj are simultaneously ready, Pj might run first but Pi will run as soon as Pj is done (waiting bounded at 1)

  12. Synchronization Hardware • Many systems provide hardware support for critical-section code • Allows us to create hardware locks to protect critical sections • Single processor – could disable interrupts • cli and sti instructions • The currently running code would execute without preemption • Generally too inefficient on multiprocessor systems • The “disable interrupts” message must be passed to every processor, delaying entry into critical sections • Reduces scalability • Modern machines provide special atomic (non-interruptible) hardware instructions • Test a memory word and set its value • Swap the contents of two memory words • We will generalize them as two functions do { acquire lock critical section release lock remainder section } while (TRUE);

  13. Synchronization Hardware • Test a memory word and set its value • Remember: this is an atomic hardware instruction • Can implement the lock as a shared boolean variable lock, initialized to FALSE boolean TestAndSet (boolean *target) { boolean rv = *target; *target = TRUE; return rv; } do { while ( TestAndSet (&lock) ) ; // do nothing //critical section lock = FALSE; //remainder section } while (TRUE);
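C11 exposes this primitive directly as atomic_flag_test_and_set(). A minimal sketch (not from the slides) of the lock above built on it; the names acquire/release are illustrative.

  /* tas_lock.c -- the TestAndSet spinlock using C11's atomic_flag */
  #include <stdatomic.h>

  static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

  void acquire(void) {
      while (atomic_flag_test_and_set(&lock))
          ;                      /* spin: returns the previous value; true means
                                    someone else already holds the lock */
  }

  void release(void) {
      atomic_flag_clear(&lock);  /* lock = FALSE in the slide's notation */
  }

  /* usage:
     acquire();
     ... critical section ...
     release();
  */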

  14. Synchronization Hardware • Swap the contents of two memory words • Remember: this is an atomic hardware instruction • Can implement the lock as a shared boolean variable lock, initialized to FALSE and swapped with each process’ local key void Swap (boolean *a, boolean *b) { boolean temp = *a; *a = *b; *b = temp; } do { key = TRUE; while (key == TRUE) Swap (&lock, &key); //critical section lock = FALSE; //remainder section } while (TRUE);
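The same lock maps onto C11's atomic_exchange(), which atomically stores a new value and returns the old one; a short sketch under that assumption (not from the slides).

  /* swap_lock.c -- the Swap-based lock expressed with C11's atomic_exchange() */
  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_bool lock = false;      /* false == unlocked */

  void acquire(void) {
      bool key = true;
      while (key)                       /* keep swapping until we read false */
          key = atomic_exchange(&lock, true);
  }

  void release(void) {
      atomic_store(&lock, false);
  }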

  15. Synchronization Hardware • Both of these algorithms respect mutual exclusion and progress, but not bounded waiting • Assuming n processes • We add a shared array of waiting processes • Set to true in the entry section • Set to false before running the critical section • In the exit section of Pi, circularly scan the array and set the next waiting process to false • Waiting bounded at n-1 turns • If no process is waiting, release the lock (progress) do { waiting[i] = TRUE; key = TRUE; while (waiting[i] && key) key = TestAndSet(&lock); waiting[i] = FALSE; //critical section j = (i + 1) % n; while ((j != i) && !waiting[j]) j = (j + 1) % n; if (j == i) lock = FALSE; else waiting[j] = FALSE; //remainder section } while (TRUE);
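A sketch (not from the slides) of the same bounded-waiting algorithm for a fixed number of processes, with C11 atomics standing in for the shared variables and atomic_flag_test_and_set() playing the role of the hardware TestAndSet; N and the function names are illustrative.

  /* bounded_tas.c -- bounded-waiting mutual exclusion with test-and-set */
  #include <stdatomic.h>
  #include <stdbool.h>

  #define N 5                                   /* number of processes */

  static atomic_bool waiting[N];                /* all false initially */
  static atomic_flag lock = ATOMIC_FLAG_INIT;

  void entry_section(int i) {
      bool key = true;
      atomic_store(&waiting[i], true);
      while (atomic_load(&waiting[i]) && key)
          key = atomic_flag_test_and_set(&lock);
      atomic_store(&waiting[i], false);
  }

  void exit_section(int i) {
      /* Circularly scan for the next waiting process; hand the lock to it
         directly, or release the lock if nobody is waiting. */
      int j = (i + 1) % N;
      while (j != i && !atomic_load(&waiting[j]))
          j = (j + 1) % N;
      if (j == i)
          atomic_flag_clear(&lock);             /* nobody waiting: release      */
      else
          atomic_store(&waiting[j], false);     /* let Pj proceed; lock stays set */
  }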

  16. Semaphores • A semaphore S is an integer variable used for synchronization • Can only be accessed via two indivisible (atomic) operations wait (S) { while (S <= 0) ; // no-op S--; } signal (S) { S++; }

  17. Semaphores • Semaphores are used to limit the number of processes accessing a resource • For example, a critical section • Counting semaphore can take any value • Set to the maximum number of available resources • Binary semaphore can only be 0 or 1 • Also known as a mutex lock • Semaphore mutex; // initialized to 1 • do { • wait (mutex); • // Critical Section • signal (mutex); • // remainder section • } while (TRUE);
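On Linux, the same pattern can be written against the POSIX semaphore API (sem_init/sem_wait/sem_post); a minimal sketch (not from the slides) with illustrative worker threads, compiled with -pthread.

  /* sem_mutex.c -- a binary semaphore protecting a critical section */
  #include <semaphore.h>
  #include <pthread.h>
  #include <stdio.h>

  sem_t mutex;                 /* binary semaphore */
  long shared = 0;

  void *worker(void *arg) {
      (void)arg;
      for (int i = 0; i < 100000; i++) {
          sem_wait(&mutex);    /* wait(mutex)      */
          shared++;            /* critical section */
          sem_post(&mutex);    /* signal(mutex)    */
      }
      return NULL;
  }

  int main(void) {
      pthread_t a, b;
      sem_init(&mutex, 0, 1);  /* initialized to 1 */
      pthread_create(&a, NULL, worker, NULL);
      pthread_create(&b, NULL, worker, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      printf("shared = %ld (expected 200000)\n", shared);
      sem_destroy(&mutex);
      return 0;
  }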

  18. Semaphore Implementation • One problem of the semaphores shown here (and of the other synchronization methods) is that they use busy waiting: a process has to loop continuously while it waits • Wastes CPU cycles • Also called a spinlock, because the process “spins” while waiting for the lock • Alternative: use two process-handling system calls • block(): places the invoking process in the waiting queue • wakeup(P): moves process P from the waiting queue to the ready queue

  19. Semaphore Implementation typedef struct { int value; struct process *list; } semaphore; wait(semaphore *S) { S->value--; if (S->value < 0) { //add this process to S->list; block(); } } signal(semaphore *S) { S->value++; if (S->value <= 0) { //remove a process P from S->list; wakeup(P); } }

  20. Semaphore Implementation • Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time • Thus, the implementation itself becomes a critical-section problem, where the wait() and signal() code are placed in the critical section • Could use busy waiting… which is what we wanted to eliminate! • But we have moved busy waiting from the process’ critical section to the wait() & signal() critical section • Much shorter code, so it is rarely occupied • Busy waiting there lasts only a few instructions, while a process’ critical section can be arbitrarily long • Much less wasted CPU
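One way to realize this in user space (a sketch, not the slides' kernel implementation) is a pthread mutex protecting the short wait()/signal() critical section plus a condition variable playing the role of block()/wakeup(); the type and function names are illustrative.

  /* cond_sem.c -- a counting semaphore without busy waiting */
  #include <pthread.h>

  typedef struct {
      int value;
      pthread_mutex_t lock;
      pthread_cond_t  cond;
  } semaphore;

  void sem_init_count(semaphore *s, int value) {
      s->value = value;
      pthread_mutex_init(&s->lock, NULL);
      pthread_cond_init(&s->cond, NULL);
  }

  void sem_wait_op(semaphore *s) {
      pthread_mutex_lock(&s->lock);             /* enter wait()'s critical section   */
      while (s->value <= 0)
          pthread_cond_wait(&s->cond, &s->lock);/* block(): sleep, releasing the lock */
      s->value--;
      pthread_mutex_unlock(&s->lock);
  }

  void sem_signal_op(semaphore *s) {
      pthread_mutex_lock(&s->lock);
      s->value++;
      pthread_cond_signal(&s->cond);            /* wakeup(P): wake one waiting thread */
      pthread_mutex_unlock(&s->lock);
  }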

  21. Semaphore Problems • With multiple semaphores, two or more processes can become deadlocked, each waiting for the other to release a semaphore it needs • Let S and Q be two binary semaphores, both initialized to 1 • P0: wait (S); wait (Q); //critical section signal (S); signal (Q); • P1: wait (Q); wait (S); //critical section signal (Q); signal (S); • If P0 acquires S while P1 acquires Q, each then waits forever for the semaphore the other holds • If the selection of the next process in the list violates bounded waiting, then a process can suffer from starvation or indefinite blocking • A high-priority process needing a lock held by a low-priority process will be forced to wait. Worse, if a runnable medium-priority process preempts the low-priority one, the high-priority one is delayed even longer: this is priority inversion • Can be solved by priority inheritance

  22. Classical Problems of Synchronization • Three typical synchronization problems used as benchmarks and tests • Bounded-Buffer Problem • Readers and Writers Problem • Dining-Philosophers Problem

  23. Bounded-Buffer Problem • N buffers, each can hold one item, initially all empty • Binary semaphore mutex, for access to the buffer, initialized to 1 • Counting semaphore full, the number of full buffers, initialized to 0 • Counting semaphore empty, the number of empty buffers, initialized to N • Consumer: do { wait (full); wait (mutex); // remove an item from the buffer signal (mutex); signal (empty); // consume the item } while (TRUE); • Producer: do { // produce an item wait (empty); wait (mutex); // add the item to the buffer signal (mutex); signal (full); } while (TRUE);
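The same structure written against the POSIX semaphore API; a sketch (not from the slides) in which the buffer size, the item type, and the produce/consume function names are placeholders.

  /* bb_sem.c -- bounded buffer with POSIX semaphores */
  #include <semaphore.h>

  #define N 10

  typedef int item;                    /* placeholder item type */

  item buffer[N];
  int in = 0, out = 0;

  sem_t mutex;    /* binary, init 1: protects buffer, in, out */
  sem_t full;     /* counting, init 0: number of full slots   */
  sem_t empty;    /* counting, init N: number of empty slots  */

  void init_buffer(void) {
      sem_init(&mutex, 0, 1);
      sem_init(&full, 0, 0);
      sem_init(&empty, 0, N);
  }

  void produce(item it) {
      sem_wait(&empty);                /* wait for a free slot   */
      sem_wait(&mutex);
      buffer[in] = it;                 /* add item to the buffer */
      in = (in + 1) % N;
      sem_post(&mutex);
      sem_post(&full);                 /* one more full slot     */
  }

  item consume(void) {
      sem_wait(&full);                 /* wait for an item       */
      sem_wait(&mutex);
      item it = buffer[out];           /* remove an item         */
      out = (out + 1) % N;
      sem_post(&mutex);
      sem_post(&empty);                /* one more empty slot    */
      return it;
  }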

  24. Readers-Writers Problem • Data is shared among a number of concurrent processes • Readers that only read the data, but never write or update it • Writers that both read and write the data • Problem: multiple readers should be allowed simultaneously, but each writer should have exclusive access • First variation: new readers can read while writer waits (writer might starve) • Second variation: FCFS, new readers wait after writer (reader might starve) • Solution for second one simple with a single semaphore • Solution for first one: • Binary semaphore mutex initialized to 1 • Binary semaphore wrt initialized to 1 • Integer readcount initialized to 0

  25. Readers-Writers Problem • A writer can only proceed once it holds the wrt semaphore • A reader waits on / signals wrt only if it is the first / last reader of the data • The integer readcount keeps track of the current number of readers • Updating readcount is itself a critical section, so it is protected by the mutex semaphore • Reader: do { wait (mutex); readcount++; if (readcount == 1) wait (wrt); signal (mutex); // reading is performed wait (mutex); readcount--; if (readcount == 0) signal (wrt); signal (mutex); } while (TRUE); • Writer: do { wait (wrt); // writing is performed signal (wrt); } while (TRUE);
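A POSIX-semaphore sketch of the same first-variation solution (not from the slides); read_data()/write_data() stand in for the actual reading and writing.

  /* rw_sem.c -- first readers-writers solution with POSIX semaphores */
  #include <semaphore.h>

  sem_t mutex;        /* binary, init 1: protects readcount         */
  sem_t wrt;          /* binary, init 1: writer / first-reader lock */
  int readcount = 0;

  void reader(void) {
      sem_wait(&mutex);
      readcount++;
      if (readcount == 1)
          sem_wait(&wrt);      /* first reader locks out writers */
      sem_post(&mutex);

      /* read_data();             reading is performed */

      sem_wait(&mutex);
      readcount--;
      if (readcount == 0)
          sem_post(&wrt);      /* last reader lets writers in */
      sem_post(&mutex);
  }

  void writer(void) {
      sem_wait(&wrt);
      /* write_data();            writing is performed */
      sem_post(&wrt);
  }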

  26. Dining-Philosophers Problem • A simple representation of the problem of allocating limited, shared resources among multiple processes without deadlock or starvation • Problem: • Bowl of rice: the critical (shared) data • Chopsticks: array of binary semaphores chopstick[5], each initialized to 1 • Philosophers: concurrent processes

  27. Dining-Philosophers Problem • Simple solutions: • Allow at most four philosophers at the table at once • Only pick up the chopsticks if both are available, without giving another philosopher a chance to take them (i.e. pick them up inside a critical section) • Asymmetry: odd-numbered philosophers pick up the left chopstick first, even-numbered philosophers pick up the right chopstick first • Basic structure of philosopher i: do { wait( chopstick[i] ); wait( chopstick[(i+1)%5] ); // eat signal( chopstick[i] ); signal( chopstick[(i+1)%5] ); // think } while (TRUE);
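A sketch (not from the slides) of the asymmetric variant with POSIX semaphores: odd-numbered philosophers take the left chopstick first and even-numbered ones the right, which breaks the circular wait; eat()/think() are placeholders.

  /* philosophers.c -- asymmetric dining philosophers */
  #include <semaphore.h>

  #define N 5

  sem_t chopstick[N];          /* each initialized to 1: sem_init(&chopstick[i], 0, 1) */

  void philosopher(int i) {
      int left = i, right = (i + 1) % N;
      for (;;) {
          if (i % 2 == 1) {            /* odd: left chopstick first  */
              sem_wait(&chopstick[left]);
              sem_wait(&chopstick[right]);
          } else {                     /* even: right chopstick first */
              sem_wait(&chopstick[right]);
              sem_wait(&chopstick[left]);
          }
          /* eat(); */
          sem_post(&chopstick[left]);
          sem_post(&chopstick[right]);
          /* think(); */
      }
  }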

  28. Problems with Semaphores • Incorrect use of semaphore operations: • signal(mutex) … wait(mutex) (violates mutual exclusion) • wait(mutex) … wait(mutex) (causes a deadlock) • Omitting wait(mutex) or signal(mutex) (or both) • Causes timing and synchronization errors that can be difficult to detect • Errors in the entire system (no mutual exclusion, deadlocks) can be caused by a single poorly-programmed user process • Might only occur under a specific execution sequence

  29. Synchronization Examples • Solaris • Windows XP • Linux • Pthreads

  30. Solaris Synchronization • Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing • Implements semaphores as we studied them • Adaptive mutexes are used to protect short (<100 instructions) critical data in multi-CPU systems • If the process holding the lock is currently running on another CPU, spin • Otherwise, the process holding the lock is not currently running, so sleep • Reader-writer locks are used to protect long critical data that is often read (best for multithreading) • Turnstiles are used to order the list of threads waiting to acquire a lock • A thread needs to enter a turnstile for each object it is waiting for • Implemented as one turnstile per kernel thread rather than per object • The turnstile of the first thread to block on an object is attached to that object • Since a thread can only be waiting on one object at a time, this is more efficient

  31. Windows XP Synchronization • Multithreaded kernel • On single-processor systems, simply masks all interrupts whose handlers can access the critical resource • On multiprocessor systems, uses spinlocks, and the thread holding a spinlock is never preempted • Outside the kernel, synchronization is done with dispatcher objects • Can act as mutexes, semaphores, or timers • A dispatcher object is either signaled (available to be acquired by a thread) or nonsignaled (in use; a requesting thread must wait) • When a dispatcher object moves to the signaled state, the kernel checks for waiting threads and moves some of them (how many depends on the kind of dispatcher object) to the ready queue

  32. Linux Synchronization • For short-term locks • On single-processor systems: disable kernel preemption • On multi-processor systems: spinlock • For long-term locks: semaphores

  33. Pthreads Synchronization • Pthreads API is an IEEE standard, OS-independent • Pthread standard includes: • mutex locks • reader-writer locks • Certain non-standard extensions add: • semaphores • spinlocks

  34. Review • Any solution to the critical section problem has to respect three properties. What are they and why are they important? • What is a spinlock, and why is it often used in multi-CPU systems but must be avoided in single-CPU systems? • What is priority inheritance; what problem does it solve and how?

  35. Exercises • Read sections 5.1 to 5.7 and 5.9 • If you have the “with Java” textbook, skip the Java sections and subtract 1 from the following exercise numbers • 5.1 • 5.3 • 5.4 • 5.5 • 5.7 • 5.8 • 5.9 • 5.10 • 5.11 • 5.12 • 5.16 • 5.17 • 5.28

  36. End of Chapter 5
