Chapter 13 (I) Concurrency
Chapter 13 Topics
• Introduction
• Introduction to Subprogram-Level Concurrency
• Semaphores
• Monitors
• Message Passing
• Ada Support for Concurrency
• Java Threads
• C# Threads
• Concurrency in Functional Languages
• Statement-Level Concurrency
Introduction
• Concurrency can occur at four levels:
  • Machine instruction level
  • High-level language statement level
  • Unit level
  • Program level
• Because there are no language issues in instruction- and program-level concurrency, they are not addressed here
Multiprocessor Architectures
• Late 1950s - one general-purpose processor and one or more special-purpose processors for input and output operations
• Early 1960s - multiple complete processors, used for program-level concurrency
• Mid-1960s - multiple processors used for instruction-level concurrency
  • Single-Instruction Multiple-Data (SIMD) machines
  • Multiple-Instruction Multiple-Data (MIMD) machines
• A primary focus of this chapter is shared-memory MIMD machines (multiprocessors)
SIMD and MIMD
• SIMD machines use multiple processing units to perform the same operation on multiple data points simultaneously
  • Exploit data-level parallelism
  • For example, adjusting the contrast of a digital image
• MIMD machines use multiple processing units to perform different operations on multiple data points simultaneously
  • Exploit both control-level and data-level parallelism
  • For example, a multi-core CPU
Categories of Concurrency
• Physical concurrency - multiple independent processors (multiple threads of control)
• Logical concurrency - the appearance of physical concurrency, implemented by time-sharing one processor (the user feels as if there were multiple threads of control)
• Coroutines (quasi-concurrency) have a single thread of control
  • This concept was introduced in Chapter 9 [Slides 124-128]
Subroutines and Coroutines
• Subroutines
  • When invoked, execution begins from the beginning
  • When they exit, they are finished
  • An instance of a subroutine returns only once
• Coroutines
  • They can also "exit" by calling other coroutines
  • Later, control may return to the point in the original coroutine just after that call
  • They are not exiting; they simply "yield" to another coroutine
Coroutines Example

var q := new queue

coroutine produce
    loop
        while q is not full
            create some new items
            add the items to q
        yield to consume

coroutine consume
    loop
        while q is not empty
            remove some items from q
            use the items
        yield to produce

We will re-visit this example later
Coroutines - Chapter 9
• Coroutines do not have a master-slave relationship
  • Coroutines call each other
  • Coroutines follow the symmetric control model
• Coroutines may have multiple entry points
  • They are history-sensitive; they keep their state between calls
• A coroutine call is named resume
  • The first resume of a coroutine is at its beginning, but subsequent calls enter at the point just after the last executed statement in the coroutine
• Typically, coroutines repeatedly resume each other, possibly forever
• Coroutines provide quasi-concurrent execution
  • Their execution is interleaved but not overlapped
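Java has no built-in coroutines, but their quasi-concurrent, strictly alternating behavior can be simulated with two threads that hand a single permit back and forth. A minimal sketch (illustrative only; the class and method names are ours, and java.util.concurrent.Semaphore is previewed here before its formal introduction later in this chapter):

    import java.util.concurrent.Semaphore;

    // Two threads simulating symmetric coroutines: exactly one permit
    // exists in total, so execution is interleaved but never overlapped.
    public class CoroutinePingPong {
        static final Semaphore runProduce = new Semaphore(1); // produce is resumed first
        static final Semaphore runConsume = new Semaphore(0);

        public static void main(String[] args) {
            new Thread(() -> {
                for (int i = 0; i < 3; i++) {
                    acquire(runProduce);                  // wait to be "resumed"
                    System.out.println("produce: step " + i);
                    runConsume.release();                 // "yield to consume"
                }
            }).start();
            new Thread(() -> {
                for (int i = 0; i < 3; i++) {
                    acquire(runConsume);                  // wait to be "resumed"
                    System.out.println("consume: step " + i);
                    runProduce.release();                 // "yield to produce"
                }
            }).start();
        }

        static void acquire(Semaphore s) {
            try { s.acquire(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

Each release plays the role of resume: control re-enters the other "coroutine" just after its last acquire, mirroring the history-sensitive re-entry described above.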
Coroutines - Chapter 9
Possible execution control sequences for two coroutines without loops (figure)
Coroutines - Chapter 9
Coroutine execution sequence with loops (figure)
Producer and Consumer Problem - Chapter 9

int itemCount = 0;

procedure producer() {
    while (true) {
        item = produceItem();
        if (itemCount == BUFFER_SIZE) {
            sleep();
        }
        putItemIntoBuffer(item);
        itemCount = itemCount + 1;
        if (itemCount == 1) {
            wakeup(consumer);   // resume consumer
        }
    }
}
Producer and Consumer Problem - Chapter 9

procedure consumer() {
    while (true) {
        if (itemCount == 0) {
            sleep();
        }
        item = removeItemFromBuffer();
        itemCount = itemCount - 1;
        if (itemCount == BUFFER_SIZE - 1) {
            wakeup(producer);   // resume producer
        }
        consumeItem(item);
    }
}
Motivations for Using Concurrency
Several reasons:
• Multiprocessor computers equipped with multiple physical processors are now widely used
• Even if a machine has just one processor, a program written for concurrent execution can run faster than the same program written for non-concurrent execution
• Many real-world situations involve concurrency
• Many applications are now spread over multiple machines, either locally or over the Internet; concurrency is needed to make them collaborate
Introduction to Subprogram-Level Concurrency
• A task (process or thread) is a program unit that can be executed concurrently with other program units
• Tasks differ from ordinary subprograms in that:
  • A task may start implicitly; a subprogram must be called explicitly
  • When a program unit starts the execution of a task, it is not necessarily suspended
  • When a task's execution is completed, control may not return to the caller; a subprogram is the opposite
• Tasks usually work collaboratively; subprograms may not
Two Categories of Tasks
• Heavyweight tasks (processes) execute in their own address space
• Lightweight tasks (threads) all run in the same address space - more efficient
• A task is disjoint if it does not communicate with or affect the execution of any other task in the program in any way
Process vs. Thread (I)
• A process essentially executes one single thread
  • It can be considered single-threaded
• MS-DOS manages running programs as such single-threaded processes
Process vs. Thread (II)
• The figure on the right illustrates multiple threads within one process
• A Java run-time environment is an example of a system of one process with multiple threads
States of a Process
During its lifetime, a process may change states:
• New: the process is being created, but has not yet started
• Ready: the process is ready to run but not currently running (no CPU is available); it is waiting to be assigned to a processor
• Running: the process has the CPU and executes its own instructions
• Blocked: the process is waiting for some event to occur or complete
• Exit: the process has completed its execution
Task Synchronization
• When multiple tasks execute simultaneously, a mechanism is needed to control the order in which tasks execute
• This mechanism is known as synchronization
• Two kinds of synchronization:
  • Cooperative synchronization
  • Competitive synchronization
• Task communication is necessary for synchronization, provided by:
  • Shared global variables
  • Parameters, pipelines
  • Message passing
Types of Synchronization
• Cooperative synchronization: task A must wait for task B to complete some specific activity before task A can continue its execution
  • e.g., the producer-consumer problem
• Competitive synchronization: two or more tasks must use some resource that cannot be used simultaneously
  • e.g., a shared counter
  • Competition is usually controlled by mutual exclusion (discussed later)
Synchronization Example: Accessing Shared Data
• Processes A and B have access to a shared variable called “balance”. The initial value of balance is 100.

  PROCESS A: balance = balance - 100
  PROCESS B: balance = balance + 200

• What is the final result after the execution of processes A and B?
• Assume that processes A and B execute concurrently in a time-sharing, multi-programmed system.
Process A and B's Behavior
• The statement "balance = balance - 100" is translated into machine instructions such as:

  A1. load  $t0, balance    // Read balance into $t0
  A2. sub   $t0, $t0, 100   // Subtract 100 from $t0
  A3. store $t0, balance    // Save the result back to balance

• Similarly, "balance = balance + 200" is translated into:

  B1. load  $t1, balance    // Read balance into $t1
  B2. add   $t1, $t1, 200   // Add 200 to $t1
  B3. store $t1, balance    // Save the result back to balance
Let's Think About the Implications of the Execution…
• What will be the final value of balance after executing processes A and B?
• Will you get a definite result or an indefinite result?
• What is your result?
The Result: Race Condition
• In a time-shared system, the exact instruction execution order cannot be predicted!
• This situation is known as a race condition

• Scenario 1:
  A1. load  $t0, balance
  A2. sub   $t0, $t0, 100
  A3. store $t0, balance
  --- context switch ---
  B1. load  $t1, balance
  B2. add   $t1, $t1, 200
  B3. store $t1, balance
  After execution, balance = 200

• Scenario 2:
  A1. load  $t0, balance
  A2. sub   $t0, $t0, 100
  --- context switch ---
  B1. load  $t1, balance
  B2. add   $t1, $t1, 200
  B3. store $t1, balance
  --- context switch ---
  A3. store $t0, balance
  After execution, balance = 0
Race Condition
A situation in which multiple threads/processes read and write a shared data item and the final result depends on the relative timing of their execution
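The race is easy to express in code. A minimal Java sketch (ours, not the slides'): two threads apply the unsynchronized updates from the example; depending on how their load/compute/store steps interleave, the printed balance can be 200, 0, or 300.

    // Unsynchronized shared update: the outcome depends on thread interleaving.
    public class RaceDemo {
        static int balance = 100;   // shared variable, intentionally unprotected

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> balance = balance - 100);  // process A
            Thread b = new Thread(() -> balance = balance + 200);  // process B
            a.start(); b.start();
            a.join();  b.join();
            // A single run usually prints 200; repeated runs can expose 0 or 300.
            System.out.println("balance = " + balance);
        }
    }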
Design Issues for Concurrency
• Competition and cooperation synchronization
• Controlling task scheduling
  • How can an application influence task scheduling?
• How and when tasks start and end execution
• How and when tasks are created
Synchronization Mechanisms
• Most OSs provide two important mechanisms that control process synchronization:
  • Mutex
  • Semaphore
• Before introducing the mutex, let's first introduce a related concept: the critical section
Critical Section
• A section of code within a process that requires access to a shared resource and must not be executed while another process is executing its corresponding section of code
The Critical-Section Problem
• n processes all compete to use some shared data D
• Each process has a code segment, called the critical section (critical region), in which the shared data D is accessed
• Problem to be solved:
  • Ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section
  • The execution of critical sections by the processes must be mutually exclusive
Mutex and Semaphore
• A mutex (mutual exclusion) is a token that must be grabbed in order to execute the critical section
  • So it is a binary variable (0 or 1)
  • A mutex is used to control competitive synchronization
• A semaphore is a generalized mutex that multiple threads can access
  • It is a variable whose value is an integer counter
  • A semaphore is used to control cooperative synchronization
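Both mechanisms are available in Java's java.util.concurrent; a short sketch of the distinction (illustrative; the names and the pool size of 3 are ours): a ReentrantLock serves as the mutex around a critical section, while a counting Semaphore admits up to n holders at once.

    import java.util.concurrent.Semaphore;
    import java.util.concurrent.locks.ReentrantLock;

    public class MutexVsSemaphore {
        static final ReentrantLock mutex = new ReentrantLock(); // binary: one holder at a time
        static final Semaphore pool = new Semaphore(3);         // counting: up to 3 holders
        static int counter = 0;                                 // shared data

        static void increment() {
            mutex.lock();              // competitive synchronization
            try {
                counter++;             // critical section
            } finally {
                mutex.unlock();
            }
        }

        static void useResource() throws InterruptedException {
            pool.acquire();            // blocks once 3 threads are inside
            try {
                System.out.println(Thread.currentThread().getName() + " using resource");
            } finally {
                pool.release();
            }
        }
    }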
Conditions with/without Mutex
• Since the variable is shared by two processes, data access must be mutually exclusive
• In the case shown on the left, the final value is 2, as expected
• However, the outcome could be wrong if the two threads run simultaneously with no lock or synchronization
Attempts to Solve Mutual Exclusion

do {
    entry section
    critical section
    exit section
    remainder section
} while (1);

• Multiple processes compete for the shared resource
• Processes may share some common variables
• Assume each statement is executed atomically
Mutex Primitive

acquire() {
    while (!available)
        ;   /* busy wait */
    available = false;
}

release() {
    available = true;
}

do {
    acquire lock
    critical section
    release lock
    remainder section
} while (1);

• Note: the test in the while loop and the claim available = false must happen as one atomic step; otherwise two tasks can both pass the test before either claims the lock (see the sketch below)
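A hedged Java sketch of how the primitive can be made correct (ours, not the slides' code): AtomicBoolean.compareAndSet performs the test and the claim as one indivisible step, closing the gap noted above.

    import java.util.concurrent.atomic.AtomicBoolean;

    // A simple spinlock: the atomic compareAndSet makes the
    // "test available, then claim it" step indivisible.
    public class SpinLock {
        private final AtomicBoolean available = new AtomicBoolean(true);

        public void acquire() {
            // Busy-wait until we atomically flip available from true to false.
            while (!available.compareAndSet(true, false))
                ;   // spin
        }

        public void release() {
            available.set(true);
        }
    }

A task then brackets its critical section with acquire() and release(), exactly as in the do-while skeleton above.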
Semaphore
• Dijkstra - 1965
• A semaphore is a data structure consisting of a counter and a queue for storing task descriptors
  • A task descriptor is a data structure that stores all of the relevant information about the execution state of a task
• Semaphores can be used to implement guards on the code that accesses shared data structures
• Three operations:
  • A semaphore s may be initialized to a nonnegative integer value
  • semWait(s) (also called the P operation) decrements the value
  • semSignal(s) (also called the V operation) increments the value
• semWait/P and semSignal/V are used interchangeably in the rest of the lecture
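Java's java.util.concurrent.Semaphore provides the same two operations under different names; a small illustrative sketch of the correspondence:

    import java.util.concurrent.Semaphore;

    public class PVDemo {
        public static void main(String[] args) throws InterruptedException {
            Semaphore s = new Semaphore(1);  // initialized to a nonnegative value

            s.acquire();     // semWait(s) / P(s): decrements; blocks if no permit is left
            try {
                // ... guarded access to shared data ...
            } finally {
                s.release(); // semSignal(s) / V(s): increments; may wake a waiting task
            }
        }
    }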
Mutual Exclusion
• A mutex is a binary semaphore
Re-visiting the “Simultaneous Balance Update” Problem

Shared data:
    int balance;      // initially 100
    semaphore m;      // initially m = 1

Process A:
    ...
    P(m);
    balance = balance - 100;
    V(m);
    ...

Process B:
    ...
    P(m);
    balance = balance + 200;
    V(m);
    ...
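The same fix rendered in runnable Java (a sketch with our names, using a binary java.util.concurrent.Semaphore as m): whichever thread passes P(m) first finishes its entire load-modify-store before the other starts, so the result is always 200.

    import java.util.concurrent.Semaphore;

    public class BalanceFixed {
        static int balance = 100;
        static final Semaphore m = new Semaphore(1);   // binary semaphore: the mutex

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> update(-100));
            Thread b = new Thread(() -> update(+200));
            a.start(); b.start();
            a.join();  b.join();
            System.out.println("balance = " + balance);  // always 200 now
        }

        static void update(int delta) {
            try {
                m.acquire();                     // P(m)
                try {
                    balance = balance + delta;   // critical section
                } finally {
                    m.release();                 // V(m)
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }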
Semantics of P and V
• The value of the semaphore s is the number of units of the resource that are currently available
• The P operation busy-waits or sleeps until a resource protected by the semaphore becomes available, at which time the resource is immediately claimed
• The V operation is the inverse: it makes a resource available again after the process has finished using it
Semantic Meaning of Semaphore/Mutex
• Let's denote the value of the semaphore lock as s.count
• If s.count >= 0:
  • s.count is the number of processes that can execute semWait(s) without suspension
• If s.count < 0:
  • The magnitude of s.count is the number of processes suspended in the semaphore's queue
A Synchronization Example: The Producer and Consumer Problem
• Two processes:
  • Producer: produces some numbers
  • Consumer: consumes the numbers produced by the producer
• They share the same buffer
• Tasks:
  • The producer generates a list of numbers
  • The consumer prints out those numbers
Producer and Consumer Problem
http://www.ccs.neu.edu/home/kenb/synchronize.html
Producer-Consumer Problem

Shared data:
    semaphore full, empty;   // counting semaphores
    semaphore mutex;         // binary semaphore

Initially:
    full = 0;    /* The number of elements in the buffer */
    empty = n;   /* The number of empty slots in the buffer */
    mutex = 1;   /* The mutex controls access to the buffer */
Producer & Consumer Processes

Producer:
    do {
        ...
        produce an item in p
        ...
        P(empty);
        P(mutex);
        ...
        add p to the buffer
        ...
        V(mutex);
        V(full);
    } while (1);

Consumer:
    do {
        P(full);
        P(mutex);
        ...
        remove an item from the buffer into c
        ...
        V(mutex);
        V(empty);
        ...
        consume the item in c
        ...
    } while (1);
Readers-Writers Problem
• A data object (e.g., a file) is to be shared among several concurrent processes
• Multiple reader processes may access the shared data simultaneously without a problem
• A writer process must have exclusive access to the data object
• There are several variations on this general problem
Readers-Writers Problem (continued)

Shared data:
    semaphore mutex, wrt;
    int readcount;

Initially:
    mutex = 1; readcount = 0; wrt = 1;
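One standard ("first readers-writers") solution built from exactly these shared variables, sketched in Java (the class and method names are ours):

    import java.util.concurrent.Semaphore;

    // First readers-writers solution: readers may overlap; a writer
    // needs exclusive access. The first reader locks out writers,
    // the last reader lets them back in. (Writers can starve.)
    public class ReadersWriters {
        static final Semaphore mutex = new Semaphore(1); // guards readcount
        static final Semaphore wrt   = new Semaphore(1); // writer exclusion
        static int readcount = 0;

        static void reader() throws InterruptedException {
            mutex.acquire();
            readcount++;
            if (readcount == 1) wrt.acquire();  // first reader blocks writers
            mutex.release();

            // ... read the shared data object ...

            mutex.acquire();
            readcount--;
            if (readcount == 0) wrt.release();  // last reader admits writers
            mutex.release();
        }

        static void writer() throws InterruptedException {
            wrt.acquire();
            // ... write the shared data object (exclusive access) ...
            wrt.release();
        }
    }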