Chapter 10: Concurrency
Contents • Concept of concurrency • Subprogram-level concurrency • Semaphores • Monitors • Message passing • Java threads
10.1: Concept of Concurrency • Concurrency is divided into instruction level, statement level, unit level, and program level. • Concurrent execution of program units can occur either physically on separate processors or logically, in some time-sliced fashion, on a single-processor computer system.
Multiprocessor Architecture • Late 1950s: the first computers with multiple processors appeared, with one general-purpose processor and one or more other processors used for input and output operations. • Early 1960s: machines that had multiple complete processors. • Mid 1960s: machines with several identical partial processors that were fed certain instructions from a single instruction stream. • SIMD (Single-Instruction Multiple-Data) • MIMD (Multiple-Instruction Multiple-Data) • Distributed • Shared memory
Why Study Concurrency? • It provides a method of conceptualizing program solutions to problems. • Multiprocessor computers are now widely used, creating the need for software that makes effective use of that hardware capability.
10.2: Subprogram-Level Concurrency • A task is a unit of a program that can be in concurrent execution with other units of the same program. • Each task in a program can provide one thread of control. • A task can communicate with other tasks through shared nonlocal variables, through message passing, or through parameters.
Synchronization • A mechanism to control the order in which tasks execute. • Cooperation synchronization is required between task A and task B when task A must wait for task B to complete some specific activity before task A can continue its execution. • Producer-consumer problem • Competition synchronization is required between two tasks when both require the use of some resource that cannot be simultaneously used.
The Need for Competition Synchronization
Critical Section • A segment of code in which the thread may be changing common variables, updating a table, writing a file, and so on. • The execution of critical sections by the threads is mutually exclusive in time.
Task States • New: it has been created, but has not yet begun its execution. • Runnable or ready: it is ready to run, but is not currently running. • Running: it is currently executing; it has a processor and its code is being executed. • Blocked: it has been running, but its execution was interrupted by one of several different events. • Dead: no longer active in any sense.
10.3: Semaphores • The name comes from signaling: a system of sending messages by holding the arms or two flags in certain positions according to an alphabetic code. • A semaphore is a data structure consisting of an integer and a queue that stores task descriptors. • A semaphore is used to provide limited access to a data structure. • P (proberen, meaning to test and decrement the integer) and V (verhogen, meaning to increment) operations
Semaphores (II) • Operating systems often distinguish between counting and binary semaphores. • The value of a counting semaphore can range over an unrestricted domain. • The value of a binary semaphore can range only between 0 and 1. • The general strategy for using a binary semaphore to control access to a critical section is as follows: Semaphore S; P(S); critical_section(); V(S);
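As a concrete illustration of this strategy, here is a minimal sketch using Java's java.util.concurrent.Semaphore, whose acquire() and release() play the roles of P and V. The class name and the shared counter are illustrative, not taken from the course demos.

    import java.util.concurrent.Semaphore;

    public class CriticalSectionDemo {
        // Binary semaphore: one permit, so at most one thread is in the critical section.
        private static final Semaphore s = new Semaphore(1);
        private static int sharedCounter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable worker = () -> {
                for (int i = 0; i < 1000; i++) {
                    try {
                        s.acquire();               // P(S): wait for the permit, then take it
                        try {
                            sharedCounter++;       // critical section
                        } finally {
                            s.release();           // V(S): return the permit, possibly waking a waiter
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            };
            Thread t1 = new Thread(worker);
            Thread t2 = new Thread(worker);
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("Counter = " + sharedCounter);  // always 2000 with the semaphore in place
        }
    }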
Cooperation Synchronization
Producer and Consumer
Competition Synchronization
Competition Synchronization (II)
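The code from the cooperation and competition synchronization slides above is not reproduced in this text, so the following is a minimal sketch of the classic producer-consumer scheme using three Java semaphores: two counting semaphores (emptySpots, fullSpots) for cooperation synchronization and one binary semaphore (access) for competition synchronization. All names and sizes are illustrative.

    import java.util.LinkedList;
    import java.util.Queue;
    import java.util.concurrent.Semaphore;

    public class BoundedBuffer {
        private static final int SIZE = 5;
        private static final Queue<Integer> buffer = new LinkedList<>();
        private static final Semaphore emptySpots = new Semaphore(SIZE); // cooperation: producer waits when full
        private static final Semaphore fullSpots  = new Semaphore(0);    // cooperation: consumer waits when empty
        private static final Semaphore access     = new Semaphore(1);    // competition: mutual exclusion on the buffer

        public static void main(String[] args) {
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 20; i++) {
                        emptySpots.acquire();      // wait for a free slot
                        access.acquire();
                        buffer.add(i);             // deposit a value
                        access.release();
                        fullSpots.release();       // signal that a value is available
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 20; i++) {
                        fullSpots.acquire();       // wait for a value
                        access.acquire();
                        System.out.println("Consumed " + buffer.remove());  // fetch a value
                        access.release();
                        emptySpots.release();      // signal that a slot is free
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
            producer.start();
            consumer.start();
        }
    }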
Example: Semaphores in C (sema_example.cpp) • int sema_init(sema_t *sp, unsigned int count, int type, void *arg); • int sema_destroy(sema_t *sp); • int sema_wait(sema_t *sp); • int sema_trywait(sema_t *sp); • int sema_post(sema_t *sp);
Deadlocks • A law passed by the Kansas legislature early in the 20th century: "…… When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone. ……"
Deadlock
Conditions for Deadlock • Mutual exclusion: the act of allowing only one process to have access to a dedicated resource. • Hold-and-wait: there must exist a process that is holding at least one resource and is waiting to acquire additional resources that are currently being held by other processes. • No preemption: resources cannot be temporarily reallocated; they can be released only voluntarily by the process holding them. • Circular waiting: a result of the above three conditions; each process involved in the impasse is waiting for another to voluntarily release a resource.
Deadlock Example
Deadlock Example (cont'd)
Deadlock Example (cont'd)
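The deadlock example slides above are not reproduced in this text. The following minimal Java sketch (resource and thread names are illustrative) exhibits all four conditions: each thread holds one lock and waits for the other's.

    public class DeadlockDemo {
        private static final Object resourceA = new Object();
        private static final Object resourceB = new Object();

        public static void main(String[] args) {
            Thread t1 = new Thread(() -> {
                synchronized (resourceA) {                 // hold A ...
                    pause(100);
                    synchronized (resourceB) {             // ... and wait for B
                        System.out.println("t1 has A and B");
                    }
                }
            });
            Thread t2 = new Thread(() -> {
                synchronized (resourceB) {                 // hold B ...
                    pause(100);
                    synchronized (resourceA) {             // ... and wait for A
                        System.out.println("t2 has B and A");
                    }
                }
            });
            t1.start();
            t2.start();   // with the pauses, the two threads almost always deadlock
        }

        private static void pause(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

Making both threads acquire the locks in the same order removes the circular-wait condition and is one example of the prevention strategy discussed on the next slide.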
Strategies for Handling Deadlocks • Prevention: eliminate one of the necessary conditions. • Avoidance: possible if the system knows ahead of time the sequence of resource requests associated with each active process. • Detection: build directed resource-allocation graphs and look for cycles. • Recovery: once a deadlock is detected, it must be untangled and the system returned to normal as quickly as possible. • Process termination • Resource preemption
The Dining-Philosophers Problem
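The figure for this slide is not included in the text. As a rough sketch (class and method names are illustrative), the Java version below avoids deadlock by breaking the circular-wait condition: every philosopher picks up the lower-numbered fork first.

    public class DiningPhilosophers {
        private static final int N = 5;
        private static final Object[] forks = new Object[N];

        public static void main(String[] args) {
            for (int i = 0; i < N; i++) forks[i] = new Object();
            for (int i = 0; i < N; i++) {
                final int id = i;
                // Break the circular-wait condition: always pick up the lower-numbered fork first.
                final Object first  = forks[Math.min(id, (id + 1) % N)];
                final Object second = forks[Math.max(id, (id + 1) % N)];
                new Thread(() -> {
                    while (true) {
                        pause();                                 // think for a while
                        synchronized (first) {
                            synchronized (second) {
                                System.out.println("Philosopher " + id + " is eating");
                            }
                        }
                    }
                }).start();
            }
        }

        private static void pause() {
            try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }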
10.4: Monitor • A monitor presents a set of programmer-defined operations that provide mutual exclusion within the monitor. • The internal implementation of a monitor type cannot be accessed directly by the various threads. • A procedure defined within a monitor can access only those variables that are declared locally within the monitor and any formal parameters that are passed to its procedures.
Monitor (II) • The monitor construct prohibits concurrent access to all procedures defined within the monitor. • Only one thread or process can be active within the monitor at any one time. • The programmer does not need to code this (competition) synchronization explicitly; it is built into the monitor type.
Competition Synchronization • One of the most important features of monitors is that shared data is resident in the monitor rather than in any of the client units. • The programmer does not synchronize mutually exclusive access to shared data through the use of semaphores or other mechanisms. • Because all accesses are resident in the monitor, the monitor implementation can guarantee synchronized access by simply allowing only one access at a time. • Calls to monitor procedures are implicitly queued if the monitor is busy at the time of the call.
Cooperation Synchronization • Although mutually exclusive access to shared data is intrinsic to a monitor, cooperation between processes is still the task of the programmer. • The programmer must guarantee that a shared buffer does not experience underflow or overflow.
Example of Using Monitor
Example: A Shared Buffer
Example: A Shared Buffer (II)
Example: A Shared Buffer (III)
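The shared-buffer example slides above are not reproduced in this text. A minimal Java approximation of a monitor-based shared (circular) buffer is sketched below: synchronized methods provide competition synchronization, and wait/notifyAll provide cooperation synchronization. The class, field, and method names are illustrative.

    public class SharedBuffer {
        private final int[] buf;
        private int in = 0, out = 0, count = 0;     // circular-queue indices and fill count

        public SharedBuffer(int size) { buf = new int[size]; }

        // synchronized gives mutual exclusion (competition synchronization)
        public synchronized void deposit(int value) throws InterruptedException {
            while (count == buf.length) wait();     // cooperation: wait while the buffer is full
            buf[in] = value;
            in = (in + 1) % buf.length;
            count++;
            notifyAll();                            // wake any consumer waiting for data
        }

        public synchronized int fetch() throws InterruptedException {
            while (count == 0) wait();              // cooperation: wait while the buffer is empty
            int value = buf[out];
            out = (out + 1) % buf.length;
            count--;
            notifyAll();                            // wake any producer waiting for space
            return value;
        }
    }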
10.5: Message Passing • Message passing means that one process sends a message to another process and then continues its local processing. • The message may take some time to reach the other process, and may be stored in the input queue of the destination process if the latter is not immediately ready to receive it. • The message is then received by the destination process when it reaches a point in its local processing where it is ready to receive messages. • This is called asynchronous message passing (because sending and receiving do not happen at the same time).
Asynchronous Message Passing • Blocking send and receive operations: • A receiver will be blocked if it arrives at the point where it may receive messages and no message is waiting. • A sender may be blocked if there is no room in the message queue between the sender and the receiver; however, one often assumes arbitrarily long queues, which means the sender will never be blocked. • Non-blocking send and receive operations: • Send and receive operations always return immediately, returning a status value that may indicate that no message has arrived at the receiver. • The receiver may test whether a message is waiting and possibly do some other processing.
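One way to illustrate these options in Java (not part of the original slides) is with a BlockingQueue acting as the message queue between sender and receiver: put and take are the blocking operations, while offer and poll are the non-blocking ones.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class AsyncMessaging {
        public static void main(String[] args) {
            // Bounded queue: the sender blocks only if the queue is full.
            BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

            Thread sender = new Thread(() -> {
                try {
                    mailbox.put("hello");                   // blocking send (blocks only when the queue is full)
                    boolean ok = mailbox.offer("world");    // non-blocking send: false if the queue is full
                    System.out.println("offer accepted: " + ok);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            Thread receiver = new Thread(() -> {
                try {
                    String m1 = mailbox.take();             // blocking receive: waits until a message arrives
                    String m2 = mailbox.poll();             // non-blocking receive: null if nothing is waiting
                    System.out.println("received: " + m1 + ", " + m2);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            sender.start();
            receiver.start();
        }
    }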
Synchronous Message Passing • One assumes that sending and receiving take place at the same time (there is no need for an intermediate buffer). • This is also called rendezvous and implies closer synchronization: the combined send-and-receive operation can occur only when both parties (the sending and receiving processes) are ready to do their part. • The sending process may have to wait for the receiving process, or the receiving process may have to wait for the sending one.
Rendezvous
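The rendezvous figure is not included in the text. A rough Java illustration (not from the original slides) uses SynchronousQueue, which has no internal buffer, so a put completes only when a take is ready, and vice versa.

    import java.util.concurrent.SynchronousQueue;

    public class RendezvousDemo {
        public static void main(String[] args) {
            SynchronousQueue<String> channel = new SynchronousQueue<>();  // no buffer at all

            new Thread(() -> {
                try {
                    System.out.println("sender: waiting for the receiver...");
                    channel.put("data");                  // completes only when a take() is in progress
                    System.out.println("sender: handed over");
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();

            new Thread(() -> {
                try {
                    Thread.sleep(500);                    // the sender must wait for this receiver
                    String msg = channel.take();          // the rendezvous happens here
                    System.out.println("receiver: got " + msg);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();
        }
    }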
10.6: Java Threads • The Thread class • Priorities • Competition synchronization • Cooperation synchronization • Example
Multiple Threads • A thread is a flow of execution, with a beginning and an end, of a task in a program. • With Java, multiple threads from a program can be launched concurrently. • Multiple threads can be executed on multiprocessor systems as well as on single-processor systems. • Multithreading can make a program more responsive and interactive, as well as enhance performance.
Demo - thread: Creating threads by extending the Thread class
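The demo code itself is not included in this text; the following is a minimal sketch of the first approach, with an illustrative class name.

    public class PrintThread extends Thread {
        private final String message;

        public PrintThread(String message) { this.message = message; }

        @Override
        public void run() {                        // the thread's body
            for (int i = 0; i < 5; i++) {
                System.out.println(message + " " + i);
            }
        }

        public static void main(String[] args) {
            new PrintThread("A").start();          // start(), not run(), launches a new thread
            new PrintThread("B").start();
        }
    }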
Demo - runnable: Creating threads by implementing the Runnable interface
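Again, the demo code is not included; here is a minimal sketch of the second approach, with illustrative names.

    public class PrintTask implements Runnable {
        private final String message;

        public PrintTask(String message) { this.message = message; }

        @Override
        public void run() {
            for (int i = 0; i < 5; i++) {
                System.out.println(message + " " + i);
            }
        }

        public static void main(String[] args) {
            // The task is wrapped in a Thread object; the class is left free to extend something else.
            new Thread(new PrintTask("A")).start();
            new Thread(new PrintTask("B")).start();
        }
    }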
A Circular Queue
Java Semaphore Implementation
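The implementation from this slide is not reproduced in the text. A minimal sketch of a counting semaphore built from synchronized, wait, and notify (the pattern described under Cooperation Synchronization below) might look like this; the class name is illustrative.

    public class CountingSemaphore {
        private int value;                          // number of available permits

        public CountingSemaphore(int initial) { value = initial; }

        // P operation: wait until a permit is available, then take it.
        public synchronized void acquire() throws InterruptedException {
            while (value == 0) wait();              // wait() is called in a loop that rechecks the condition
            value--;
        }

        // V operation: return a permit and wake one waiting thread.
        public synchronized void release() {
            value++;
            notify();
        }
    }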
Thread States
Thread Groups • A thread group is a set of threads. • Some programs contain quite a few threads with similar functionality. • We can group them together and perform operations on the entire group. • E.g., we can suspend or resume all of the threads in a group at the same time.
Using Thread Groups • Construct a thread group: ThreadGroup g = new ThreadGroup("thread group"); • Place a thread in a thread group: Thread t = new Thread(g, new ThreadClass(), "this thread"); • Find out how many threads in a group are currently running: System.out.println("the number of runnable threads in the group " + g.activeCount()); • Find which group a thread belongs to: theGroup = myThread.getThreadGroup();
Synchronization • Example: showing a resource conflict • This program demonstrates the problem of resource conflicts. Suppose that you create and launch 100 threads, each of which adds a penny to a piggy bank. Assume that the piggy bank is initially empty. You create a class named PiggyBank to model the piggy bank, a class named AddPennyThread to add a penny to the piggy bank, and a main class that creates and launches the threads. • Demo - PiggyBank
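The PiggyBank demo itself is not included in this text. The sketch below follows the description above (100 threads each adding a penny) with slightly adapted names; it typically prints a balance below 100 because updates are lost to the race.

    public class PiggyBank {
        private int balance = 0;

        public void addPenny() {
            int newBalance = balance + 1;   // read the current balance
            Thread.yield();                 // widen the window so the race is easy to observe
            balance = newBalance;           // write back: may overwrite another thread's update
        }

        public int getBalance() { return balance; }

        public static void main(String[] args) throws InterruptedException {
            PiggyBank bank = new PiggyBank();
            Thread[] threads = new Thread[100];
            for (int i = 0; i < 100; i++) {
                threads[i] = new Thread(bank::addPenny);   // each thread adds one penny
                threads[i].start();
            }
            for (Thread t : threads) t.join();
            System.out.println("Balance = " + bank.getBalance());  // often less than 100
        }
    }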
Competition Synchronization • Avoiding resource conflicts using a synchronized method • Keyword: synchronized • Demo - synchronized • Avoiding resource conflicts using synchronized statements • Demo - sync_statement
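A minimal sketch of the two fixes, assuming the PiggyBank sketch above: either declare the method synchronized, or wrap only the update in a synchronized statement.

    public class SafePiggyBank {
        private int balance = 0;
        private final Object lock = new Object();

        // Fix 1: a synchronized method locks the object for the whole call.
        public synchronized void addPennySyncMethod() {
            balance++;
        }

        // Fix 2: a synchronized statement locks only the critical section.
        public void addPennySyncStatement() {
            synchronized (lock) {
                balance++;
            }
        }

        public int getBalance() { return balance; }
    }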
Cooperation Synchronization • Cooperation synchronization in Java is accomplished by using the wait and notify methods, which are defined in Object, the root class of all Java classes. • The wait method is placed in a loop that tests the condition for legal access; if the condition is false, the thread is put in a queue to wait. • The notify method is called to tell one waiting thread that the thing it was waiting for has happened. • wait and notify can only be called from within a synchronized method or block.
Priorities • The priorities of threads need not all be the same. • If main creates a thread, its default priority is NORM_PRIORITY (5). • MAX_PRIORITY (10) and MIN_PRIORITY (1) • setPriority(int): change the priority of this thread • getPriority(): return this thread's priority • Demo - TestThread_Priority
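The TestThread_Priority demo is not included in this text; the following is a small illustrative sketch of the priority API listed above.

    public class PriorityDemo {
        public static void main(String[] args) {
            Runnable task = () -> System.out.println(
                    Thread.currentThread().getName() + " priority = "
                    + Thread.currentThread().getPriority());

            Thread low  = new Thread(task, "low");
            Thread high = new Thread(task, "high");

            low.setPriority(Thread.MIN_PRIORITY);    // 1
            high.setPriority(Thread.MAX_PRIORITY);   // 10
            // A new thread otherwise inherits its creator's priority (NORM_PRIORITY, 5, for main).

            low.start();
            high.start();
        }
    }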