Concurrent Programming
Concurrency

• Concurrency means that a program has multiple paths of execution running at (almost) the same time. Examples:
  • A web server can handle connections from several clients while still listening for new connections.
  • A multi-player game may allow several players to move things around the screen at the same time.

How can we do this? How do the tasks communicate with each other?
Performing Concurrency

A computer can implement concurrency by...
• parallel execution: run tasks on different CPUs in the same computer (requires a multi-processor machine)
• time-sharing: divide CPU time into slices, and interleave several tasks on the same CPU (a small Java sketch follows this slide)
• distributed computing: use the CPUs of several computers in a cluster to run different tasks

(Diagram: tasks scheduled across CPUs and time slices.)
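As a rough illustration of the first two approaches (not from the slides), the sketch below submits more tasks than there are CPUs to a fixed-size thread pool; the operating system runs them in parallel on the available processors and time-slices the rest. The class and task names are made up for the example.

import java.util.concurrent.*;

public class ParallelDemo {
    public static void main(String[] args) throws Exception {
        // Pool sized to the number of available CPUs; the OS scheduler
        // runs tasks in parallel and time-slices when tasks outnumber CPUs.
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        for (int i = 0; i < 2 * cpus; i++) {
            final int id = i;
            pool.submit(() ->
                System.out.println("task " + id + " ran on " +
                                   Thread.currentThread().getName()));
        }
        pool.shutdown();                              // no more tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for them to finish
    }
}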
Design for Concurrency

If we have multiple processes or threads of execution for a single job, we must decide issues of...
• Allocating the CPU: previous slide
• Memory: do all tasks share the same memory? Or separate memory?
• Communication: how can tasks communicate?
• Control (see the sketch below):
  • how do we start a task?
  • how do we stop a task?
  • how do we wait for a task?
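A minimal Java sketch of the control questions, assuming the standard Thread API: start( ) starts a task, interrupt( ) asks it to stop, and join( ) waits for it to finish.

public class ControlDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Thread.sleep(100);      // simulated work
                }
            } catch (InterruptedException e) {
                // interrupted while sleeping: treat it as a stop request
            }
        });
        worker.start();       // start the task
        Thread.sleep(500);    // let it run for a while
        worker.interrupt();   // ask the task to stop
        worker.join();        // wait for the task to finish
    }
}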
Memory

• shared memory:
  • if one thread changes memory, it affects the others.
  • problem: how do you share the stack?
• separate memory:
  • each thread has its own memory area
• combination:
  • each thread has a separate stack area
  • they share a static area, may share the current environment, and may compete for the same heap (see the sketch below).
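A small Java sketch (not from the slides) of the combination model: local variables live on each thread's own stack, while a static field is shared by all threads. The unsynchronized shared counter also hints at the race-condition problem discussed later.

public class MemoryDemo {
    static int shared = 0;                      // static area: visible to all threads

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            int local = 0;                      // stack: private to each thread
            for (int i = 0; i < 1000; i++) {
                local++;
                shared++;                       // unsynchronized: updates may be lost
            }
            System.out.println("local  = " + local);   // always 1000
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();  t2.start();
        t1.join();   t2.join();
        System.out.println("shared = " + shared);      // often less than 2000
    }
}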
Shared Memory and Environment

/* vfork( ) creates a new process that shares the parent's memory */
pid = vfork( );
if ( pid == 0 ) {        /* child process, pid = 0 */
    task1( );
    task3( );
} else {                 /* parent process, pid = child's pid */
    task2( );
}

(Stack diagram: main's frame, task1( )'s frame, SP, then free space.)

Tasks don't share registers. After the child calls task1( ), where is the Stack Pointer (SP) of the child process? Where will the parent's task2( ) frame be placed?
Processes and Memory

• Heavy-weight processes: each process gets its own memory area and its own environment.
  • A Unix "process" fits this model.
• Light-weight processes: processes share the same memory area, but each has its own context and maybe its own stack.
  • "Threads" in Java, C, and C# fit this model.
Heavy-weight Processes

• UNIX fork( ) system call: the child process gets a copy of the parent's memory pages.
Example: Web Server

/* bind to port 80 */
bind( socket, &address, sizeof(address) );

while ( 1 ) {    /* run forever */
    /* wait for a client to connect */
    client = accept( socket, &clientAddress, &len );
    /* fork a new process to handle the client */
    pid = fork( );
    if ( pid == 0 )
        handleClient( client, clientAddress );
}

The server forks a new process to handle each client, so the server can keep listening for more connections.
Example: fork and wait for child

pid = fork( );
if ( pid == 0 )
    childProcess( );
else {
    wait( &status );    /* wait for the child to exit */
}

wait( ) causes the process to wait for a child to exit.
Threads: light-weight processes

• Threads share a memory area.
• They conserve resources and allow better communication between tasks.

task1 = new Calculator( );    // Calculator implements Runnable
task2 = new AlarmClock( );    // AlarmClock implements Runnable
Thread thread1 = new Thread( task1 );
Thread thread2 = new Thread( task2 );
thread1.start( );
thread2.start( );
Stack Management for Threads

• In some implementations (like C) threads share the same memory, but each requires its own stack space.
• Each thread must be able to call functions independently.

Cactus Stack: dynamic and static links can refer to the parent's stack.
(Diagram: main's stack at the base, with separate stack areas for thread1 through thread5 branching off it.)
Communication between Tasks

• Reading and writing to a shared buffer.
  • Producer-consumer model (see the Java Tutorial).
• Using an I/O channel called a pipe.
• Signaling: exceptions or interrupts.

pin = new PipedInputStream( );
pout = new PipedOutputStream( pin );    // pout is connected to pin
task1 = new ReaderTask( pin );
task2 = new WriterTask( pout );
Thread thread1 = new Thread( task1 );
Thread thread2 = new Thread( task2 );
thread1.start( );
thread2.start( );

(Diagram: task2 writes into the pipe, task1 reads from it.)
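ReaderTask and WriterTask are not shown on the slide; here is a hedged sketch of what they might look like (these class bodies are an assumption, not the original code):

import java.io.*;

// Hypothetical WriterTask: writes lines into its end of the pipe.
class WriterTask implements Runnable {
    private final PipedOutputStream out;
    WriterTask(PipedOutputStream out) { this.out = out; }
    public void run() {
        try (PrintWriter w = new PrintWriter(out, true)) {
            for (int i = 0; i < 5; i++) w.println("message " + i);
        }   // closing the stream signals end-of-data to the reader
    }
}

// Hypothetical ReaderTask: reads lines from its end of the pipe.
class ReaderTask implements Runnable {
    private final PipedInputStream in;
    ReaderTask(PipedInputStream in) { this.in = in; }
    public void run() {
        try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = r.readLine()) != null) System.out.println("got: " + line);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}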
Thread Coordination

(Diagram: thread1 and thread2 alternate between processing and sleeping; each calls wait( ) to sleep and notify( ) to wake the other.)

• wait( ) puts a thread to sleep until another thread calls notify( ).
• yield( ) gives other threads a chance to use the CPU.
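A minimal sketch of the wait( )/notify( ) handshake, assuming the standard Java idiom of a shared lock object and a condition flag checked in a loop:

public class Coordination {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {                 // guard against spurious wakeups
                    try { lock.wait(); }         // sleep until notified
                    catch (InterruptedException e) { return; }
                }
            }
            System.out.println("notified: ready");
        });
        Thread notifier = new Thread(() -> {
            synchronized (lock) {
                ready = true;
                lock.notify();                   // wake the waiting thread
            }
        });
        waiter.start();
        notifier.start();
    }
}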
Critical Code: avoiding race conditions

Example: one thread pushes data onto a stack, another thread pops data off the stack.
Problem: you may have a race condition where one thread starts to pop data off the stack, but that thread is interrupted (by the scheduler) and the other thread pushes data onto the stack.

thread1:
push( "problem?" ) {
    n = top;
    stack[n] = "problem?";
    top = n + 1;
}

thread2:
pop( ) {
    return stack[--top];
}

(Diagram: both threads race on the shared top index and the same stack slot.)
Exclusive Access to Critical Code

• programmer control: use a shared flag variable or semaphore to indicate when the critical block is free (see the semaphore sketch after this slide)
• executor control: use the synchronization features of the language to restrict access to critical code

public synchronized void push( Object value ) {
    if ( top < stack.length )
        stack[top++] = value;
}

public synchronized Object pop( ) {
    if ( top > 0 )
        return stack[--top];
    return null;    // stack is empty
}
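For the "programmer control" option, a hedged sketch of the same stack guarded by a java.util.concurrent.Semaphore instead of synchronized (the class name and capacity are made up for the example):

import java.util.concurrent.Semaphore;

class SemaphoreStack {
    private final Object[] stack = new Object[100];
    private int top = 0;
    private final Semaphore mutex = new Semaphore(1);   // binary semaphore guards the critical code

    public void push(Object value) throws InterruptedException {
        mutex.acquire();              // wait until the critical section is free
        try {
            if (top < stack.length) stack[top++] = value;
        } finally {
            mutex.release();          // always release, even on an error
        }
    }

    public Object pop() throws InterruptedException {
        mutex.acquire();
        try {
            return (top > 0) ? stack[--top] : null;
        } finally {
            mutex.release();
        }
    }
}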
Avoiding Deadlock

• Deadlock: when two or more tasks are each waiting for the other to release a required resource.
• The program waits forever (see the sketch below).
• Rule for Avoiding Deadlock: exercise for students
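A minimal Java sketch (not from the slides) showing how a deadlock can arise: two threads acquire the same two locks in opposite order, so each ends up waiting for the lock the other one holds.

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                pause(100);                    // give t2 time to grab lockB
                synchronized (lockB) {         // blocks forever: t2 holds lockB
                    System.out.println("t1 done");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                pause(100);
                synchronized (lockA) {         // blocks forever: t1 holds lockA
                    System.out.println("t2 done");
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { }
    }
}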
Design Patterns and Threads

Observer Pattern: one task is a source of events that other tasks are interested in. Each task wants to be notified when an interesting event occurs.
Solution: wrap the source task in an Observable object. Other tasks register with the Observable as observers. The Observable task calls notifyObservers( ) when an interesting event occurs.
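A hedged sketch using the java.util.Observable / Observer API named above (deprecated in recent Java versions, but it illustrates the pattern); EventSource is a hypothetical wrapper class:

import java.util.Observable;
import java.util.Observer;

// Hypothetical wrapper around an event source.
class EventSource extends Observable {
    public void somethingHappened(String what) {
        setChanged();              // mark that our state changed
        notifyObservers(what);     // push the event to all registered observers
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        EventSource source = new EventSource();
        Observer listener = (Observable o, Object event) ->
                System.out.println("observed: " + event);
        source.addObserver(listener);                   // register an interested task
        source.somethingHappened("interesting event");
    }
}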
Simple Producer-Consumer Cooperation Using Semaphores Figure 11.2
Multiple Producers-Consumers Figure 11.3
Producer-Consumer Monitor Figure 11.4
States of a Java Thread Figure 11.5
Ball Class Figure 11.6
Initial Application Class Figure 11.7
Final Bouncing Balls init Method Figure 11.8
Final Bouncing Balls paint Method Figure 11.9
Bouncing Balls Mouse Handler Figure 11.10
Bouncing Balls Mouse Handler Figure 11.11
Buffer Class Figure 11.12
Producer Class Figure 11.13
Consumer Class Figure 11.14
Bounded Buffer Class Figure 11.15
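The buffer figures themselves are not reproduced here. As a rough stand-in (not the textbook's Figure 11.15), a minimal monitor-style bounded buffer sketch; the capacity and element type are assumptions:

class BoundedBuffer {
    private final int[] items = new int[10];
    private int count = 0, in = 0, out = 0;

    public synchronized void put(int value) throws InterruptedException {
        while (count == items.length) wait();    // buffer full: wait for a consumer
        items[in] = value;
        in = (in + 1) % items.length;
        count++;
        notifyAll();                              // wake any waiting consumers
    }

    public synchronized int get() throws InterruptedException {
        while (count == 0) wait();                // buffer empty: wait for a producer
        int value = items[out];
        out = (out + 1) % items.length;
        count--;
        notifyAll();                              // wake any waiting producers
        return value;
    }
}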
Sieve of Eratosthenes Figure 11.16
Test Drive for Sieve of Eratosthenes Figure 11.17
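The sieve figures are not reproduced either. Purely as an illustration of a concurrent sieve (an assumption about the approach, not the textbook's code), the sketch below chains filter threads with blocking queues: each stage takes the first number it sees as a prime, prints it, and forwards everything not divisible by that prime to the next stage.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ConcurrentSieve {
    private static final int END = -1;    // sentinel marking the end of the stream

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> source = new ArrayBlockingQueue<>(16);
        new Thread(() -> sieve(source)).start();
        for (int n = 2; n <= 50; n++) source.put(n);   // feed the candidates
        source.put(END);
    }

    // One pipeline stage: take a prime, print it, filter out its multiples.
    private static void sieve(BlockingQueue<Integer> in) {
        try {
            int prime = in.take();
            if (prime == END) return;
            System.out.println(prime);
            BlockingQueue<Integer> out = new ArrayBlockingQueue<>(16);
            new Thread(() -> sieve(out)).start();      // next stage
            int n;
            while ((n = in.take()) != END) {
                if (n % prime != 0) out.put(n);        // pass survivors along
            }
            out.put(END);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}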