ECE1747 Parallel Programming: Shared Memory Multithreading with Pthreads
Shared Memory • All threads access the same shared memory data space.
[Diagram: processors proc1 through procN all reference a single shared memory address space.]
Shared Memory (continued) • Concretely, it means that a variable x, a pointer p, or an array a[] refer to the same object, no matter which processor the reference originates from. • We have more or less implicitly assumed this to be the case in earlier examples.
Shared Memory
[Diagram: the same array a is accessed by proc1 through procN in the shared address space.]
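As a tiny illustration of this point (the names worker and x are invented for this sketch, not from the lecture): a write to a global in one thread is visible to another thread, because both refer to the same object.

    #include <pthread.h>
    #include <stdio.h>

    int x = 0;                      /* one object, shared by all threads */

    void *worker(void *arg)
    {
        x = 42;                     /* write through the shared address space */
        return NULL;
    }

    int main()
    {
        pthread_t id;
        pthread_create(&id, NULL, worker, NULL);
        pthread_join(id, NULL);     /* join orders the write before the read */
        printf("x = %d\n", x);      /* prints 42: the same x in both threads */
        return 0;
    }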
Distributed Memory - Message Passing • The alternative model to shared memory.
[Diagram: each processor proc1 through procN has its own memory mem1 through memN, each holding its own copy of a; the processors communicate over a network.]
Shared Memory vs. Message Passing • The same terminology is also used to distinguish hardware. • For us: the terms distinguish programming models, not hardware.
Programming vs. Hardware • One can implement a shared memory programming model on shared or distributed memory hardware (in software or in hardware). • One can implement a message passing programming model on shared or distributed memory hardware.
Portability of Programming Models
[Diagram: both shared memory programming and message passing programming can be implemented on a shared memory machine or on a distributed memory machine.]
Shared Memory Programming: Important Point to Remember • No matter what the implementation, it conceptually looks like shared memory. • There may be some (important) performance differences.
Multithreading • The user has explicit control over threads. • Good: that control can be used for performance benefit. • Bad: the user has to deal with it.
Pthreads • POSIX standard shared-memory multithreading interface. • Provides primitives for thread management and synchronization.
What does the user have to do? • Decide how to decompose the computation into parallel parts. • Create (and destroy) threads to support that decomposition. • Add synchronization to make sure dependences are satisfied.
General Thread Structure • Typically, a thread is a concurrent execution of a function or a procedure. • So, your program needs to be restructured such that parallel parts form separate procedures or functions.
Example of Thread Creation
[Diagram: main() calls pthread_create(func); func() begins executing concurrently with main().]
Thread Joining Example

    void *func(void *arg)
    {
        …..
    }

    pthread_t id;
    int X;

    pthread_create(&id, NULL, func, &X);
    …..
    pthread_join(id, NULL);
    …..
Example of Thread Creation (contd.)
[Diagram: main() calls pthread_create(func) and later pthread_join(id); func() runs concurrently and terminates with pthread_exit(); main() continues past the join once func() has exited.]
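Putting the pieces together, a minimal complete program might look as follows (a sketch; passing the argument through an int pointer and the printed message are choices made here for illustration):

    #include <pthread.h>
    #include <stdio.h>

    void *func(void *arg)
    {
        int x = *(int *) arg;            /* read the argument passed by main */
        printf("thread got %d\n", x);
        return NULL;                     /* returning is equivalent to pthread_exit(NULL) */
    }

    int main()
    {
        pthread_t id;
        int X = 7;
        pthread_create(&id, NULL, func, &X);   /* start func(&X) concurrently */
        pthread_join(id, NULL);                /* wait for it to terminate */
        return 0;
    }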
Matrix Multiply

    for( i=0; i<n; i++ )
        for( j=0; j<n; j++ ) {
            c[i][j] = 0.0;
            for( k=0; k<n; k++ )
                c[i][j] += a[i][k]*b[k][j];
        }
Parallel Matrix Multiply • All i- or j-iterations can be run in parallel. • If we have p processors, we give n/p rows to each processor. • This corresponds to partitioning the i-loop.
Matrix Multiply: Parallel Part

    void *mmult(void *s)
    {
        int slice = (int) s;             /* thread index, smuggled through the pointer argument */
        int from = (slice*n)/p;
        int to = ((slice+1)*n)/p;
        int i, j, k;

        for( i=from; i<to; i++ )
            for( j=0; j<n; j++ ) {
                c[i][j] = 0.0;
                for( k=0; k<n; k++ )
                    c[i][j] += a[i][k]*b[k][j];
            }
        return NULL;
    }
Matrix Multiply: Main

    int main()
    {
        pthread_t thrd[p];
        int i;

        for( i=0; i<p; i++ )
            pthread_create(&thrd[i], NULL, mmult, (void *) i);
        for( i=0; i<p; i++ )
            pthread_join(thrd[i], NULL);
        return 0;
    }
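For reference, a self-contained version of the above might look like this (a sketch: the fixed sizes N and P, and the intptr_t cast that makes the slice argument safe on 64-bit platforms, are choices made here, not part of the lecture code):

    #include <pthread.h>
    #include <stdint.h>

    #define N 512                        /* illustrative matrix size */
    #define P 4                          /* illustrative thread count */

    double a[N][N], b[N][N], c[N][N];    /* shared by all threads */

    void *mmult(void *s)
    {
        int slice = (int)(intptr_t) s;
        int from = (slice * N) / P;
        int to = ((slice + 1) * N) / P;
        for (int i = from; i < to; i++)
            for (int j = 0; j < N; j++) {
                c[i][j] = 0.0;
                for (int k = 0; k < N; k++)
                    c[i][j] += a[i][k] * b[k][j];
            }
        return NULL;
    }

    int main(void)
    {
        pthread_t thrd[P];
        for (intptr_t i = 0; i < P; i++)
            pthread_create(&thrd[i], NULL, mmult, (void *) i);
        for (int i = 0; i < P; i++)
            pthread_join(thrd[i], NULL);
        return 0;
    }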
Sequential SOR

    for some number of timesteps/iterations {
        for( i=1; i<n; i++ )
            for( j=1; j<n; j++ )
                temp[i][j] = 0.25 * ( grid[i-1][j] + grid[i+1][j]
                                    + grid[i][j-1] + grid[i][j+1] );
        for( i=1; i<n; i++ )
            for( j=1; j<n; j++ )
                grid[i][j] = temp[i][j];
    }
Parallel SOR • First (i,j) loop nest can be parallelized. • Second (i,j) loop nest can be parallelized. • Must wait to start the second loop nest until all processors have finished the first. • Must wait to start the first loop nest of the next iteration until all processors have finished the second loop nest of the previous iteration. • Give n/p rows to each processor.
Pthreads SOR: Parallel parts (1)

    void *sor_1(void *s)
    {
        int slice = (int) s;
        int from = (slice*n)/p;
        int to = ((slice+1)*n)/p;
        int i, j;

        for( i=from; i<to; i++ )
            for( j=1; j<n; j++ )
                temp[i][j] = 0.25 * ( grid[i-1][j] + grid[i+1][j]
                                    + grid[i][j-1] + grid[i][j+1] );
        return NULL;
    }
Pthreads SOR: Parallel parts (2)

    void *sor_2(void *s)
    {
        int slice = (int) s;
        int from = (slice*n)/p;
        int to = ((slice+1)*n)/p;
        int i, j;

        for( i=from; i<to; i++ )
            for( j=1; j<n; j++ )
                grid[i][j] = temp[i][j];
        return NULL;
    }
Pthreads SOR: main

    for some number of timesteps {
        for( i=0; i<p; i++ )
            pthread_create(&thrd[i], NULL, sor_1, (void *) i);
        for( i=0; i<p; i++ )
            pthread_join(thrd[i], NULL);
        for( i=0; i<p; i++ )
            pthread_create(&thrd[i], NULL, sor_2, (void *) i);
        for( i=0; i<p; i++ )
            pthread_join(thrd[i], NULL);
    }
Summary: Thread Management • pthread_create(): creates a thread executing a given function with given arguments; returns a thread identifier through its first argument. • pthread_exit(): terminates the calling thread. • pthread_join(): waits for the thread with a particular thread identifier to terminate.
Summary: Program Structure • Encapsulate parallel parts in functions. • Use function arguments to parameterize what a particular thread does. • Call pthread_create() with the function and its arguments, and save the thread identifier it returns. • Call pthread_join() with that thread identifier.
Pthreads Synchronization • Create/exit/join provide some form of synchronization, but only at a very coarse level, and require thread creation/destruction. • For finer-grain synchronization we need: mutex locks, condition variables.
Use of Mutex Locks • To implement critical sections (as needed, e.g., in en_queue and de_queue in TSP). • Pthreads mutexes are exclusive locks. • Some other systems provide shared-read, exclusive-write locks.
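As a sketch of the basic mutex calls (the shared counter is just a stand-in for a structure like the TSP queue; the names are illustrative):

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    int counter = 0;                 /* shared state needing mutual exclusion */

    void increment(void)
    {
        pthread_mutex_lock(&lock);   /* at most one thread inside at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }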
Condition Variables (1 of 5)

    pthread_cond_init( pthread_cond_t *cond,
                       const pthread_condattr_t *attr )

• Initializes a new condition variable cond. • Attribute: ignore for now (pass NULL).
Condition Variables (2 of 5)

    pthread_cond_destroy( pthread_cond_t *cond )

• Destroys the condition variable cond.
Condition Variables (3 of 5)

    pthread_cond_wait( pthread_cond_t *cond,
                       pthread_mutex_t *mutex )

• Atomically unlocks the mutex and blocks the calling thread, waiting on cond. • Re-acquires the mutex before returning.
Condition Variables (4 of 5)

    pthread_cond_signal( pthread_cond_t *cond )

• Unblocks one thread waiting on cond. • Which one is determined by the scheduler. • If no thread is waiting, the signal is a no-op.
Condition Variables (5 of 5)

    pthread_cond_broadcast( pthread_cond_t *cond )

• Unblocks all threads waiting on cond. • If no thread is waiting, the broadcast is a no-op.
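In practice these calls are almost always used in one standard pattern: a predicate on shared state, tested under the mutex, with the wait in a loop. A minimal sketch (the names m, cv, and ready are illustrative, not from the lecture):

    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
    int ready = 0;                           /* the shared predicate */

    void consumer(void)
    {
        pthread_mutex_lock(&m);
        while( !ready )                      /* re-test in a loop: wakeups can be spurious */
            pthread_cond_wait(&cv, &m);      /* atomically unlocks m, re-locks on return */
        pthread_mutex_unlock(&m);
    }

    void producer(void)
    {
        pthread_mutex_lock(&m);
        ready = 1;                           /* change the predicate under the mutex ... */
        pthread_cond_signal(&cv);            /* ... then wake a waiter */
        pthread_mutex_unlock(&m);
    }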
Use of Condition Variables • To implement the signal-wait synchronization discussed in earlier examples. • Important note: a signal is "forgotten" if no thread is already waiting when the signal occurs.
Use of Wait/Signal (Pipelining)
[Diagram: sequential vs. parallel execution patterns; each horizontal line represents one processor.]
PIPE

    P1: for( i=0; i<num_pics, read(in_pic); i++ ) {
            int_pic_1[i] = trans1( in_pic );
            signal( event_1_2[i] );
        }

    P2: for( i=0; i<num_pics; i++ ) {
            wait( event_1_2[i] );
            int_pic_2[i] = trans2( int_pic_1[i] );
            signal( event_2_3[i] );
        }
PIPE Using Pthreads • Replacing the original wait/signal with a Pthreads condition variable wait/signal will not work: • signals before a wait are forgotten, • so we need to remember a signal.
How to Remember a Signal (1 of 2)

    void semaphore_signal(int i)
    {
        pthread_mutex_lock(&mutex_rem[i]);
        arrived[i] = 1;                      /* remember the signal */
        pthread_cond_signal(&cond[i]);
        pthread_mutex_unlock(&mutex_rem[i]);
    }
How to Remember a Signal (2 of 2)

    void semaphore_wait(int i)
    {
        pthread_mutex_lock(&mutex_rem[i]);
        while( arrived[i] == 0 ) {           /* while, not if: wakeups can be spurious */
            pthread_cond_wait(&cond[i], &mutex_rem[i]);
        }
        arrived[i] = 0;                      /* consume the signal */
        pthread_mutex_unlock(&mutex_rem[i]);
    }
PIPE with Pthreads

    P1: for( i=0; i<num_pics, read(in_pic); i++ ) {
            int_pic_1[i] = trans1( in_pic );
            semaphore_signal( event_1_2[i] );
        }

    P2: for( i=0; i<num_pics; i++ ) {
            semaphore_wait( event_1_2[i] );
            int_pic_2[i] = trans2( int_pic_1[i] );
            semaphore_signal( event_2_3[i] );
        }
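For completeness, a possible driver wiring the two stages together might look as follows (a sketch: NUM_PICS, and stage1/stage2 as thread functions wrapping the P1/P2 loops above, are assumptions, not code from the lecture):

    #include <pthread.h>

    #define NUM_PICS 100                     /* illustrative */

    pthread_mutex_t mutex_rem[NUM_PICS];
    pthread_cond_t cond[NUM_PICS];
    int arrived[NUM_PICS];                   /* 0 = event not yet signaled */

    void *stage1(void *arg);                 /* assumed: runs the P1 loop above */
    void *stage2(void *arg);                 /* assumed: runs the P2 loop above */

    int main(void)
    {
        pthread_t t1, t2;
        for( int i = 0; i < NUM_PICS; i++ ) {
            pthread_mutex_init(&mutex_rem[i], NULL);
            pthread_cond_init(&cond[i], NULL);
            arrived[i] = 0;
        }
        pthread_create(&t1, NULL, stage1, NULL);
        pthread_create(&t2, NULL, stage2, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }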
Note • Many shared memory programming systems (other than Pthreads) have semaphores as a basic primitive. • If they do, use them rather than constructing your own: the system implementation may be more efficient than what you can do yourself.
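POSIX itself also offers counting semaphores outside the Pthreads interface (semaphore.h); with them, the "remembered signal" comes for free. A sketch, reusing the illustrative NUM_PICS from the previous sketch:

    #include <semaphore.h>

    sem_t event_1_2[NUM_PICS];               /* one counting semaphore per pipeline event */

    void init_events(void)
    {
        for( int i = 0; i < NUM_PICS; i++ )
            sem_init(&event_1_2[i], 0, 0);   /* pshared = 0: shared among this process's
                                                threads; initial count 0 */
    }

    /* P1 would call sem_post(&event_1_2[i]): the post is remembered.
       P2 would call sem_wait(&event_1_2[i]): it blocks only until the count is positive. */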
Parallel TSP

    process i:
        while( (p = de_queue()) != NULL ) {
            for each expansion by one city {
                q = add_city(p);
                if complete(q) { update_best(q); }
                else en_queue(q);
            }
        }
Parallel TSP • Need critical sections: in update_best, in en_queue/de_queue. • In de_queue: wait if the queue is empty; terminate if all processes are waiting. • In en_queue: signal that the queue is no longer empty.
Parallel TSP: Mutual Exclusion

    en_queue() / de_queue() {
        pthread_mutex_lock(&queue);
        …;
        pthread_mutex_unlock(&queue);
    }

    update_best() {
        pthread_mutex_lock(&best);
        …;
        pthread_mutex_unlock(&best);
    }
Parallel TSP: Condition Synchronization

    de_queue() {
        while( (queue is empty) and (not done) ) {
            waiting++;
            if( waiting == p ) {
                done = true;
                pthread_cond_broadcast(&empty);   /* takes only the condition variable */
            } else {
                pthread_cond_wait(&empty, &queue);
                waiting--;
            }
        }
        if( done )
            return NULL;
        else
            remove and return head of the queue;
    }
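The slide shows only de_queue; the en_queue side would signal waiters in the same style, for example (a sketch in the slide's mixed pseudocode, not code from the lecture):

    en_queue(q) {
        pthread_mutex_lock(&queue);
        add q to the queue;
        pthread_cond_signal(&empty);      /* wake one thread waiting in de_queue */
        pthread_mutex_unlock(&queue);
    }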
Recall the Pthreads SOR program shown earlier: every timestep creates p threads twice and joins them twice.
Reality Bites ... • Create/exit/join is not so cheap. • It would be more efficient if we could write a parallel program in which: create/exit/join happen rarely (ideally once), and cheaper synchronization is used in between. • We need something that makes all threads wait until all have arrived: a barrier.
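Pthreads provides such a barrier where the POSIX barrier option is available (pthread_barrier_t). A minimal sketch of SOR restructured around it (NUM_STEPS and the commented phase bodies are placeholders of this sketch, not code from the lecture):

    #include <pthread.h>

    #define NUM_STEPS 100                     /* illustrative */

    pthread_barrier_t barrier;                /* in main, once:
                                                 pthread_barrier_init(&barrier, NULL, p); */

    void *sor_worker(void *s)
    {
        int slice = (int) s;                  /* this thread's strip of rows */
        for( int t = 0; t < NUM_STEPS; t++ ) {
            /* phase 1: compute temp[][] for this slice's rows */
            pthread_barrier_wait(&barrier);   /* wait until every slice finished phase 1 */
            /* phase 2: copy temp[][] back into grid[][] for this slice's rows */
            pthread_barrier_wait(&barrier);   /* wait until every slice finished phase 2 */
        }
        return NULL;
    }

    /* main creates the p workers once, then joins them once at the very end. */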