CS 684
Message Passing • Based on a multi-processor model • A set of independent processors • Connected by some communication network • All communication between processes is done via messages sent from one process to another
MPI • Message Passing Interface • A computation is made up of: • One or more processes • That communicate by calling library routines • MIMD programming model • SPMD is most common
MPI • Processes use point-to-point communication operations • Collective communication operations are also available • Communication can be modularized by the use of communicators • MPI_COMM_WORLD is the base communicator • Communicators are used to identify subsets of processes
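As an illustration (not from the original slides), a sub-communicator can be created with MPI_Comm_split; the even/odd color scheme below is an arbitrary choice for this sketch.

/* Sketch: split MPI_COMM_WORLD into two communicators (even and odd ranks). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, sub_rank, color;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Processes that pass the same "color" end up in the same new communicator */
    color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);
    MPI_Comm_rank(sub_comm, &sub_rank);
    printf("World rank %d has rank %d in sub-communicator %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}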
MPI • Complex, but most problems can be solved using the 6 basic functions. • MPI_Init • MPI_Finalize • MPI_Comm_size • MPI_Comm_rank • MPI_Send • MPI_Recv
MPI Basics • Almost all calls require a communicator handle as an argument • MPI_COMM_WORLD • MPI_Init and MPI_Finalize • don’t require a communicator handle • used to begin and end an MPI program • MUST be called to begin and end
MPI Basics • MPI_Comm_size • determines the number of processes in the communicator group • MPI_Comm_rank • determines the integer identifier assigned to the current process • zero based
MPI Basics

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int iproc, nproc;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &iproc);
    printf("I am processor %d of %d\n", iproc, nproc);
    MPI_Finalize();
    return 0;
}
MPI Communication • MPI_Send • Sends an array of a given type • Requires a destination rank, count, and datatype (plus a tag and communicator) • MPI_Recv • Receives an array of a given type • Same arguments as MPI_Send • Plus one extra parameter: an MPI_Status variable
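A minimal point-to-point sketch (not from the slides) showing the argument order described above; the array size N is chosen only for illustration.

/* Sketch: rank 0 sends an array of ints to rank 1. */
#include <mpi.h>
#include <stdio.h>
#define N 10

int main(int argc, char *argv[])
{
    int rank, data[N], i;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        for (i = 0; i < N; i++) data[i] = i;
        /* destination = 1, tag = 0 */
        MPI_Send(data, N, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* source = 0, tag = 0; status reports the actual source, tag, and count */
        MPI_Recv(data, N, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Rank 1 received %d ... %d\n", data[0], data[N-1]);
    }

    MPI_Finalize();
    return 0;
}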
MPI Basics • Made for both FORTRAN and C • Standards for C • MPI_ prefix to all calls • First letter of function name is capitalized • Returns MPI_SUCCESS or error code • MPI_Status structure • MPI data types for each C type • OUT parameters passed using & operator
Using MPI • Based on rsh or ssh • requires a .rhosts file or ssh key setup • hostname login • Path to compiler (CS open labs) • MPI_HOME /users/faculty/snell/mpich • MPI_CC MPI_HOME/bin/mpicc • use mpcc on marylou10 • Use mpicc on marylou4 • Use cc prog.c -o prog -lmpi on marylou & marylou2
Using MPI • Write program • Compile using mpicc or mpcc • Write process file (linux cluster) • host nprocs full_path_to_prog • 0 for nprocs on first line, 1 for all others • Run program (linux cluster) • prog -p4pg process_file args • mpirun -np #procs -machinefile machines prog • Run program (scheduled on marylou4 using pbs) • mpirun -np #procs -machinefile $PBS_NODEFILE prog
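A hypothetical example of the process file and run commands described above; the host names (node0, node1), paths, and program name are illustrative only.

# process file: host nprocs full_path_to_prog  (0 on the first line, 1 on the rest)
node0 0 /users/me/prog
node1 1 /users/me/prog
node1 1 /users/me/prog

# run with the process file
prog -p4pg process_file args

# or run with a machines file
mpirun -np 3 -machinefile machines prog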
Example • HINT benchmark • Found at /users/faculty/snell/CS584/HINT or ~qos/Hint
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#define MAXSIZE 1000

int main(int argc, char *argv[])
{
    int myid, numprocs;
    int data[MAXSIZE], i, x, low, high, myresult = 0, result;
    char fn[255];
    FILE *fp;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) { /* Open input file and initialize data */
        strcpy(fn, getenv("HOME"));
        strcat(fn, "/MPI/rand_data.txt");
        if ((fp = fopen(fn, "r")) == NULL) {
            printf("Can't open the input file: %s\n\n", fn);
            exit(1);
        }
        for (i = 0; i < MAXSIZE; i++)
            fscanf(fp, "%d", &data[i]);
        fclose(fp);
    }

    /* Broadcast data to all processes */
    MPI_Bcast(data, MAXSIZE, MPI_INT, 0, MPI_COMM_WORLD);

    /* Add my portion of the data */
    x = MAXSIZE / numprocs;
    low = myid * x;
    high = low + x;
    for (i = low; i < high; i++)
        myresult += data[i];
    printf("I got %d from %d\n", myresult, myid);

    /* Compute global sum */
    MPI_Reduce(&myresult, &result, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("The sum is %d.\n", result);

    MPI_Finalize();
    return 0;
}
MPI • Message passing programs are non-deterministic because of concurrency • Consider 2 processes sending messages to a third • MPI only guarantees that 2 messages sent from a single process to another will arrive in order • It is the programmer's responsibility to ensure computation determinism
MPI & Determinism • MPI • A Process may specify the source of the message • A Process may specify the type of message • Non-Determinism • MPI_ANY_SOURCE or MPI_ANY_TAG
Example

for (n = 0; n < nproc/2; n++) {
    MPI_Send(buff, BSIZE, MPI_FLOAT, rnbor, 1, MPI_COMM_WORLD);
    MPI_Recv(buff, BSIZE, MPI_FLOAT, MPI_ANY_SOURCE, 1,
             MPI_COMM_WORLD, &status);
    /* Process the data */
}
Global Operations • Coordinated communication involving multiple processes. • Can be implemented by the programmer using sends and receives • For convenience, MPI provides a suite of collective communication functions. • All participating processes must call the same function.
Collective Communication • Barrier • Synchronize all processes • Broadcast • Send data from one process to all processes • Gather • Gather data from all processes to one process • Scatter • Distribute data from one process across all processes • Reduction • Global sums, products, etc. (a scatter/gather sketch follows)
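An illustrative scatter/gather sketch (not from the slides): the root distributes one integer to each process, each process doubles its value, and the root gathers the results.

/* Sketch: scatter one int per process, double it, gather back at the root. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, nproc, i, mine;
    int *send = NULL, *recv = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    if (rank == 0) {
        send = malloc(nproc * sizeof(int));
        recv = malloc(nproc * sizeof(int));
        for (i = 0; i < nproc; i++) send[i] = i;
    }

    /* Each process receives one element from the root's send buffer */
    MPI_Scatter(send, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    mine *= 2;
    /* The root collects one element from every process */
    MPI_Gather(&mine, 1, MPI_INT, recv, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (i = 0; i < nproc; i++) printf("%d ", recv[i]);
        printf("\n");
        free(send);
        free(recv);
    }
    MPI_Finalize();
    return 0;
}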
Typical pattern in a parallel application: distribute the problem size, distribute the input data, exchange boundary values, find the maximum error, and collect the results.
MPI_Reduce • MPI_Reduce(inbuf, outbuf, count, type, op, root, comm) • Combines values from every process using op; only the root process receives the result
MPI_Allreduce • MPI_Allreduce(inbuf, outbuf, count, type, op, comm) • Same combination, but every process receives the result, so there is no root argument
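A one-line illustration with assumed variable names (local_sum, global_sum): after the call, every process holds the global sum.

/* Every process gets the global sum of local_sum in global_sum. */
MPI_Allreduce(&local_sum, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);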
Other MPI Features • Asynchronous Communication • MPI_Isend • MPI_Wait and MPI_Test • MPI_Probe and MPI_Get_count • Modularity • Communicator creation routines • Derived Datatypes
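A sketch of the asynchronous style (not from the slides): start a nonblocking send, overlap it with computation, then wait for completion. The names buff, BSIZE, and dest are assumed, as in the earlier example fragment.

/* Fragment: nonblocking send overlapped with local work. */
MPI_Request request;
MPI_Status  status;

/* Start the send; buff must not be modified until the wait completes */
MPI_Isend(buff, BSIZE, MPI_FLOAT, dest, 1, MPI_COMM_WORLD, &request);

/* ... do computation that does not touch buff ... */

/* Block until the send has completed (MPI_Test would poll instead) */
MPI_Wait(&request, &status);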