CS 584 Lecture 8 • Assignment? • I will be adding a program to it today.
Message Passing • Very close to the Task/Channel model • Based on a multi-processor architecture • A set of independent processors • Connected via some communication network • All communication between processes is done via messages sent from one process to another
MPI • Message Passing Interface • A computation consists of: • One or more processes • Processes communicate by calling library routines • MIMD programming model • SPMD is the most common style.
MPI • Processes use point-to-point communication operations • Collective communication operations are also available. • Communication can be modularized through the use of communicators. • MPI_COMM_WORLD is the base communicator. • Communicators are used to identify subsets of processes
MPI • Complex, but most problems can be solved using the 6 basic functions. • MPI_Init • MPI_Finalize • MPI_Comm_size • MPI_Comm_rank • MPI_Send • MPI_Recv
MPI Basics • Almost all calls require a communicator handle as an argument. • MPI_COMM_WORLD • MPI_Init and MPI_Finalize • don’t require a communicator handle • used to begin and end an MPI program • MUST be called to begin and end
MPI Basics • MPI_Comm_size • determines the number of processes in the communicator group • MPI_Comm_rank • determines the integer identifier assigned to the current process • zero based
MPI Basics

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int iproc, nproc;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &iproc);
    printf("I am processor %d of %d\n", iproc, nproc);
    MPI_Finalize();
    return 0;
}
MPI Communication • MPI_Send • Sends an array of a given type • Requires a destination rank, a count, and a datatype (plus a message tag and a communicator) • MPI_Recv • Receives an array of a given type • Same requirements as MPI_Send • Extra parameter • an MPI_Status variable
MPI Basics • Made for both FORTRAN and C • Book attempts to show both • Standards for C • MPI_ prefix to all calls • First letter of function name is capitalized • Returns MPI_SUCCESS or error code • MPI_Status structure • MPI data types for each C type
Using MPI • Based on rsh • requires a .rhosts file • hostname login • Path to compiler • MPI_HOME /u2/faculty/snell/mpich • MPI_ARCH hpux • MPI_BIN MPI_HOME/lib/MPI_ARCH/ch_p4
Using MPI • Write program • Compile using mpicc in: • /u2/faculty/snell/mpich/lib/hpux/ch_p4 • Write process file • host nprocs full_path_to_prog • 0 for nprocs on first line • Run program • prog -p4pg process_file args
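A sketch of what a p4 process file might look like, following the host/nprocs/path layout above. The hostnames and path here are made-up placeholders, not the actual lab machines; the first line uses 0 for nprocs because the initial process is started there by the launcher itself:

```
# host   nprocs   full_path_to_prog
node0    0        /path/to/prog
node1    1        /path/to/prog
```

This starts one process on each of the two hosts; increasing nprocs on a line starts additional processes on that host.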
Example • 2D finite differences code • Incomplete • In book on page 281
Assignment • Paper Review • MPI program output • Copy the files in • ~snell/CS584/HINT • run make • create a process file • run HINT • Turn in what is printed to stdout