
Understanding Message Passing Interface (MPI) in Multi-Processor Systems

Learn about MPI basics, communication operations, and programming models in multi-processor systems. Dive into MPI functions, algorithms, and practical implementations.


Presentation Transcript


  1. CS 584 Lecture 8
  • Assignment?
  • I will be adding a program to it today.

  2. Message Passing
  • Very close to the task/channel model
  • Based on a multi-processor model:
  • a set of independent processors
  • connected via some communication network
  • All communication between processes is done via messages sent from one process to another

  3. MPI
  • Message Passing Interface
  • A computation consists of:
  • one or more processes
  • that communicate by calling library routines
  • MIMD programming model
  • SPMD is the most common style (see the sketch below)
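
  In the SPMD style, one program is launched on every processor and each copy branches on its rank. A minimal sketch (not from the lecture, just illustrative):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            printf("Rank 0 coordinates the work\n");  /* e.g., hand out tasks */
        else
            printf("Rank %d does a share of the work\n", rank);
        MPI_Finalize();
        return 0;
    }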

  4. MPI
  • Processes use point-to-point communication operations
  • Collective communication operations are also available.
  • Communication can be modularized by the use of communicators.
  • MPI_COMM_WORLD is the base communicator.
  • Communicators are used to identify subsets of processes (see the sketch below)
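
  The slides don't show it, but MPI_Comm_split is the standard way to build such subsets. A minimal sketch that splits even and odd ranks into two sub-communicators:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int world_rank, sub_rank;
        MPI_Comm subcomm;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        /* color selects the subgroup; key orders ranks within it */
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);
        MPI_Comm_rank(subcomm, &sub_rank);
        printf("World rank %d has rank %d in its subgroup\n",
               world_rank, sub_rank);
        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }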

  5. MPI
  • Complex, but most problems can be solved using the 6 basic functions:
  • MPI_Init
  • MPI_Finalize
  • MPI_Comm_size
  • MPI_Comm_rank
  • MPI_Send
  • MPI_Recv

  6. MPI Basics
  • Almost all calls require a communicator handle as an argument.
  • MPI_COMM_WORLD
  • MPI_Init and MPI_Finalize
  • do not require a communicator handle
  • are used to begin and end an MPI program
  • MUST be called to begin and end

  7. MPI Basics
  • MPI_Comm_size
  • determines the number of processes in the communicator group
  • MPI_Comm_rank
  • determines the integer identifier (rank) assigned to the current process
  • ranks are zero-based

  8. MPI Basics

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int iproc, nproc;
        MPI_Init(&argc, &argv);                 /* start up MPI */
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);  /* number of processes */
        MPI_Comm_rank(MPI_COMM_WORLD, &iproc);  /* this process's rank */
        printf("I am processor %d of %d\n", iproc, nproc);
        MPI_Finalize();                         /* shut down MPI */
        return 0;
    }
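
  Run with four processes, each rank prints one line; the interleaving is nondeterministic, so the output might look like:

    I am processor 2 of 4
    I am processor 0 of 4
    I am processor 3 of 4
    I am processor 1 of 4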

  9. MPI Communication
  • MPI_Send
  • sends an array of a given type
  • requires a destination rank, a count (size), and a datatype
  • MPI_Recv
  • receives an array of a given type
  • same requirements as MPI_Send, plus a source rank
  • extra parameter: an MPI_Status variable
  • A paired send/receive sketch follows below.
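
  A minimal sketch of a paired send and receive between ranks 0 and 1 (the tag value and buffer size are arbitrary choices, not from the slides):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, data[4] = {1, 2, 3, 4};
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            /* buffer, count, datatype, destination rank, tag, communicator */
            MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* same arguments, plus a source rank and a status variable */
            MPI_Recv(data, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Rank 1 received %d ... %d from rank %d\n",
                   data[0], data[3], status.MPI_SOURCE);
        }
        MPI_Finalize();
        return 0;
    }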

  10. MPI Basics
  • Defined for both FORTRAN and C
  • The book attempts to show both
  • Conventions for C:
  • MPI_ prefix on all calls
  • first letter of the function name is capitalized
  • calls return MPI_SUCCESS or an error code
  • MPI_Status structure
  • an MPI datatype for each C type
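
  As an illustration of those conventions (not from the slides): checking the return code and reading the receive status. MPI_Get_count, MPI_ANY_SOURCE, and MPI_ANY_TAG are standard MPI; the buffer size is arbitrary.

    #include <stdio.h>
    #include <mpi.h>

    /* Assumes some rank has already posted a matching MPI_Send. */
    void recv_with_checks(void)
    {
        double buf[100];
        MPI_Status status;
        int err, count;
        err = MPI_Recv(buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                       MPI_COMM_WORLD, &status);
        if (err != MPI_SUCCESS)
            fprintf(stderr, "MPI_Recv failed with code %d\n", err);
        MPI_Get_count(&status, MPI_DOUBLE, &count);  /* doubles actually received */
        printf("Got %d doubles from rank %d, tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }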

  11. Using MPI
  • Based on rsh
  • requires a .rhosts file
  • each line: hostname login
  • Path to the compiler (environment settings):
  • MPI_HOME /u2/faculty/snell/mpich
  • MPI_ARCH hpux
  • MPI_BIN MPI_HOME/lib/MPI_ARCH/ch_p4
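
  For illustration only (the hostnames and login shown are hypothetical), a .rhosts file granting rsh access might contain:

    node1.cs.byu.edu snell
    node2.cs.byu.edu snell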

  12. Using MPI
  • Write the program
  • Compile using mpicc in:
  • /u2/faculty/snell/mpich/lib/hpux/ch_p4
  • Write a process file
  • each line: host nprocs full_path_to_prog
  • nprocs is 0 on the first line (that process is the one already running locally)
  • Run the program
  • prog -p4pg process_file args
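
  A sketch of a process file for four processes across three machines (the hostnames and program path are hypothetical). The first line names the local host with nprocs 0; the remaining lines start 1 + 2 = 3 more processes:

    node1 0 /u2/faculty/snell/CS584/myprog
    node2 1 /u2/faculty/snell/CS584/myprog
    node3 2 /u2/faculty/snell/CS584/myprog

  The job would then be started as: myprog -p4pg process_file args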

  13. Example
  • 2D finite differences code
  • Incomplete
  • In the book on page 281
  • A sketch of the communication pattern appears below.
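
  The slides don't include the code itself. As a rough sketch of the kind of communication a finite differences code needs (not the book's implementation), here is a 1D-decomposed ghost-row exchange between neighboring ranks using MPI_Sendrecv; array sizes and tags are illustrative:

    #include <mpi.h>

    #define N 64  /* local rows per process, illustrative size */

    /* Exchange boundary rows with the ranks above and below.
       grid has N real rows (1..N) plus ghost rows 0 and N+1. */
    void exchange_ghost_rows(double grid[N + 2][N], int rank, int nproc)
    {
        MPI_Status status;
        int up = rank - 1, down = rank + 1;
        if (up >= 0)      /* send my first real row up, receive their last */
            MPI_Sendrecv(grid[1], N, MPI_DOUBLE, up, 0,
                         grid[0], N, MPI_DOUBLE, up, 0,
                         MPI_COMM_WORLD, &status);
        if (down < nproc) /* send my last real row down, receive their first */
            MPI_Sendrecv(grid[N], N, MPI_DOUBLE, down, 0,
                         grid[N + 1], N, MPI_DOUBLE, down, 0,
                         MPI_COMM_WORLD, &status);
    }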

  14. Assignment
  • Paper review
  • MPI program output:
  • copy the files in ~snell/CS584/HINT
  • run make
  • create a process file
  • run HINT
  • turn in what is printed to stdout
