MPI Chapter 3 More Beginning MPI
MPI Philosophy • One program for all processes • Starts with Init • Get my process number (rank) • Process 0 is usually the “Master” node (One process to bind them all – apologies to J.R.R. Tolkien.) • Big if/else statement to do master stuff versus slave stuff • Master could also do some slave work • Load-balancing issues
C++ MPI at WU on Herot • #include “mpi.h” • int main(int argc, char *argv[]) • MPI::Init(argc, argv) • mpic++ – to compile MPI programs • mpirun -np # – to run with # processes; this sets the size of COMM_WORLD • Plus stuff in Josh’s handout about system stuff
Bcast • MPI::COMM_WORLD.Bcast(buf, count, datatype, root) • EVERY process executes this call – it is BOTH the send and the receive. • Root is the “sender”; all other processes are receivers.
Reduce • MPI::COMM_WORLD.Reduce(sendbuf, recvbuf, count, datatype, op, root) • Executed by ALL processes (both a send and a receive). • Every process contributes sendbuf; op combines all the contributions element-wise and the result appears in recvbuf on process root. • op is one of many predefined constants (e.g. MPI::SUM, MPI::PROD, MPI::MAX, MPI::MIN)
Timing MPI Programs • double MPI::Wtime() • Time in seconds since some arbitrary point in the past • Call twice – once before and once after the code to time; the difference is the elapsed time • double MPI::Wtick() • Granularity (resolution), in seconds, of the Wtime clock
Receive revisited • Recall • MPI::COMM_WORLD.Recv(buf, count, datatype, source, tag, status) • source and/or tag can be a wildcard (MPI::ANY_SOURCE, MPI::ANY_TAG) • status has type MPI::Status and reports what was actually received: • int MPI::Status::Get_source() • int MPI::Status::Get_tag()
Communicators • MPI_COMM_WORLD – contains every process • Can create other communicators so operations can be done on subgroups of processes
Creating Communicators • MPI_Comm – data type for a communicator • MPI_Group – data type for a group • Can assign communicators (com1 = com2) • Use a group of processes to create a communicator. • MPI_Comm_group – gets the group from a communicator • MPI_Comm_create – creates a communicator from a group
Communicator Manipulation • MPI_Group_excl – creates a new group with the listed processes excluded • MPI_Comm_free – releases a communicator • MPI_Group_free – releases a group
Allreduce • Equivalent to Reduce followed by Bcast – the combined result ends up on EVERY process, so there is no root argument.