Explore MPI collective communication patterns, functions, and optimizations for efficient data sharing among processes. Learn about common collective operations like Broadcast, Reduce, Scatter, and Gather.
Collective Communication with MPI Hector Urtubia
Introduction • Collective communication is a communication pattern that involves all processes in a communicator. • Because the pattern is known to the library, an MPI implementation can apply important optimizations (for example, tree-structured message schedules instead of a linear sequence of sends). • The MPI API defines functions for the most common collective operations.
Frequently Used Collective Operations • Broadcast: one process transmits the same data to all others. • Reduce: many processes contribute data that is combined into a single result. • Scatter: distributes pieces of one process's data among all processes. • Gather: collects data stored on many processes onto one.
Broadcast • A communication pattern in which a single process (the root) transmits the same data to every process in the communicator. int MPI_Bcast(void* message, int count, MPI_Datatype datatype, int root, MPI_Comm comm);
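A minimal sketch of a broadcast (the buffer name and value are illustrative, not from the slides): rank 0 fills a variable and every rank ends up with a copy.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) value = 42;               /* only the root's buffer is read */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d has value %d\n", rank, value);  /* every rank prints 42 */
    MPI_Finalize();
    return 0;
}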
Reduce • Collective communication in which all processes contribute data that is combined, element by element, using a binary operation (see the operator table and the sketch that follows it). int MPI_Reduce(void* operand, void* result, int count, MPI_Datatype datatype, MPI_Op operator, int root, MPI_Comm comm);
Reduce (cont) • Predefined reduction operations:
MPI_MAX     Maximum
MPI_MIN     Minimum
MPI_SUM     Sum
MPI_PROD    Product
MPI_LAND    Logical and
MPI_BAND    Bitwise and
MPI_LOR     Logical or
MPI_BOR     Bitwise or
MPI_LXOR    Logical exclusive or
MPI_BXOR    Bitwise exclusive or
MPI_MAXLOC  Maximum and location of maximum
MPI_MINLOC  Minimum and location of minimum
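A minimal sketch of a sum reduction (an assumed example, not from the slides): each rank contributes its rank number and the root receives the total.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]) {
    int rank, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* combine every rank's value with MPI_SUM; only rank 0 receives the result */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of ranks = %d\n", sum);
    MPI_Finalize();
    return 0;
}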
Gather • Collects data from each process in a communicator onto the root process; recv_count is the number of items received from each process, not the total. int MPI_Gather(void* send_data, /* data to be sent */ int send_count, MPI_Datatype send_type, void* recv_data, int recv_count, MPI_Datatype recv_type, int root, /* root process */ MPI_Comm comm); /* communicator */
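A minimal sketch of a gather (the buffer names are illustrative): each rank sends its rank number and the root collects them in rank order.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int* all = NULL;
    if (rank == 0) all = malloc(size * sizeof(int));  /* receive buffer matters only at the root */
    MPI_Gather(&rank, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        for (int i = 0; i < size; i++) printf("slot %d holds %d\n", i, all[i]);
        free(all);
    }
    MPI_Finalize();
    return 0;
}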
Scatter • Splits the data held on the root process and distributes one piece to each process in the communicator (the root keeps a piece itself); send_count is the number of items sent to each process. int MPI_Scatter(void* send_data, int send_count, MPI_Datatype send_type, void* recv_data, int recv_count, MPI_Datatype recv_type, int root, MPI_Comm comm);
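A minimal sketch of a scatter (the array contents are illustrative): the root splits an array so that each rank receives one element.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    int rank, size, mine;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int* data = NULL;
    if (rank == 0) {                        /* send buffer matters only at the root */
        data = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) data[i] = i * 10;
    }
    MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d received %d\n", rank, mine);
    if (rank == 0) free(data);
    MPI_Finalize();
    return 0;
}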