More MPI
Point-to-point Communication
• Involves a pair of processes
• One process sends a message
• The other process receives the message
• Send/Receive is not collective
Function MPI_Send

int MPI_Send (void *message, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm)
Function MPI_Recv

int MPI_Recv (void *message, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm comm,
              MPI_Status *status)
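Putting the two signatures together, here is a minimal complete program (a sketch, not from the slides; the tag value 0 and the payload are arbitrary choices) in which rank 0 sends one float to rank 1:

    #include <mpi.h>
    #include <stdio.h>

    int main (int argc, char *argv[]) {
        int id;
        float a = 3.14f;
        MPI_Status status;

        MPI_Init (&argc, &argv);
        MPI_Comm_rank (MPI_COMM_WORLD, &id);

        if (id == 0) {
            /* Rank 0 sends one float to rank 1 with tag 0 */
            MPI_Send (&a, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        } else if (id == 1) {
            /* Rank 1 blocks here until the matching message arrives */
            MPI_Recv (&a, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
            printf ("Rank 1 received %f\n", a);
        }

        MPI_Finalize ();
        return 0;
    }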
Inside MPI_Send and MPI_Recv
[Diagram: the sending process copies the message from its program memory into a system buffer; MPI transfers it to the receiving process's system buffer, and MPI_Recv copies it into the receiver's program memory.]
Return from MPI_Send
• Function blocks until the message buffer is free
• The message buffer is free when
  • the message has been copied to a system buffer, or
  • the message has been transmitted
• Typical scenario
  • message copied to a system buffer
  • transmission overlaps computation
Return from MPI_Recv
• Function blocks until the message is in the buffer
• If the message never arrives, the function never returns
• Which leads us to …
Example

float a, b, c;
int id;
MPI_Status status;
…
if (id == 0) {
    MPI_Recv (&b, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &status);
    MPI_Send (&a, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    c = (a + b) / 2.0;
} else if (id == 1) {
    MPI_Recv (&a, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
    MPI_Send (&b, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
    c = (a + b) / 2.0;
}
Example

float a, b, c;
int id;
MPI_Status status;
…
if (id == 0) {
    MPI_Send (&a, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    MPI_Recv (&b, 1, MPI_FLOAT, 1, 1, MPI_COMM_WORLD, &status);
    c = (a + b) / 2.0;
} else if (id == 1) {
    MPI_Send (&b, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
    MPI_Recv (&a, 1, MPI_FLOAT, 0, 1, MPI_COMM_WORLD, &status);
    c = (a + b) / 2.0;
}
Deadlock
• Deadlock: a process waits for a condition that will never become true
• It is easy to write send/receive code that deadlocks
  • Two processes: both receive before sending
  • Send tag doesn't match receive tag
  • Process sends the message to the wrong destination process
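For the first example above, one repair (a sketch reusing the same variables, not the slides' own fix) is to order the calls so that while rank 0 is sending, rank 1 is receiving, and vice versa:

    if (id == 0) {
        /* Rank 0 sends first, then receives */
        MPI_Send (&a, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv (&b, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD, &status);
        c = (a + b) / 2.0;
    } else if (id == 1) {
        /* Rank 1 receives first, then sends, so the two ranks never
           block waiting on each other; the tags also match */
        MPI_Recv (&a, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Send (&b, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD);
        c = (a + b) / 2.0;
    }

MPI also provides MPI_Sendrecv, which combines a send and a receive in a single call and sidesteps this ordering problem.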
Coding Send/Receive

…
if (ID == j) {
    …
    Receive from i
    …
}
…
if (ID == i) {
    …
    Send to j
    …
}
…

Receive is before Send. Why does this work?
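One way to see the answer, sketched with illustrative ranks i = 0 and j = 1 (the variable names and values here are not from the slides): each process executes only the branch that matches its own ID, so no single process actually receives before it sends.

    /* Sketch: i = 0, j = 1; 'id' is this process's rank as before */
    int value;
    MPI_Status status;

    if (id == 1) {                /* only rank j == 1 executes this branch */
        MPI_Recv (&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }
    if (id == 0) {                /* only rank i == 0 executes this branch */
        value = 42;
        MPI_Send (&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }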
Distributing Data
Scatter / Gather
Getting Data Places
• There are many interesting ways to arrange and distribute data for parallel use.
• Many of these follow fairly common "patterns" – basic structures.
• The MPI standards group wanted to provide flexible ways to distribute data.
• MPI uses variations on the concepts of "scatter" and "gather".
Collective Communications
• Broadcast the coefficients to all processors.
• Scatter the vectors among N processors as zpart, xpart, and ypart.
• Gather the results back to the root processor when completed.
• Calls can return as soon as their participation is complete.
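The three bullets read like a distributed vector update of the form z = a·x + y. A sketch of that workflow (the vector length n = 1024, the coefficient value, and root rank 0 are illustrative assumptions, not from the slides):

    #include <mpi.h>
    #include <stdlib.h>

    int main (int argc, char *argv[]) {
        int n = 1024, p, id, root = 0;
        float a = 0.0f;
        float *x = NULL, *y = NULL, *z = NULL;   /* full vectors live on root only */

        MPI_Init (&argc, &argv);
        MPI_Comm_size (MPI_COMM_WORLD, &p);
        MPI_Comm_rank (MPI_COMM_WORLD, &id);

        int chunk = n / p;                       /* assume n divisible by p */
        float *xpart = malloc (chunk * sizeof (float));
        float *ypart = malloc (chunk * sizeof (float));
        float *zpart = malloc (chunk * sizeof (float));

        if (id == root) {                        /* root initializes the full data */
            a = 2.0f;
            x = malloc (n * sizeof (float));
            y = malloc (n * sizeof (float));
            z = malloc (n * sizeof (float));
            for (int i = 0; i < n; i++) { x[i] = i; y[i] = n - i; }
        }

        /* Broadcast the coefficient to all processors */
        MPI_Bcast (&a, 1, MPI_FLOAT, root, MPI_COMM_WORLD);

        /* Scatter the vectors among the processors */
        MPI_Scatter (x, chunk, MPI_FLOAT, xpart, chunk, MPI_FLOAT, root, MPI_COMM_WORLD);
        MPI_Scatter (y, chunk, MPI_FLOAT, ypart, chunk, MPI_FLOAT, root, MPI_COMM_WORLD);

        /* Each process computes its portion */
        for (int i = 0; i < chunk; i++)
            zpart[i] = a * xpart[i] + ypart[i];

        /* Gather the results back to the root processor */
        MPI_Gather (zpart, chunk, MPI_FLOAT, z, chunk, MPI_FLOAT, root, MPI_COMM_WORLD);

        MPI_Finalize ();
        return 0;
    }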
Scatter

int MPI_Scatter (void* sendbuf, int sendcount, MPI_Datatype sendtype,
                 void* recvbuf, int recvcount, MPI_Datatype recvtype,
                 int root, MPI_Comm comm)
Gather

int MPI_Gather (void* sendbuf, int sendcount, MPI_Datatype sendtype,
                void* recvbuf, int recvcount, MPI_Datatype recvtype,
                int root, MPI_Comm comm)
Scatterv

int MPI_Scatterv (void* sendbuf, int* sendcounts, int* displs,
                  MPI_Datatype sendtype, void* recvbuf, int recvcount,
                  MPI_Datatype recvtype, int root, MPI_Comm comm)
Gatherv

int MPI_Gatherv (void* sendbuf, int sendcount, MPI_Datatype sendtype,
                 void* recvbuf, int* recvcounts, int* displs,
                 MPI_Datatype recvtype, int root, MPI_Comm comm)
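When the data does not divide evenly among the processes, the "v" variants take per-rank counts and displacements. A sketch (the array length n = 10, root rank 0, and the doubling step are illustrative assumptions, not from the slides) that spreads the remainder across the first ranks:

    #include <mpi.h>
    #include <stdlib.h>

    int main (int argc, char *argv[]) {
        int n = 10, p, id, root = 0;
        float *data = NULL, *result = NULL;

        MPI_Init (&argc, &argv);
        MPI_Comm_size (MPI_COMM_WORLD, &p);
        MPI_Comm_rank (MPI_COMM_WORLD, &id);

        /* Per-rank counts and displacements: the first (n % p) ranks get one extra element */
        int *counts = malloc (p * sizeof (int));
        int *displs = malloc (p * sizeof (int));
        for (int r = 0, offset = 0; r < p; r++) {
            counts[r] = n / p + (r < n % p ? 1 : 0);
            displs[r] = offset;
            offset += counts[r];
        }

        if (id == root) {
            data   = malloc (n * sizeof (float));
            result = malloc (n * sizeof (float));
            for (int i = 0; i < n; i++) data[i] = i;
        }

        /* Each rank receives counts[id] elements of the root's array */
        float *part = malloc (counts[id] * sizeof (float));
        MPI_Scatterv (data, counts, displs, MPI_FLOAT,
                      part, counts[id], MPI_FLOAT, root, MPI_COMM_WORLD);

        /* Local work: double each element of the local piece */
        for (int i = 0; i < counts[id]; i++) part[i] *= 2.0f;

        /* Gather the unevenly sized pieces back, in order, on the root */
        MPI_Gatherv (part, counts[id], MPI_FLOAT,
                     result, counts, displs, MPI_FLOAT, root, MPI_COMM_WORLD);

        MPI_Finalize ();
        return 0;
    }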