MPI: Returns By Camilo A Silva
Topics • Asynchronous Communication
Asynchronous Communication • Problem: accessing elements of a shared data structure in an unstructured manner • How? By distributing the shared data structure among the computational processes, so that each process owns a portion of it and services requests for that portion
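A minimal sketch of such a distribution, assuming a block layout where an array of n elements is split across p processes (the `location` struct and `locate` function are illustrative names, not MPI API):

```c
/* Hypothetical block distribution: a global array of n elements is
   split across p processes; the first (n % p) ranks hold one extra
   element each, so block sizes differ by at most one. */
typedef struct { int owner; int local_index; } location;

/* Map a global index to the rank that owns it and the index inside
   that rank's local block. */
static location locate(int global_index, int n, int p) {
    location loc;
    int base = n / p, extra = n % p;
    int boundary = extra * (base + 1);  /* end of the "big" blocks */
    if (global_index < boundary) {
        loc.owner = global_index / (base + 1);
        loc.local_index = global_index % (base + 1);
    } else {
        loc.owner = extra + (global_index - boundary) / base;
        loc.local_index = (global_index - boundary) % base;
    }
    return loc;
}
```

A process wanting element i would call `locate(i, n, p)` and send a request to the owning rank rather than reading the element directly.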
Asynchronous simple program
MPI_Status status;
int count, *buf, source;
/* Block until any message with tag 0 is pending, without receiving it */
MPI_Probe(MPI_ANY_SOURCE, 0, comm, &status);
source = status.MPI_SOURCE;
/* Size the buffer from the pending message's length */
MPI_Get_count(&status, MPI_INT, &count);
buf = malloc(count * sizeof(int));
MPI_Recv(buf, count, MPI_INT, source, 0, comm, &status);
Example…
/* Data task: services requests for its portion of the shared structure */
while (done != TRUE) {
  receive(request);
  reply_to(request);
}
/* Computation task: generates requests as it works */
while (done != TRUE) {
  identify_next_task();
  generate_requests();
  process_replies();
}
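The two loops above can be sketched without MPI as a single-process simulation, where the data task's receive/reply pair collapses into a function call; `shared_data`, `reply_to`, and `compute_sum` are hypothetical names for illustration, not MPI calls:

```c
/* Single-process simulation of the data-task / computation-task
   split: the "data task" owns the array and services get-requests;
   the "computation task" sums elements by issuing requests instead
   of reading the data directly. */
enum { N = 8 };
static int shared_data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

/* Data task: stands in for receive(request); reply_to(request) */
static int reply_to(int i) { return shared_data[i]; }

/* Computation task */
static int compute_sum(void) {
    int sum = 0;
    for (int i = 0; i < N; i++) {  /* identify_next_task()  */
        int reply = reply_to(i);   /* generate_requests()   */
        sum += reply;              /* process_replies()     */
    }
    return sum;
}
```

In the real distributed version, each `reply_to` call becomes a message round-trip between the requesting process and the owner of the element.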
MPI_Status
In C, MPI_Status is a structure:
• status.MPI_TAG is the tag of the incoming message (useful if MPI_ANY_TAG was specified)
• status.MPI_SOURCE is the source of the incoming message (useful if MPI_ANY_SOURCE was specified)
• The number of elements of a given datatype received is obtained with MPI_Get_count(IN status, IN datatype, OUT count)
In Fortran, status is an array of integers:
integer status(MPI_STATUS_SIZE)
status(MPI_SOURCE), status(MPI_TAG)
In MPI-2, MPI_STATUS_IGNORE can be passed where the status is not needed
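The semantics of MPI_Get_count can be sketched as follows: the count is the number of received bytes divided by the datatype's size, or MPI_UNDEFINED when the bytes do not form a whole number of elements. This is a sketch of the behavior, not the real implementation; `get_count` and `COUNT_UNDEFINED` are illustrative stand-ins:

```c
/* COUNT_UNDEFINED stands in for MPI_UNDEFINED */
enum { COUNT_UNDEFINED = -32766 };

/* Sketch of MPI_Get_count semantics: element count from the
   received byte count and the element size of the datatype. */
static int get_count(int received_bytes, int type_size) {
    if (type_size <= 0) return COUNT_UNDEFINED;
    if (received_bytes % type_size != 0) return COUNT_UNDEFINED;
    return received_bytes / type_size;
}
```

For example, a 40-byte message received as MPI_INT (4 bytes each) yields a count of 10, while a 5-byte message would yield MPI_UNDEFINED.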
Next topics… • Modularity • Data Types + Heterogeneity • Buffer + Performance Issues • Compilation + Other important topics (topologies, MPI objects, tools for evaluating programs, and multiple program connection)