Hardware Environment • VIA cluster - 8 nodes • Two 1.0 GHz VIA-C3 processors per node • Connected with Gigabit Ethernet • Linux kernel – 2.6.8-1-smp • Blade server - 5 nodes • Two 3.0 GHz Intel Xeon processors per node • Each Xeon processor supports Hyper-Threading • Connected with Gigabit Ethernet • Linux kernel – 2.6.8-1-smp
MPI – message passing interface • Basic data types • MPI_CHAR – char • MPI_UNSIGNED_CHAR – unsigned char • MPI_BYTE – like unsigned char • MPI_SHORT – short • MPI_LONG – long • MPI_INT – int • MPI_FLOAT – float • MPI_DOUBLE – double • ……
MPI – message passing interface • 6 basic MPI functions • MPI_Init – initialize MPI environment • MPI_Finalize – shut down MPI environment • MPI_Comm_size – determine number of processes • MPI_Comm_rank – determine process rank • MPI_Send – blocking data send • MPI_Recv – blocking data receive
MPI – message passing interface • Initialize MPI • MPI_Init(&argc, &argv) • First MPI function called by each process • Allow system to do any necessary setup • Not necessarily first executable statement in your code
MPI – message passing interface • Communicators • Communicator: opaque object that provides a message-passing environment for processes • MPI_COMM_WORLD • Default communicator • Includes all processes • Create new communicators • MPI_Comm_create() • MPI_Group_incl()
• Communicators (diagram) • Communicator name: MPI_COMM_WORLD • Contains six processes with ranks 0–5
MPI – message passing interface • Shutting Down MPI environment • MPI_Finalize() • Called after all other MPI calls • Allows system to free any resources
MPI – message passing interface • Determine Number of Processes • MPI_Comm_size(MPI_COMM_WORLD,&size) • First argument is communicator • Number of processes returned through second argument
MPI – message passing interface • Determine Process Rank • MPI_Comm_rank(MPI_COMM_WORLD,&myid) • First argument is communicator • Process rank (in range 0, 1, 2, …, P-1) returned through second argument
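The four calls above combine into a minimal program; this is a sketch of what the hello.c referenced by the following slides likely looks like (requires an MPI installation; built with mpicc and launched with mpirun):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* set up the MPI environment     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank: 0..P-1    */

    printf("hello  rank = %d  size = %d\n", rank, size);

    MPI_Finalize();                         /* release MPI resources          */
    return 0;
}
```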
Example - hello.c (con’t) • Compile MPI Programs • mpicc –o foo foo.c • mpicc – script that compiles and links against the MPI library • example: mpicc –o hello hello.c
Example - hello.c (con’t) • Execute MPI Programs • mpirun –np <p> <exec> <argc1> … • -np <p> - number of processes • <exec> - executable filename • <argc1> … - arguments passed to <exec> • example: mpirun –np 4 hello
Example – hello.c (con’t) • sample output with 4 processes (each prints its rank and the communicator size): hello rank = 0 size = 4 • hello rank = 1 size = 4 • hello rank = 2 size = 4 • hello rank = 3 size = 4
MPI – message passing interface • Specify Host Processors • machine file lists the machines that will run your program • what if # of MPI processes > # of physical machines? • set up password-less login between nodes • mpirun –np <p> -machinefile <filename> <exec> • example: machines.LINUX # machines.LINUX # put machine hostname below node01 node02 node03
MPI – message passing interface • Blocking Send and Receive • MPI_Send(&buf,count,datatype,dest,tag,MPI_COMM_WORLD) • MPI_Recv(&buf,count,datatype,src,tag,MPI_COMM_WORLD,&status) • Argument datatype must be an MPI type such as MPI_CHAR, MPI_INT, … • For each send-recv pair, tag must be the same
MPI – message passing interface • Other program notes • variables and functions other than MPI_XXX are local to each process • output printed by different processes may appear out of order • example: send_recv.c
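A minimal blocking send/receive pair; this is a sketch of what a send_recv.c example typically contains (the actual file may differ), assuming at least 2 processes:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* tag 0 here must match the tag in the matching MPI_Recv */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocks until the message from rank 0 with tag 0 arrives */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```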
Odd-Even Sort • Operates in two alternating phases, an even phase and an odd phase • Even phase • Even-numbered processes exchange numbers with their right neighbor • Odd phase • Odd-numbered processes exchange numbers with their right neighbor
How to solve this 8-number sorting? • Sequential program – easy • MPI • one number per MPI process • start MPI program • master sends data to the other processes • start odd-even sorting • master collects results from the other processes • end MPI program
Other problems? • # of unsorted numbers is not a power of 2? • # of unsorted numbers is large? • # of unsorted numbers cannot be divided evenly by nprocs?
MPI – message passing interface • Advanced MPI functions • MPI_Bcast – broadcast a msg from source to other processes • MPI_Scatter – scatter values to a group of processes • MPI_Gather – gather values from a group of processes • MPI_Allgather – gather data from all tasks and distribute it to all • MPI_Barrier – block until all processes reach this routine
MPI_Bcast MPI_Bcast (void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
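A minimal broadcast sketch (requires an MPI runtime): only the root initializes the value, and after the call every process holds a copy.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, n = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 100;                 /* only the root (rank 0) has the value */

    /* after this call, n == 100 on every process */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("rank %d: n = %d\n", rank, n);
    MPI_Finalize();
    return 0;
}
```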
MPI_Scatter MPI_Scatter (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)
MPI_Gather MPI_Gather (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)
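Scatter and gather are typically paired: the root deals out one chunk per process, each process works on its chunk, and the root collects the results. A sketch assuming exactly 4 processes (run with mpirun –np 4):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    int data[4] = {10, 20, 30, 40};  /* meaningful on the root only */
    int mine, result[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* assumed to be 4 here */

    /* each process receives one int from the root's data[] */
    MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);

    mine *= 2;                             /* local work on the chunk */

    /* root collects one int from every process into result[] */
    MPI_Gather(&mine, 1, MPI_INT, result, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++)
            printf("result[%d] = %d\n", i, result[i]);

    MPI_Finalize();
    return 0;
}
```

Note the sendcnt/recvcnt arguments are per-process counts, not the total: scattering 4 ints to 4 processes uses a count of 1 on both sides.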
MPI_Allgather MPI_Allgather (void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, MPI_Comm comm)
MPI_Barrier MPI_Barrier (MPI_Comm comm)
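A barrier synchronizes all processes in a communicator: no process returns from the call until every process has entered it. A minimal sketch (note the barrier orders execution, not the interleaving of printed output):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("rank %d: before barrier\n", rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* every process waits here */

    /* no process reaches this line until all have printed above */
    printf("rank %d: after barrier\n", rank);

    MPI_Finalize();
    return 0;
}
```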
Extension of MPI_Recv • MPI_Recv(void *buffer, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status) • source is don’t-care – MPI_ANY_SOURCE • tag is don’t-care – MPI_ANY_TAG • to retrieve sender’s information: typedef struct { int count; int MPI_SOURCE; int MPI_TAG; int MPI_ERROR; } MPI_Status; • use status->MPI_SOURCE to get sender’s rank • use status->MPI_TAG to get the message tag
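A sketch of the wildcard receive: the root accepts messages from any sender in any order, then reads the actual source and tag back out of the status object.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        /* every non-root sends its rank, using the rank as the tag */
        MPI_Send(&rank, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);
    } else {
        for (int i = 1; i < size; i++) {
            /* accept from any sender, with any tag */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("got %d from rank %d (tag %d)\n",
                   value, status.MPI_SOURCE, status.MPI_TAG);
        }
    }

    MPI_Finalize();
    return 0;
}
```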