Introduction to Parallel Processing, Lecture 10, 24/12/2001
Homework Assignment 3 • May be submitted until Thursday, 27/12/2001
Final Projects • Groups 1-10 are asked to prepare their presentations for the class in two weeks. • Please send the presentation files in PowerPoint format before the lecture, or bring a burned CD-ROM to class.
Quizzes • Grading of the quizzes will be completed by Friday. • The results will be announced in the next class.
Today's Topics • Shared Memory • Cilk, OpenMP • MPI – Derived Data Types • How to Build a Beowulf
Shared Memory • Go to the PDF presentation: Chapter 8, "Programming with Shared Memory", from Wilkinson & Allen's book.
Summary • Process creation • The thread concept • Pthread routines (see the sketch below) • How data can be shared between threads • Condition variables • Dependency analysis: Bernstein's conditions
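As a quick reminder of the Pthreads portion, here is a minimal sketch (my addition, not from the chapter) showing thread creation, joining, and a mutex guarding shared data:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

int counter = 0;                          /* shared data */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    pthread_mutex_lock(&lock);            /* protect the shared counter */
    counter++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    int i;
    for (i = 0; i < NTHREADS; ++i)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (i = 0; i < NTHREADS; ++i)
        pthread_join(threads[i], NULL);   /* wait for all workers */
    printf("counter = %d\n", counter);    /* always NTHREADS */
    return 0;
}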
Cilk http://supertech.lcs.mit.edu/cilk
Cilk • A language for multithreaded parallel programming based on ANSI C. • Cilk is designed as a general-purpose parallel programming language. • Cilk is especially effective for exploiting dynamic, highly asynchronous parallelism.
A parallel Cilk program to compute the nth Fibonacci number.
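A sketch of such a program, following the standard fib example from the Cilk documentation:

#include <stdlib.h>
#include <stdio.h>

cilk int fib(int n)
{
    if (n < 2)
        return n;
    else {
        int x, y;
        x = spawn fib(n - 1);   /* run fib(n-1) as a parallel child */
        y = spawn fib(n - 2);   /* run fib(n-2) as a parallel child */
        sync;                   /* wait until both children return */
        return x + y;
    }
}

cilk int main(int argc, char *argv[])
{
    int n = atoi(argv[1]);
    int result;
    result = spawn fib(n);
    sync;
    printf("fib(%d) = %d\n", n, result);
    return 0;
}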
Cilk (continued) • Compiling: $ cilk -O2 fib.cilk -o fib • Executing: $ fib --nproc 4 30
OpenMP • The next 5 slides are taken from the SC99 tutorial given by Tim Mattson (Intel Corporation) and Rudolf Eigenmann (Purdue University).
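The tutorial slides themselves are not transcribed here; as a minimal flavor of OpenMP (my sketch, not from the tutorial), a parallel loop with a reduction looks like this:

#include <omp.h>
#include <stdio.h>

int main(void)
{
    int i, n = 1000;
    double sum = 0.0;

    /* iterations are divided among the threads; each thread
       accumulates a private sum, combined into one result at the end */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; ++i)
        sum += i * 0.5;

    printf("sum = %f, max threads = %d\n", sum, omp_get_max_threads());
    return 0;
}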
Further Reading • High-Performance Computing, Part III: Shared Memory Parallel Processors
Collective Communication Broadcast
Collective Communication Reduce
Collective Communication Gather
Collective Communication Allgather
Collective Communication Scatter
Collective Communication There are more collective communication commands…
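To make the diagrams above concrete, here is a minimal sketch (my addition, not from the slides) that combines two of these routines, MPI_Bcast and MPI_Reduce:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, n, partial, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        n = 100;
    /* Broadcast: root 0 sends n to every process */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    partial = n + rank;   /* each process computes a local value */

    /* Reduce: sum all partial values onto root 0 */
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %d\n", total);
    MPI_Finalize();
    return 0;
}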
Advanced Topics in MPI • MPI – Derived Data Types • MPI-2 – Parallel I/O
User-Defined Types • In addition to the predefined types, the user can create new types. • Compact pack/unpack.
Predefined Types
• MPI_DOUBLE: double
• MPI_FLOAT: float
• MPI_INT: signed int
• MPI_LONG: signed long int
• MPI_LONG_DOUBLE: long double
• MPI_LONG_LONG_INT: signed long long int
• MPI_SHORT: signed short int
• MPI_UNSIGNED: unsigned int
• MPI_UNSIGNED_CHAR: unsigned char
• MPI_UNSIGNED_LONG: unsigned long int
• MPI_UNSIGNED_SHORT: unsigned short int
• MPI_BYTE: (no corresponding C type)
Motivation • What if you want to specify: • non-contiguous data of a single type? • contiguous data of mixed types? • non-contiguous data of mixed types? • Derived datatypes save memory, are faster, and are more portable and elegant than packing the data by hand.
3 Steps • Construct the new datatype using the appropriate MPI routines: MPI_Type_contiguous, MPI_Type_vector, MPI_Type_struct, MPI_Type_indexed, MPI_Type_hvector, MPI_Type_hindexed • Commit the new datatype: MPI_Type_commit • Use the new datatype in sends/receives, etc.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Status status;
    struct {
        int x;
        int y;
        int z;
    } point;
    MPI_Datatype ptype;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* three consecutive ints form one "point" */
    MPI_Type_contiguous(3, MPI_INT, &ptype);
    MPI_Type_commit(&ptype);

    if (rank == 3) {
        point.x = 15; point.y = 23; point.z = 6;
        MPI_Send(&point, 1, ptype, 1, 52, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&point, 1, ptype, 3, 52, MPI_COMM_WORLD, &status);
        printf("P:%d received coords are (%d,%d,%d)\n",
               rank, point.x, point.y, point.z);
    }
    MPI_Finalize();
    return 0;
}
User Defined Types • MPI_TYPE_STRUCT • MPI_TYPE_CONTIGUOUS • MPI_TYPE_VECTOR • MPI_TYPE_HVECTOR • MPI_TYPE_INDEXED • MPI_TYPE_HINDEXED
MPI_TYPE_STRUCT is the most general way to construct an MPI derived type because it allows the length, location, and type of each component to be specified independently. int MPI_Type_struct (int count, int *array_of_blocklengths, MPI_Aint *array_of_displacements, MPI_Datatype *array_of_types, MPI_Datatype *newtype)
Struct Datatype Example
count = 2
array_of_blocklengths[0] = 1, array_of_types[0] = MPI_INT
array_of_blocklengths[1] = 3, array_of_types[1] = MPI_DOUBLE
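A sketch of how these parameters are passed to MPI_Type_struct, assuming an illustrative C struct of one int followed by three doubles (the struct and function names are mine):

#include <mpi.h>
#include <stddef.h>

struct particle {
    int    id;          /* one MPI_INT */
    double coords[3];   /* three MPI_DOUBLEs */
};

/* build an MPI datatype matching struct particle */
MPI_Datatype make_particle_type(void)
{
    int          blocklengths[2] = {1, 3};
    MPI_Datatype types[2] = {MPI_INT, MPI_DOUBLE};
    MPI_Aint     displacements[2];
    MPI_Datatype particletype;

    /* byte offset of each block within the struct */
    displacements[0] = offsetof(struct particle, id);
    displacements[1] = offsetof(struct particle, coords);

    MPI_Type_struct(2, blocklengths, displacements, types, &particletype);
    MPI_Type_commit(&particletype);
    return particletype;
    /* usage: MPI_Send(&p, 1, particletype, dest, tag, MPI_COMM_WORLD); */
}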
MPI_TYPE_CONTIGUOUS is the simplest of these, describing a contiguous sequence of values in memory. For example, MPI_Type_contiguous(2,MPI_DOUBLE,&MPI_2D_POINT); MPI_Type_contiguous(3,MPI_DOUBLE,&MPI_3D_POINT); int MPI_Type_contiguous(int count, MPI_Datatype oldtype, MPI_Datatype *newtype)
MPI_TYPE_CONTIGUOUS creates new type indicators MPI_2D_POINT and MPI_3D_POINT. These type indicators allow you to treat consecutive pairs of doubles as point coordinates in a 2-dimensional space and sequences of three doubles as point coordinates in a 3-dimensional space.
MPI_TYPE_VECTOR describes several such sequences evenly spaced but not consecutive in memory. MPI_TYPE_HVECTOR is similar to MPI_TYPE_VECTOR except that the distance between successive blocks is specified in bytes rather than elements. MPI_TYPE_INDEXED describes sequences that may vary both in length and in spacing. MPI_TYPE_HINDEXED stands in the same relation to MPI_TYPE_INDEXED: its displacements are given in bytes rather than elements.
MPI_TYPE_VECTOR
int MPI_Type_vector(int count, int blocklength, int stride, MPI_Datatype oldtype, MPI_Datatype *newtype)
Example: count = 2, blocklength = 3, stride = 5 describes two blocks of three elements each, with the start of each block five elements apart.
Example program:

#include <mpi.h>
#include <stdio.h>
#include <math.h>

int main(int argc, char *argv[])
{
    int rank, i, j;
    MPI_Status status;
    double x[4][8];
    MPI_Datatype coltype;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* one column of a 4x8 array: 4 blocks of 1 double, stride 8 */
    MPI_Type_vector(4, 1, 8, MPI_DOUBLE, &coltype);
    MPI_Type_commit(&coltype);

    if (rank == 3) {
        for (i = 0; i < 4; ++i)
            for (j = 0; j < 8; ++j)
                x[i][j] = pow(10.0, i + 1) + j;
        /* send column 7 of the array to process 1 */
        MPI_Send(&x[0][7], 1, coltype, 1, 52, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive it into column 2 */
        MPI_Recv(&x[0][2], 1, coltype, 3, 52, MPI_COMM_WORLD, &status);
        for (i = 0; i < 4; ++i)
            printf("P:%d my x[%d][2]=%f\n", rank, i, x[i][2]);
    }
    MPI_Finalize();
    return 0;
}
Output:
P:1 my x[0][2]=17.000000
P:1 my x[1][2]=107.000000
P:1 my x[2][2]=1007.000000
P:1 my x[3][2]=10007.000000
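MPI_TYPE_INDEXED gets no example on these slides, so here is a minimal sketch (my addition; the helper name is illustrative) that builds a datatype for the upper triangle of a 4x4 matrix, where block lengths and displacements vary from row to row:

void send_upper_triangle(double a[4][4], int dest)
{
    int blocklens[4] = {4, 3, 2, 1};    /* row i keeps 4-i elements      */
    int displs[4]    = {0, 5, 10, 15};  /* a[i][i] is element 5*i in a   */
    MPI_Datatype upper;

    MPI_Type_indexed(4, blocklens, displs, MPI_DOUBLE, &upper);
    MPI_Type_commit(&upper);
    /* transfers only the 10 upper-triangle entries of a */
    MPI_Send(&a[0][0], 1, upper, dest, 99, MPI_COMM_WORLD);
    MPI_Type_free(&upper);
}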
Committing a datatype int MPI_Type_commit (MPI_Datatype *datatype)
Obtaining Information About Derived Types • MPI_TYPE_LB and MPI_TYPE_UB can provide the lower and upper bounds of the type. • MPI_TYPE_EXTENT can provide the extent of the type. In most cases, this is the amount of memory a value of the type will occupy. • MPI_TYPE_SIZE can provide the size of the type in a message. If the type is scattered in memory, this may be significantly smaller than the extent of the type.
MPI_TYPE_EXTENT
MPI_Type_extent (MPI_Datatype datatype, MPI_Aint *extent)
Note: this routine is deprecated in MPI-2; use MPI_Type_get_extent instead.
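A short sketch (my addition) contrasting extent and size for the coltype column datatype built in the example above:

/* fragment: assumes coltype from MPI_Type_vector(4, 1, 8, MPI_DOUBLE, &coltype) */
MPI_Aint extent;
int size;

MPI_Type_extent(coltype, &extent);  /* span in memory, gaps included:
                                       25 doubles = 200 bytes */
MPI_Type_size(coltype, &size);      /* data actually transmitted:
                                       4 doubles = 32 bytes */
printf("extent=%ld bytes, size=%d bytes\n", (long)extent, size);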
MPI-2 • MPI-2 is a set of extensions to the MPI standard, finalized by the MPI Forum in June 1997.
MPI-2 • New Datatype Manipulation Functions • Info Object • New Error Handlers • Establishing/Releasing Communications • Extended Collective Operations • Thread Support • Fault Tolerance
MPI-2 Parallel I/O • Motivation: • The ability to parallelize I/O can offer significant performance improvements. • User-level checkpointing is contained within the program itself.
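As a taste of the I/O interface, here is a minimal sketch (not from the slides; the file name is illustrative) in which every process writes its own block of a distributed array to one shared file at a rank-dependent offset:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, i, buf[4];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (i = 0; i < 4; ++i)
        buf[i] = rank * 4 + i;            /* this rank's slice of the data */

    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* every process writes concurrently at its own offset */
    MPI_File_write_at(fh, (MPI_Offset)(rank * 4 * sizeof(int)),
                      buf, 4, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}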