How to make PC Cluster Systems?
Tomo Hiroyasu, Doshisha University, Kyoto, Japan
tomo@is.doshisha.ac.jp
Cluster
• clus·ter n.
• A group of the same or similar elements gathered or occurring closely together; a bunch: "She held out her hand, a small tight cluster of fingers" (Anne Tyler).
• Linguistics. Two or more successive consonants in a word, as cl and st in the word cluster.
A cluster is a type of parallel or distributed processing system which consists of a collection of interconnected stand-alone/complete computers cooperatively working together as a single, integrated computing resource.
Evolutionary Computation
Features:
• It simulates the mechanisms of heredity and evolution in living creatures.
• It can be applied to many types of problems.
• It requires a huge computational cost.
• It works on a population of several individuals, so tasks can be divided into subtasks.
→ High-Performance Computing
Parallel Computers
Top500 Ranking (http://www.top500.org)
Rank  Name                # Proc  Rmax (Gflops)
1     ASCI White           8192    4938
2     ASCI Red             9632    2379
3     ASCI Blue Pacific    5808    2144
4     ASCI Blue            6144    1608
5     SP Power III         1336    1417
Commodity Hardware
• Networking: Internet, LAN, WAN, Gigabit, cableless, etc.
• CPU: Pentium, Alpha, Power, etc.
PCs + Networking → PC Clusters
Why PC Cluster?
• High performance
• Low cost
• Easy to set up
• Easy to use
What we already possess:
• Hardware: commodity, off-the-shelf
• Software: open source, freeware
• Peopleware: university students and staff, lab nerds
Top500 Ranking (http://www.top500.org)
Rank  Name                    # Proc  Rmax (Gflops)
60    Los Lobos                 512    237
84    CPlant Cluster            580    232.6
126   CLIC PIII 800 MHz         528    143.3
215   Kepler PIII 650 MHz       196     96.2
396   SCore II/PIII 800 MHz     132     64.7
Contents of this tutorial
• Concept of PC clusters
• Small cluster
• Advanced cluster
• Hardware
• Software
• Books, Web sites, …
• Conclusions
Beowulf Cluster http://beowulf.org/
A Beowulf is a collection of personal computers (PCs) interconnected by widely available networking and running any one of several open-source Unix-like operating systems. Some Linux clusters are built for reliability instead of speed; these are not Beowulfs. The Beowulf Project was started by Donald Becker when he moved to CESDIS in early 1994. CESDIS was located at NASA's Goddard Space Flight Center and was operated for NASA by USRA.
Avalon http://cnls.lanl.gov/Frames/avalon-a.html
Los Alamos National Laboratory. Alpha (140) + Myrinet. A Beowulf cluster: the first Beowulf to appear in the Top500 ranking.
The Berkeley NOW project http://now.cs.berkeley.edu/ The Berkeley NOW project is building system support for using a network of workstations (NOW) to act as a distributed supercomputer on a building-wide scale. April 30, 1997: NOW makes LINPACK Top 500! June 15, 1998: NOW Retreat Finale
Cplant Cluster http://www.cs.sandia.gov/cplant/
Sandia National Laboratory. Alpha (580) + Myrinet.
RWCP Cluster http://pdswww.rwcp.or.jp/
A representative Japanese cluster. SCore, OpenMP, Myrinet.
Doshisha Cluster http://www.is.doshisha.ac.jp/cluster/index.html
Pentium III 0.8 GHz (256) + Fast Ethernet
Pentium III 1.0 GHz (2 x 64) + Myrinet 2000
Simple Cluster
• 8 nodes + gateway (file server)
• Fast Ethernet
• Switching hub
• $10,000
What do we need? Hardware
Normal PCs:
• CPU
• memory
• motherboard
• hard disc
• case
• network card
• cable
• hub
What do we need? Software
• OS
• Tools: editor, compiler
• Parallel library
Message Passing Libraries
PVM (Parallel Virtual Machine) http://www.epm.ornl.gov/pvm/pvm_home.html
PVM was developed at Oak Ridge National Laboratory and the University of Tennessee.
MPI (Message Passing Interface) http://www-unix.mcs.anl.gov/mpi/index.html
MPI is an API for message passing.
1992: MPI Forum. 1994: MPI-1. 1997: MPI-2.
Implementations of MPI
Free implementations:
• MPICH
• LAM
• WMPI (Windows 95, NT)
• CHIMP/MPI
• MPI Light
Vendor implementations (for parallel computers):
• MPI/PRO
Procedure of constructing clusters
1. Prepare several PCs.
2. Connect the PCs.
3. Install the OS and tools.
4. Install development tools and a parallel library.
Installing MPICH/LAM
RPM-based distributions:
# rpm -ivh lam-6.3.3b28-1.i386.rpm
# rpm -ivh mpich-1.2.0-5.i386.rpm
Debian (dpkg):
# dpkg -i lam2_6.3.2-3.deb
# dpkg -i mpich_1.1.2-11.deb
Debian (apt):
# apt-get install lam2
# apt-get install mpich
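After installation, MPI programs are typically compiled with the mpicc wrapper and started with mpirun -np <number of processes>; with LAM, the runtime is usually booted first with lamboot and a host file listing the node names. The package names and versions above are only examples and depend on the distribution.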
Parallel programming (MPI)
[Diagram: a user submits jobs to the gateway of the PC cluster, just as on a massively parallel computer; each job is divided into tasks that run on the nodes.]
Programming style sheet
• Initialization
• Communicator
• Acquiring number of processes
• Acquiring rank
• Termination

#include "mpi.h"
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Comm_size( …… );
    MPI_Comm_rank( …… );
    /* parallel procedure */
    MPI_Finalize();
    return 0;
}
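A complete, runnable version of this skeleton might look as follows (a minimal sketch; the variable names procs and myid are illustrative, and MPI_COMM_WORLD is the default communicator):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int procs, myid;

    MPI_Init(&argc, &argv);                      /* initialization */
    MPI_Comm_size(MPI_COMM_WORLD, &procs);       /* acquire number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);        /* acquire rank */

    printf("process %d of %d\n", myid, procs);   /* parallel procedure */

    MPI_Finalize();                              /* termination */
    return 0;
}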
Communications
• One by one (point-to-point) communication
• Group communication
[Diagram: Process A and Process B each send and receive data.]
One by one communication [Sending]
MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
• void *buf: sending buffer starting address (IN)
• int count: number of data elements (IN)
• MPI_Datatype datatype: data type (IN)
• int dest: rank of the receiving point (IN)
• int tag: message tag (IN)
• MPI_Comm comm: communicator (IN)
One by one communication [Receiving]
MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
• void *buf: receiving buffer starting address (OUT)
• int source: rank of the sending point (IN)
• int tag: message tag (IN)
• MPI_Status *status: status (OUT)
~Hello.c~
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myid, procs, src, dest, tag = 1000, count;
    char inmsg[10], outmsg[] = "hello";
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    count = sizeof(outmsg) / sizeof(char);

    if (myid == 0) {
        src = 1; dest = 1;
        MPI_Send(outmsg, count, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        MPI_Recv(inmsg, count, MPI_CHAR, src, tag, MPI_COMM_WORLD, &stat);
        printf("%s from rank %d\n", inmsg, src);
    } else {
        src = 0; dest = 0;
        MPI_Recv(inmsg, count, MPI_CHAR, src, tag, MPI_COMM_WORLD, &stat);
        MPI_Send(outmsg, count, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        printf("%s from rank %d\n", inmsg, src);
    }
    MPI_Finalize();
    return 0;
}
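Run with two processes (for example, mpirun -np 2 hello): rank 0 prints the greeting it received from rank 1, and rank 1 prints the greeting it received from rank 0.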
One by one communication
The matched pair
MPI_Recv(&inmsg, count, MPI_CHAR, src, tag, MPI_COMM_WORLD, &stat);
MPI_Send(&outmsg, count, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
can be combined into a single call:
MPI_Sendrecv(&outmsg, count, MPI_CHAR, dest, tag, &inmsg, count, MPI_CHAR, src, tag, MPI_COMM_WORLD, &stat);
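As an illustration not taken from the original slides, the sketch below shows a ring exchange in which every process sends its rank to its right neighbour and receives from its left one; a single MPI_Sendrecv call lets the library handle both directions, avoiding the deadlock that naively ordered blocking MPI_Send/MPI_Recv pairs could cause. The variable names are illustrative.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int myid, procs, right, left, sendval, recvval;
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);

    right = (myid + 1) % procs;          /* neighbour I send to */
    left  = (myid - 1 + procs) % procs;  /* neighbour I receive from */
    sendval = myid;

    /* send to the right and receive from the left in one call */
    MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left,  0,
                 MPI_COMM_WORLD, &stat);

    printf("rank %d received %d from rank %d\n", myid, recvval, left);
    MPI_Finalize();
    return 0;
}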
Calculation of PI (approximation)
[Plot: the integrand plotted for 0 ≤ x ≤ 1, 0 ≤ y ≤ 4.]
Parallel conversion:
• The integral is divided into subsections.
• Each subsection is allotted to a processor.
• The results of the calculations are assembled.
(A code sketch follows the MPI_Reduce slide below.)
Group communication: Broadcast
MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
• void *buf: data
• int root: rank of the sending point
[Diagram: the root process sends the same data to all processes.]
Group communication: Communication and operation (reduce)
MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
• MPI_Op op: operation handle (e.g. MPI_SUM, MPI_MAX, MPI_MIN, MPI_PROD)
• int root: rank of the receiving point
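Tying the earlier π slide to MPI_Bcast and MPI_Reduce, here is a minimal sketch in the style of the well-known cpi example, assuming the integrand is 4/(1+x²), whose integral over [0,1] is π; the interval count n and the variable names are illustrative.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int myid, procs, n, i;
    double h, x, sum = 0.0, mypi, pi;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);

    if (myid == 0)
        n = 1000;                        /* rank 0 chooses the number of subsections */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* each process integrates 4/(1+x^2) over its own subsections */
    h = 1.0 / (double)n;
    for (i = myid; i < n; i += procs) {
        x = h * ((double)i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* assemble the partial results on rank 0 */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}

Each process handles every procs-th subsection, so the work is spread evenly, and MPI_Reduce with MPI_SUM assembles the partial sums on rank 0.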
Hardware: CPU
• Intel Pentium III, IV http://www.intel.com/
• AMD Athlon http://www.amd.com/
• Transmeta Crusoe http://www.transmeta.com/
Hardware: Network
• Ethernet, Gigabit Ethernet
• Myrinet
• QsNet
• Giganet
• SCI
• Atoll
• VIA
• InfiniBand
• Wake On LAN
Hardware: Hard disc
• SCSI
• IDE
• RAID
• Diskless cluster http://www.linuxdoc.org/HOWTO/Diskless-HOWTO.html
Hardware: Case
• Box: inexpensive
• Rack: compact, easier maintenance
Software
OS: Linux
• Open source
• Networking
• Freeware
Features:
• The /proc file system
• Loadable kernel modules
• Virtual consoles
• Package management
OS: Linux
Kernels: http://www.kernel.org/
Linux distributions:
• Red Hat www.redhat.com
• Debian GNU/Linux www.debian.org
• S.u.S.E. www.suse.com
• Slackware www.slackware.org
Administration software
• NFS (Network File System)
• NIS (Network Information System)
• NTP (Network Time Protocol)
[Diagram: one server and several clients.]
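In a typical small cluster, the gateway/file server exports the users' home directories over NFS and distributes account information with NIS, so a program compiled once on the server is visible under the same path on every node, while NTP keeps the nodes' clocks synchronized.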
Resource Management and Scheduling
• Process distribution
• Load balance
• Job scheduling of multiple tasks
Tools:
• CONDOR http://www.cs.wisc.edu/condor/
• DQS http://www.scri.fsu.edu/~pasko/dqs.html
• LSF http://www.platform.com/index.html
• The Sun Grid Engine http://www.sun.com/software/gridware/
Tools for Program Development
• Editor: Emacs
• Language: C, C++, Fortran, Java
• Compiler:
  GNU http://www.gnu.org/
  NAG http://www.nag.co.uk
  PGI http://www.pgroup.com/
  VAST http://www.psrv.com/
  Absoft http://www.absoft.com/
  Fujitsu http://www.fqs.co.jp/fort-c/
  Intel http://developer.intel.com/software/products/compilers/index.htm
Tools for Program Development
• Make
• CVS
• Debugger: gdb, TotalView http://www.etnus.com