Beowulf Computing ASHWINI 1313-09-862-124 MCA III YEAR
“A Beowulf cluster is a group of identical computers, which are running a Free and Open Source Software (FOSS), Unix-like operating system. They are networked into a small TCP/IP LAN, and have libraries and programs installed which allow processing to be shared among them.”
The cluster consists of the following hardware parts:
Network
Server / Head / Master Node
Compute Nodes
Gateway
All nodes (including the master node) run the following software:
GNU/Linux OS
Network File System (NFS)
Secure Shell (SSH)
Message Passing Interface (MPI)
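On a Debian/Ubuntu-style system (an assumption consistent with the apt-get commands used later), the base packages can be pulled in once per node before the individual steps below; MPICH2 itself is compiled from source in a later step. A minimal sketch:
# assumption: Debian/Ubuntu-based GNU/Linux with apt-get
# (ssh and NFS are also installed step by step in the sections below)
$ sudo apt-get install ssh nfs-kernel-server nfs-common build-essential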
Configuring the Nodes
Add the nodes to the hosts file. All nodes should have a static local IP address set. Edit the hosts file (sudo vim /etc/hosts) as shown below; remember to do this on all nodes:
127.0.0.1 localhost
192.168.1.6 node0
192.168.1.7 node1
192.168.1.8 node2
192.168.1.9 node3
Once saved, you can use the host names to connect to the other nodes:
$ ping node0
PING node0 (192.168.1.6) 56(84) bytes of data.
64 bytes from node0 (192.168.1.6): icmp_req=1 ttl=64 time=0.032 ms
64 bytes from node0 (192.168.1.6): icmp_req=2 ttl=64 time=0.033 ms
64 bytes from node0 (192.168.1.6): icmp_req=3 ttl=64 time=0.027 ms
Defining a user for running MPI jobs
All nodes can use the same username and password. MPICH2 uses SSH for communication between nodes. The NFS directory can be made accessible for the MPI user only.
$ sudo adduser mpiuser --uid 999
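The fixed --uid matters because NFS identifies file owners by numeric UID, so mpiuser must have the same UID on every node for the shared directory permissions to line up. A sketch of the step repeated per node (adduser prompts for the password interactively; use the same one everywhere):
# run on node0, node1, node2 and node3; the shared UID keeps NFS ownership consistent
$ sudo adduser mpiuser --uid 999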
Setup passwordless SSH for communication between nodes
Install the SSH server on all nodes:
$ sudo apt-get install ssh
Generate an SSH key for the MPI user on all nodes:
$ su mpiuser
$ ssh-keygen -t rsa
When asked for a passphrase, leave it empty (hence passwordless SSH).
The master node needs to be able to automatically log in to the compute nodes:
mpiuser@node0:~$ ssh-copy-id node1
mpiuser@node0:~$ ssh-copy-id node2
mpiuser@node0:~$ ssh-copy-id node3
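A quick way to confirm the keys were copied correctly is to run a command over SSH; it should complete without asking for a password (a sanity check, not part of the original steps):
# should print the remote node's hostname with no password prompt
mpiuser@node0:~$ ssh node1 hostname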
Install and setup the Network File System
To install NFS:
$ sudo apt-get install nfs-kernel-server
To create the directory:
$ sudo mkdir /mirror
Make sure this directory is owned by the MPI user:
$ sudo chown mpiuser:mpiuser /mirror
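Only the master node has to act as an NFS server; on the compute nodes, which merely mount the share, the client package is sufficient (an assumption about the intended layout, since the slide does not say which nodes the install targets):
# on node1, node2 and node3: the NFS client tools are enough to mount /mirror
$ sudo apt-get install nfs-common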
Share the contents of the /mirror directory on the master node. The file /etc/exports on the master node needs to be edited:
/mirror *(rw,sync,no_subtree_check)
Restart the NFS server to load the new configuration:
node0:~$ sudo /etc/init.d/nfs-kernel-server restart
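The wildcard shares /mirror with any host that can reach the server; a slightly tighter variant restricts it to the cluster subnet, and exportfs can re-read the file without a full restart (both are optional alternatives, not part of the original slide):
# /etc/exports on node0, limited to the 192.168.1.0/24 LAN
/mirror 192.168.1.0/24(rw,sync,no_subtree_check)
# re-export and list what is currently shared
node0:~$ sudo exportfs -ra
node0:~$ sudo exportfs -v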
Run the following command to allow incoming access from a specific subnet:
node0:~$ sudo ufw allow from 192.168.1.0/24
The /mirror directory can be automatically mounted when the nodes are booted. For this the file /etc/fstab needs to be edited. Add the following line to the fstab file of all compute nodes:
node0:/mirror /mirror nfs
List the contents of the /mirror directory:
$ ls /mirror
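The fstab entry above omits the mount options and the dump/pass fields; a more complete line is shown below ("defaults" is an assumption that works for most setups), together with a way to mount the share immediately instead of rebooting:
# /etc/fstab on each compute node
node0:/mirror  /mirror  nfs  defaults  0  0
# mount it right away using the fstab entry
node1:~$ sudo mount /mirror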
Setting up MPD
mpiuser@node0:~$ touch ~/mpd.hosts
mpiuser@node0:~$ touch ~/.mpd.conf
Add the names of all nodes to the mpd.hosts file:
node0
node1
node2
node3
The configuration file .mpd.conf needs to be accessible to the MPI user only:
mpiuser@node0:~$ chmod 600 ~/.mpd.conf
Then add a line with a secret passphrase to the configuration file:
secretword=random_text_here
All nodes need to have a .mpd.conf file in the home folder of mpiuser with the same passphrase:
mpiuser@node0:~$ scp -p .mpd.conf node1:/home/mpiuser/
mpiuser@node0:~$ scp -p .mpd.conf node2:/home/mpiuser/
mpiuser@node0:~$ scp -p .mpd.conf node3:/home/mpiuser/
To start the mpd daemon on all nodes:
mpiuser@node0:~$ mpdboot -n 4
To check if all nodes entered the ring (and are thus running the mpd daemon):
mpiuser@node0:~$ mpdtrace -l
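Before running a real MPI program, the ring can be exercised with an ordinary command, and mpdallexit (part of MPICH2's MPD tool set) shuts the ring down again when the cluster is done; a sketch, not part of the original steps:
# run a trivial command once per ring member; each node reports its own hostname
mpiuser@node0:~$ mpiexec -n 4 hostname
# tear the mpd ring down when finished
mpiuser@node0:~$ mpdallexit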
Running jobs on the cluster
Compilation:
$ cd mpich2-1.3.2p1/
$ ./configure
$ make
$ cd examples/
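The next step runs cpi from /mirror/bin, so the compiled example has to end up on the NFS share somehow; one simple way (an assumption, since the slide does not show this step) is to copy the freshly built binary there:
# make the example visible to every node via the shared directory;
# the bin/ subdirectory is an assumption taken from the /mirror/bin/cpi path below
$ mkdir -p /mirror/bin
$ cp cpi /mirror/bin/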
Run an MPI job using the example application cpi:
mpiuser@node0:~$ mpiexec -n 4 /mirror/bin/cpi
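With the MPD ring in place, -n controls the number of MPI processes, which mpd distributes over the ring; asking for more processes than nodes simply places several on the same machine (a usage note; the process count here is arbitrary):
# eight processes spread over the four nodes in the ring
mpiuser@node0:~$ mpiexec -n 8 /mirror/bin/cpi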