
Beowulf Computing


Presentation Transcript


  1. Beowulf Computing ASHWINI 1313-09-862-124 MCA III YEAR

  2. “A Beowulf cluster is a group of identical computers, which are running a Free and Open Source Software (FOSS), Unix-like operating system. They are networked into a small TCP/IP LAN, and have libraries and programs installed which allow processing to be shared among them.”

  2. The cluster consists of the following hardware parts:
  Network
  Server / Head / Master Node
  Compute Nodes
  Gateway

  4. All nodes (including the master node) run the following software:
  GNU/Linux OS
  Network File System (NFS)
  Secure Shell (SSH)
  Message Passing Interface (MPI)

  5. Configuring the Nodes
  Add the nodes to the hosts file. All nodes should have a static local IP address set. Edit the hosts file (sudo vim /etc/hosts) as shown below, and remember that you need to do this on all nodes:
  127.0.0.1 localhost
  192.168.1.6 node0
  192.168.1.7 node1
  192.168.1.8 node2
  192.168.1.9 node3
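
  A static address can be set in several ways; a minimal sketch for a Debian/Ubuntu system of this tutorial's era (pre-netplan) is an /etc/network/interfaces stanza on each node. The interface name eth0 and the gateway address are assumptions; the address is the node's own entry from the hosts file above.
  # static address for node0; interface name and gateway are assumptions
  auto eth0
  iface eth0 inet static
      address 192.168.1.6
      netmask 255.255.255.0
      gateway 192.168.1.1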

  6. Once saved, you can use the host names to connect to the other nodes:
  $ ping node0
  PING node0 (192.168.1.6) 56(84) bytes of data.
  64 bytes from node0 (192.168.1.6): icmp_req=1 ttl=64 time=0.032 ms
  64 bytes from node0 (192.168.1.6): icmp_req=2 ttl=64 time=0.033 ms
  64 bytes from node0 (192.168.1.6): icmp_req=3 ttl=64 time=0.027 ms

  7. Defining a user for running MPI jobs
  All nodes can use the same username and password. MPICH2 uses SSH for communication between nodes. The NFS directory can be made accessible to the MPI user only.
  $ sudo adduser mpiuser --uid 999
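
  A quick check (not in the original slides) that the account exists with the intended UID on every node:
  $ id mpiuser
  The output should report uid=999(mpiuser).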

  8. Set up passwordless SSH for communication between nodes
  Install the SSH server on all nodes:
  $ sudo apt-get install ssh
  Generate an SSH key for the MPI user on all nodes:
  $ su mpiuser
  $ ssh-keygen -t rsa
  When asked for a passphrase, leave it empty (hence passwordless SSH).

  9. The master node needs to be able to log in to the compute nodes automatically.
  mpiuser@node0:~$ ssh-copy-id node1
  mpiuser@node0:~$ ssh-copy-id node2
  mpiuser@node0:~$ ssh-copy-id node3
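
  To confirm that passwordless login works, a simple check (an addition, assuming the keys were copied as above) is to run a command on a compute node from the master; it should complete without a password prompt:
  mpiuser@node0:~$ ssh node1 hostname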

  10. Install and set up the Network File System
  To install NFS:
  $ sudo apt-get install nfs-kernel-server
  To create the shared directory:
  $ sudo mkdir /mirror
  Make sure this directory is owned by the MPI user:
  $ sudo chown mpiuser:mpiuser /mirror

  11. Share the contents of the /mirror directory on the master node
  The file /etc/exports on the master node needs to be edited:
  /mirror *(rw,sync,no_subtree_check)
  Restart the NFS server to load the new configuration:
  node0:~$ sudo /etc/init.d/nfs-kernel-server restart
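
  Optionally, the export can be verified (these checks are not in the original slides; showmount is typically provided by the nfs-common package):
  node0:~$ sudo exportfs -v
  node1:~$ showmount -e node0
  The first command lists what the master is exporting; the second asks the master for its export list from a compute node.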

  12. Run the following command to allow incoming access from a specific subnet:
  node0:~$ sudo ufw allow from 192.168.1.0/24
  The /mirror directory can be mounted automatically when the nodes boot. For this, the file /etc/fstab needs to be edited. Add the following line to the fstab file of all compute nodes:
  node0:/mirror /mirror nfs
  List the contents of the /mirror directory:
  $ ls /mirror
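
  The export can also be mounted right away, without rebooting; a manual mount, independent of fstab, looks like this (the NFS client tools, nfs-common on Ubuntu, need to be installed on the compute nodes):
  node1:~$ sudo apt-get install nfs-common
  node1:~$ sudo mount node0:/mirror /mirror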

  13. Setting up MPD
  mpiuser@node0:~$ touch ~/mpd.hosts
  mpiuser@node0:~$ touch ~/.mpd.conf
  Add the names of all nodes to the mpd.hosts file:
  node0
  node1
  node2
  node3

  14. The configuration file .mpd.conf needs to be accessible to the MPI user only:
  mpiuser@node0:~$ chmod 600 ~/.mpd.conf
  Then add a line with a secret passphrase to the configuration file:
  secretword=random_text_here

  15. All nodes need to have a .mpd.conf file in the home folder of mpiuser with the same passphrase:
  mpiuser@node0:~$ scp -p .mpd.conf node1:/home/mpiuser/
  mpiuser@node0:~$ scp -p .mpd.conf node2:/home/mpiuser/
  mpiuser@node0:~$ scp -p .mpd.conf node3:/home/mpiuser/

  16. To start the mpd daemon on all nodes:
  mpiuser@node0:~$ mpdboot -n 4
  To check if all nodes entered the ring (and are thus running the mpd daemon):
  mpiuser@node0:~$ mpdtrace -l
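
  Two more commands from the same MPD tool set are useful at this point: mpdringtest times a message making a given number of loops around the ring, and mpdallexit shuts the ring down when you are finished:
  mpiuser@node0:~$ mpdringtest 100
  mpiuser@node0:~$ mpdallexit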

  17. Running jobs on the cluster
  Compilation:
  $ cd mpich2-1.3.2p1/
  $ ./configure
  $ make
  $ cd examples/
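
  Slide 18 runs the binary from /mirror/bin, which suggests MPICH2 was installed into the shared NFS directory and the compiled example copied there. One plausible way to get that layout (the --prefix flag is standard MPICH2 configure usage; the copy step is an assumption) would be:
  $ ./configure --prefix=/mirror
  $ make
  $ sudo make install
  $ cp examples/cpi /mirror/bin/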

  18. Run an MPI job using the example application cpi:
  mpiuser@node0:~$ mpiexec -n 4 /mirror/bin/cpi
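
  The same workflow applies to your own programs: compile with the mpicc wrapper into the shared directory and launch with mpiexec. Here hello.c stands for a hypothetical MPI source file, not part of the slides:
  mpiuser@node0:~$ mpicc -o /mirror/bin/hello hello.c
  mpiuser@node0:~$ mpiexec -n 4 /mirror/bin/hello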

  19. THANK YOU
