Cheat sheet for HPC User environment
2nd draft – for discussion
How to login
• ssh penguin.memphis.edu
• One of the 4 login nodes will be used, round-robin
• You are required to change your password
• You will be placed at /home/username, on the Panasas system
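A minimal login sketch, assuming the usual OpenSSH client and the standard passwd mechanism for the required password change (username is a placeholder):
ssh username@penguin.memphis.edu   # one of the 4 login nodes answers, round-robin
passwd                             # assumption: the required password change uses the standard passwd prompt
pwd                                # prints your home directory, /home/username, on the Panasas system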
Available Global and Local File Systems
• For secured daily back-up, be sure to follow Donnie's instructions
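To see which global and local file systems a node actually mounts, the standard commands below work; the panfs filter is an assumption about how the Panasas mounts are labeled:
df -h                   # all mounted file systems and their free space
mount | grep panfs      # Panasas mounts normally show up with type panfs (assumption)
echo $HOME              # your home directory, e.g. /home/username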
Primary HPC Applications
• Most are installed under /opt
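A quick way to see what is installed there:
ls /opt                 # top-level directories of the installed applications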
How to submit jobs
• The batch queue software is MOAB + TORQUE
• It has a single queue with multiple pools of resources (called features)
• Input/output files will be in your working directory ($PBS_O_WORKDIR)
• Screen output will be in the file JNAME.oJID
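A minimal submit-and-check sequence (run.sh and the job ID are taken from the qstat sample later in this sheet; treat them as placeholders):
qsub run.sh             # submit the job script; prints the job ID, e.g. 1134.scyld.localdomain
qstat                   # list your queued and running jobs
qstat -f 1134           # full details of one job (see the sample output later)
cat JNAME.o1134         # screen output, written to JNAME.oJID when the job finishes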
Script to submit a serial job
#!/bin/sh
#PBS -j oe
#PBS -l nodes=1:default
#PBS -N serial_job
cd $PBS_O_WORKDIR
ls -tl
/usr/bin/time serial.exe
• Replace default with other resources (features) as needed
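For example, requesting a node from a different pool only changes the resource line (the scratch feature appears in the qstat sample later in this sheet; serial.sh is a placeholder name for a file holding the script above):
#PBS -l nodes=1:scratch        # ask for the "scratch" feature instead of "default"
qsub serial.sh                 # submit; screen output lands in serial_job.oJID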
Additional job control parameters
• Beyond the defaults
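As a hedged illustration, some standard TORQUE/PBS directives that are commonly added beyond the defaults (the e-mail address is a placeholder, not a site requirement):
#PBS -l walltime=02:00:00      # wall-clock limit for the job
#PBS -l mem=4gb                # memory request
#PBS -M user@memphis.edu       # placeholder address for job e-mail
#PBS -m abe                    # mail on abort, begin, and end
#PBS -q batch                  # queue name; this cluster has a single queue, batch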
Script to submit an SMP job
#!/bin/sh
#PBS -l nodes=1:default:ppn=NUM
#PBS -N stream_c
#PBS -j oe
export OMP_NUM_THREADS=NUM
cd $PBS_O_WORKDIR
./stream_c.exe > stream_c.out-omp2
• Change NUM to the number of threads you need
• Replace default with other resources (features) as needed
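As a concrete sketch, the two NUM lines for an 8-thread run (8 is chosen only for illustration) would read:
#PBS -l nodes=1:default:ppn=8
export OMP_NUM_THREADS=8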
Sample output from qstat -f
Job Id: 1134.scyld.localdomain
    Job_Name = hello_c
    Job_Owner = wdchen@login0-storage.localdomain
    job_state = R
    queue = batch
    server = head0.localdomain
    Checkpoint = u
    ctime = Wed May 13 08:31:48 2009
    Error_Path = login0:/home/wdchen/stream/hello_c.e1134
    exec_host = n48/4+n48/3+n48/2+n48/1+n48/0
    Hold_Types = n
    Join_Path = oe
    Keep_Files = n
    Mail_Points = a
    mtime = Wed May 13 08:31:51 2009
    Output_Path = login0:/home/wdchen/stream/hello_c.o1134
    Priority = 0
    qtime = Wed May 13 08:31:48 2009
    Rerunable = True
    Resource_List.nodect = 1
    Resource_List.nodes = 1:scratch:ppn=5
    session_id = 1130
    Variable_List = PBS_O_HOME=/home/wdchen,PBS_O_LANG=en_US.UTF-8,
        PBS_O_LOGNAME=wdchen,
        PBS_O_PATH=./:/usr/kerberos/bin:/opt/intel/Compiler/11.0/081/bin/intel64:/opt/intel/Compiler/11.0/081/bin/intel64:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/share/pvm3/lib:/home/wdchen/bin:/opt/intel/Compiler/11.0/081/bin/intel64:/opt/intel/mpi/3.1/bin/,
        PBS_O_MAIL=/var/spool/mail/wdchen,PBS_O_SHELL=/bin/bash,
        PBS_SERVER=login0,PBS_O_HOST=login0,PBS_O_WORKDIR=/home/wdchen/stream,
        PBS_O_QUEUE=batch
    etime = Wed May 13 08:31:48 2009
    submit_args = run.sh
    start_time = Wed May 13 08:31:50 2009
    start_count = 1
• exec_host shows the cores assigned to your job (here, cores 0–4 on node n48)
How to submit an MPI job
• This will depend on
  • Compute network
    • InfiniBand or Gigabit Ethernet (not recommended)
  • Compiler
    • GNU, Intel, PGI – will be available on the HPC cluster
    • Others – no plan to support
  • MPI library
    • MPICH, MVAPICH – default on the HPC cluster
    • OpenMPI, IntelMPI – optional in the future
    • Others – no plan to support
Script to submit an MPI job
If your app was compiled with GNU:
#!/bin/sh
#PBS -j oe
#PBS -l nodes=8:default:ppn=8
# this is a 64-way MPI job
cd $PBS_O_WORKDIR
mpirun -machine vapi ./job-mpi.exe

If your app was compiled with the Intel compiler:
#!/bin/sh
#PBS -j oe
#PBS -l nodes=8:default:ppn=8
# this is a 64-way MPI job
cd $PBS_O_WORKDIR
source /opt/intel/Compiler/11.0/081/bin/intel64/ifortvars_intel64.sh
mpirun -machine vapi ./job-mpi.exe

• Replace default with other resources (features) as needed
• To use InfiniBand (and MVAPICH): -machine vapi
• To use Gigabit Ethernet (and MPICH): -machine p4 (NOT recommended)
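A hedged submit example for either variant (run-mpi.sh is a placeholder name for a file containing one of the scripts above):
qsub run-mpi.sh          # 8 nodes x 8 cores per node = 64 MPI ranks
qstat                    # check that the job is queued or running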
Compilers for serial and OpenMP (SMP) apps
* For OpenMP C and F90, the Intel compiler will be used
@ Exact path: /opt/intel/Compiler/11.0/081/bin/intel64
How to choose between compilers?
- GNU compilers are the default (with exceptions)
- Be sure to check PATH and LD_LIBRARY_PATH
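A sketch of verifying which compiler your shell will actually pick up; the source line matches the one used in the Intel MPI script above, and which/echo are standard checks:
which gcc gfortran       # GNU compilers, the default
source /opt/intel/Compiler/11.0/081/bin/intel64/ifortvars_intel64.sh
which ifort              # should now resolve under /opt/intel/Compiler/11.0/081/bin/intel64
echo $PATH
echo $LD_LIBRARY_PATH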
Compiling an MPI application
• To be covered later
Operating Systems
• CentOS 4.7, which is based on RHEL 4.7 (released Sept. 2008)
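To confirm the release and kernel on a node (standard commands; the sample output is indicative only):
cat /etc/redhat-release    # e.g. CentOS release 4.7 (Final)
uname -r                   # running kernel version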
MPI Libraries
- Build MPI support via each compiler
- How to select MPI drivers?
Compilers for MPI apps (to confirm)
* For F90, the Intel compiler will be used