
Introduction to TAMNUN server and basics of PBS usage



  1. Introduction to TAMNUN server and basics of PBS usage Yulia Halupovich, CIS, Core Systems Group

  2. TAMNUN LINKS
• Registration: http://reg-tamnun.technion.ac.il
• Documentation and Manuals: http://tamnun.technion.ac.il/doc/
• Help Pages & Important Documents: http://tamnun.technion.ac.il/doc/Local-help/
• Accessible from the external network: http://www.technion.ac.il/doc/tamnun/

  3. Tamnun Cluster inventory – admin nodes
• Login node (2 x Intel E5645 6-core @ 2.4 GHz, 96 GB): user login, PBS, compilations, YP master
• Admin node (2 x Intel E5-2640 6-core @ 2.5 GHz, 64 GB): SMC
• NAS node (NFS, CIFS) (2 x Intel E5620 4-core @ 2.4 GHz, 48 GB): 1st enclosure – 60 slots, 60 x 1 TB drives; 2nd enclosure – 60 slots, 10 x 3 TB drives
• Network solution: 14 QDR InfiniBand switches with 2:1 blocking topology; 4 GigE switches for the management network

  4. Tamnun Cluster inventory – compute nodes (1)
Tamnun consists of a public cluster, available to general Technion users, and private sub-clusters purchased by Technion researchers.
Public cluster specifications:
• 80 compute nodes, each with two 2.40 GHz six-core Intel Xeon processors: 960 cores in total, with 8 GB DDR3 memory per core
• 4 Graphical Processing Units (GPU): 4 servers with NVIDIA Tesla M2090 GPU Computing Modules, 512 CUDA cores
• Storage: 36 nodes with 500 GB and 52 nodes with 1 TB SATA drives, 4 nodes with fast 1200 GB SAS drives; raw NAS storage capacity is 50 TB

  5. Tamnun Cluster inventory – compute nodes (2)
• Nodes n001 – n028 - RBNI (public)
• Nodes n029 – n080 - Minerva (public)
• Nodes n097 – n100 - “Gaussian” nodes with a large and fast drive (public)
• Nodes gn001 – gn004 - GPU (public)
• Nodes gn005 – gn007 - GPU (private nodes of Hagai Perets)
• Nodes n081 – n096, sn001 - private cluster (Dan Mordehai)
• Nodes n101 – n108 - private cluster (Oded Amir)
• Nodes n109 – n172 - private cluster (Steven Frankel)
• Nodes n173 – n180 - private cluster (Omri Barak)
• Nodes n181 – n184 - private cluster (Rimon Arieli)
• Node sn002 - private node (Fabian Glaser)
• Nodes n193 – n216 - private cluster (Maytal Caspary)

  6. TAMNUN connection - general guidelines
1. Connection via server TX and GoGlobal (also from abroad): http://tx.technion.ac.il/doc/tamnun/TAMNUN_Connection_from_Windows.txt
2. Interactive usage: compiling, debugging and tuning only!
3. Interactive CPU time limit = 1 hour
4. Tamnun login node: use it to submit jobs via PBS batch queues to the compute nodes (see the PBS pages below)
5. Default quota = 50 GB; check with quota -vs username, see http://tx.technion.ac.il/doc/tamnun/Quota_enlargement_policy.txt
6. Secure file transfer: outside the Technion, scp to TX and WinSCP to/from a PC; inside the Technion, use WinSCP (see the sketch below)
7. Dropbox usage is not allowed on Tamnun!
8. No automatic data backup is available on Tamnun, see http://tx.technion.ac.il/doc/tamnun/Tamnun-Backup/
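A minimal connection sketch, assuming direct ssh access from inside the Technion network (the user name, directory and file names are placeholders; from outside the Technion connect via TX or GoGlobal as in item 1):
> ssh username@tamnun.technion.ac.il                      # log in to the Tamnun login node
> scp input.file username@tamnun.technion.ac.il:mydir/    # copy a file into your home directory
> quota -vs username                                      # check usage against the 50 GB default quota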

  7. Portable Batch System – Definition and 3 Primary Roles
• Definition: PBS is a distributed workload management system. It handles the management and monitoring of the computational workload on a set of computers
• Queuing: users submit tasks or “jobs” to the resource management system, where they are queued up until the system is ready to run them
• Scheduling: the process of selecting which jobs to run, when, and where, according to a predetermined policy, aimed at balancing competing needs and goals on the system(s) to maximize efficient use of resources
• Monitoring: tracking and reserving system resources and enforcing usage policy. This includes both software enforcement of usage limits and user or administrator monitoring of scheduling policies

  8. Important PBS Links on Tamnun
• PBS User Guide: http://tamnun.technion.ac.il/doc/PBS-User-Guide.pdf
• Basic PBS Usage Instructions: http://tamnun.technion.ac.il/doc/Local-help/TAMNUN_PBS_Usage.pdf
• Detailed Description of the Tamnun PBS Queues: http://tamnun.technion.ac.il/doc/Local-help/TAMNUN_PBS_Queues_Description.pdf
• PBS script examples: http://tamnun.technion.ac.il/doc/Local-help/PBS-scripts/

  9. Current Public Queues on TAMNUN
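The queue table can always be listed directly on the login node, for example (queue_name is a placeholder):
> qstat -Q                     # one-line summary of every queue: state and job counts
> qstat -Qf queue_name         # full limits and defaults for a single queue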

  10. Submitting jobs to PBS: qsub command
• The qsub command is used to submit a batch job to PBS. Submitting a PBS job specifies a task, requests resources and sets job attributes, which can be defined in an executable script file. The qsub syntax recommended on TAMNUN is:
> qsub [options] scriptfile
• PBS script files (PBS shell scripts, see the next page) should be created in the user’s directory
• To obtain detailed information about qsub options, please use the command:
> man qsub
• Job identifier (JOB_ID): upon successful submission of a batch job, PBS returns a job identifier of the form sequence_number.server_name, e.g. 12345.tamnun
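As an illustration, a submission that also requests resources on the command line might look as follows (the script name, queue name and returned identifiers are placeholders):
> qsub -q queue_name myscript.sh
12345.tamnun
> qsub -q queue_name -l select=1:ncpus=12 -l walltime=24:00:00 myscript.sh
12346.tamnun
Options given on the command line take precedence over the corresponding #PBS directives inside the script.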

  11. The PBS shell script sections
• Shell specification: #!/bin/sh
• PBS directives: used to request resources or set attributes. A directive begins with the default string “#PBS”.
• Tasks (programs or commands): environment definitions, I/O specifications, executable specifications
NB! Other lines starting with # are treated as comments

  12. PBS script example for multicore user code (N = number of cores, P = memory in GB)
#!/bin/sh
#PBS -N job_name                      # job name
#PBS -q queue_name                    # destination queue
#PBS -M user@technion.ac.il           # e-mail address for notifications
#PBS -l select=1:ncpus=N:mem=Pgb      # one node, N cores, P GB of memory
#PBS -l walltime=24:00:00             # maximum run time
PBS_O_WORKDIR=$HOME/mydir
cd $PBS_O_WORKDIR
./program.exe < input.file > output.file
More examples: http://tamnun.technion.ac.il/doc/Local-help/PBS-scripts/
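A usage sketch, assuming the lines above are saved as myjob.sh (the file name and job identifier are placeholders):
> qsub myjob.sh
12345.tamnun
# after the job finishes, PBS writes its standard output and error to
#   job_name.o12345  and  job_name.e12345  in the submission directory,
# alongside output.file produced by the program itself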

  13. Checking the job status: qstat command
• The qstat command is used to request the status of batch jobs, queues, or servers
• Detailed information: > man qstat
• qstat output structure (see on Tamnun)
• Useful commands:
> qstat -a                     all users in all queues (default)
> qstat -1n                    all jobs with node names
> qstat -1nu username          all of a user's jobs with node names
> qstat -f JOB_ID              extended output for the job
> qstat -1Gn queue_name        all jobs in a queue with nodes
> qstat -Qf queue_name         extended queue details
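For example, to follow a single job through the queue (the identifier is the one returned by qsub; in the state column Q = queued, R = running, E = exiting):
> qstat -1n 12345.tamnun       # where the job runs and its current state
> qstat -f 12345.tamnun        # resources requested and used, scheduler comments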

  14. Removing a job from a queue: qdel command
• qdel is used to delete queued or running jobs. The job's running processes are killed. A PBS job may be deleted by its owner or by the administrator
• Detailed information: > man qdel
• Useful commands:
> qdel JOB_ID                  deletes the job from a queue
> qdel -W force JOB_ID         force-deletes the job
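For example, removing the job from the earlier sketch (the identifier is a placeholder):
> qdel 12345.tamnun
> qdel -W force 12345.tamnun   # only if the normal qdel fails to remove the job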

  15. Checking a job's results and troubleshooting
• Save the JOB_ID for further inspection
• Check the error and output files: job_name.eJOB_ID, job_name.oJOB_ID
• Inspect the job's details (also up to N days back):
> tracejob [-n N] JOB_ID
• A job in the E state still occupies resources and will be deleted
• Running an interactive batch job:
> qsub -I pbs_script
The job is sent to an execution node, the PBS directives are executed, and the job awaits the user's commands
• Checking the job on an execution node:
> ssh node_name
> hostname
> top                          press 'u' + username to show the user's processes; press '1' for per-CPU usage
> kill -9 PID                  remove the job's process from the node
> ls -rtl /gtmp                check error, output and other files under the user's ownership
• Output can be copied from the node to the home directory (see the sketch below)
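A possible inspection session, assuming the job runs on node n042 and writes temporary files to /gtmp (the node name and file names are placeholders):
> ssh n042
> top                                   # 'u' + username filters to your processes, '1' shows per-CPU load
> ls -rtl /gtmp                         # locate your output and error files
> cp /gtmp/output.file $HOME/mydir/     # copy the results back to the home directory
> exit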

  16. Monitoring the system
• pbsnodes is used to query the status of hosts. Syntax:
> pbsnodes node_name/node_list
Shows extended information on a node: resources available, resources used, queues list, busy/free status, jobs list
• xpbsmon & provides a graphical display of the nodes that run jobs. With this utility you can see which job is running on which node, who owns the job, how many nodes are assigned to a job, the status of each node (color-coded; the colors are user-modifiable), and how many nodes are available, free, down, reserved, offline, of unknown status, in use running multiple jobs, or executing only one job
• Detailed information and tutorials: > man xpbsmon
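For example (the node name is a placeholder; xpbsmon needs an X connection, e.g. a login with ssh -X):
> pbsnodes n042                # state, jobs and resources of a single compute node
> xpbsmon &                    # start the graphical monitor in the background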
