
Essential Cluster OS Commands


blancaf


  1. Essential Cluster OS Commands Class 3

  2. SSH • ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine. It is intended to replace rlogin and rsh, and to provide secure encrypted communications between two untrusted hosts over an insecure network. • Usage: ssh [-l login_name] hostname | user@hostname [command] • Examples:
  ssh -l peter tdgrocks.sci.hkbu.edu.hk
  ssh peter@tdgrocks.sci.hkbu.edu.hk

  3. Common Linux Command • Getting Help • man [command] - manual pages • apropos [keyword] - Searches the manual pages for the keyword • Directory Movement • pwd - current directory path • cd - change directory

  4. Common Linux Command • File/Directory Viewing • ls - list • cat - display entire file • more - page through file • less - page forward and backward through file • head - view first ten lines of file • tail - view last ten lines of file
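A quick way to try these viewers is a throwaway file (the file name here is invented for illustration):

```shell
# Create a small sample file, then view it in different ways.
seq 1 20 > sample.txt    # a 20-line file containing the numbers 1..20
cat sample.txt           # print the whole file at once
head sample.txt          # first ten lines (1..10)
head -3 sample.txt       # first three lines only
tail sample.txt          # last ten lines (11..20)
tail -2 sample.txt       # last two lines (19 and 20)
rm sample.txt            # clean up
```

For long files, `more sample.txt` or `less sample.txt` pages through the content interactively, which a static example cannot show.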

  5. Common Linux Command • File/Directory Control • cp - copy • mv - move/rename • rm - remove • mkdir - make directory • rmdir - remove directory • ln - create pseudonym (link) • chmod - change permissions • touch - update access time (or create blank file)
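These commands chain together naturally; a minimal walk-through in a scratch directory (all names invented):

```shell
mkdir scratch && cd scratch   # make and enter a working directory
touch notes.txt               # create an empty file (or update its timestamp)
cp notes.txt backup.txt       # copy it
mv backup.txt notes.bak       # rename the copy
ln -s notes.txt alias.txt     # symbolic link, a "pseudonym" for notes.txt
chmod 644 notes.txt           # rw for the owner, read-only for everyone else
rm notes.txt notes.bak alias.txt
cd .. && rmdir scratch        # rmdir only removes an *empty* directory
```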

  6. Common Linux Command • Searching • locate - list files in filename database • find - recursive file search • grep - search file (also see "egrep" & "fgrep") • Text Editors • vim – text editor • pico - another text editor • emacs - another text editor • nano - and another text editor
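For instance, grep and find on a small test file (contents invented for the example):

```shell
# grep prints lines that match a pattern; find walks a directory tree.
printf 'alpha\nbeta\nalphabet\n' > words.txt
grep alpha words.txt      # matches both "alpha" and "alphabet"
grep -c alpha words.txt   # -c counts matching lines: 2
grep -w alpha words.txt   # -w matches whole words only: just "alpha"
find . -name '*.txt'      # every .txt file below the current directory
rm words.txt
```

`locate` consults a prebuilt filename database (refreshed periodically), so it is much faster than `find` but may miss very recently created files.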

  7. Common Linux Command • Compression • tar - tape archiver • gzip - GNU compression utility • bzip2 - compression and package utility • unzip - uncompress zip files • Session and Terminal • history - command history • clear - clear screen
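A typical archive round trip, sketched with an invented directory name:

```shell
mkdir project && echo hello > project/readme
tar -cf project.tar project    # -c: create an archive of the directory
gzip project.tar               # compress it to project.tar.gz
gzip -d project.tar.gz         # -d: decompress back to project.tar
tar -xf project.tar            # -x: extract the contents again
tar -czf project.tgz project   # -z does the tar and gzip steps in one go
rm -r project project.tar project.tgz
```

`bzip2` is used like `gzip` (producing .bz2 files, usually smaller but slower to create), and `unzip archive.zip` unpacks zip archives.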

  8. Common Linux Command • User Information • yppasswd - change user password (not available in our cluster) • finger - display user(s) data, includes full name • who - display user(s) data • w - display user(s) current activity • System Usage • ps - show processes • kill - kill process • uptime - system usage & uptime

  9. Common Linux Command • Misc. • ftp - simple File Transfer Protocol client • sftp - Secure File Transfer Protocol client • ssh - Secure Shell • ispell - interactively check spelling against system dictionary • date - display date and time • cal - display calendar • wget - web content retriever (mirror)
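date accepts a `+FORMAT` argument for custom output; a few examples:

```shell
date              # current date and time in the default format
date +"%Y-%m-%d"  # ISO-style date, e.g. 2024-07-01
date +"%j"        # day of the year (001-366)
# cal 7 2024      # would print the calendar for July 2024
```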

  10. Cluster-fork • Rocks provides a simple tool called cluster-fork for running the same command on every compute node. For example, to list all your processes on the compute nodes of the cluster: cluster-fork ps -U$USER • Cluster-fork is smart enough to ignore dead nodes. Usually the job is "blocking": cluster-fork waits for the job to complete on one node before moving to the next.

  11. Cluster-fork • The following example lists the processes for the current user on nodes 1-5, 7 and 9:
  cluster-fork --nodes="cp0-%d:1-5 cp0-%d:7,9" ps -U$USER

  12. Table of Contents Page • Open a web browser and type http://tdgrocks.sci.hkbu.edu.hk in the location bar. • If you can successfully connect to the cluster's web server, you will be greeted with the Rocks Table of Contents page. This simple page has links to the monitoring services available for this cluster.

  13. Table of Contents Page

  14. Cluster Status (Ganglia) • The web pages available from this link provide a graphical interface to live cluster information provided by Ganglia monitors running on each cluster node. • The monitors gather values for various metrics such as CPU load, free memory, disk usage, network I/O, operating system version, etc. • In addition to metric parameters, a heartbeat message from each node is collected by the ganglia monitors. • When a number of heartbeats from any node are missed, this web page will declare it "dead". These dead nodes often have problems which require additional attention, and are marked with the Skull-and-Crossbones icon, or a red background. • This page has many options, most of which are hopefully somewhat self-explanatory. • The data is very fresh (usually only a few seconds old), and is updated with each page load. • See the ganglia website for more information about this powerful tool.

  15. Cluster Status (Ganglia)

  16. Cluster Status (Ganglia)

  17. Cluster Top • This page is a version of the standard "top" command for your cluster. This page presents process information from each node in the cluster. It is useful for monitoring the precise activity of your nodes. • The Cluster Top differs from standard top in several respects. Most importantly, each row has a "HOST" designation and a "TN" attribute that specifies its age. Since taking a process measurement itself requires resources, compute nodes report process data only once every 60 seconds on average. A process row with TN=30 means the host reported information about that process 30 seconds ago.

  18. Cluster Top

  19. Cluster Top • Process Columns • TN • The age of the information in this row, in seconds. • HOST • The node in the cluster on which this process is running. • PID • The Process ID. A non-negative integer, unique among all processes on this node. • USER • The username of this process. • CMD • The command name of this process, without arguments. • %CPU • The percentage of available CPU cycles occupied by this process. This is always an approximate figure, which is more accurate for longer-running processes.

  20. Cluster Top • %MEM • The percentage of available physical memory occupied by this process. • SIZE • The size of the "text" memory segment of this process, in kilobytes. This approximately relates to the size of the executable itself (depending on the BSS segment). • DATA • Approximately the size of all dynamically allocated memory of this process, in kilobytes. Includes the Heap and Stack of the process. Defined as the "resident" - "shared" size, where resident is the total amount of physical memory used, and shared is defined below. Includes the text segment as well if this process has no children. • SHARED • The size of the shared memory belonging to this process, in kilobytes. Defined as any page of this process' physical memory that is referenced by another process. Includes shared libraries such as the standard libc and loader. • VM • The total virtual memory size used by this process, in kilobytes.

  21. OpenPBS

  22. Features • Job Priority • Users can specify the priority of their jobs. • Job-Interdependency • OpenPBS enables the user to define a wide range of interdependencies between batch jobs such as execution order, synchronization, and execution conditioned on the success or failure of a specified other job. • Automatic File Staging • OpenPBS provides users with the ability to specify any files that need to be copied onto the execution host before the job runs, and any that need to be copied off after the job completes. • Single or Multiple Queue Support • OpenPBS can be configured with as many queues as needed. • Multiple Scheduling Algorithms • With OpenPBS you can select the standard first-in, first-out scheduling, or a more sophisticated scheduling algorithm.

  23. OpenPBS Components

  24. OpenPBS Components • Commands • There are three command classifications: user commands, which any authorized user can use, operator commands, and manager (or administrator) commands. • Job Server • The Server’s main function is to provide the basic batch services such as receiving/creating a batch job, modifying the job, protecting the job against system crashes, and running the job. Typically there is one Server managing a given set of resources.

  25. OpenPBS Components • Job Executor (MOM) • The Job Executor is the daemon which actually places the job into execution. This daemon is informally called MOM as it is the mother of all executing jobs. • MOM places a job into execution when it receives a copy of the job from a Server. MOM creates a new session that is as identical to a user login session as is possible. • MOM also has the responsibility for returning the job’s output to the user when directed to do so by the Server. • Job Scheduler • The Job Scheduler daemon implements the site’s policy controlling when each job is run and on which resources. • The Scheduler communicates with the various MOMs to query the state of system resources and with the Server for availability of jobs to execute. • Note that the Scheduler interfaces with the Server with the same privilege as the PBS manager.

  26. Submit a PBS Job

  27. A Sample PBS Job • Example PBS job:
  #!/bin/sh
  #PBS -l walltime=1:00:00
  #PBS -l mem=400mb
  #PBS -l ncpus=4
  #PBS -j oe

  ./subrun

  28. A Sample PBS Job • In our example above, lines 2-4 specify the “-l” resource list option, followed by a specific resource request. Specifically, lines 2-4 request 1 hour of wall-clock time, 400 megabytes (MB) of memory, and 4 CPUs. • Line 5 is not a resource directive. Instead it specifies how PBS should handle some aspect of this job. (Specifically, the “-j oe” requests that PBS join the stdout and stderr output streams of the job into a single stream.) • Finally line 7 is the command line for executing the program we wish to run.

  29. Submitting a PBS Job • Let’s assume the above example script is in a file called “mysubrun”. We submit this script using the qsub command:
  % qsub mysubrun
  16387.cluster.pbspro.com
  • You can also specify the option or directive on the qsub command line. This is particularly useful if you just want to submit a single instance of your job, but you don’t want to edit the script. For example:
  % qsub -l ncpus=16 -l walltime=4:00:00 mysubrun
  16388.cluster.pbspro.com
  • In this example, the 16 CPUs and 4 hours of wall-clock time will override the values specified in the job script.

  30. Submitting a PBS Job • Note that you are not required to use a separate “-l” for each resource you request. You can combine multiple requests by separating them with a comma, as follows:
  % qsub -l ncpus=16,walltime=4:00:00 mysubrun
  16389.cluster.pbspro.com
  • The same rule applies to the job script as well, as the next example shows.
  #!/bin/sh
  #PBS -l walltime=1:00:00,mem=400mb
  #PBS -l ncpus=4
  #PBS -j oe

  ./subrun

  31. How PBS Parses a Job Script • An initial line in the script that begins with the character "#" or the character ":" will be ignored and scanning will start with the next line. • A line in the script file will be processed as a directive to qsub if and only if the string of characters starting with the first non-white-space character on the line and of the same length as the directive prefix matches the directive prefix (i.e. “#PBS”). • The option character is to be preceded with the "-" character.
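These rules can be illustrated with a skeleton script (the payload is a placeholder `echo`; the arrow annotations live inside ordinary comments):

```shell
#!/bin/sh
# This comment begins with "#" but not "#PBS", so qsub skips it.
#PBS -l walltime=0:10:00
# PBS -l ncpus=4   <- NOT a directive: the space breaks the "#PBS" prefix match
  #PBS -j oe       <- still a directive: leading white space is allowed
echo "job payload runs here"
#PBS -N toolate    <- standard qsub stops scanning for directives at the
#                     first executable line, so this one is ignored
```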

  32. PBS System Resources • Resources are specified using the “-l resource_list” option to qsub or in your job script. • The resource_list argument is of the form: • resource_name[=[value]][,resource_name[=[value]],...]

  33. PBS System Resources • The resource values are specified using the following units: • node_spec (Node Specification Syntax) • a job with a -l nodes=nodespec resource requirement may run on a set of nodes that includes time-shared nodes • and a job without a -l nodes=nodespec may run on a cluster node • the syntax for node_spec is any combination of the following, separated by colons ':' • number {if it appears, it must be first} • node name • property • ppn=number • cpp=number • number:any other of the above[:any other] • where ppn is the number of processes (tasks) per node (defaults to 1) and cpp is the number of CPUs (threads) per process (also defaults to 1). • The 'node specification' value is one or more node_spec joined with the '+' character, i.e. node_spec[+node_spec...][#suffix] • The node specification can be followed by one or more global modifiers, e.g. "#shared" (requesting shared access to a node)
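Putting that syntax together, a few illustrative node requests (the node names follow the cp0-%d pattern used earlier; the `bigmem` property is hypothetical, not a property of this cluster):

```shell
#PBS -l nodes=2:ppn=4          # 2 nodes, 4 processes (tasks) per node
#PBS -l nodes=4:bigmem:ppn=2   # 4 nodes carrying the "bigmem" property
#PBS -l nodes=cp0-1+cp0-2      # two named nodes, joined with '+'
#PBS -l nodes=1#shared         # one node, with the "#shared" global modifier
```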

  34. PBS System Resources • resc_spec (Boolean Logic in Resource Requests) • It offers the ability to use boolean logic in the specification of certain resources (such as architecture, memory, wallclock time, and CPU count) within a single node. • Note that at this time, this feature controls the selection of single nodes, not multiple hosts within a cluster, with the meaning of “give me a node with the following properties”. • For example, say you wanted to submit a job that can run on either the Solaris or Irix operating system, and you want PBS to run the job on the first available node of either type. You could add the following “resc” specification to your qsub command line (or your job script).

  35. PBS System Resources • Examples
  % qsub -l resc="(arch=='solaris7') || (arch=='irix')" mysubrun
  % qsub -l resc="((arch=='solaris7') || (arch=='irix')) && (mem=100MB) && (ncpus=4)"
  #!/bin/sh
  #PBS -l resc="(arch=='solaris7')||(arch=='irix')"
  #PBS -l mem=100MB
  #PBS -l ncpus=4
  ...
  • The following example shows requesting different memory amounts depending on the architecture that the job runs on:
  % qsub -l resc="((arch=='solaris7') && (mem=100MB)) || ((arch=='irix') && (mem=1GB))"

  36. PBS System Resources • Time • [[hours:]minutes:]seconds[.milliseconds] • Size • specifies the maximum amount in terms of bytes (default) or words • b or w bytes or words. • kb or kw Kilo (1024) bytes or words. • mb or mw Mega (1,048,576) bytes or words. • gb or gw Giga (1,073,741,824) bytes or words. • String • comprised of a series of alpha-numeric characters containing no white space, beginning with an alphabetic character. • Unitary • expressed as a simple integer
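Combining these units, some hedged example requests (resource names other than walltime, mem, and ncpus vary by site; arch values are illustrative):

```shell
#PBS -l walltime=1:30:00   # time: 1 hour 30 minutes ([[hours:]minutes:]seconds)
#PBS -l walltime=90        # a bare number of seconds also works: 90 s
#PBS -l mem=512mb          # size: 512 mega (1,048,576-byte) units
#PBS -l arch=linux         # string: alpha-numeric, starting with a letter
#PBS -l ncpus=8            # unitary: a simple integer
```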

  37. PBS Resources Available

  38. PBS Resources Available

  39. Job Submission Options

  40. Job Submission Options

  41. Job Submission Options

  42. Specifying Queue and/or Server • If the -q option is not specified, the qsub command will submit the script to the default queue at the default server. The destination specification takes the following form: • -q [queue[@host]] • Examples
  % qsub -q queue mysubrun
  % qsub -q @server mysubrun
  % qsub -q queueName@serverName mysubrun
  % qsub -q queueName@serverName.domain.com mysubrun
  #!/bin/sh
  #PBS -q queueName
  ...

  43. Redirecting output and error files • The “-o path” and “-e path” options to qsub allow you to specify the names of the files to which the standard output (stdout) and the standard error (stderr) streams should be written. • The path argument is of the form: [hostname:]path_name • Examples
  % qsub -o myOutputFile mysubrun
  % qsub -o /u/james/myOutputFile mysubrun
  % qsub -o myWorkstation:/u/james/myOutputFile mysubrun
  #!/bin/sh
  #PBS -o /u/james/myOutputFile
  #PBS -e /u/james/myErrorFile
  ...

  44. Exporting environment variables • The “-V” option declares that all environment variables in the qsub command’s environment are to be exported to the batch job. • Examples
  % qsub -V mysubrun
  #!/bin/sh
  #PBS -V
  ...

  45. Expanding environment variables • The “-v variable_list” option to qsub expands the list of environment variables that are exported to the job. • The variable_list is a comma separated list of strings of the form variable or variable=value. These variables and their values are passed to the job. % qsub -v DISPLAY,myvariable=32 mysubrun

  46. Specifying e-mail notification • The “-m MailOptions” option defines the set of conditions under which the execution server will send a mail message about the job. • MailOptions • “a” send mail when job is aborted by batch system • “b” send mail when job begins execution • “e” send mail when job ends execution • “n” do not send mail
  % qsub -m ae mysubrun
  #!/bin/sh
  #PBS -m b
  ...

  47. Setting e-mail recipient list • The “-M user_list” option declares the list of users to whom mail is sent by the execution server when it sends mail about the job. The user_list argument is of the form: • user[@host][,user[@host],...] • If unset, the list defaults to the submitting user at the qsub host, i.e. the job owner. • Example % qsub -M james@pbspro.com mysubrun

  48. Specifying a job name • The “-N name” option declares a name for the job. The name specified may be up to and including 15 characters in length. It must consist of printable, non-white-space characters, with the first character alphabetic. • If the -N option is not specified, the job name will be the base name of the job script file specified on the command line. • If no script file name was specified and the script was read from the standard input, then the job name will be set to STDIN. • Example
  % qsub -N myName mysubrun
  #!/bin/sh
  #PBS -N myName
  ...

  49. Marking a job as “rerunnable” or not • The “-r y|n” option declares whether the job is rerunnable. • To rerun a job is to terminate the job and requeue it in the execution queue in which the job currently resides. • Example
  % qsub -r n mysubrun
  #!/bin/sh
  #PBS -r n
  ...

  50. Specifying which shell to use • The “-S path_list” option declares the shell that interprets the job script. • The option argument path_list is in the form: path[@host][,path[@host],...] • If no matching host is found, then the path specified without a host will be selected, if present. • If the -S option is not specified, the option argument is the null string, or no entry from the path_list is selected, then PBS will use the user’s login shell on the execution host. • Examples
  % qsub -S /bin/tcsh mysubrun
  % qsub -S /bin/tcsh@mars,/usr/bin/tcsh@jupiter mysubrun
