Special Jobs
Valeria Ardizzone, INFN - Catania
12th EELA Tutorial, Lima, 26.09.2007
Outline
• Overview: jobs with data requirements
• Overview: MPI
  - How to create an MPI job
  - MPI jobs in the middleware
• Overview: DAG jobs
Overview: Jobs with Data
• Users can stage input files from the UI to the WN using the InputSandbox attribute:

  InputSandbox = {"codesa.i686", "start_root.sh",
                  "./Korba/atmbc.const", "./Korba/bctran-window.3",
                  "./Korba/codesa3d.fnames", ".rootrc",
                  "convert.C", "GraphCODESA3D.C"};

• The upper limit for the InputSandbox is 10 MB!
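Once such a JDL is written, the job is submitted from the UI with the standard job-management commands. A minimal sketch, assuming the edg command set referenced at the end of this tutorial and a hypothetical JDL file codesa.jdl:

  # Submit the job; the InputSandbox files are copied from the UI
  # to the WN working directory before the executable starts.
  edg-job-submit --vo gilda -o jobIDs codesa.jdl

  # Follow the job and, once it is done, retrieve the OutputSandbox.
  edg-job-status -i jobIDs
  edg-job-get-output -i jobIDs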
..the question
What can I do if my job requires huge data to be processed?
..the answer /1
InputData (optional)
A string, or a list of strings, representing the Logical File Names (LFNs) or Grid Unique Identifiers (GUIDs) needed by the job as input. The list is used by the RB to find the CE from which the specified files can best be accessed, and the job is scheduled to run there.

  InputData = {"lfn:cmstestfile",
               "guid:135b7b23-4a6a-11d7-87e7-9d101f8c8b70"};
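For reference, a file obtains an LFN and a GUID when it is copied to a Storage Element and registered in the catalogue. A minimal sketch with lcg-cr, assuming the GILDA VO and a hypothetical local file data.dat:

  # Copy data.dat to the VO's default SE and register it in the
  # catalogue under the given LFN; the command prints the new GUID.
  lcg-cr --vo gilda -l lfn:cmstestfile file:$(pwd)/data.dat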
..the answer /2
DataAccessProtocol (mandatory if InputData has been specified)
The protocol, or list of protocols, that the application is able to "speak" when accessing the files listed in InputData on a given SE. The protocols currently supported in gLite are gsiftp and file.

  DataAccessProtocol = {"file", "gsiftp"};
pds2jpg-MERIS-Etna.jdl

  [
    JobType = "normal";
    Type = "Job";
    Executable = "/bin/bash";
    Arguments = "pds2jpg_install.sh MER_FR__2PNUPA20030504_092534_000000502016_00079_06145_0033";
    StdOutput = "pds2jpg.out";
    StdError = "pds2jpg.err";
    InputSandbox = {"./pds2jpg_install.sh", "./beam20.tar.gz"};
    InputData = {"lfn:/grid/gilda/MER_FR__2PNUPA20030504_092534_000000502016_00079_06145_0033.N1"};
    DataAccessProtocol = {"gridftp", "rfio", "gsiftp"};
    OutputSandbox = {"MER_FR__2PNUPA20030504_092534_000000502016_00079_06145_0033.jpg",
                     "ENVISAT_Product_courtesy_of_European_Space_Agency",
                     "pds2jpg.out", "pds2jpg.err"};
    RetryCount = 3;
  ]
pds2jpg_install.sh

  #!/bin/sh
  echo Staging Input Data \(Courtesy of European Space Agency\);
  # Strip the "/" from the argument.
  file=`echo $1 | awk -F '/' '{print $2}'`
  echo lcg-cp --vo gilda lfn:/grid/gilda/MER_FR__2PNUPA20030504_092534_000000502016_00079_06145_0033.N1 file:`pwd`/${file}.N1
  lcg-cp --vo gilda lfn:/grid/gilda/MER_FR__2PNUPA20030504_092534_000000502016_00079_06145_0033.N1 file:`pwd`/${file}.N1
  echo Staging Application;
  ls -al
  gunzip beam20.tar.gz; tar xvf beam20.tar;
  cd beam-2.0/bin;
  echo Starting Application;
  echo "./pds2jpg-run.sh $file;"
  ./pds2jpg-run.sh $file;
  echo "mv $file.jpg ../.."
  mv $file.jpg ../..
  touch ../../ENVISAT_Product_courtesy_of_European_Space_Agency
  echo "Input ENVISAT Product courtesy of European Space Agency" > ../../ENVISAT_Product_courtesy_of_European_Space_Agency
  echo No Output Packaging;
..the output
[Figure: JPEG image produced by the job from the ENVISAT MERIS product]
Input ENVISAT Product courtesy of European Space Agency
..another couple of questions
The output produced by my job must be processed, as input, by some other jobs.
Q.1) How can I make this data accessible to other computations?
Q.2) Can I automatically upload the data, to be processed later?
Output data
OutputData (optional)
This attribute allows the user to ask for the automatic upload and registration of datasets produced by the job on the Worker Node (WN). It contains the following three attributes:
• OutputFile
• StorageElement
• LogicalFileName
Output data
OutputFile (mandatory if OutputData has been specified)
A string representing the name of the output file, generated by the job on the WN, which has to be automatically uploaded and registered by the WMS.

StorageElement (optional)
A string representing the URI of the Storage Element where the output file specified in OutputFile has to be uploaded by the WMS.

LogicalFileName (optional)
A string representing the LFN the user wants to associate with the output file when registering it in the Catalogue.

Note: the automatic upload mechanism is NOT yet supported in gLite.
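Putting the three attributes together, the JDL fragment would look like the sketch below (the file names and the SE hostname are only placeholders; see the JDL attribute specification in the References for the exact syntax):

  OutputData = {
    [
      OutputFile      = "result.dat";                  // produced by the job on the WN
      StorageElement  = "se.example.org";              // placeholder SE URI
      LogicalFileName = "lfn:/grid/gilda/result.dat";  // LFN to register in the Catalogue
    ]
  };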
Overview: MPI
• The execution of parallel jobs is an essential requirement for modern computing applications.
• The most widely used library for writing parallel jobs is MPI (Message Passing Interface).
• At the state of the art, parallel jobs can run inside a single Computing Element (CE) only;
• several projects are studying the possibility of executing parallel jobs on Worker Nodes (WNs) belonging to different CEs.
How to create an MPI Job
Requirements & Settings
In order to guarantee that an MPI job can run, the following requirements MUST BE satisfied:
• the MPICH software must be installed, and available in the PATH environment variable, on all the WNs of the CE;
• the Executable specified in the JDL must not be the MPI application itself, but a wrapper script that invokes the MPI application by calling the mpirun command.
From the user's point of view, jobs to be run as MPI are specified by setting the JDL JobType attribute to MPICH and specifying the NodeNumber attribute as well.

E.g.:
  JobType = "MPICH";
  NodeNumber = 4;

The NodeNumber attribute defines the number of CPUs required by the application.
When these two attributes are included in a JDL, the User Interface (UI) automatically adds the expression

  (other.GlueCEInfoTotalCPUs >= NodeNumber) &&
  Member("MPICH", other.GlueHostApplicationSoftwareRunTimeEnvironment)

to the JDL Requirements expression, in order to find the appropriate resources where the job can be executed.
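The same match can be checked by hand from the UI. A minimal sketch, assuming the lcg-info tool and the GILDA VO (attribute names may vary between middleware releases):

  # List the CEs that advertise the MPICH tag, together with their
  # CPU counts: these are the candidates the RB can match.
  lcg-info --vo gilda --list-ce --query 'Tag=MPICH' --attrs 'CE,TotalCPUs'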
the problem ...
• Unfortunately, the LCG deployment does not enforce the first requirement: the WNs inside the same CE do not necessarily share disk space.
• This drove us to spend time on an ad-hoc solution, in order to find an efficient workaround for this problem.
• The solution adopted bypasses the problem by putting some intelligence inside the script passed in the InputSandbox.
… the solution
• In detail, each job has to mirror, via scp, its files on all the nodes dedicated to it.
• ssh host-based authentication MUST BE correctly configured between all the WNs.
mpi.jdl

  [
    Type = "Job";
    JobType = "MPICH";
    Executable = "MPItest.sh";
    NodeNumber = 5;
    Arguments = "cpi 5";
    StdOutput = "test.out";
    StdError = "test.err";
    InputSandbox = {"MPItest.sh", "cpi"};
    OutputSandbox = {"test.err", "test.out", "executable.out"};
    Requirements = other.GlueCEInfoLRMSType == "PBS" ||
                   other.GlueCEInfoLRMSType == "LSF";
  ]

The number of processes specified with the NodeNumber attribute must agree with the second argument; it is used when invoking the mpirun command. At present, the only supported Local Resource Managers are PBS and LSF.
MPItest.sh

  #!/bin/sh
  # $1 = executable name, $2 = number of CPUs (from the JDL Arguments).
  EXE=$1
  CPU_NEEDED=$2

  for i in `cat $HOST_NODEFILE` ; do
    echo "Mirroring via SSH to $i"
    # Create the working directory on all the nodes allocated for parallel execution.
    ssh $i mkdir -p `pwd`
    # Copy the needed files to all the nodes allocated for parallel execution.
    /usr/bin/scp -rp ./* $i:`pwd`
    # Check that all files are present on all the nodes allocated for parallel execution.
    ssh $i ls `pwd`
  done

  # Execute the parallel job with mpirun.
  echo "Executing $EXE"
  chmod 755 $EXE
  mpirun -np $CPU_NEEDED -machinefile $HOST_NODEFILE `pwd`/$EXE > executable.out

The environment variable $HOST_NODEFILE contains the list of WNs allocated for the parallel execution (the assignments of EXE and CPU_NEEDED above are inferred from the JDL Arguments; the original slide begins at the for loop).
DAG job

[Figure: example DAG with nodes nodeA, nodeB, nodeC, nodeD, and nodeF connected by dependency edges]

• A DAG job is a set of jobs where the input, output, or execution of one or more jobs can depend on other jobs.
• Dependencies are represented through Directed Acyclic Graphs, where the nodes are jobs and the edges identify the dependencies.
JDL structure
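The figure for this slide is not reproduced here; schematically, a DAG description has the following shape (see the complete example in the DAG jdl slide below):

  [
    type = "dag";
    max_nodes_running = ...;   // optional: max number of nodes running at once
    nodes = [
      // one classad per node, plus the dependencies attribute
    ];
  ]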
Attribute: Nodes
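The figure is missing from the transcript; as a sketch, each entry of nodes either points to a separate JDL file or, assuming the inline form of the JDL specification, embeds the node's description directly:

  nodes = [
    nodeA = [ file = "nodes/nodeA.jdl"; ];       // node described in its own JDL file
    nodeB = [ description = [ JobType    = "Normal";
                              Executable = "/bin/echo";
                              Arguments  = "hello"; ]; ];  // inline node description
  ];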
Attribute: Dependencies
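Again schematically, dependencies is a list of {parent, child} pairs, where the parent side may itself be a set of nodes; the values below are taken from the example on the next slide:

  // nodeB and nodeC both wait for nodeA; nodeD starts only
  // after both nodeB and nodeC have completed.
  dependencies = { {nodeA, nodeB},
                   {nodeA, nodeC},
                   { {nodeB, nodeC}, nodeD } };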
DAG jdl

  [
    type = "dag";
    max_nodes_running = 4;
    nodes = [
      nodeA = [ file = "nodes/nodeA.jdl"; ];
      nodeB = [ file = "nodes/nodeB.jdl"; ];
      nodeC = [ file = "nodes/nodeC.jdl"; ];
      nodeD = [ file = "nodes/nodeD.jdl"; ];
      dependencies = { {nodeA, nodeB},
                       {nodeA, nodeC},
                       { {nodeB, nodeC}, nodeD } }
    ];
  ]

Node descriptions could also be given inline here, instead of using separate files.
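A DAG is submitted like a normal job; a minimal sketch, assuming the WMProxy command set and the JDL above saved as dag.jdl:

  # Submit the whole DAG (-a delegates a proxy automatically); the WMS
  # returns an ID for the DAG and one for each node.
  glite-wms-job-submit -a -o dagIDs dag.jdl
  glite-wms-job-status -i dagIDs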
References
• Job Description Language:
  https://edms.cern.ch/file/555796/1/EGEE-JRA1-TEC-555796-JDL-Attributes-v0-8.pdf
• GILDA wiki:
  - Job with Data: https://grid.ct.infn.it/twiki/bin/view/GILDA/JobData
  - MPI Job: https://grid.ct.infn.it/twiki/bin/view/GILDA/MPIJobs-withedgcommands
Thank you very much for your kind attention!