ATLAS Distributed Analysis A. Zalite / PNPI
Overview • Why? • Goal • ADA Model • First steps • Demo example • More examples • Conclusion
Why? • Huge amount of data • The ATLAS experiment is expected to record several petabytes of data per year • The ATLAS offline system will produce a similar amount of data (ESD, AOD, …) • Globally distributed members of the ATLAS collaboration • Over 1000 physicists from all over the world will take part in data analysis • The data have to be available to all members of the collaboration
Goal • Provide globally distributed users with • Access to globally distributed data • Tools to perform globally distributed processing of these data • Easy to use and to access from the analysis environment • Flexible enough to adapt to the environment • Enable effective use of all ATLAS computing resources • Trace information about the processing of any data • Where did this data (event or analysis) come from?
ADA Model Components: • Data are described by datasets (collections of data) • Location of the data (e.g. files) • Content (e.g. a list of event IDs and the type of data for each event) • A transformation describes an operation that can act on a dataset to produce a new dataset • An application: scripts used to run a job that builds the task or processes data • A task carries user parameters or code (e.g. ATLAS release, job options, and/or algorithm code) • A job is an instance of a transformation acting on a dataset
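The relationships between these components can be sketched as follows (an illustrative Python model, not the DIAL implementation; the field names are assumptions chosen to match the slide's descriptions):

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    # Location of the data (e.g. files) and its content description
    files: list
    event_ids: list

@dataclass
class Application:
    # Scripts used to run a job that builds the task or processes data
    scripts: list

@dataclass
class Task:
    # User parameters or code (e.g. ATLAS release, job options)
    release: str
    job_options: str

@dataclass
class Transformation:
    # An application plus a task: acts on a dataset to produce a new dataset
    application: Application
    task: Task

@dataclass
class Job:
    # An instance of a transformation acting on a dataset
    transformation: Transformation
    input_dataset: Dataset
```

A job is thus fully specified once its transformation (application + task) and input dataset are chosen, which is exactly how the demos below proceed.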
ADA Model • Many ATLAS-specific transformations have been defined • atlasopt: the user provides an ATLAS release and job options • aodhisto: atlasopt plus code to build in the UserAnalysis package • atlasdev: atlasopt plus a local development directory • atlasdev-src: same as atlasdev, except that the development area is tarred up and will be rebuilt if the platform changes • All these transformations run Athena
ADA Model • Transformation (diagram: an application and task acting on an input dataset to produce an output dataset)
ADA Model This view enables distributed processing: • Split the input dataset • Along event, file, or sub-dataset boundaries • Create a separate sub-job for each sub-dataset • Implies a post-processing stage to merge the results (output datasets) Users carry out processing by • Defining a job • Application, task and dataset • Submitting this definition to a scheduler • Typically an analysis service • Polling for status • Job state (and sub-job states) • Result dataset
ADA Model On receiving a job request, the scheduler • Builds the task (or locates an existing build) • Splits the dataset into sub-datasets • Creates and submits a sub-job for each sub-dataset • Merges the results (output datasets) from each sub-job into the overall result
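The scheduler's split/submit/merge cycle can be sketched as follows (a minimal illustrative Python sketch, not the DIAL scheduler; file-boundary splitting and the `run_subjob` stand-in are assumptions):

```python
def split_dataset(files, n_files_per_subjob):
    """Split an input dataset along file boundaries into sub-datasets."""
    return [files[i:i + n_files_per_subjob]
            for i in range(0, len(files), n_files_per_subjob)]

def run_subjob(sub_dataset):
    """Stand-in for submitting one sub-job; returns its output dataset."""
    return [f + ".out" for f in sub_dataset]

def schedule(files, n_files_per_subjob=2):
    """Split the dataset, run one sub-job per sub-dataset, merge results."""
    sub_datasets = split_dataset(files, n_files_per_subjob)
    outputs = [run_subjob(sd) for sd in sub_datasets]
    # Post-processing stage: merge the output datasets into the overall result
    return [f for out in outputs for f in out]
```

In the real system each sub-job runs on a Grid worker node and the merge step combines the output ntuples and histograms; here the pieces are only stand-ins to show the control flow.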
ADA ADA uses the DIAL framework. Release 1.20 of DIAL is the basis for the current ADA system. To use ADA it is necessary • To have a Grid certificate • A certificate from a Russian CA is OK • To be a member of the ATLAS VO • This takes some time
First Steps • Working node: LXPLUS at CERN • Set up the Grid environment: . /afs/cern.ch/project/gd/LCG-share/sl3/etc/profile.d/grid_env.sh • Certificate proxy initialization: grid-proxy-init • DIAL setup (a setup script that defines a few environment variables and aliases) at CERN: DIALSETDIR=/afs/cern.ch/user/d/dial/apps/dial/setup • Verify the user certificate and check the status of the unique ID service by issuing the command "uidtest" after setting up DIAL.
First Steps • The best way to start with DIAL is to run the demos inside ROOT • These demos define a job • application (papp) • task (ptsk) • dataset (pdst) • and submit it to the current scheduler (msch) • Start: dialroot -i The -i flag means that any missing DIAL configuration, example or demo files will be copied into the local directory (necessary only the first time)
Demo Example • Distributed analysis is an iterative process in which a physicist defines a job, submits it to a processing system, examines the result and then repeats the sequence. • The demo selects an application, task and dataset, which are then submitted to a scheduler to define a job. root [0] .x demos/demo4.C This defines papp, ptsk and pdst root [1] submit() Submit a job based on papp, ptsk and pdst root [2] get_results() Get job status and partial result root [3] TBrowser br Check output ntuples and histograms
Demo Example • A job is specified by defining a transformation and selecting a dataset to process with this transformation. The transformation is specified by an application and a task. The application carries the scripts that do the processing and the task carries user configuration data. • Demo4 uses aodhisto to create histograms and ntuples from user source code • The demo identifies objects by name, extracts the corresponding ID from a selection catalog and uses this ID to extract the object from a repository.
Demo Example
void demo4() {
  string aname = "aodhisto";
  string tname = "aodhisto_zll_aod";
  string dname = "hma.dc2.003007.digit.A1_z_ee.aod-1000.10files";
  aid = asc.id(aname);
  tid = tsc.id(tname);
  did = dsc.id(dname);
  papp = ar.extract(aid);
  ptsk = tr.extract(tid);
  pdst = dr.extract(did);
}
Demo Example • Objects: papp - pointer to the current application ptsk - pointer to the current task pdst - pointer to the current dataset • Can be displayed root [4] pprint(papp) Display the application root [5] pprint(ptsk) Display the task root [6] pprint(pdst) Display the dataset
More Examples There are more examples: • Demo5 uses esd2aod to create AOD from ESD using the prodsys transformation • Demo6 uses atlasopt to run a job with the provided job options • Demo7 uses atlasdev to run a job based on a user's ATLAS development area • Demo8 uses atlasdev-src to run a job based on a tarball of a user development area
More Examples • Displaying the status of all catalogs to verify connection and see the size of each: root [4] show_catalogs()
More Examples • A list of available datasets may be obtained by querying the DSC (dataset selection catalog, object dsc). The DSC is the primary user interface to datasets and plays the role of what is often called a metadata catalog. • Limit the query to 100 results (12 received). • The query restricts the selection to TOP-level datasets, i.e. complete samples intended for user access, and then uses the name to select Rome samples with v10 reconstruction and SUSY data, using all AOD data available at BNL. • Replacing AOD-bnl with AOD selects samples available at both CERN and BNL. • Count the datasets matching a query with the query_count method
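The kind of name-based, limited query described above can be sketched generically (illustrative Python with a toy in-memory catalog; the dataset names and the `query`/`query_count` signatures here are assumptions, not the DSC's actual interface):

```python
import fnmatch

# Toy stand-in for a dataset selection catalog: names only
CATALOG = [
    "rome.v10.susy.AOD",
    "rome.v10.susy.AOD-bnl",
    "rome.v10.top.AOD",
]

def query(pattern, limit=100):
    """Return up to `limit` dataset names matching a wildcard pattern."""
    matches = [n for n in CATALOG if fnmatch.fnmatch(n, pattern)]
    return matches[:limit]

def query_count(pattern):
    """Count the datasets matching a pattern (cf. the DSC's query_count)."""
    return len(query(pattern, limit=len(CATALOG)))
```

Narrowing a pattern from "*AOD*" to one ending in "AOD" mirrors the slide's step of replacing AOD-bnl with AOD to select samples available at both sites.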
More Examples • The DSC supports a list of parameters that can be used in the selection of datasets
More Examples • List the attributes of a given dataset • Record its ID and fetch the dataset from the repository
More Examples • Select an application in a similar way
More Examples • Select a task in a similar way
More Examples • The application is usually not modified, but the task very likely needs modification • Extract the files from the task
More Examples • The list of jobOptions can be found in the CVS repository at atlas/PhysicsAnalysis/AnalysisCommon/AnalysisExamples/share/
More Examples • Now it is possible to build a new task from the modified files: ptsk = new dial::Task("atlas_release jo.py output_content", "mytask"); The list of files used to construct the task may be replaced with "*" if you want all the files from the directory • Now papp, ptsk and pdst are defined, and the job can be submitted
More on More Examples • It is not necessary to do a lot of typing (as we did before) to perform the previous analysis • There is a simple way to avoid this: a job definition script that defines the application, task and dataset (the variables papp, ptsk and pdst). • A sample script can be found here: http://www.usatlas.bnl.gov/~dladams/dial/releases/1.20/jobdef.C • The sample script is copied into the local directory when the dialroot files are installed (dialroot -i). • Edit the top part of this script to specify the application, task and dataset of interest. • Run: root [0] .x jobdef.C root [1] submit() ...
More on More Examples
void jobdef() {
  // Specify names for the application, task and dataset.
  // A typical job definition is created by changing these values.
  // Depending on the following code, a name may be interpreted as
  // one or more of the following.
  //  1. ID: Object identifier.
  //  2. name: Object name in the default selection catalog.
  //  3. directory: Name of a directory holding files to be used to
  //     construct the object.
  //  4. xml: Name of a file holding the XML description of the object.
  // Application: directory, name, or ID.
  string aname = "atlasopt";
  // Task: directory, xml, name, or ID.
  string tname = "atlasopt_example_zll-10.0.1";
  // Dataset: ID or name.
  string dname = "hma.dc2.003007.digit.A1_z_ee.aod-1000.10files";
  …….
More on More Examples • There is a web interface, the "DIAL CATALOG QUERY PAGE": http://www.atlasgrid.bnl.gov/dialds/dlShowMain-new.pl
More on More Examples • This interface permits switching between: • Dataset Selection Catalog (DSC) • Task Selection Catalog (TSC) • Application Selection Catalog (ASC) • A list of available datasets may be obtained from the DSC query page • Some useful applications and example tasks are cataloged as well. The application and task catalogs may also be examined using the ASC query page and the TSC query page