ATLAS and GridPP
GridPP Collaboration Meeting, Edinburgh, 5th November 2001
RWL Jones, Lancaster University

ATLAS Needs
• Long term, ATLAS needs a fully Grid-enabled Reconstruction, Analysis and Simulation environment
• Short term, the first ATLAS priority is a Monte Carlo production system, building towards the full system
• ATLAS has an agreed programme of Data Challenges (based on MC data) to develop and test the computing model

Data Challenge 0
• Runs from October to December 2001
• Continuity test of the MC code chain
• Only modest samples (10⁵ events), essentially all in flat-file format
• All the Data Challenges will be run on Linux systems, with compilers distributed with the code if the correct version is not already installed locally

Data Challenge 1
• Runs in the first half of 2002
• Several sets of 10⁷ events (high-level trigger studies, physics analysis)
• Intend to generate and store 8 Tbytes in the UK, 1-2 Tbytes of this in Objectivity
• Will use the M9 DataGrid deliverables and as many other Grid tools as time permits
• Tests of distributed reconstruction and analysis
• Tests of database technologies

Data Challenge 2
• Runs in the first half of 2003
• Will generate several samples of 10⁸ events, mainly in OO databases
• Full use of Testbed 1 and Grid tools
• Complexity and scalability tests of the distributed computing system
• Large-scale distributed physics analysis using Grid tools, calibration and alignment

LHC Computing Model (Cloud)
[Diagram: the LHC computing centres drawn as a cloud: CERN at the centre, Tier-1 centres in the UK, USA (FermiLab, Brookhaven), France, Italy, Germany and NL, Tier-2 centres, and university physics departments and desktops connecting in from the various labs and universities.]

Implications of the Cloud Model
• Internal: need cost sharing between global regions within the collaboration
• External (on Grid services): need authentication/accounting/priority on the basis of experiment/region/team/local region/user
• Note: the NW believes this is a good model for Tier-2 resources as well

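As an illustration of the external requirement, a minimal sketch of resolving a quota and priority from layered policies; the policy table, names and numbers below are all invented and correspond to no real middleware.

    # Sketch: pick the most specific policy matching a job's credentials.
    # Keys are (experiment, region, team, user); None acts as a wildcard.
    POLICIES = {
        ("atlas", None, None,      None): (1, 100000),  # experiment-wide
        ("atlas", "uk", None,      None): (2,  20000),  # regional share
        ("atlas", "uk", "mc-prod", None): (5,  15000),  # team allocation
    }

    def resolve(experiment, region=None, team=None, user=None):
        """Return the (priority, cpu_quota) of the most specific match."""
        request, best = (experiment, region, team, user), None
        for key, grant in POLICIES.items():
            if all(k is None or k == r for k, r in zip(key, request)):
                specificity = sum(k is not None for k in key)
                if best is None or specificity > best[0]:
                    best = (specificity, grant)
        if best is None:
            raise LookupError("no policy grants access")
        return best[1]

    print(resolve("atlas", "uk", "mc-prod"))  # -> (5, 15000)
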
ATLAS Software
• Late in moving to OO, as the physics TDR etc. were given high priority
• Generation and reconstruction are now done in the C++/OO Athena framework
• Detector simulation is still in transition to OO/C++/Geant4; DC1 will still use Geant3
• Athena is a common framework with LHCb's Gaudi

Simulation software for DC1
• Particle-level simulation: Athena GeneratorModules (C++, Linux) - Pythia6 plus code dedicated to B-physics, with PYJETS converted to HepMC; EvtGen (BaBar package) later
• Fast detector simulation: Atlfast++ in Athena reads HepMC and produces ntuples
• Detector simulation: Dice (slug + Geant3, Fortran) produces GENZ + KINE banks (ZEBRA)
• Reconstruction: C++ code reads GENZ + KINE, converts to HepMC and produces ntuples

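A toy data-flow sketch of the chain above, with hypothetical function names, just to make the two branches explicit:

    # The one generator stream feeds both simulation branches (names invented).
    def generate():        return "HepMC"                    # Athena GeneratorModules
    def atlfast(hepmc):    return "ntuples (fast)"           # Atlfast++ reads HepMC
    def dice(hepmc):       return "GENZ/KINE banks (ZEBRA)"  # slug+Geant3 full sim
    def reconstruct(genz): return "ntuples (reco)"           # C++ reco via HepMC

    events = generate()
    fast_branch = atlfast(events)            # fast-simulation branch
    full_branch = reconstruct(dice(events))  # full simulation + reconstruction
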
Requirement Capture
• Extensive use-case studies: "ATLAS Grid Use Cases and Requirements", 15/X/01
• Many more could be developed, especially in the monitoring areas
• Short-term use cases are centred on immediate MC production needs
• Obvious overlaps with LHCb – joint projects
• Three main projects defined: "Proposed ATLAS UK Grid Projects", 26/X/01

Grid User Interface for Athena
• Completely common project with LHCb
• Obtains resource estimates and applies quota and security policies
• Queries the installation tools: correct software installed? Install it if not
• Job submission guided by the resource broker
• Run-time monitoring and job deletion
• Output to MSS and bookkeeping update

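A minimal sketch of the submission sequence such an interface would drive; every function here is a hypothetical stand-in, not the real Ganga or DataGrid API.

    # Illustrative end-to-end submission of an Athena job (all stubs invented).
    def estimate_resources(job):        return {"cpu_hours": 10, "disk_gb": 2}
    def within_quota(user, est):        return est["cpu_hours"] < 100
    def broker_match(est):              return "site.example.ac.uk"
    def software_installed(site, rel):  return False
    def install_release(site, rel):     print("installing", rel, "at", site)
    def run_job(site, job):             return {"output": job + ".ntuple"}
    def copy_to_mss(output):            print("archiving", output)
    def update_bookkeeping(result):     print("bookkeeping updated:", result)

    def submit_athena_job(user, job, release):
        est = estimate_resources(job)            # resource estimate
        if not within_quota(user, est):          # quota/security policy
            raise RuntimeError("quota exceeded")
        site = broker_match(est)                 # resource broker picks a site
        if not software_installed(site, release):
            install_release(site, release)       # via the installation kit
        result = run_job(site, job)              # monitored; may be deleted
        copy_to_mss(result["output"])            # output to MSS
        update_bookkeeping(result)               # update the catalogue

    submit_athena_job("user", "dc1_job", "release-x")  # names are placeholders
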
Installation Tools
• Tools to automatically generate installation kits, deploy them using Grid tools and install them at remote sites via a Grid job
• Should be integrated with a remote autodetection service for installed software
• Initial versions should cope with pre-built libraries and executables
• Should later deploy the development environment
• ATLAS and LHCb build environments are converging on CMT – some commonality here

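A sketch of the check-then-install logic such a kit might use at a remote site, assuming a simple tarball kit; the URL, paths and function names are placeholders.

    # Autodetect a pre-built release and install the kit if it is missing.
    import os, subprocess, urllib.request

    KIT_URL = "http://example.org/kits/atlas-kit.tar.gz"  # placeholder URL

    def release_installed(release, base="/opt/atlas"):
        """Remote autodetection: is this release already on the site?"""
        return os.path.isdir(os.path.join(base, release))

    def install_kit(release, base="/opt/atlas"):
        """Fetch a kit of pre-built libraries/executables and unpack it."""
        target = os.path.join(base, release)
        os.makedirs(target, exist_ok=True)
        kit = os.path.join(base, "kit.tar.gz")
        urllib.request.urlretrieve(KIT_URL, kit)                       # fetch
        subprocess.run(["tar", "xzf", kit, "-C", target], check=True)  # unpack

    # Payload of the installation Grid job:
    # if not release_installed("some-release"):
    #     install_kit("some-release")
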
MC Production System
• For DC1, will use the existing MC production system (Geant3), integrated with the M9 tools
• (Aside: M9/WP8 validation and DC kit development proceed in parallel)
• Decomposition of the MC system into components: Monte Carlo job submission, bookkeeping services, metadata catalogue services, and monitoring and quality-control tools
• Bookkeeping and data-management projects are already ongoing – will work in close collaboration; good link with US projects
• Close link with Ganga developments

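The decomposition could be pictured as four small interfaces, roughly as below; class and method names are illustrative only.

    # The four components of the MC production system (names invented).
    class JobSubmission:
        def submit(self, script, site):
            """Send a generated MC job script to a production site."""

    class Bookkeeping:
        def record(self, job_id, status, outputs):
            """Track what was run, where, and what it produced."""

    class MetadataCatalogue:
        def register(self, logical_name, attributes):
            """Publish dataset metadata for later queries."""

    class QualityControl:
        def check(self, output):
            """Run standard checks on the produced samples."""
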
Allow regional management of large productions
• Job script and steering generated
• Remote installation as required
• Production site chosen by the resource broker
• Generate events and store them locally
• Write log to the web
• Copy data to the local/regional store through an interface with Magda (data management)
• Copy data from local storage to the remote MSS
• Update the book-keeping database

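Put together, one production cycle could look like the sketch below; every helper, in particular magda_copy(), is a placeholder rather than the real Magda interface.

    # One regionally managed production cycle, step for step (stubs invented).
    def make_script(req):        return "job-%s.sh" % req
    def broker_site(script):     return "some-site"
    def ensure_installed(site):  print("release present at", site)
    def generate_events(site):   return "events.data"
    def publish_log(site, req):  print("log for", req, "on the web")
    def magda_copy(data, store): print("copy", data, "->", store)
    def close_bookkeeping(req):  print("book-keeping updated for", req)

    def production_cycle(req):
        script = make_script(req)            # job script and steering
        site = broker_site(script)           # site chosen by resource broker
        ensure_installed(site)               # remote installation as required
        data = generate_events(site)         # generate and store locally
        publish_log(site, req)               # write log to web
        magda_copy(data, "regional-store")   # local/regional store via Magda
        magda_copy(data, "remote-mss")       # on to the remote MSS
        close_bookkeeping(req)               # update book-keeping database

    production_cycle("dc1-batch")
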
Work Area     PMB Allocation (FTE)   Previously Allocated (FTE)   Total Allocation (FTE)
ATLAS/LHCb    2.0                    0.0                          2.0
ATLAS         1.0                    1.5                          2.5
LHCb          1.0                    1.0                          2.0

• This will just allow us to cover the three projects
• Additional manpower must be found for the monitoring tasks, for testing the computing model in DC2, and for the simple running of the Data Challenges

WP8 M9 Validation
• WP8 M9 validation is now beginning
• Glasgow and Lancaster (RAL?) are involved in the ATLAS M9 validation
• The validation exercises the tools using the ATLAS kit
• The software used is behind the current version; this is likely to be the case in all future tests (it decouples software changes from tool tests)
• A previous test of MC production using Grid tools was a success
• DC1 validation (essentially of the ATLAS code): Glasgow, Lancaster (Lancaster is working on tests of standard generation and reconstruction quantities to be deployed as part of the kit); Cambridge to contribute

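The kind of standard-quantity test mentioned in the last point might look like the sketch below, with invented reference values and tolerance.

    # Compare standard generation/reconstruction quantities to references.
    REFERENCE = {"mean_track_pt": 0.65, "mean_n_tracks": 42.0}  # invented

    def validate(sample, tolerance=0.05):
        """Flag any standard quantity drifting from its reference value."""
        return [name for name, ref in REFERENCE.items()
                if abs(sample[name] - ref) / ref > tolerance]

    print(validate({"mean_track_pt": 0.66, "mean_n_tracks": 41.5}))  # -> []
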