
ATLAS Distributed Computing perspectives for Run-2
Simone Campana, CERN-IT/SDC, on behalf of ADC


Presentation Transcript


  1. ATLAS Distributed Computing perspectives for Run-2
  Simone Campana, CERN-IT/SDC, on behalf of ADC (WLCG Workshop)

  2. The Challenges of Run-2
  • LHC operation:
    • Trigger rate of 1 kHz (Run-1: ~400 Hz)
    • Pile-up above 30 (Run-1: ~20)
    • 25 ns bunch spacing (Run-1: ~50 ns)
    • Centre-of-mass energy ~2x higher
    • Different detector
  • Constraints of a 'flat budget', both for hardware and for operation and development
  • Data from Run-1

  3. Where to optimize?
  (Charts: CPU consumption; disk usage at T1s & T2s)
  • Simulation: CPU
  • Reconstruction: CPU, memory
  • Analysis: CPU, disk space

  4. Simulation
  • Simulation is CPU intensive
  • More events per 12-hour job: larger output files, fewer transfers and merges, less I/O
  • Or shorter, more granular jobs for opportunistic resources
  • Integrated Simulation Framework (ISF), sketched below:
    • Mixing of full GEANT & fast simulation within an event
    • Baseline for MC production
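The ISF idea above can be illustrated with a short sketch: within one event, each particle is routed to either full (GEANT-style) or fast simulation. The Particle type, the function names, and the routing rule are illustrative assumptions, not the actual ISF API.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    pdg_id: int    # PDG particle code
    eta: float     # pseudorapidity
    energy: float  # GeV

def full_sim(p: Particle) -> str:
    # Stands in for the expensive, detailed GEANT simulation.
    return f"full-sim hits for pdg={p.pdg_id}"

def fast_sim(p: Particle) -> str:
    # Stands in for a cheap parametrised shower simulation.
    return f"fast-sim shower for pdg={p.pdg_id}"

def simulate_event(particles):
    hits = []
    for p in particles:
        # Illustrative routing rule (an assumption, not the real ISF policy):
        # muons and energetic central particles get full simulation,
        # everything else is parametrised.
        if abs(p.pdg_id) == 13 or (p.energy > 50 and abs(p.eta) < 2.5):
            hits.append(full_sim(p))
        else:
            hits.append(fast_sim(p))
    return hits

print(simulate_event([Particle(13, 0.5, 20.0), Particle(22, 3.1, 5.0)]))
```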

  5. Reconstruction
  • Reconstruction is memory hungry and requires non-negligible CPU
  • AthenaMP becomes the default from 2014:
    • (Almost) all production runs on multicore
    • Analysis stays on single core
  • Optimization in code and algorithms
  (A sketch of the fork-and-share idea behind AthenaMP follows.)
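A minimal sketch of the idea behind AthenaMP, assuming the standard fork-and-share pattern: large read-only state is initialised once, then worker processes are forked so its memory pages are shared copy-on-write across cores instead of being duplicated per single-core job. The names and data are illustrative, not Athena code.

```python
import multiprocessing as mp

# Pretend this is large read-only detector geometry / conditions data:
# loaded once in the parent, then shared copy-on-write by forked workers.
GEOMETRY = {"modules": list(range(100_000))}

def reconstruct(event_id: int) -> str:
    # Workers read GEOMETRY without duplicating it in memory.
    return f"event {event_id}: reconstructed with {len(GEOMETRY['modules'])} modules"

if __name__ == "__main__":
    ctx = mp.get_context("fork")         # POSIX fork inherits initialised state
    with ctx.Pool(processes=4) as pool:  # roughly one worker per core
        for line in pool.map(reconstruct, range(8)):
            print(line)
```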

  6. Analysis Model
  (Diagram: organized (reprocessing), organized (trains), chaotic (user analysis))
  • Common analysis data format: xAOD
    • Replacement of AOD & group ntuples of any kind
    • Readable by both Athena & ROOT
  • Data reduction framework:
    • AthenaMP to produce group data samples
    • Run centrally via Prodsys
    • Based on the train model: one input, N outputs (see the sketch below)
    • Reduces volumes from PB to TB
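The train model is easy to picture in code: the input dataset is read once, and every "carriage" (derivation) writes its own reduced output in the same pass. The derivation names and selections below are hypothetical; this is a sketch of the idea, not the ATLAS Derivation Framework.

```python
from typing import Callable, Iterable

Event = dict  # toy stand-in for an xAOD event record

def run_train(events: Iterable[Event],
              carriages: dict) -> dict:
    # One pass over the input; every carriage writes its own skim.
    outputs = {name: [] for name in carriages}
    for event in events:                      # single read of the input
        for name, selects in carriages.items():
            if selects(event):                # each carriage applies its selection
                outputs[name].append(event)   # N outputs from one input
    return outputs

events = [{"n_muons": 2, "met": 30.0}, {"n_muons": 0, "met": 120.0}]
skims = run_train(events, {
    "DAOD_MUON": lambda e: e["n_muons"] >= 2,  # hypothetical derivation names
    "DAOD_MET":  lambda e: e["met"] > 100.0,
})
for name, selected in skims.items():
    print(name, "->", len(selected), "events")
```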

  7. Computing Model: data processing
  • Optional extension of first-pass processing from T0 to T1s
  • More flexible usage of resources; some T2s are equivalent to T1s in terms of disk storage & CPU power:
    • Reduce the difference between T1s and T2s in the model
    • Loosen the job-to-data colocation (see the brokering sketch below)
  • Still one or two full reprocessings from RAW per year, but multiple AOD-to-AOD reprocessings per year
  • Derivation Framework (train model) for analysis datasets
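One way to picture a loosened job-to-data colocation, as a rough sketch only: the broker ranks candidate sites by an estimated completion time that includes an optional input transfer, rather than forcing the job to a site that already holds the data. Site names, rates, and costs below are invented for illustration and do not reflect the actual ATLAS brokering algorithm.

```python
def completion_estimate(site, job, replica_sites):
    # Hours until the job would finish at this site: queue wait, optional
    # WAN transfer of the input (GB -> Gb -> seconds -> hours), then CPU.
    transfer_h = 0.0
    if site["name"] not in replica_sites:
        transfer_h = job["input_gb"] * 8 / site["wan_gbps"] / 3600
    return site["queue_hours"] + transfer_h + job["cpu_hours"]

def broker(job, sites, replica_sites):
    # Pick the best overall estimate, whether or not the site holds the data.
    return min(sites, key=lambda s: completion_estimate(s, job, replica_sites))

sites = [
    {"name": "T1_DE", "queue_hours": 6.0, "wan_gbps": 2.0},
    {"name": "T2_UK", "queue_hours": 0.5, "wan_gbps": 1.0},
]
job = {"input_gb": 200.0, "cpu_hours": 3.0}
print(broker(job, sites, {"T1_DE"})["name"])  # picks T2_UK despite the transfer
```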

  8. Computing Model: Run-2 data placement
  (Diagram: T0/T1 DATADISK holds pinned replicas; T2 DATADISK holds pinned and cache replicas)
  • Dynamic data placement and reduction of secondary (cache) replicas will continue
  • More data will be stored on tape besides disk
  • Only popular primary (pinned) datasets will be kept on disk after some time
  • All datasets will have a lifetime (on both disk and tape)
  • Access to data on tape remains "organized", not "chaotic"
  (A toy sketch of the lifetime/popularity policy follows.)
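The lifetime and popularity rules above can be summarised in a toy policy function. The thresholds, field names, and return strings are assumptions for illustration; the real cleaning logic lives in the ATLAS data management system.

```python
from datetime import datetime, timedelta

def placement_decision(dataset, now, min_accesses=5):
    # Lifetime applies to both the disk and the tape copy.
    if now > dataset["created"] + dataset["lifetime"]:
        return "lifetime expired: delete from disk and tape"
    popular = dataset["recent_accesses"] >= min_accesses
    if dataset["pinned"] and popular:
        return "keep pinned disk replica (plus tape copy)"
    # Unpopular data falls back to tape, where access stays "organized".
    return "drop disk copy, keep tape copy"

now = datetime(2014, 10, 1)
ds = {"created": now - timedelta(days=400),
      "lifetime": timedelta(days=365),
      "pinned": True, "recent_accesses": 2}
print(placement_decision(ds, now))  # lifetime exceeded -> delete everywhere
```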

  9. Preparation for Run-2
  • Alessandro will now present the work being done in preparation for Run-2
  • We will have a three-day Jamboree (Dec 3-5) at CERN focused on site-related aspects
  • Placeholder agenda at https://indico.cern.ch/event/276502/
