
WLCG LHCC Comprehensive Review: Experiment Status Overview

This review provides a detailed overview of the current status and activities of the ALICE, ATLAS, LHCb and CMS experiments, covering data generation, calibration, analysis workflows and computing models. It discusses user analysis on the Grid, resources, data reduction in ALICE, software version handling, storage strategy, quotas, monitoring and ongoing challenges, and outlines the experiment phases, goals and testing plans that exercise the computing models and software distribution.





Presentation Transcript


  1. WLCG LHCC Comprehensive Review: Experiment Status Overview. Matthias Kasemann, relying on input from ALICE, ATLAS, LHCb and CMS

  2. Overview • Experiment activities and outlook: • ALICE • LHCb • ATLAS • CMS

  3. Computing model – pp (diagram: RAW from the CERN T0 disk buffer to tape, to the Grid file catalogue and to the T1s; generation of calibration parameters; first-pass and pass 1 & 2 reconstruction at T0/T1s; MC data; CAF analysis, ordered analysis and end-user analysis at T1s/T2s; AliEn FC)

  4. Computing model – AA, heavy-ion data taking (diagram: RAW from the CERN T0 disk buffer to tape and to the Grid file catalogue, partial export to T1s; generation of calibration parameters; pilot reconstruction; CAF analysis; AliEn FC)

  5. Computing model – AA, LHC shutdown (diagram: RAW read back from tape, partial and full export to T1s; generation of calibration parameters; first-pass, pilot and pass 1 & 2 reconstruction; MC data; CAF analysis, ordered analysis and end-user analysis at T1s/T2s; AliEn FC)

  6. High-level dataflow, online to offline (diagram: DAQ LDCs/GDCs, HLT and DCS feed data files (~240 TB) over the DDL/DAQ network into the CASTOR cache and CASTOR; condition files and run info flow through the DAQ, DCS and HLT file-exchange servers and the DAQ logbook/DCS databases via the Shuttle; a publish agent registers data and condition files in the AliEn/ALICE file catalogue; SRM/FTS transfers to the T1s; monitoring and calibration on the CAF worker nodes via xrootd)

  7. Resource overview

  8. ALICE Analysis • Three main analysis modes • Prompt data processing (calib, align, reco, analysis) at CERN with PROOF • Analysis with local PROOF clusters • Batch analysis on the GRID infrastructure • Access the GRID via AliEn or ROOT UIs • GRID API class TAliEn • Analysis à la PAW • ROOT + at most a small library (packed into a par file) (example plot: Ξ → πΛ, Λ → pπ cascade reconstruction)
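To make the TAliEn/ROOT access mentioned above concrete, here is a minimal PyROOT sketch of connecting to AliEn through ROOT's TGrid interface and chaining ESD files from a catalogue query. The catalogue path is a made-up placeholder and the snippet assumes a working AliEn client environment; it is an illustration, not the actual ALICE analysis framework.

```python
# Minimal sketch: Grid access from ROOT via the TGrid/TAliEn interface.
# The catalogue path below is a hypothetical placeholder.
import ROOT

# Connect to AliEn (TGrid::Connect dispatches to the TAliEn plugin).
grid = ROOT.TGrid.Connect("alien://")
if not grid:
    raise RuntimeError("could not connect to AliEn")

# Query the AliEn file catalogue for ESD files under a (hypothetical) directory.
result = grid.Query("/alice/sim/example_production", "AliESDs.root")

# Chain the ESD trees via their transport URLs and loop over them PAW-style.
chain = ROOT.TChain("esdTree")
for i in range(result.GetEntries()):
    chain.Add(result.GetKey(i, "turl"))   # e.g. root://... URLs
print("events available:", chain.GetEntries())
```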

  9. Organised vs chaotic • Organised analysis will be done via an “analysis train” • This is being tested now • Jobs go to the data, and that we can do • Chaotic analysis • The model here is unknown and as yet untested • It is a “resource hungry” activity that we are just beginning to appreciate

  10. User analysis on the Grid • Very important to get more users on the Grid • The Grid batch analysis is now rather efficient and stable • 62% of jobs wait less than 3 minutes between submission and start of execution (time spent in the Task queue before landing on a WN) • The majority of analysis jobs are ESD analyses of 100K to 10M events • Long tail of waiting times: jobs of the same user waiting for free resources (occupied by other users with higher priority) (plots: task-queue waiting time and execution time of the analysis jobs)

  11. Data reduction in ALICE (diagram of the two reduction chains): RAW 1.1 MB/ev → ESD 40 kB/ev → AOD/S-AOD 5 kB/ev → Tag 2 kB/ev, and RAW 14 MB/ev → ESD 3 MB/ev → AOD/S-AOD 300 kB/ev → Tag 2 kB/ev. Reconstruction at T0/T1s uses conditions data; analysis runs at T0/T1s/T2s or on a laptop, requires AliRoot + Cond + AliEn (once) and has to run on a disconnected laptop.
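As a quick back-of-the-envelope check of the reduction chains above, the snippet below computes the volume of one million events at each stage and the reduction factor relative to RAW, using only the per-event sizes quoted on the slide.

```python
# Per-event sizes (bytes) for the two reduction chains shown on the slide.
chains = {
    "RAW 1.1 MB/ev chain": {"RAW": 1.1e6, "ESD": 40e3, "AOD": 5e3, "Tag": 2e3},
    "RAW 14 MB/ev chain":  {"RAW": 14e6,  "ESD": 3e6,  "AOD": 300e3, "Tag": 2e3},
}

n_events = 1_000_000
for name, sizes in chains.items():
    print(name)
    for stage, size in sizes.items():
        volume_tb = size * n_events / 1e12        # bytes -> TB for 1M events
        reduction = sizes["RAW"] / size           # factor relative to RAW
        print(f"  {stage:>3}: {volume_tb:8.3f} TB per 1M events, "
              f"reduction vs RAW: {reduction:,.0f}x")
```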

  12. CAF: the whole CAF becomes an xrootd cluster (diagram: a PROOF master / xrootd redirector in front of worker nodes, each running PROOF and xrootd, staging data from CASTOR) • S/W version handling now in PROOF • Quotas / load-balancing prototype ready • Data access at the moment “by hand”
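The par-file and "S/W version handling now in PROOF" points can be illustrated with a schematic PyROOT session against a PROOF cluster. The master hostname, package name, file path and selector are placeholders and the exact CAF configuration of the time is not reproduced; this is a sketch of the general PROOF workflow, not ALICE production code.

```python
# Schematic PROOF session (PyROOT). Hostnames, the PAR package, the file
# path and the selector are hypothetical placeholders.
import ROOT

# Open a session on the PROOF master (on the CAF this fronts the xrootd cluster).
proof = ROOT.TProof.Open("caf-master.example.cern.ch")

# Software distribution via a PAR file: upload once, enable on all workers.
proof.UploadPackage("MyAnalysisLib.par")
proof.EnablePackage("MyAnalysisLib")

# Process a chain of ESD files served by the xrootd cluster with a selector.
chain = ROOT.TChain("esdTree")
chain.Add("root://caf-master.example.cern.ch//data/esd/AliESDs.root")
chain.SetProof()                     # route the processing through PROOF
chain.Process("MySelector.cxx+")     # compile and run the selector on the workers
```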

  13. Storage strategy (diagram: disk and MSS storage elements, with DPM, CASTOR and dCache as the LCG-developed SEs, fronted by SRM interfaces marked as available or being deployed; an xrootd manager and workers, plus an xrootd emulation on the VOBOX::SA, provide the access layer for the worker nodes)
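Since xrootd is the common access layer in front of these storage elements, opening a file looks the same to a client whether the back-end is CASTOR, DPM or dCache. The redirector host and path below are invented for illustration.

```python
# Opening a file through an xrootd redirector from ROOT (PyROOT).
# The host and path are hypothetical; the same root:// URL style works
# regardless of which SE technology sits behind the redirector.
import ROOT

f = ROOT.TFile.Open("root://xrootd-redirector.example.org//alice/data/AliESDs.root")
if not f or f.IsZombie():
    raise RuntimeError("could not open file over xrootd")

tree = f.Get("esdTree")
print("entries:", tree.GetEntries() if tree else 0)
```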

  14. Challenge - PDC’06/07 • Average of 1800 CPUs running continuously since April 2006 • In the past 4 months: saturating all available resources • Maximum attained: 7500 jobs • 65 sites contributing: 50% T2, 50% T0+T1

  15. Full Dress Rehearsal (diagram of the complete exercise): data from ECS/DCS/HLT and the DAQ (LDCs, GDCs) flow through the Shuttle and the DAQ/DCS/HLT file-exchange servers into CASTOR; detector algorithms (DA) and quality assurance (QA) run at every stage; 2-pass calibration and 2-pass alignment; simulated RAW and MC production; ESD / reconstruction (“ESD friends?”) at the CEs; prompt analysis on the CAF and train analysis at the T2s; transfers via xrootd, rfcp and FTS to the T1s; AliEn FC and Amanda; monitoring plus disk and CPU quotas throughout

  16. Full Dress Rehearsal • Phase I (Oct-Dec 2007) • Registration of RAW in CASTOR2 and the Grid catalogue • Replication to T1s, automatic reconstruction to ESD • Phase II (Feb-Mar 2008) • All Phase I tasks • HLT, DCS and DAQ conditions data collected with the Shuttle from the ALICE commissioning exercise • Phase III (Apr-data taking 2008) • All Phase I + II tasks • Online Detector Algorithms + Quality Assurance

  17. Status of DC06 • Reminder: • Two-fold goal: produce and reconstruct useful data, exercise the LHCb Computing model, DIRAC and ganga • To be tested: • Software distribution • Job submission and data upload (simulation: no input data) • Data export from CERN (FTS) using MC raw data (DC06-SC4) • Job submission with input data (reconstruction and re-reconstruction) • For staged and non-staged files • Data distribution (DSTs to Tier1s T0D1 storage) • Batch analysis on the Grid (data analysis and standalone SW) • Datasets deletion • LHCb Grid community solution • DIRAC (WMS, DMS, production system) • ganga (for analysis jobs)
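Analysis-job submission through ganga, as listed above, would look roughly like the fragment below, run inside the ganga shell. The application, option file, dataset and splitter settings are placeholders, and the exact class names of the DC06-era Ganga/DIRAC interface are an assumption.

```python
# Schematic Ganga submission script (run inside the ganga shell, where Job,
# DaVinci, Dirac, LHCbDataset and SplitByFiles are predefined). All names,
# paths and LFNs are placeholders; the DC06-era interface may have differed.
j = Job(name="dc06-analysis-sketch")
j.application = DaVinci(optsfile="myAnalysis.opts")   # hypothetical options file
j.backend = Dirac()                                   # submit through the DIRAC WMS
j.inputdata = LHCbDataset(["LFN:/lhcb/example/00001234_00000001_5.dst"])
j.splitter = SplitByFiles(filesPerJob=5)              # one subjob per few files
j.submit()
```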

  18. DC06 phases • 2006: simulation and distribution to Tier1s • February 2007 onwards • Background events reconstruction at Tier1s • Uses 20 MC raw files as input • these were no longer in cache, hence had to be recalled from tape • output rDST uploaded locally to the Tier1 • June 2007 onwards • Background events stripping at Tier1s • Uses 2 rDSTs as input • Accesses the 40 corresponding MC raw files for full reconstruction of selected events • DST distributed to Tier1s • Originally 7 Tier1s, then CERN+2 • need to clean up datasets from sites to free space

  19. Simulation jobs • Up to 10,000 jobs running simultaneously • Continuous requests from physics teams

  20. Simulation jobs • Up to 10,000 jobs running simultaneously • Continuous requests from physics teams • Problems encountered • SE unavailability for output data upload • Implemented a fail-over mechanism in the DIRAC DMS • The final data transfer is filed as a request in one of the VOBOXes • Had to develop a multithreaded transfer agent because the backlog of transfers grew too large • Had to develop an lcg-cp able to transfer to a SURL • Request to support SURL in lcg-cp took 10 months to be in production (2 weeks to implement) • Handling of full disk SEs • Handled by VOBOXes • Cleaning SEs: painful as there is no SRM tool (mail to the SE admin)
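The fail-over mechanism described above can be caricatured as: try the destination SE, fall back to another SE if the upload fails, and file the final transfer as a request on a VOBOX for a (multithreaded) agent to retry later. Everything below is an invented stand-in for illustration, not DIRAC DMS code.

```python
# Illustrative fail-over logic for output-data upload, loosely following the
# behaviour described on the slide. upload_to_se() and VoboxTransferQueue
# are hypothetical stand-ins, not real DIRAC calls.
import logging

def upload_to_se(local_file, se):
    """Stand-in for an SRM/gridftp upload; raises IOError on failure."""
    raise IOError(f"{se} unavailable")   # simulate the SE outages seen in DC06

class VoboxTransferQueue:
    """Stand-in for the VOBOX transfer-request database."""
    def __init__(self):
        self.requests = []
    def add_request(self, local_file, source, target):
        self.requests.append((local_file, source, target))

def upload_with_failover(local_file, destination_se, fallback_ses, queue):
    """Try the destination SE, then fall-back SEs; otherwise file a transfer
    request on the VOBOX so a (multithreaded) agent can retry later."""
    for se in [destination_se] + list(fallback_ses):
        try:
            upload_to_se(local_file, se)
            if se != destination_se:
                # Data is safe on a fall-back SE; file the final transfer
                # to its real destination in the VOBOX request queue.
                queue.add_request(local_file, source=se, target=destination_se)
            return True
        except IOError as exc:
            logging.warning("upload to %s failed: %s", se, exc)
    # All SEs failed: keep the request so the transfer agent can drain
    # the backlog once storage comes back.
    queue.add_request(local_file, source="local", target=destination_se)
    return False

queue = VoboxTransferQueue()
upload_with_failover("00001234_brunel.rdst", "CERN-disk", ["CNAF-disk", "PIC-disk"], queue)
print(queue.requests)
```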

  21. Reconstruction jobs • Needs files to be staged • Easy for first prompt processing, painful for reprocessing • Developed a DIRAC stager agent • Jobs are put in the central queue only when files are staged • File access problems • Some files are not retrievable from tape • Some files are temporarily unavailable • Staging at some sites is extremely slow • Storage resources • 3 out of 7 sites didn’t provide the necessary disk space • many instabilities on SEs (SRM) • need to perform several technology migrations • PIC (Castor->dCache for tape), RAL (dCache->Castor), CNAF (Castor->StoRM for disk)
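The stager-agent idea, jobs are released to the central task queue only once all their input files are staged on disk, can be sketched as follows. The staging check is a random stand-in; the real agent would query the site's SRM/stager.

```python
# Illustrative DIRAC-stager-style logic: a job enters the central task queue
# only when every input file is reported as staged on disk.
# is_staged() and the job dictionaries are hypothetical placeholders.
import random

def is_staged(lfn):
    """Stand-in for an SRM/stager status query (real code would ask the SE)."""
    return random.random() > 0.3

def release_ready_jobs(waiting_jobs, central_queue):
    """Move jobs whose inputs are all on disk into the central queue;
    leave the rest waiting (the stager keeps recalling files from tape)."""
    still_waiting = []
    for job in waiting_jobs:
        if all(is_staged(lfn) for lfn in job["input_files"]):
            central_queue.append(job)
        else:
            still_waiting.append(job)
    return still_waiting

waiting = [{"id": 1, "input_files": ["/lhcb/raw/a.raw", "/lhcb/raw/b.raw"]},
           {"id": 2, "input_files": ["/lhcb/raw/c.raw"]}]
queue = []
waiting = release_ready_jobs(waiting, queue)
print("queued:", [j["id"] for j in queue], "waiting:", [j["id"] for j in waiting])
```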

  22. SRM v2.2 tests (on 10,000 files)

  23. Plans for coming months • Stripping of MC events for analysis • from now until the end of the year • Re-engineering of DIRAC • DIRAC3 to be put into production in January • based on the experience of many years with DIRAC • should make more effective use of middleware (e.g. python API) • use WN client software deployed in the LCG-AA • better control of versions • DM based on SRM v2.2 • also back-ported to DIRAC2 • Preparation of CCRC’08 • February: 40 TB of data over 2 weeks from the pit to Tier0 and Tier1s • reconstructed, stripped • May: twice February • add chaotic analysis, Conditions DB replication and access

  24. ATLAS Computing Overview: Outline • Recent news from ATLAS M5 cosmic ray run • Database distribution • Simulation production • End 2007 DDM tests and activities • Plan of activities until LHC turn-on

  25. M5: Tier-0 Summary • Tier-0 ready for data processing at M5 start-up (Fri, Oct 26) • CASTOR setup: usage of the new “t0atlas” pool • Agreed extension 50 TB → 150 TB in place only on Mon, Oct 29 • Problems during the “too successful” weekend run (see later slide) • Offline DQM integrated and running since Fri, Nov 2 • DQM s/w and executables came late; subsequent adaptations of Tier-0 s/w necessary • A few hiccups with not fully tested release patches • E.g. too long MuonBoy execution time (13.0.30.14) • Data export to Tier-1s and further • Operational since Tue, Oct 30 • Requested change in the DDM registration procedure required code modifications by the DQ2 developers (“Tier-0 plugin”)

  26. M5: Tier-0 Summary • Data export to Tier-1s and further (cont.) • Tier-1s were served according to MoU shares as defined in October 2006 for 2007 • In addition, full RAW data to BNL, LYON, TRIUMF • Full ESD and CBNT to BNL • Worked basically fine to all 10 Tier-1s • A network configuration problem at SARA (NL) was found and sorted out • Further export from Tier-1s to associated Tier-2s • Some intrinsic problems and inefficiencies still have to be investigated • ~15% of the total datasets affected

  27. M5: “t0atlas” Pool (plot annotations: 15-20 TB free at M5 start-up; pool almost filled up again after the weekend; pool extension to 150 TB by the CASTOR team, as agreed before; pool runs full, SFO→CASTOR transfers and Tier-0 activities halt; manual staging-out of M4 data frees ~15 TB of disk space) • “t0atlas” pool ran full during the first weekend run • Only serious Tier-0 related problem • Tape migration could not keep up with the file arrival rate • SFO cleanup does not rely any more on tape migration status • After M5: need to agree on a stage-out policy • Pool has to be cleaned up manually

  28. M5: Tier-0 Monitoring Snapshots

  29. M5: Tier-0 → Tier-1 Export, monitored by the ARDA Dashboard (plots: total throughput in MB/s, Oct 30 – Nov 6; data transferred in GB, completed file transfers and total number of errors, Oct 30 – Nov 7)

  30. M5 data to Tier-1s and Tier-2s (plots: distribution of M5 datasets to Tier-1s and to Tier-2s)

  31. Database Replication (diagram of the Oracle/COOL replication chain: ATLAS PVSS Oracle archive and PVSS2COOL application, ATLAS online RAC, ATLAS offline RAC and the Tier-1s; schemas involved: ATLAS_COOL_DCS with materialized views with a full set of indexes, refreshed on demand; ATLAS_COOL_DCS3D with COOL tables with PKs only; COOL online sub-detector accounts ATLAS_COOLONL_xxx and ATLAS_COOLOFL_xxx, with all indexes) • Oracle Streams in production use: ATONR -> ATLR -> all Tier-1s • Used for almost everything

  32. Database Replication - Status

  33. Distributed Simulation Production (plot: wall-clock time in days, 0 to 6000, from 1 January to 14 November 2007) • Simulation production continues all the time on the 3 Grids (EGEE, OSG and NorduGrid) and recently reached 1M events/day • The rate is limited by the needs and by the availability of data storage more than by resources • Validation of simulation and reconstruction with release 13 is in progress • Large-scale reconstruction will start soon for the detector paper and the FDR

  34. Production at Tier-0/1/2/3…

  35. Data Distribution Tests • The throughput tests will continue (a few days/month) until all data paths are shown to perform at nominal rates • This includes: • a) Tier-0 → Tier-1s → Tier-2s for real data distribution • b) Tier-2 → Tier-1 → Tier-1s → Tier-2s for simulation production • c) Tier-1 → Tier-1 for reprocessing • Test a) is now OK almost everywhere; the next rounds will concentrate on b) and c) • The Functional Test will also be run in the background approximately once/month in an automatic way • The FT consists of low-rate tests of all data flows, including performance measurements of the completion of dataset subscriptions • The FT is run in the background, without requiring any special attention from site managers • It checks the response of the ATLAS DDM and Grid m/w components as experienced by most end users
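The "performance measurements of the completion of dataset subscriptions" can be pictured as a simple per-subscription completeness check, as in the toy sketch below; the subscription records and threshold are invented and this is not the actual DDM/DQ2 bookkeeping.

```python
# Toy completeness check for dataset subscriptions, in the spirit of the
# ATLAS Functional Test described above. The records are invented.
subscriptions = [
    {"dataset": "m5.cosmics.RAW", "site": "EXAMPLE-T1", "files_total": 200, "files_done": 200},
    {"dataset": "m5.cosmics.ESD", "site": "EXAMPLE-T2", "files_total": 120, "files_done": 102},
]

def completion_report(subs, threshold=0.95):
    """Print per-subscription completion and flag those below threshold."""
    for s in subs:
        frac = s["files_done"] / s["files_total"]
        status = "OK" if frac >= threshold else "INCOMPLETE"
        print(f"{s['dataset']:>16} @ {s['site']:<12} {frac:6.1%}  {status}")

completion_report(subscriptions)
```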

  36. Global schedule: M*, FDR & CCRC’08 • The FDR must test the full ATLAS data flow system, end to end: • SFO → Tier-0 → calib/align/recon → Tier-1s → Tier-2s → analyse • Stage-in (Tier-1s) → reprocess → Tier-2s → analyse • Simulate (Tier-2s) → Tier-1s → Tier-2s → analyse • The SFO → Tier-0 tests interfere with cosmic data-taking • We must decouple these tests from the global data distribution and distributed operation tests as much as possible • CCRC’08 must test the full distributed operations at the same time for all LHC experiments, as requested by Tier-1 centres to check their own infrastructure • Proposal: decouple CCRC’08 from M* and FDR • CCRC’08 has to have fixed timescales as many people are involved • CCRC’08 can use any datasets prepared for the FDR, starting from Tier-0 disks • CCRC’08 can then run in parallel with cosmic data-taking • Possible Tier-0 interference and the total load have to be checked • Cosmic data distribution can be done in parallel as the data flow is irregular and on average much lower than nominal rates

  37. ATLAS Plans • Software releases: • 13.0.40 • Week of 19-23 Nov • 13.1.0 • Week of 26 Nov • 14.0.X • Staged release build starts week of 17-21 Dec; base release 14.0.0 available Mid-end Feb 2008 • 15.0.X (tentative) • Mid 2008 • DDM tests: • SRM tests: • Now • Throughput and Functional Tests: • Early December • Cosmic runs: • M6 • End of February 2008 • Continuous mode • Start late April 2008 (depends on LHC schedule) • FDR: • Phase I • February 2008 (before M6) • Phase II • April 2008 (just before start of continuous data-taking mode) • CCRC’08 • Phase I • February 2008 (coincides with FDR/I) • Phase II • May 2008 (in parallel with cosmic data-taking activities)

  38. CMS Schedule (timeline chart with two tracks) • 1) Detector installation, commissioning & operation: tracker inserted; test magnet at low current; last heavy element lowered; tracker cabled; CMS cosmic run CCR_0T (several short periods Dec-Mar); beam-pipe closed and baked out; 1 EE endcap installed, pixels installed; cosmic run CCR_4T • 2) Preparation of software, computing & physics analysis: CSA07; s/w release 1_7 (CCR_0T, HLT validation); 2007 physics analyses completed; s/w release 1_8 (lessons of ’07); CCRC08 functional tests (in series); s/w release 2_0 (CCR_4T, production of startup MC samples); MC production for startup • CCRC08 = CSA08 [CMS]

  39. CSA07 Goals • Test and validate the components of the CMS Computing Model in a simultaneous exercise • the Tier-0, Tier-1 and Tier-2 workflows • Test the CMS software: • particularly the reconstruction and HLT packages • Test the CMS production systems at 50% of the scale of expected 2008 operation • workflow management, data management, facilities, transfers • Test the computing facilities and mass storage systems • Demonstrate that data will transfer between production and analysis sites in a timely way • Test the Alignment and Calibration stream (AlcaReco) • Produce, deliver and store AODs + skims for analysis by physics groups

  40. (no transcript text; figure-only slide)

  41. CSA07 Workflows (diagram: HLT → Tier-0 at 300 MB/s, with prompt reconstruction, CASTOR, and calibration / express-stream analysis on the CAF; Tier-0 → Tier-1s at 20-200 MB/s for re-reco and skims; Tier-1s ↔ Tier-2s at ~10 MB/s for simulation and analysis)

  42. CMS MC Production

  43. (no transcript text; figure-only slide)

  44. MC Production summary (… more resources used)

  45. Commissioning + Monitoring the sites • CMS-specific, improved SAM tests are performed regularly to ensure that sites are ready to run CMS jobs (software installed, etc.) and that the links between the sites are commissioned • Some sites did not have the software installed due to incompatibilities between SL3/SL4 installations • Improved test that all required software versions are installed --> have to improve the installation procedure, have to invest work with the T2’s
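The "improved test for all required software versions installed" could look schematically like the check below, which verifies that each required CMSSW release exists in the site's software area. The release names, directory layout and use of the VO_CMS_SW_DIR variable are assumptions for illustration, not the real SAM test.

```python
# Schematic SAM-style site test: verify that every required CMSSW release
# is present in the site software area. Paths and release names are
# hypothetical placeholders; the real CMS SAM tests are more involved.
import os

REQUIRED_RELEASES = ["CMSSW_1_6_7", "CMSSW_1_6_9"]          # example release names
SOFTWARE_AREA = os.environ.get("VO_CMS_SW_DIR", "/opt/cms")  # assumed layout

def check_releases(required, sw_area):
    missing = [rel for rel in required
               if not os.path.isdir(os.path.join(sw_area, "cmssw", rel))]
    if missing:
        print("CRITICAL: missing releases:", ", ".join(missing))
        return False
    print("OK: all required releases installed")
    return True

check_releases(REQUIRED_RELEASES, SOFTWARE_AREA)
```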

  46. Debugging Data Transfer for CSA07 • “Debugging Data Transfers” taskforce launched in June • Huge progress commissioning data transfers (a lot of work still to do on many links) • Debugging transfers is a *hard* task • It covers storage-system-to-storage-system transfers • Central coordination has helped to improve the status • Can only be successful as a collaboration between the central coordination team and the sites • The DDT Task-Force will end after CSA07; deliverables are: • Much improved list of commissioned links • Problem-solving documentation • Discussion of networking status and data transfer experience is part of the CMS T1 visits during Nov/Dec 07

  47. Commissioning of Links • A link gets commissioned when it demonstrates routine data transfers for several days (well-defined metric) • It is easier to remain commissioned than to get commissioned • Links that are not commissioned are disabled in the main Prod instance of PhEDEx • This means that no production transfers are allowed on these links • …only commissioned links are usable for MC production and physics analysis • To guarantee enough data volume to meet the targets, artificial transfers (LoadTest’07) can be used
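The commissioning rule above, routine transfers for several days against a well-defined metric, plus the observation that it is easier to stay commissioned than to get commissioned, can be expressed as a small hysteresis check. The thresholds and window length below are invented; the actual DDT metric is only referred to as "well defined" on the slide.

```python
# Toy link-commissioning decision with hysteresis: a stricter bar to become
# commissioned than to stay commissioned. Thresholds and rates are invented.
def update_link_status(daily_rates_mb_s, currently_commissioned,
                       commission_rate=20.0, keep_rate=5.0, days_required=5):
    """daily_rates_mb_s: average transfer rate for each of the last N days."""
    window = daily_rates_mb_s[-days_required:]
    if len(window) < days_required:
        return currently_commissioned          # not enough history yet
    threshold = keep_rate if currently_commissioned else commission_rate
    return all(rate >= threshold for rate in window)

# A link ramping up with LoadTest'07-style artificial transfers:
history = [0, 2, 25, 30, 28, 26, 31]
print("commissioned:", update_link_status(history, currently_commissioned=False))
```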

  48. Data transfers achieved • T0 <-> T1 links are critical for operations: data is transferred to the T1 centres for custodial storage • T2 -> “nearby T1” links are needed for MC production • T1 -> T2 links (the mesh) are needed for analysis transfers • T1 <-> T1 links are required for AOD transfers, since AODs will be available at all T1 sites • Huge increase in CMS transfers performed for commissioning and production for all sites (plot: transfer volume growth across the Load Tests, CSA06, Jun 07 and Oct 07 periods)

  49. Data transfers during CSA07 (plot: transfer quality from CERN during CSA07, Oct 07) • Observations: • Transfer quality during the challenge was an issue • Interference with production tasks observed • No problems for CMS transfers seen during the ATLAS transfer tests

  50. CSA07: Tier-0 Processing (ongoing) • At Tier-0 the pre-challenge processing steps were run: dividing the samples into primary datasets as well as doing the prompt reconstruction of events • 3 intermediate steps were required to produce the data samples • During the challenge preparations we wrote ~1 PB of data, similar to the 2008 run; heavy load on Castor • Without stream splitting the 100 Hz target was frequently exceeded • During the periods when data was being split into samples the farm did not achieve the target -> I/O limitations • Stable operations were achieved for the last week (plot: T0 job statistics for the last week, peaking around 2.0k jobs)
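For scale, writing ~1 PB at the nominal 300 MB/s HLT-to-Tier-0 rate quoted on the workflow slide takes more than a month of continuous running; the snippet below is just that back-of-the-envelope estimate.

```python
# Back-of-the-envelope: how long does it take to write ~1 PB at the nominal
# 300 MB/s HLT -> Tier-0 rate quoted on the CSA07 workflow slide?
volume_bytes = 1e15          # ~1 PB (decimal)
rate_bytes_s = 300e6         # 300 MB/s

seconds = volume_bytes / rate_bytes_s
print(f"{seconds / 86400:.0f} days of continuous writing")   # roughly 39 days
```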
