Phase 2 of the Physics Data Challenge ‘04: Task Overview and Achievements

Detailed overview of the tasks, challenges, and improvements in Phase 2 of the Physics Data Challenge '04 from the ALICE DC team, covering the job structure, statistics, experience with AliEn and LCG, and plans for Phase 3.


Presentation Transcript


1. Phase 2 of the Physics Data Challenge ‘04
Peter Hristov, for the ALICE DC team
Russia-CERN Joint Group on Computing
CERN, September 20, 2004

2. Outline
• Purpose and conditions of Phase 2
• Job structure
• Experiences and improvements: AliEn and LCG
• Statistics (up to today)
• Toward Phase 3
• Conclusions

3. Phase 2 purpose and tasks
• Merging of signal events with different physics content into underlying Pb+Pb events (underlying events are reused several times; see the sketch below)
• Tests of:
  • Standard production of signal events
  • Stress test of network and file transfer tools
  • Storage at remote SEs and its stability (crucial for Phase 3)
• Conditions and jobs:
  • 62 different conditions
  • 340K jobs, 15.2M events
  • 10 TB of produced data
  • 200 TB of data transferred from CERN
  • 500 MSI2K hours of CPU
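To make the reuse pattern concrete, here is a minimal Python sketch of how the mixing could be planned: each signal job is paired with one underlying Pb+Pb event, and the underlying sample is cycled through again once exhausted. The file names and the pairing function are illustrative assumptions, not the actual AliEn job description.

```python
# Illustrative sketch of underlying-event reuse; names are hypothetical.
from itertools import cycle

def plan_mixing_jobs(underlying_events, n_signal_jobs):
    """Pair each signal job with an underlying event, cycling through
    the underlying sample so that events are reused several times."""
    reuse = cycle(underlying_events)
    return [(job_id, next(reuse)) for job_id in range(n_signal_jobs)]

# 20000 underlying events serving 340K signal jobs, as in the slide's totals
underlying = [f"galice_{i:05d}.root" for i in range(20000)]  # hypothetical names
jobs = plan_mixing_jobs(underlying, 340_000)
print(jobs[0], jobs[20_000])  # job 20000 reuses the first underlying event
```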

4. Repartition of tasks (physics signals)
[table shown on slide; not transcribed]

5. Structure of event production in Phase 2
[diagram] Central servers perform master job submission, job optimization (splitting into N sub-jobs), resource brokering, file cataloguing, and process monitoring and control. Sub-jobs run on AliEn CEs and, through the AliEn-LCG interface, on LCG CEs; underlying event input files are read from CERN CASTOR. Each job's output files are packed into a zip archive whose primary copy is stored at the local SE and registered in the AliEn File Catalogue (LCG outputs are registered via edg/lcg copy&register, so the LCG LFN equals the AliEn PFN); a backup copy is kept at CERN CASTOR. A sketch of this output handling follows.
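The per-job output handling in the diagram can be sketched as below, assuming simple put()/register() interfaces for the SE, CASTOR, and the file catalogue; those interfaces are assumptions for illustration, not the AliEn API.

```python
# Sketch of the output flow: zip the outputs, store the primary copy at the
# local SE, keep a backup at CERN CASTOR, register in the file catalogue.
import zipfile

def store_outputs(output_files, local_se, castor, catalogue, lfn):
    archive = "outputs.zip"
    with zipfile.ZipFile(archive, "w") as z:  # one large archive: friendlier to tape systems
        for f in output_files:
            z.write(f)
    pfn = local_se.put(archive)     # primary copy at the local SE (assumed interface)
    castor.put(archive)             # backup copy at CERN CASTOR (assumed interface)
    catalogue.register(lfn, pfn)    # record the primary copy in the file catalogue
```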

6. Experience: AliEn
• AliEn system improvements:
  • AliEn processes tables split into "running" (lightweight) and "done" (archive), allowing faster process tracking
  • Implemented symbolic links and event groups (through sophisticated search algorithms): underlying events are grouped via symbolic links into a directory per signal event type. For example, 1660 underlying events are used for each jet signal condition, another 1660 for the next, and so on, up to 20000 in total (12 conditions); a sketch follows below
  • Implemented zip archiving, mainly to overcome the limitations of the taping systems (fewer files, larger size)
  • Fast resubmission of failed jobs – in this phase all jobs must finish
  • New job monitoring tools, including single-job trace logs from start to finish, with logical steps and timing
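A minimal sketch of the symbolic-link grouping, assuming a POSIX-like catalogue namespace; the directory layout and file names are hypothetical, but the group size (1660 events per jet condition, 12 conditions, up to 20000 events) follows the slide.

```python
# Group underlying events into per-condition directories via symbolic links.
import os

N_PER_CONDITION = 1660   # underlying events per jet signal condition (from the slide)
N_CONDITIONS = 12        # 12 x 1660 = 19920, i.e. up to 20000 events in total

events = [f"/alice/underlying/event_{i:05d}.root" for i in range(20000)]  # hypothetical paths

for cond in range(N_CONDITIONS):
    group_dir = f"/alice/signal/jets/condition_{cond:02d}/underlying"     # hypothetical layout
    os.makedirs(group_dir, exist_ok=True)
    start = cond * N_PER_CONDITION
    for src in events[start:start + N_PER_CONDITION]:
        os.symlink(src, os.path.join(group_dir, os.path.basename(src)))   # link, don't copy
```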

7. AliEn problems
• Proxy server – out of memory due to a spiralling number of proxy connections. An attempt to introduce a schema with a pre-forked, limited number of proxies (sketched below) was not successful, and the problem has to be studied further:
  • Not a show-stopper – we know what to monitor and how to avoid it
• JobOptimizer – due to the very complex structure of the jobs (many files in the input box), the time needed to prepare one job for submission is large, and the service sometimes cannot supply enough jobs to fill the available resources:
  • Not a show-stopper for now – we are mixing jobs of different execution lengths, thus load-balancing the system
  • Has to be fixed for Phase 3, where the input boxes of the jobs will be even larger and the processing time very short – clever ideas on how to speed up the system already exist
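For reference, the pre-fork idea amounts to the classic Unix pattern below: fork a fixed pool of workers that all accept() on one shared listening socket, so the number of proxy processes is bounded up front. This is only a generic sketch of the approach that was attempted, not the AliEn proxy code; the pool size and port are illustrative.

```python
# Generic pre-forked server pattern (Unix only): a bounded pool of workers
# shares one listening socket, capping the number of proxy processes.
import os
import socket

N_PROXIES = 8  # hard upper bound on proxy processes (value is illustrative)

srv = socket.socket()
srv.bind(("", 9000))   # hypothetical port
srv.listen(64)

for _ in range(N_PROXIES):
    if os.fork() == 0:                  # child: serve forever on the shared socket
        while True:
            conn, _ = srv.accept()      # the kernel spreads accept() across workers
            conn.sendall(b"proxied\n")  # real proxy logic would go here
            conn.close()

for _ in range(N_PROXIES):              # parent: just supervise the pool
    os.wait()
```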

8. Experience: LCG
• A document on Data Challenges on LCG is being finalised by the GAG, with contributions by ALICE, ATLAS, and LHCb
• LCG problems and solutions:
  • General
    • Problem -> reporting -> fixing -> green light – but no feedback
    • The same problem then appears somewhere else…
    • Direct contact with site managers can be useful
  • Job management
    • On average, it works fairly well
    • Maximum number of CPUs served by an RB ≈ average job duration / submission interval (see the worked numbers below)
    • Max submission rate to LCG: 720 jobs/hour; for us it is less, as we do more than just submission
    • One entry point does not scale to the whole system size…
    • No multiple-job management tools
    • Ranking by [1 – (jobs waiting)/(total CPUs)] works well, but it is not the default…
    • Jobs reported as "Running" by LCG fail to report to AliEn that they started, so they stay "queued" forever
    • Jobs stay "Running" forever, even if site managers report their completion
    • Jobs reported as "cancelled by user" even when they were not
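The two quoted numbers combine as simple arithmetic: at 720 jobs/hour an RB submits one job every 5 seconds, so with the 2h30min cent1 jobs from slide 10 it can keep at most 9000 s / 5 s = 1800 CPUs busy. Below is a small sketch of both the capacity bound and the ranking expression; the CE records are hypothetical.

```python
# Ranking a computing element by [1 - (jobs waiting)/(total CPUs)].
def rank(ce):
    return 1.0 - ce["waiting"] / ce["total_cpus"]

ces = [{"name": "CE-A", "waiting": 120, "total_cpus": 400},   # hypothetical CEs
       {"name": "CE-B", "waiting": 10,  "total_cpus": 200}]
best = max(ces, key=rank)               # CE-B scores 0.95 vs 0.70 for CE-A

# RB capacity bound: CPUs kept busy ~= job duration / submission interval.
job_duration_s = 2.5 * 3600             # 2h30min cent1 jobs (slide 10)
submission_interval_s = 3600 / 720      # 720 jobs/hour -> one job every 5 s
print(best["name"], job_duration_s / submission_interval_s)   # CE-B 1800.0
```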

9. LCG problems and solutions (cont'd)
• Data management
  • "Default SE" vs. "Close SE"
  • edg-rm commands
  • lcg-cr: lack of diagnostic information
  • Possible fix for temporarily unavailable SEs (a retry sketch follows below)
• Sites/configuration
  • "Black-hole" effect: jobs fail at a site, and more and more jobs are attracted to it
  • "alicesgm" not allowed to write in the SW installation area
  • Environment variables: VO_ALICE_SW_DIR not set
  • Misconfiguration: FZK with INFN certificates; Cambridge: bash not supported
  • "Default SE" vs. "Close SE" – see above
  • Library configuration – CNAF (solved, how?), CERN (?)
  • NFS not working: multiple job failures – see the "black-hole" effect
• Stability
  • Behaviour is anything but uniform over time – but the general picture is improving
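One plausible shape for the "temporarily unavailable SE" fix is a retry-with-fallback wrapper around the copy-and-register command, sketched below. The lcg-cr flags shown follow common usage but are assumptions here, not taken from the slides.

```python
# Retry lcg-cr against a list of SEs, falling back when one is unavailable.
import subprocess
import time

def copy_and_register(local_file, lfn, ses, retries=3, backoff_s=60):
    for se in ses:                       # try the default SE first, then fallbacks
        for attempt in range(retries):
            cmd = ["lcg-cr", "--vo", "alice", "-d", se, "-l", lfn,
                   f"file:{local_file}"]                 # flags are assumed usage
            res = subprocess.run(cmd, capture_output=True, text=True)
            if res.returncode == 0:
                return se                # registered successfully on this SE
            print(f"lcg-cr failed on {se} (attempt {attempt + 1}): {res.stderr.strip()}")
            time.sleep(backoff_s)        # the SE may be only temporarily unavailable
    raise RuntimeError(f"could not register {lfn} on any SE")
```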

10. Some results (19/09/04)
• Phase 2 statistics (start July 2004 – end September 2004):
  • Jet signals, unquenched and quenched, cent1: complete
  • Jet signals, unquenched, per1: complete
  • Jet signals, quenched, per1: 30% complete
  • Special TRD production at CNAF: phase 1 running
  • Number of jobs: 85K (the number of jobs done per day is accelerating)
  • Number of output files: 422K data, 390K log
  • Data volume: 3.4 TB at local SEs, 3.4 TB at CERN (backup)
  • Job duration: 2h 30min for cent1, 1h 20min for per1
  • Careful profiling of AliRoot and clean-up of the code have reduced the processing time by a factor of 2!

11. LCG contribution to Phase 2 (15/09)
• Mixing + reconstruction – the "more difficult" case: a large input must be transferred to the CE, and the output goes to an SE local to the CE that executes the job
• Jobs (last month, 15K jobs sent; see the tally sketch below):
  • DONE 5990
  • ERROR_IB 1411 (error in staging input)
  • ERROR_V 3090 (insufficient memory on the WN, or AliRoot failure)
  • ERROR_SV 2195 (Data Management or Storage Element failure)
  • ERROR_E 1277 (typically NFS failures, so the executable is not found)
  • KILLED 219 (jobs that fail to contact the AliEn server when started and stay QUEUED forever, while in LCG they are Running – also forever)
  • RESUB 851
  • FAILED 330
• Tests of:
  • Data Management services
  • Storage Element
• Remarks:
  • Up to 400 jobs running on LCG through a single interface
  • No more use of Grid.it (to avoid managing too many sites for Phase 3)
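Tallying the states listed above (the counts are from the slide; the DONE-fraction summary is my own arithmetic) gives 15363 jobs in total, of which about 39% are DONE:

```python
# Job-state counts from the slide; the DONE-fraction summary is my own.
states = {"DONE": 5990, "ERROR_IB": 1411, "ERROR_V": 3090, "ERROR_SV": 2195,
          "ERROR_E": 1277, "KILLED": 219, "RESUB": 851, "FAILED": 330}
total = sum(states.values())                                 # 15363 jobs, ~15K as quoted
print(f"{total} jobs, DONE = {states['DONE'] / total:.1%}")  # 15363 jobs, DONE = 39.0%
```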

12. Individual sites: CPU contribution
• Under direct AliEn control: 17 CEs, each with an SE; CERN-LCG encompasses the LCG resources worldwide (also with local/close SEs)
[chart shown on slide; not transcribed]

13. Individual sites: jobs successfully done
[chart shown on slide; not transcribed]

14. Toward Phase 3
• Purpose: distributed analysis of the data processed in Phase 2
• An AliEn analysis prototype already exists:
  • Designated experts are trying to work with it, but this is difficult while the production is running…
  • We want to use gLite during this phase as much as possible (and provide feedback)
• Service requirements:
  • In both Phase 1 and Phase 2, the service quality of the computing centres has been excellent, with very short response times in case of problems
  • Phase 3 will continue until the end of the year: the remote computing centres will have to keep providing a high level of service
  • Since the data are stored locally, interruptions of service will make the analysis jobs fail (or run very slowly). The backup copy at CERN is on tape only and would take a considerable amount of time to stage back if the local copy is not accessible
  • The above is valid both for the centres directly controlled through AliEn and for the LCG sites

15. Conclusions
• Phase 2 of the PDC’04 is about 50% finished and is progressing well, despite its complexity
• There is keen competition for resources at all sites (LHCb and ATLAS are also running massive DCs)
• We have not encountered any show-stoppers; all production problems that arise are fixed by the AliEn and LCG crews very quickly
• The response of the experts at the computing centres is very efficient
• We are also running a considerable number of jobs on LCG sites, and it is performing very well, with more and more resources being made available for ALICE thanks to the hard work of the LCG team
• In about 3 weeks we will seamlessly enter the last phase of the PDC’04…
• It’s not over yet, but we are getting close!

16. Acknowledgements
• Special thanks to the site experts for the computing and storage resources and for the excellent support:
  • Francesco Minafra – Bari
  • Haavard Helstrup – Bergen
  • Roberto Barbera – Catania
  • Giuseppe Lo Re – CNAF Bologna
  • Kilian Schwarz – FZK Karlsruhe
  • Jason Holland – TLC² Houston
  • Galina Shabratova – IHEP, ITEP, JINR
  • Eygene Ryabinkin – KIAE Moscow
  • Doug Olson – LBL
  • Yves Schutz – CC-IN2P3 Lyon
  • Doug Johnson – OSC Ohio
  • Jiri Chudoba – Golias Prague
  • Andrey Zarochencev – SPbSU St. Petersburg
  • Jean-Michel Barbet – SUBATECH Nantes
  • Mario Sitta – Torino
• And to Patricia Lorenzo – LCG contact person for ALICE
