ALICE Computing – an update
F. Carminati
23 October 2001
ALICE RRB-T 2001-66
ALICE Computing
• ALICE computing is of the same order of magnitude as ATLAS or CMS
• Major decisions already taken (DAQ, Off-line)
• Move to C++ completed
• TDRs all produced with the new framework
• Adoption of the ROOT framework
• Tightly knit Off-line team – single development line
• Physics performance and computing in a single team
• Aggressive Data Challenge programme on the LHC prototype
• ALICE DAQ/Computing integration realised during the Data Challenges, in collaboration with IT/CS and IT/PDP
ALICE Physics Performance Report
• Evaluation of acceptance, efficiency and signal resolution
• Step 1: simulation of ~10,000 central Pb-Pb events
• Step 2: signal superposition and reconstruction of the 10,000 events (see the sketch below)
• Step 3: event analysis
• Starting in November 2001
• Distributed production on several ALICE sites using GRID tools
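The computationally interesting step is the signal superposition: rather than regenerating the very expensive central Pb-Pb background for every signal study, simulated signal hits are merged into the stored underlying events before reconstruction. Below is a minimal ROOT sketch of that merging idea; the file names and the "hits" ntuple with (x, y, z, edep) columns are illustrative placeholders, not the actual AliRoot event model.

```cpp
// merge_signal.C -- illustrative sketch of the "signal superposition" step:
// hits from a simulated signal event are appended to the hits of a stored
// Pb-Pb background event, so reconstruction sees one combined event.
#include "TFile.h"
#include "TNtuple.h"

void merge_signal(const char* bkgName = "background.root",  // hypothetical
                  const char* sigName = "signal.root",      // hypothetical
                  const char* outName = "merged.root")
{
   TFile bkg(bkgName);
   TFile sig(sigName);
   TNtuple* tBkg = (TNtuple*)bkg.Get("hits");  // underlying Pb-Pb event
   TNtuple* tSig = (TNtuple*)sig.Get("hits");  // rare-signal event

   TFile out(outName, "RECREATE");
   TNtuple merged("hits", "signal superimposed on Pb-Pb", "x:y:z:edep");

   // Copy the background hits first, then add the signal hits on top.
   TNtuple* inputs[2] = { tBkg, tSig };
   for (TNtuple* t : inputs) {
      for (Long64_t i = 0; i < t->GetEntries(); ++i) {
         t->GetEntry(i);
         const Float_t* row = t->GetArgs();    // current row (x,y,z,edep)
         merged.Fill(row[0], row[1], row[2], row[3]);
      }
   }
   out.Write();   // the merged event feeds the reconstruction of Step 2
}
```

In the real production the same pattern is repeated over the ~10,000 stored central events, which is why the exercise is scheduled as a distributed production across the ALICE GRID sites.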
ALICE GRID resources
http://www.to.infn.it/activities/experiments/alice-grid
• 37 people, 21 institutions
• Sites: Bari, Birmingham, Bologna, Cagliari, Capetown (ZA), Catania, CERN, Dubna, GSI, IRB, Kolkata (India), LBL/NERSC, Lyon, Merida, NIKHEF, OSU/OSC, Padova, Saclay, Torino, Yerevan
ALICE Data Challenge III
[Plots: data rate in MB/s, writing to local disk and migration to tape]
• Need to run yearly Data Challenges of increasing complexity and size to reach 1.25 GB/s
• ADC III showed excellent system stability during 3 months
• DATE throughput: 550 MB/s (max), 350 MB/s (ALICE-like)
• DATE+ROOT+CASTOR throughput: 120 MB/s (max), 85 MB/s (average)
• 2200 runs, 2×10⁷ events; longest DATE run 86 hours, 54 TB
• 500 TB in DAQ, 200 TB in DAQ+ROOT I/O, 110 TB in CASTOR
• 10⁵ files > 1 GB in CASTOR and in the MetaData DB
• HP SMPs: a cost-effective alternative to inexpensive disk servers
• Online monitoring tools developed
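For scale, the 1.25 GB/s goal corresponds to roughly 100 TB per day of heavy-ion running (1.25 GB/s × 86,400 s ≈ 108 TB), so the mass-storage leg of the chain matters as much as raw DAQ throughput; the drop from 350 MB/s (DATE alone) to an average of 85 MB/s through the full chain suggests that object streaming and migration to CASTOR, not the event builder, set the pace. The sketch below shows the general shape of the DATE+ROOT+CASTOR recording step: variable-size raw event buffers are stored in a ROOT tree written to CASTOR through ROOT's RFIO interface. The CASTOR path, tree and branch names, and the synthetic payload are placeholders; the real DATE event format and bookkeeping are not reproduced here.

```cpp
// record_raw.C -- hedged sketch of the DATE+ROOT+CASTOR chain: raw event
// buffers are wrapped in a TTree whose file is streamed to mass storage.
#include "TFile.h"
#include "TTree.h"

void record_raw()
{
   // An "rfio:/castor/..." URL lets ROOT stream directly into CASTOR;
   // this particular path is a made-up example.
   TFile* f = TFile::Open("rfio:/castor/cern.ch/alice/adc3/run001.root",
                          "RECREATE");
   if (!f) return;                            // no CASTOR access available

   TTree tree("RAW", "raw events");
   const Int_t kMaxSize = 4096;
   Int_t   size = 0;
   UChar_t payload[kMaxSize];                 // stand-in for a DATE buffer
   tree.Branch("size", &size, "size/I");
   tree.Branch("payload", payload, "payload[size]/b");  // variable length

   for (Int_t ev = 0; ev < 1000; ++ev) {      // stand-in for the DAQ loop
      size = 1024 + (ev % 5) * 512;           // event sizes vary
      for (Int_t i = 0; i < size; ++i) payload[i] = UChar_t(i & 0xff);
      tree.Fill();
   }
   f->Write();
   delete f;                                  // closes the remote file
}
```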
ADC IV (2002)
[Diagram: LDCs feeding GDCs through the event-building switch at ~1000 MB/s; physics data shipped to regional Centres 1 and 2]
• Increase performance: 200 MB/s to tape, 1 GB/s through the switch
• Focus on computers and fabric architecture
• Include some L3 trigger functionality
• Involve 1 or 2 regional centres
• Use the new tape generation and 10 Gbit Ethernet
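As a back-of-the-envelope check on the stated targets (simple arithmetic, not a quoted figure): 200 MB/s sustained to tape is about 17 TB per day, and 1 GB/s through the event-building switch is about 86 TB per day, so even a week-long ADC IV run at the tape target would move of the order of 100 TB to mass storage.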
ALICE Offline Computing Structure
[Organigram, flattened] Boards: International Computing Board; Management Board; EC DataGRID WP8 Coordination; Offline Board (chair: F.Carminati). Interfaces: Regional Tiers; DAQ (P.Vande Vyvre); ROOT framework; Data Challenges; HLT algorithms; Detector Projects; Technical Support.
Software Projects:
• Production Environment & Coordination (P.Buncic): simulation production, reconstruction production, production farms, database organisation, relations with IT & other computing centres
• Framework & Infrastructure (F.Rademakers): framework development, database technology, HLT farm, distributed computing (GRID), Data Challenge, industrial joint projects, technology tracking, documentation
• Simulation (A.Morsch): detector simulation, physics simulation, physics validation, GEANT 4 integration, FLUKA integration, radiation studies, geometrical modeller
• Reconstruction & Physics (K.Safarik): tracking, detector reconstruction, HLT algorithms, global reconstruction, analysis tools, analysis algorithms, calibration & alignment algorithms
CERN Off-line effort strategy
• ALICE opted for a light core CERN offline team
• 17 FTEs are needed; for the moment 3 are missing
• Plus 10-15 people to be provided by the collaboration
• To be formalised by Software Agreements/MoU
• Good precedents: GRID coordination (Torino), ALICE World Computing Model (Nantes), Detector database (Warsaw)
• We would like to avoid a full MoU!
• Imbalance between manpower for experiments and GRID: enough people pledged for GRID, but both are needed for the success of the project
• Candidate Tier centres should provide people during phase 1
• We have to design the global model, and we need outside people to develop it with us
CERN Off-line effort strategy (continued)
• Staffing of the offline team is critical; otherwise:
• Coordination activities are jeopardised (cascade effect)
• Data Challenges & Physics Performance Report delayed
• Readiness of ALICE at stake
• Efforts are being made to alleviate the shortage:
• Technology transfer with Information Technology experts (IRST Trento, CRS4 Cagliari, HP)
• Recruit additional temporary staff (Project Associates)
• Adopt mature Open Source products (e.g. ROOT)
• Request support from IT for core software (FLUKA, ROOT), as recommended by the LHC Computing Review
Conclusion
• The development of the ALICE Offline continues successfully
• The understaffing of the core Offline team will soon become a problem as its activity expands
• Additional effort will be requested from the collaboration
• We hope to avoid a MoU here and to work with bilateral software agreements (à la ATLAS)
• While the GRID project has found good resonance, the experiment-specific part still needs to be solved