
LHCb computing in Russia




Presentation Transcript


  1. LHCb computing in Russia. Ivan Korolko (ITEP Moscow). Russia-CERN JWGC, March 2005.

  2. Russian participation in LHCb: IHEP (Protvino), INP (Novosibirsk), INR (Troitsk), ITEP (Moscow), PNPI (St.Petersburg). Detector contributions: RICH mirrors, HCAL, SPD and Preshower, ECAL, Muon System.

  3. History of LHCb DCs in Russia. 2002: 130K events, 1% contribution, only one centre (ITEP). 2003: 1.3M events, 3% contribution, all 4 of our centres (IHEP, ITEP, JINR, MSU). 2004: 9.0M events, 5% contribution, started to use LCG. 2005: … PNPI and INR are joining …

  4. 2004 DC Phase 1 Statistics

  5. What to “compute” in Russia? There are two main tasks: 1. Provide facilities for LHC data analysis in all participating Russian institutes. 2. Satisfy the collaboration's needs.

  6. LHCb computing model. HLT output: 2000 Hz. For details see LHCb 2004-119.

  7. LHCb computing model. 1. RAW data → 2x10^10 events/year, 25 kB/event, 500 TB/year; 1 copy stored at CERN and 1 copy in Tier1 centres. 2. Reconstruction → 2.4 kSI2k.s/event, 2 times per year (7 months and 2 months); CPU needs: 12 MSI2k; storage: 500 TB (rDST), 1 copy stored across CERN and Tier1 centres.
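The RAW storage figure follows directly from the event rate and event size. A minimal Python sketch of that arithmetic, assuming decimal units (1 TB = 10^9 kB); the input numbers are from the slide, everything else is illustrative:

```python
# Back-of-the-envelope check of the RAW-data volume quoted above.
# Assumption (not on the slide): decimal units, 1 TB = 1e9 kB.

events_per_year = 2e10       # RAW events/year
raw_event_size_kb = 25       # kB/event

raw_tb_per_year = events_per_year * raw_event_size_kb / 1e9
print(f"RAW storage: {raw_tb_per_year:.0f} TB/year")   # -> 500 TB/year
```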

  8. LHCb computing model. 3. Stripping → 2.1x10^9 events/year, 50-100 kB/event, 139 TB/year (DST); 4 times per year (1 month each); stored at CERN, Tier1 and some Tier2 centres. 4. Analysis → 0.3 kSI2k.s/event; 140 physicists, 2 jobs/week, 3x10^6 events per job; CPU needs: 0.8 + 0.8x(n-1) MSI2k; storage: 200 TB; run at CERN, Tier1 and some Tier2 centres.
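The stripping and analysis figures can be cross-checked in the same spirit. A rough sketch assuming decimal units, 52 weeks and ~3.15x10^7 seconds per year, and no efficiency or peak-load factor (none of these assumptions appear on the slide):

```python
# Implied DST event size from the quoted stripping volume.
stripped_events = 2.1e9                     # events/year after stripping
dst_tb_per_year = 139                       # TB/year (DST)
print(f"implied DST size: {dst_tb_per_year * 1e9 / stripped_events:.0f} kB/event")
# -> ~66 kB/event, inside the quoted 50-100 kB range

# Naive sustained analysis power (no efficiency or peak factor).
physicists, jobs_per_week, events_per_job = 140, 2, 3e6
cpu_per_event_si2k_s = 0.3e3                # 0.3 kSI2k.s/event
work = physicists * jobs_per_week * 52 * events_per_job * cpu_per_event_si2k_s
print(f"average analysis power: {work / 3.15e7 / 1e6:.2f} MSI2k")
# -> ~0.42 MSI2k sustained; the 0.8 MSI2k quoted above presumably also
#    allows for inefficiency and peak load (an assumption, not stated).
```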

  9. LHCb computing model. 5. MC production → 4x10^9 events/year; ~50 kSI2k.s/event (8 MSI2k.years); 400 kB/event; 160 TB to store only triggered events (4x10^8); produced at Tier2 centres, stored at CERN and Tier1 centres. The LHCb trigger differs significantly from ATLAS and CMS.
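The MC figures are also consistent with simple arithmetic. A sketch assuming ~3.15x10^7 seconds per year and decimal units; the efficiency factor mentioned in the comments is an assumption, not something the slide states:

```python
# MC production CPU: 4e9 events/year at ~50 kSI2k.s/event.
mc_events = 4e9
cpu_per_event_si2k_s = 50e3
cpu_msi2k_years = mc_events * cpu_per_event_si2k_s / 3.15e7 / 1e6
print(f"MC CPU: {cpu_msi2k_years:.1f} MSI2k.years")   # -> ~6.3 vs ~8 quoted
# The gap would be covered by an overall efficiency factor of ~80% (assumption).

# MC storage: only the 4e8 triggered events are kept, at 400 kB/event.
triggered_events, event_size_kb = 4e8, 400
print(f"MC storage: {triggered_events * event_size_kb / 1e9:.0f} TB")  # -> 160 TB
```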

  10. Computing in Russia. 1. Store stripped data (DST): 2 most recent copies for the current year, 1 copy for all previous years; storage: 280 + 140x(n-1) TB. 2. Run analysis jobs (~15% of LHCb): CPU → 0.1 + 0.1x(n-1) MSI2k.years; storage: 30 + 30x(n-1) TB.
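The year dependence of these formulas is easiest to see spelled out. A small sketch, assuming the DST copy size is the ~140 TB/year implied by the stripping slide above (function names are illustrative):

```python
def dst_storage_tb(n: int) -> int:
    """Two copies of the current year's DST plus one copy per previous year."""
    return 2 * 140 + 140 * (n - 1)          # -> 280 + 140*(n-1) TB

def analysis_storage_tb(n: int) -> int:
    """~15% of the LHCb analysis storage, accumulated over n years."""
    return 30 + 30 * (n - 1)                # -> 30 + 30*(n-1) TB

for n in (1, 2, 3):
    print(n, dst_storage_tb(n), analysis_storage_tb(n))
# -> 280/420/560 TB of DST and 30/60/90 TB of analysis storage
```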

  11. Computing in Russia. 3. MC production (~10% of LHCb): CPU → 0.8 MSI2k.years; storage: 3 TB. 4. Calibration of detectors and reconstruction algorithms: CPU → 0.1 MSI2k.years; storage: 30 TB (hard to estimate now; roughly doubles the analysis needs).

  12. Computing in Russia. Cluster of Tier2 centres in Russia with partial Tier1 functionality (DST, analysis). LHCb requirements: CPU → 1.0 + 0.1x(n-1) MSI2k.years; storage → 350 + 140x(n-1) TB.
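The first-year totals on this slide follow by adding up the per-task estimates from slides 10-11. A minimal sketch of that sum, assuming simple addition with no extra overhead (the slides do not state this explicitly), with small differences down to rounding:

```python
# First-year totals from the per-task estimates on slides 10-11.
cpu_msi2k_years = 0.1 + 0.8 + 0.1          # analysis + MC production + calibration
storage_tb = 280 + 30 + 3 + 30             # DST + analysis + MC buffer + calibration

print(f"CPU: {cpu_msi2k_years:.1f} MSI2k.years")   # -> 1.0, as quoted above
print(f"storage: {storage_tb} TB")                 # -> 343 TB, vs ~350 TB quoted
```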

  13. Contingency. Unlike, for example, CMS, LHCb has no contingency beyond a common experiment efficiency factor.
