
Presentation Transcript


  1. RDMS CMS computing activities to satisfy LHC data processing and analysis scenario
V.Gavrilov1, I.Golutvin2, V.Ilyin3, O.Kodolova3, V.Korenkov2, E.Tikhonenko2, S.Shmatov2, V.Zhiltsov2
1 – Institute of Theoretical and Experimental Physics, Moscow, Russia
2 – Joint Institute for Nuclear Research, Dubna, Russia
3 – Skobeltsyn Institute of Nuclear Physics, Moscow, Russia
NEC’2009, Varna, Bulgaria, September 07-14, 2009

  2. Composition of the RDMS CMS Collaboration
RDMS – Russia and Dubna Member States CMS Collaboration
Russia (Russian Federation):
• Institute for High Energy Physics, Protvino
• Institute for Theoretical and Experimental Physics, Moscow
• Institute for Nuclear Research, RAS, Moscow
• Moscow State University, Institute for Nuclear Physics, Moscow
• Petersburg Nuclear Physics Institute, RAS, St. Petersburg
• P.N. Lebedev Physical Institute, Moscow
Associated members:
• High Temperature Technology Center of Research & Development Institute of Power Engineering, Moscow
• Russian Federal Nuclear Centre – Scientific Research Institute for Technical Physics, Snezhinsk
• Myasishchev Design Bureau, Zhukovsky
• Electron, National Research Institute, St. Petersburg
Dubna Member States:
• Armenia: Yerevan Physics Institute, Yerevan
• Belarus: Byelorussian State University, Minsk; Research Institute for Nuclear Problems, Minsk; National Centre for Particle and High Energy Physics, Minsk; Research Institute for Applied Physical Problems, Minsk
• Bulgaria: Institute for Nuclear Research and Nuclear Energy, BAS, Sofia; University of Sofia, Sofia
• Georgia: High Energy Physics Institute, Tbilisi State University, Tbilisi; Institute of Physics, Academy of Science, Tbilisi
• Ukraine: Institute of Single Crystals of National Academy of Science, Kharkov; National Scientific Center, Kharkov Institute of Physics and Technology, Kharkov; Kharkov State University, Kharkov
• Uzbekistan: Institute for Nuclear Physics, UAS, Tashkent
JINR:
• Joint Institute for Nuclear Research, Dubna
The RDMS CMS Collaboration was founded in Dubna in September 1994.

  3. RDMS Participation in CMS Construction
[Diagram of the CMS detector marking the subsystems ME1/1, ME, SE, EE, HE, HF and FS, with legend: RDMS Full Responsibility / RDMS Participation]

  4. RDMS Participation in CMS Project
Full responsibility, including management, design, construction, installation, commissioning, maintenance and operation, for:
• Endcap Hadron Calorimeter, HE
• 1st Forward Muon Station, ME1/1
Participation in:
• Forward Hadron Calorimeter, HF
• Endcap ECAL, EE
• Endcap Preshower, SE
• Endcap Muon System, ME
• Forward Shielding, FS

  5. RDMS activities in CMS
• Design, production and installation
• Calibration and alignment
• Reconstruction algorithms
• Data processing and analysis
• Monte Carlo simulation
[Event display: H (150 GeV) → Z0Z0 → 4μ]

  6. LHC Computing Model
[Diagram: the tiered grid – CERN Tier-0, Tier-1 and Tier-2 centres (RAL, IN2P3, FNAL, CNAF, FZK, PIC, TRIUMF, BNL, ICEPP, MSU, JINR, ITEP, IHEP, PNPI, Kharkov, Minsk, Weizmann, IC, Cambridge, Budapest, Prague, Legnaro, CSCS, Rome, NIKHEF, Santiago), down to small centres, desktops and portables]
Tier-0 (CERN):
• Filter → raw data
• Reconstruction → summary data (ESD)
• Record raw data and ESD
• Distribute raw and ESD to Tier-1
Tier-1:
• Permanent storage and management of raw, ESD, calibration data, meta-data, analysis data and databases → grid-enabled data service
• Data-heavy analysis
• Re-processing raw → ESD
• ESD → AOD selection
• National, regional support
Tier-2:
• Simulation, digitization, calibration of simulated data
• End-user analysis
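The tier roles above can be read as a simple lookup structure. Below is a minimal, purely illustrative sketch (Python): the role lists are taken from this slide, while the `TIER_ROLES` dictionary and the `tiers_for` helper are invented names for illustration, not part of any CMS software.

```python
# Roles per tier in the LHC computing model, as listed on the slide above.
TIER_ROLES = {
    "Tier-0": ["filter raw data", "reconstruct ESD",
               "record raw and ESD", "distribute raw and ESD to Tier-1"],
    "Tier-1": ["permanent storage of raw/ESD/calibration/analysis data",
               "data-heavy analysis", "re-process raw to ESD",
               "ESD to AOD selection", "national and regional support"],
    "Tier-2": ["simulation", "digitization and calibration of simulated data",
               "end-user analysis"],
}

def tiers_for(keyword: str) -> list[str]:
    """Return the tiers whose role list mentions the given keyword."""
    return [tier for tier, roles in TIER_ROLES.items()
            if any(keyword in role for role in roles)]

print(tiers_for("analysis"))  # ['Tier-1', 'Tier-2']
```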

  7. Tier 0 – Tier 1 – Tier 2
Tier-0 (CERN):
• Data recording
• Initial data reconstruction
• Data distribution
Tier-1 (11 centres):
• Permanent storage
• Re-processing
• Analysis
Tier-2 (>200 centres):
• Simulation
• End-user analysis

  8. RDMS CMS computing structure
[Diagram: RDIG sites forming the RDMS CMS computing structure]

  9. RDMS CMS T2 association (now / future interest)
Analysis Groups:
• Exotica: T2_RU_JINR
• Exotica: T2_RU_INR
• HI: T2_RU_SINP
• QCD: T2_RU_PNPI
• Top: T2_RU_SINP
• FWD: T2_RU_IHEP
Object/Performance Groups:
• Muon: T2_RU_JINR
• e-gamma-ECAL: T2_RU_INR
• JetMET-HCAL: T2_RU_ITEP

  10. CMS T2 requirements
Basic requirements for CMS VO T2 sites hosting a physics group:
a) info on contact persons responsible for site operation
b) site visibility (BDII)
c) availability of the current CMSSW version
d) regular file transfer test “OK”
e) certified links with CMS T1 sites: 2 up and 4 down
f) CMS job robot test “OK”
g) disk space ~150-200 TB, comprising:
   - central space (~30 TB)
   - analysis space (~60-90 TB)
   - MC space (~20 TB)
   - local space (~30-60 TB)
   - local CMS users space (~1 TB per user)
h) CPU resources: ~3 kSI2K per 1 TB of disk space, 2 GB memory per job
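To make the disk and CPU arithmetic in items (g) and (h) concrete, here is a minimal budget sketch (Python). The space categories and the 3 kSI2K-per-TB rule come from the slide; the 10-user example and all function and variable names are our own illustration.

```python
# Disk budget categories from the slide (TB); tuples are (min, max) ranges.
DISK_TB = {
    "central": 30,
    "analysis": (60, 90),
    "mc": 20,
    "local": (30, 60),
}
TB_PER_USER = 1      # ~1 TB per local CMS user
KSI2K_PER_TB = 3     # CPU rule of thumb: ~3 kSI2K per TB of disk

def tier2_budget(n_local_users: int):
    """Return (disk_min, disk_max, cpu_min, cpu_max) in TB and kSI2K."""
    lo = hi = n_local_users * TB_PER_USER
    for v in DISK_TB.values():
        a, b = v if isinstance(v, tuple) else (v, v)
        lo, hi = lo + a, hi + b
    return lo, hi, lo * KSI2K_PER_TB, hi * KSI2K_PER_TB

# A site with 10 local users lands near the ~150-200 TB window quoted above.
print(tier2_budget(10))  # (150, 210, 450, 630)
```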

  11. T2 readiness requirements
• Site visibility and CMS VO support
• Availability of disk and CPU resources
• Daily SAM availability > 80%
• Daily JR-MM efficiency > 80%
• Commissioned links TO Tier-1 sites ≥ 2
• Commissioned links FROM Tier-1 sites ≥ 4
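A hedged sketch of how these readiness thresholds could be applied in code (Python): the criteria and cut-offs mirror the bullets above, but the `SiteStatus` record is a hypothetical stand-in for real monitoring data, not an actual CMS tool.

```python
from dataclasses import dataclass

@dataclass
class SiteStatus:
    visible: bool               # site visibility and CMS VO support
    has_resources: bool         # disk and CPU resources available
    sam_availability: float     # daily SAM availability, fraction 0..1
    jobrobot_efficiency: float  # daily JR-MM efficiency, fraction 0..1
    links_to_t1: int            # commissioned links TO Tier-1 sites
    links_from_t1: int          # commissioned links FROM Tier-1 sites

def is_ready(s: SiteStatus) -> bool:
    """Apply the readiness cuts: >80% quality, >=2 up / >=4 down links."""
    return (s.visible and s.has_resources
            and s.sam_availability > 0.80
            and s.jobrobot_efficiency > 0.80
            and s.links_to_t1 >= 2
            and s.links_from_t1 >= 4)

# Example with made-up monitoring numbers:
print(is_ready(SiteStatus(True, True, 0.93, 0.88, 2, 5)))  # True
```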

  12. CMS T1 – RU T2 link status

  13. Available resources

  14. RDMS CMS T2 readiness
T2_RU_ITEP: Ready
T2_RU_SINP: Ready
T2_UA_KIPT: Ready
T2_RU_JINR: Ready

  15. CMS computing in 2009
• Computing scale test (together with ATLAS): May – June 2009
• Cosmic run data processing and analysis: July – September 2009
• Big MC samples production: starting in July 2009
• LHC data processing and analysis: starting in October 2009

  16. STEP 09 results
Test of data transfer from CMS T1 sites to T2 sites. RU_SINP, RU_JINR and RU_ITEP participated; high transfer rate and quality were achieved (SINP max: 101 MB/s).
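For scale, converting the quoted SINP peak rate into a daily volume (Python): the 101 MB/s figure is from the slide; the assumption that the peak is sustained for a full day is ours, purely for illustration.

```python
# Back-of-the-envelope: daily volume at the quoted 101 MB/s peak.
PEAK_MB_PER_S = 101
SECONDS_PER_DAY = 86_400

daily_tb = PEAK_MB_PER_S * SECONDS_PER_DAY / 1_000_000  # MB -> TB (decimal)
print(f"~{daily_tb:.1f} TB/day if the peak were sustained")  # ~8.7 TB/day
```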

  17. CMS T1-CH-CERN

  18. Russia Normalized CPU time per VO (August 2009)

  19. Request for RDMS CMS T2s upgrade
CMS request to upgrade by Jan. 2010:
• Total disk space – up to 1300 TB
• Total CPU – up to 4500 kSI2K (~1800 job slots)
First-priority tasks:
• Complete T1<->T2 link certification for INR, IHEP, PNPI
• Improve stability of operation (“availability” & “readiness”)
• Full test of MC production and analysis jobs running in parallel
• Increase disk space at each T2 up to 150 TB
• Increase the number of CMS job slots at each T2 up to 200
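A quick sanity check of how these totals relate to the per-site targets (Python): all input figures are from the slide; the ratios are simple arithmetic.

```python
# Jan. 2010 upgrade targets from the slide above.
TOTAL_DISK_TB = 1300
TOTAL_CPU_KSI2K = 4500
TOTAL_JOB_SLOTS = 1800
PER_SITE_DISK_TB = 150
PER_SITE_JOB_SLOTS = 200

print(TOTAL_CPU_KSI2K / TOTAL_JOB_SLOTS)    # 2.5 kSI2K per job slot
print(TOTAL_DISK_TB / PER_SITE_DISK_TB)     # ~8.7 sites at 150 TB each
print(TOTAL_JOB_SLOTS / PER_SITE_JOB_SLOTS) # 9.0 sites at 200 slots each
```

Both ratios point to roughly eight to nine Tier-2 sites, consistent with the RDMS sites named in the summary slide.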

  20. Summary
• ITEP, JINR, SINP and UA_KIPT are in a stable state.
• RRC_KI – all required software is installed and the links are certified, but the site is not yet stable.
• INR – not all required links are certified yet; certification is to be accomplished within a month or sooner.
• PNPI – a 1 Gb/s external channel is now installed; link certification is in progress.
• IHEP – a 1 Gb/s external channel is now installed; link certification is in progress.
• ITEP, JINR and SINP host group space for the MUON, JetMET/HCAL, HI and Exotica groups, so the main certification effort focused on links to/from these institutes.
