
Russia in LHC DCs and EDG/LCG/EGEE


Presentation Transcript


  1. Russia in LHC DCs and EDG/LCG/EGEE. V.A. Ilyin, Moscow State University. LISHEP, Rio de Janeiro, 20 February 2004

  2. The opportunity of Grid technology. [Diagram: the MONARC project's "cloud" view of the LHC Computing GRID – CERN Tier1 at the centre, regional centres in the UK, France, Germany, Italy, Russia and the USA, university and lab Tier2 sites (Uni a, Uni b, Uni n, Uni x, Uni y, Lab b, Lab c, Lab m), Tier3 physics department clusters, desktops and regional groups.]

  3. Russian Tier2 Cluster: Russian Regional Center for LHC computing (RRC-LHC). A cluster of institutional computing centers with Tier2 functionality and summary resources at the 50-70% level of the canonical Tier1 center for each experiment (ALICE, ATLAS, CMS, LHCb): analysis; simulations; user data support. Participating institutes: Moscow – ITEP, RRC KI, MSU, LPI, MEPhI, …; Moscow region – JINR, IHEP, INR RAS; St. Petersburg – PNPI RAS, …; Novosibirsk – BINP SB RAS. Coherent use of distributed resources by means of LCG (EDG+VDT, …) technologies. Active participation in the LCG Phase 1 prototyping and the Data Challenges (at the 5% level).

  4. Russia country map. Three regions where the HEP centers are located are indicated on the map: Moscow, St. Petersburg and Novosibirsk.

  5. Sites: accelerator/collider HEP facilities, other experiments, and participation in major international HEP collaborations.
• BINP SB RAS (Novosibirsk), http://www.inp.nsk.su – VEPP-2M (e+e- collider at 1.4 GeV), VEPP-4 (e+e- collider up to 6 GeV); non-accelerator HEP experiments (neutrino physics, etc.), synchrotron radiation facility. Collaborations: CERN: ATLAS, LHC-acc, CLIC; FNAL: Tevatron-acc; DESY: TESLA; KEK: BELLE; SLAC: BaBar.
• IHEP (Protvino, Moscow Region), http://www.ihep.su – U-70 (fixed target, 70 GeV proton beam); medical experiments. Collaborations: BNL: PHENIX, STAR; CERN: ALICE, ATLAS, CMS, LHCb; DESY: ZEUS, HERA-B, TESLA; FNAL: D0, E-781 (Selex).
• ITEP (Moscow), http://www.itep.ru – U-10 (fixed target, 10 GeV proton beam); non-accelerator HEP experiments (neutrino physics, etc.). Collaborations: CERN: ALICE, ATLAS, CMS, LHCb, AMS; DESY: H1, HERMES, HERA-B, TESLA; FNAL: D0, CDF, E-781 (Selex); KEK: BELLE; DAFNE: KLOE.
• JINR (Dubna, Moscow Region), http://www.jinr.ru – Nuclotron (heavy-ion collisions at 6 GeV/n); low-energy accelerators, nuclear reactor, synchrotron radiation facility, non-accelerator HEP experiments (neutrino physics), medical experiments, heavy-ion physics. Collaborations: BNL: PHENIX, STAR; CERN: ALICE, ATLAS, CMS, NA48, COMPASS, CLIC, DIRAC; DESY: H1, HERA-B, HERMES, TESLA; FNAL: D0, CDF; KEK: E391a.

  6. Sites (continued): HEP facilities, other experiments, and participation in major international HEP collaborations.
• INR RAS (Troitsk, Moscow region, research centre), http://www.inr.ac.ru – low-energy accelerators, non-accelerator HEP experiments (neutrino physics). Collaborations: CERN: ALICE, CMS, LHCb; KEK: E-246; TRIUMF: E-497; FNAL: D0, E-781 (Selex).
• RRC KI (Moscow, research centre), http://www.kiae.ru – low-energy accelerators, nuclear reactors, synchrotron radiation facility. Collaborations: BNL: PHENIX; CERN: ALICE, AMS.
• MEPhI (Moscow, university), http://www.mephi.ru – low-energy accelerators, nuclear reactor. Collaborations: BNL: STAR; CERN: ATLAS; DESY: ZEUS, HERA-B, TESLA.
• PNPI RAS (Gatchina, St-Petersburg region, research centre), http://www.pnpi.spb.ru – mid/low-energy accelerators, nuclear reactor. Collaborations: BNL: PHENIX; CERN: ALICE, ATLAS, CMS, LHCb; DESY: HERMES; FNAL: D0, E-781 (Selex).
• SINP MSU (Moscow, university), http://www.sinp.msu.ru – low-energy accelerators, non-accelerator HEP experiment (EAS-1000). Collaborations: CERN: ATLAS, CMS, AMS, CLIC; DESY: ZEUS, TESLA.

  7. Goals of the Russian (distributed) Tier2
• to provide full-scale participation of Russian physicists in the analysis; only in this case will Russian investments in the LHC lead to the final goal of obtaining new fundamental knowledge about the structure of matter
• to open wide possibilities for participation of students and young scientists in research at the LHC
• to support and maintain the high level of the scientific schools in Russia
• participation in the creation of the international LHC Computing Grid will give Russia access to new advanced computing techniques

  8. Functions of the Russian (distributed) Tier2
• physics analysis of AOD (Analysis Object Data);
• access to (external) ESD/RAW and SIM databases for preparing the necessary (local) AOD sets;
• replication of AOD sets from the Tier1/Tier2 grid (cloud);
• event simulation at the level of 5-10% of the whole SIM database for each experiment;
• replication and storage of 5-10% of the ESD, required for testing the procedures of AOD creation;
• storage of data produced by users.
Participation in distributed storage of the full ESD data (a Tier1 function)…?

  9. Architecture of the Russian (distributed) Tier2
• RRC-LHC will be a cluster of institutional centers with Tier2 functionality: a distributed system, a DataGrid cloud of Tier2(/Tier3) centers;
• a coherent interaction of the computing centers of the participating institutes: each institute knows its resources but can get significantly more if the others agree;
• for each collaboration the summary resources (of about 4-5 basic institutional centers) will reach the level of 50-70% of a canonical Tier1 center: each collaboration knows its summary resources but can get significantly more if the other collaborations agree;
• RRC-LHC will be connected to the Tier1 at CERN and/or to other Tier1s in the context of a global grid for data storage and access: each institute and each collaboration can get significantly more if other regional centers agree.

  10. The opportunity of Grid technology: the Russian Regional Center as a DataGrid cloud. [Diagram: the RRC-LHC Tier2 cluster – JINR, IHEP, RRC KI, SINP MSU, PNPI, ITEP and other collaborative centers – with Grid access at Gbit/s to the LCG Tier1/Tier2 cloud (CERN, FZK). Regional connectivity: cloud backbone at Gbit/s, links to the labs at 100-1000 Mbit/s.]

  11. "Users", "tasks" and resources (analysis from 2001 – needs to be updated to the Tier2 conception). The number of active users is the main parameter for estimating the resources needed. We made some estimates, in particular based on extrapolation of Tevatron analysis tasks performed by our physicists (single top production at D0, …). Thus, in some "averaged" figures: a "user task" is the analysis of 10^7 events per day (8 hours) by one physicist. Assumed numbers of active users: ALICE 40, ATLAS 60, CMS 60, LHCb 40. There is still very poor understanding of this key (for Tier2) characteristic! In the following we estimate the RRC resources (Phase 1) under the assumption that our participation in SIM database production is at the 5% level for each experiment.
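To put these assumptions on one scale, the minimal sketch below multiplies the 10^7 events per user-day by the user counts quoted above; the per-event CPU time is an assumed, purely illustrative parameter, so the resulting CPU counts are only indicative.

```python
# Back-of-the-envelope load estimate; only the user counts and the 10^7
# events/day figure come from the slide, the rest are assumptions.
EVENTS_PER_USER_PER_DAY = 1e7                      # one "user task" per the slide
ACTIVE_USERS = {"ALICE": 40, "ATLAS": 60, "CMS": 60, "LHCb": 40}
WORKDAY_SECONDS = 8 * 3600                         # the 8-hour day quoted on the slide
CPU_SEC_PER_EVENT = 0.05                           # assumed AOD analysis time per event

for experiment, users in ACTIVE_USERS.items():
    events_per_day = users * EVENTS_PER_USER_PER_DAY
    # CPUs needed to sustain one task per user within the working day.
    cpus = events_per_day * CPU_SEC_PER_EVENT / WORKDAY_SECONDS
    print(f"{experiment}: {events_per_day:.1e} events/day, ~{cpus:.0f} CPUs sustained")
```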

  12. Resources required by 2008
• We suppose: each active user will create local AOD sets ~10 times per year and keep these sets on disk during the year; the general AOD sets will be replicated from the Tier1 cloud ~10 times per year, with previous sets archived to tape.
• The disk space usage will be partitioned as: 15% to store the general AOD+TAG sets; 15% to store local AOD+TAG sets; 15% to store user data; 15% to store current sets of simulated data (SIM-AOD, partially SIM-ESD); 30-35% to store the 10% portion of the ESD; 5-10% cache.
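A small sketch of how this partitioning translates into capacity per category; the 100 TB total and the use of range mid-points are assumptions made only to show the arithmetic.

```python
# Disk budget breakdown from the slide's percentages (mid-points of the ranges).
TOTAL_DISK_TB = 100.0   # hypothetical total; the real figure depends on the experiment
shares = {
    "general AOD+TAG sets": 0.15,
    "local AOD+TAG sets": 0.15,
    "user data": 0.15,
    "current SIM data (SIM-AOD, partly SIM-ESD)": 0.15,
    "10% portion of ESD": 0.325,   # mid-point of 30-35%
    "cache": 0.075,                # mid-point of 5-10%
}
for purpose, share in shares.items():
    print(f"{purpose:45s} {share * TOTAL_DISK_TB:6.1f} TB")
print(f"{'total':45s} {sum(shares.values()) * TOTAL_DISK_TB:6.1f} TB")
```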

  13. Construction timeline. [Table: timeline of the RRC-LHC resources during the construction phase.] After 2008, investments will be necessary to support the computing and storage facilities and to increase the CPU power and storage space, each year at about 30% of the 2008 expenses. Every subsequent year: renewal of 1/3 of the CPUs, a 50% increase in disk space, and a 100% increase in tape storage space.
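Read as a recurrence, the renewal policy looks like the sketch below; the 2008 starting capacities are arbitrary placeholders, not figures from the slides.

```python
# Year-by-year evolution after 2008 under the stated policy: renew 1/3 of the
# CPU capacity, grow disk by 50% and tape by 100% each year.
cpu_capacity, disk_tb, tape_tb = 1000.0, 500.0, 1000.0   # hypothetical 2008 values

for year in range(2009, 2013):
    cpu_renewed = cpu_capacity / 3   # share replaced (newer CPUs also add power)
    disk_tb *= 1.5                   # +50% disk per year
    tape_tb *= 2.0                   # +100% tape per year
    print(f"{year}: renew ~{cpu_renewed:.0f} units of CPU capacity, "
          f"disk {disk_tb:.0f} TB, tape {tape_tb:.0f} TB")
```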

  14. Financial aspects
• Phase 1 (2001-2005): 2.5 MCHF equipment, 3.5 MCHF network, plus initial investments in some regional networks.
• Construction phase (2006-2008): 10 MCHF equipment, 3 MCHF network.
• In total (2001-2008): 19 MCHF; 2009 – 200x: 2 MCHF/year.
December 2003: a new Protocol was signed by Russia and CERN on the framework for Russian participation in the LHC project for the period from 2007, including: 1) M&O, 2) computing in the experiments, 3) RRC-LHC and LCG.

  15. LHCb DC03 resource usage. [Chart: contributions of the Russian sites JINR Dubna, SINP MSU, ITEP Moscow and IHEP Protvino.] For comparison, DC02: 3.3M events in 49 days; CERN 44%, Bologna 30%, Lyon 18%, RAL 3.9%, Cambridge 1.1%, Moscow 0.8%, Amsterdam 0.7%, Rio 0.7%, Oxford 0.7%.

  16. CMS Productions (2001)

  17. September 2003: manpower for CMS computing in Russia – 25.3 FTE in total.

  18. 2002: IMPALA/BOSS integration with the Grid – SINP MSU (Moscow), JINR (Dubna), INFN (Padova). [Diagram: the CERN RefDB environment feeding IMPALA/CMKIN and Dolly job creation on the UI, with BOSS bookkeeping in a MySQL DB; jobs pass through the Resource Broker and the Grid gatekeeper to the CE batch manager, and the job executer runs them on worker nodes WN1, WN2, …, WNn.]
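The chain named on this slide can be read as a linear pipeline: RefDB assignment, IMPALA job splitting, BOSS bookkeeping, then Grid submission through the RB to a CE and its worker nodes. The toy Python sketch below mirrors that flow; every function, class name and value in it is hypothetical and only stands in for the corresponding real component.

```python
# Toy model of the production flow shown on the slide; the real tools are
# RefDB, IMPALA/CMKIN, BOSS (MySQL) and the EDG Resource Broker / gatekeeper.

def fetch_assignment(request_id: int) -> dict:
    """RefDB at CERN: look up what to produce (dataset, number of events)."""
    return {"request": request_id, "dataset": "example_dataset", "events": 100_000}

def split_into_jobs(assignment: dict, events_per_job: int = 10_000) -> list:
    """IMPALA: turn one assignment into many individual production jobs."""
    n = assignment["events"] // events_per_job
    return [{"job_id": i, "events": events_per_job} for i in range(n)]

def submit(jobs: list) -> None:
    """BOSS: register each job in its bookkeeping DB, then hand it to the RB,
    which forwards it via the gatekeeper to a CE batch manager and a WN."""
    for job in jobs:
        print(f"BOSS: registered job {job['job_id']}, "
              f"{job['events']} events sent down the RB -> CE -> WN chain")

submit(split_into_jobs(fetch_assignment(request_id=1)))
```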

  19. Russia in LCG
• We started activity in LCG in autumn 2002. Russia joined the LCG-1 infrastructure (CERN press release, 29.09.2003): first SINP MSU, soon RRC KI, JINR, ITEP and IHEP (already in LCG-2). http://goc.grid-support.ac.uk/gridsite/gocmain/monitoring/
• Manpower contribution to LCG (started in May 2003): the Agreement is being signed by CERN, Russia and JINR officials; 3 tasks under our responsibility: 1) testing new Grid middleware to be used in LCG; 2) evaluation of new-on-the-market Grid middleware (first task: evaluation of OGSA/GT3); 3) common solutions for event generators (event databases).
• Twice per year (spring and autumn): meetings of the Russia-CERN Joint Working Group on Computing. Next meeting on 19 March at CERN.

  20. Information System testing for LCG-1 Elena Slabospitskaya Institute for High Energy Physics, Protvino, Russia 18.07.2003

  21. Information System testing for LCG-1. [Diagram: the schema of job submission via the RB and directly to the CE via Globus GRAM. From the UI, edg-job-submit sends the job to the RB (network server and Workload Manager), which uses Condor-G/globusrun to pass it through the Globus/EDG gatekeeper to the CE batch system (PBS, LSF, …) and its worker nodes; direct submission to the CE goes through Globus GRAM.]
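A hedged sketch of the two submission paths in the diagram: via the Resource Broker with a JDL file and edg-job-submit, and directly to a CE through Globus GRAM with globusrun. The VO name, hostnames and exact option syntax are assumptions here; they varied between EDG/LCG releases, so check the man pages of the installed UI.

```python
# Sketch of the two submission paths (command syntax approximate, hosts invented).
import subprocess, textwrap

# Path 1: via the Resource Broker -- describe the job in JDL, then edg-job-submit.
jdl = textwrap.dedent("""\
    Executable    = "/bin/hostname";
    StdOutput     = "hello.out";
    StdError      = "hello.err";
    OutputSandbox = {"hello.out", "hello.err"};
""")
with open("hello.jdl", "w") as f:
    f.write(jdl)
subprocess.run(["edg-job-submit", "--vo", "cms", "hello.jdl"], check=False)

# Path 2: directly to a CE via Globus GRAM, bypassing the RB (the gatekeeper
# contact string below is a placeholder for a real site's CE).
rsl = "&(executable=/bin/hostname)"
subprocess.run(["globusrun", "-o", "-r", "lcgce.example.org/jobmanager-pbs", rsl],
               check=False)
```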

  22. An OGSA/GT3 testbed (named 'Beryllium') was designed and realised on the basis of PCs located at CERN and SINP MSU, modelling a GT3-based Grid system. http://lcg.web.cern.ch/LCG/PEB/GTA/LCG_GTA_OGSA.htm Software was created for the common library of MC generators, GENSER, http://lcgapp.cern.ch/project/simu/generator/ A new project, MCDB (Monte Carlo Data Base), is proposed for LCG AA under Russian responsibility, as a common solution for storing samples of events at the partonic level and providing access to them across the LCG sites.

  23. The simplified schema of the Beryllium testbed (CERN-SINP). The resource broker plays a central role:
• accepts requests from the user;
• using the Information Service data, selects a suitable computing element;
• reserves the selected computing element;
• communicates a "ticket" to the user to allow job submission;
• maintains a list of all running jobs and receives confirmation messages about the ongoing processing from the CEs;
• at job end, it updates the table of running jobs / CE status.
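The broker steps above map onto a short matchmaking loop. The sketch below is a toy illustration of that loop, not Beryllium code; the CE names, slot counts and function names are invented.

```python
# Toy matchmaking loop mirroring the broker steps listed on the slide.
import uuid

# "Information Service": free slots currently advertised by each computing element.
INFO_SERVICE = {"ce01.example.org": 12, "ce02.example.org": 0, "ce03.example.org": 4}
jobs = {}   # ticket -> (CE, status): the broker's table of running jobs

def submit_request(min_free_slots=1):
    """Accept a user request, select and reserve a suitable CE, return a ticket."""
    for ce, free in sorted(INFO_SERVICE.items(), key=lambda kv: -kv[1]):
        if free >= min_free_slots:
            INFO_SERVICE[ce] -= 1                 # reserve one slot on the chosen CE
            ticket = str(uuid.uuid4())            # "ticket" allowing job submission
            jobs[ticket] = (ce, "RUNNING")
            return ticket
    return None                                   # no suitable computing element

def confirm_done(ticket):
    """CE confirms completion; the broker updates its job/CE status table."""
    ce, _ = jobs[ticket]
    jobs[ticket] = (ce, "DONE")
    INFO_SERVICE[ce] += 1

t = submit_request()
print(t, jobs[t])
confirm_done(t)
print(t, jobs[t])
```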

  24. Externally Funded LCG Personnel at CERN

  25. EU-DataGrid
• Russian institutes participated in the EU-DataGrid project: WP6 (Testbed and Demonstration) and WP8 (HEP Applications).
• 2001: Grid information service (GRIS-GIIS); DataGrid Certificate Authority (CA) and Registration Authority (RA); WP6 Testbed0 (spring-summer 2001) – 2 sites; WP6 Testbed1 (autumn 2001) – 4 active sites (SINP MSU, ITEP, JINR, IHEP) with significant resources (160 CPUs, 7.5 TB disk).
• 2002: new active Testbed1 site – PNPI; Testbed1 Virtual Organizations (VOs) – WP6, ITeam; WP8 – CMS VO, ATLAS and ALICE VOs; WP8 CMS MC run (spring) – ~1 TB of data transferred to CERN and FNAL; Resource Broker (RB) – SINP MSU + CERN + INFN experiment; Metadispatcher (MD) – collaboration with the Keldysh Institute of Applied Mathematics (Moscow) on algorithms for dispatching (scheduling) jobs in a DataGrid environment.

  26. EDG software deployment at SINP MSU (example: CMS VO, 7 June 2002). [Diagram: the SINP MSU site (hosts lhc01–lhc04.sinp.msu.ru and lhc20.sinp.msu.ru) providing the RB + Information Index, CE, WN, SE and User Interface nodes, connected to grid011.pd.infn.it at Padova and lxshare0220.cern.ch at CERN.]

  27. EGEE: Enabling Grids for e-Science in Europe
• An EU project approved to provide partial funding for the operation of a general e-Science grid in Europe, including the supply of suitable middleware. EGEE is funded by the European Union under contract IST-2003-508833. Budget: about 32 Meuro for 2004-2005.
• EGEE provides funding for 70 partners, the large majority of which have strong HEP ties.
• Russia: 8 institutes (SINP MSU, JINR, ITEP, IHEP, RRC KI, PNPI, KIAM RAS, IMPB RAS), with a budget of 1 Meuro for 2004-2005.
• The Russian matching of the EC budget is in good shape (!)

  28. EGEE Partner Federations • Integrate regional Grid efforts

  29. EGEE Timeline

  30. Distribution of Service Activities over Europe:
• Operations Management at CERN;
• Core Infrastructure Centres in the UK, France, Italy, Russia (PM12) and at CERN, responsible for managing the overall Grid infrastructure;
• Regional Operations Centres, responsible for coordinating regional resources, regional deployment and support of services.
Russia: CIC – SINP MSU, RRC KI; ROC – IHEP, PNPI, IMPB RAS; Dissemination & Outreach – JINR, …

  31. ICFA SCIC Feb 2004 • S.E. Europe, Russia: Catching Up • Latin Am., Mid East, China: Keeping Up • India, Africa: Falling Behind

  32. LHC Data Challenges. A typical example: transferring 100 GB of data from Moscow to CERN within one working day requires 50 Mbps of bandwidth!
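The 50 Mbps figure follows from simple arithmetic once an effective link utilisation is assumed; the roughly 55% efficiency used below is an assumption on top of the slide, since sustained wide-area transfers rarely reach the nominal link speed.

```python
# Bandwidth needed to move 100 GB from Moscow to CERN within one 8-hour working day.
DATA_GBYTE = 100
WORKDAY_S = 8 * 3600
LINK_EFFICIENCY = 0.55   # assumed fraction of nominal bandwidth actually achieved

raw_mbps = DATA_GBYTE * 8 * 1000 / WORKDAY_S      # ~28 Mbps of pure payload
needed_mbps = raw_mbps / LINK_EFFICIENCY          # ~50 Mbps of provisioned bandwidth
print(f"payload rate ~{raw_mbps:.0f} Mbps, provisioned link ~{needed_mbps:.0f} Mbps")
```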

  33. REGIONAL CONNECTIVITY for RUSSIA HEP:
• Moscow: 1 Gbps
• IHEP: 8 Mbps (m/w); 100 Mbps fiber-optic under construction (Q1-Q2 2004?)
• JINR: 45 Mbps; 100-155 Mbps (Q1-Q2 2004); Gbps (2004-2005)
• INR RAS: 2 Mbps + 2x4 Mbps (m/w)
• BINP: 1 Mbps; 45 Mbps (2004?); … GLORIAD
• PNPI: 512 Kbps (commodity Internet) and 34 Mbps fiber-optic, but (!) the budget covers only 2 Mbps
INTERNATIONAL CONNECTIVITY for RUSSIA HEP:
• USA: NaukaNET, 155 Mbps
• GEANT: 155 Mbps basic link, plus a 155 Mbps additional link for GRID projects
• Japan: through the USA via FastNET, 512 Kbps, Novosibirsk (BINP) – KEK (Belle)
• GLORIAD: 10 Gbps
