
Russia-CERN Joint Working Group on LHC Computing


Presentation Transcript


  1. Russia-CERN Joint Working Group on LHC Computing V.A. Ilyin • About the JWGC • Russia in LCG • Russia in EGEE • Data Challenges in the experiments (presentations from the experiments) • Financial aspects • Networking Russia-CERN Joint Working Group on LHC Computing, 19 March 2004, CERN

  2. Russia-CERN JWGC: • to identify problems (technical/organizational/financial) and report them to the Russia-CERN JWG • to discuss tasks/milestones/results of Russian participation in LHC computing • to discuss plans • Meetings twice per year (attached to the JWG meetings in April and October). • The venue alternates between CERN and Moscow. • Membership: • chairs S. Belyaev (deputy V. Ilyin) and L. Robertson • each of the four LHC experiments nominates 2 members, from CERN and Russia • specialists from CERN and Russia on fabric, data management and connectivity

  3. Russian Tier2 Cluster: a cluster of institutional computing centers with Tier2 functionality and combined resources at the 50-70% level of a canonical Tier1 center for each experiment (ALICE, ATLAS, CMS, LHCb): analysis; simulations; user data support. Participating institutes: Moscow – ITEP, SINP MSU, RRC KI, LPI, MEPhI…; Moscow region – JINR, IHEP, INR RAS; St. Petersburg – PNPI RAS, …; Novosibirsk – BINP SB RAS. Coherent use of distributed resources by means of DataGrid technologies. Active participation in LCG Phase 1 prototyping and the Data Challenges (at the 5% level). FTE: 12 (2002), 20 (2004 Q1), 25-30 (2004 Q4)

  4. Russian HEP Institutions: • Russian Research Center "Kurchatov Institute" – Moscow • Petersburg Nuclear Physics Institute of the Russian Academy of Sciences – Gatchina • Skobeltsyn Institute of Nuclear Physics, Moscow State University – Moscow • Institute for High Energy Physics – Protvino • Moscow Engineering Physics Institute (State University) – Moscow • Institute for Nuclear Research of the Russian Academy of Sciences – Troitsk • Institute of Theoretical and Experimental Physics – Moscow • Joint Institute for Nuclear Research – Dubna • Budker Institute of Nuclear Physics of the Siberian Branch of the Russian Academy of Sciences – Akademgorodok

  5. Moscow city map. Locations of HEP centers are indicated, as well as the location of the M9 Internet Exchange Point (M9-IX).

  6. Moscow region map. Locations of HEP centers: JINR (Dubna), INR RAS (Troitsk) and IHEP (Protvino).

  7. St. Petersburg region map. The location of PNPI (Gatchina) is indicated.

  8. Novosibirsk region map. The location of BINP (Akademgorodok) is indicated.

  9. Russia in LCG • We started our LCG activity in autumn 2002. • Russia joined the LCG-1 infrastructure (CERN press release, 29.09.2003). The goal now is to join LCG-2, to become an operational segment of the world-wide LCG infrastructure, and to participate in the DC04 Data Challenges. • Manpower contribution to LCG (started in May 2003): • the Protocol was signed by CERN, Russia and JINR, in total 3 FTEs per year: • 3-month visits to IT to work on the agreed tasks; the budget for 2003 is approved, • 3 tasks under our responsibility, work started in April 2003: • 1) testing Grid middleware to be used in LCG (3x3 months – IHEP, JINR, PNPI) • 2) evaluation of new Grid middleware (OGSA/GT3, 9 months – SINP, JINR) • 3) common solutions for event generators and event databases (9 months – SINP, ITEP) • The Protocol on Russian participation in the experiments at the LHC was signed in November 2003; it sets the framework for the period from 2007: • M&O • a regional center in Russia and LCG • computing in the experiments (on-line, off-line)

  10. LHC Computing GRID LCG-1 (Autumn 2003)

  11. Externally Funded LCG Personnel at CERN

  12. Information System testing for LCG-1 Elena Slabospitskaya Institute for High Energy Physics, Protvino, Russia 18.07.2003

  13. An OGSA/GT3 testbed (named 'Beryllium') was designed and realized on the basis of PCs located at CERN and SINP MSU, modelling a GT3-based Grid system: http://lcg.web.cern.ch/LCG/PEB/GTA/LCG_GTA_OGSA.htm Software was created for the common library of MC generators, GENSER: http://lcgapp.cern.ch/project/simu/generator/ A new project, MCDB (Monte Carlo Data Base), is proposed for LCG AA under Russian responsibility, as a common solution for storing and providing access across the LCG sites to samples of events at the partonic level.
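
To illustrate the MCDB idea, here is a minimal sketch of a catalogue of parton-level event samples with the kind of lookup such a database would provide. The record fields, the sample entries and the query interface are illustrative assumptions, not the actual MCDB schema.

```python
# Illustrative sketch only: field names, entries and the query function
# are assumptions, not the actual MCDB schema or interface.
from dataclasses import dataclass

@dataclass
class PartonLevelSample:
    process: str     # hard process, e.g. "pp -> ttbar"
    generator: str   # generator used to produce the sample
    n_events: int    # number of parton-level events in the sample
    location: str    # where the event file is stored on an LCG site

# A toy catalogue; a real MCDB would be a database shared across LCG sites.
catalogue = [
    PartonLevelSample("pp -> ttbar", "PYTHIA", 100000,
                      "http://lcg-site.example/mcdb/ttbar_001.evt"),
    PartonLevelSample("pp -> W + jets", "CompHEP", 50000,
                      "http://lcg-site.example/mcdb/wjets_001.evt"),
]

def find_samples(process):
    """Return all catalogued samples for a given hard process."""
    return [s for s in catalogue if s.process == process]

for sample in find_samples("pp -> ttbar"):
    print(sample.generator, sample.n_events, sample.location)
```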

  14. A simplified schema of the Beryllium testbed (CERN-SINP). The resource broker plays the central role: • accepts requests from the user • using the Information Service data, selects suitable Computing Elements • reserves the selected Computing Element • communicates to the user a "ticket" that allows job submission • maintains a list of all running jobs and receives confirmation messages about the ongoing processing from the CEs • at job end, it updates the table of running job/CE status
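
To make that workflow concrete, the following is a minimal sketch of the broker logic described above: accept a request, select and reserve a suitable Computing Element using Information Service data, hand out a ticket, and update the job table on confirmation. All class and method names are invented for illustration; the actual testbed implemented this with OGSA/GT3 services, not in-process Python objects.

```python
# Minimal sketch of the resource-broker workflow described above.
import uuid
from dataclasses import dataclass, field

@dataclass
class ComputingElement:
    name: str
    free_slots: int   # capacity as published via the Information Service

@dataclass
class ResourceBroker:
    ces: list                                     # known Computing Elements
    running: dict = field(default_factory=dict)   # ticket -> CE name

    def submit_request(self, slots_needed):
        """Accept a user request, select and reserve a suitable CE,
        and return a 'ticket' that authorizes job submission."""
        for ce in self.ces:
            if ce.free_slots >= slots_needed:
                ce.free_slots -= slots_needed     # reserve the CE
                ticket = str(uuid.uuid4())
                self.running[ticket] = ce.name    # track the running job
                return ticket
        raise RuntimeError("no suitable Computing Element available")

    def confirm_done(self, ticket, slots_released):
        """Handle the CE's end-of-job confirmation: release the
        reservation and update the running-job table."""
        ce_name = self.running.pop(ticket)
        ce = next(c for c in self.ces if c.name == ce_name)
        ce.free_slots += slots_released

broker = ResourceBroker([ComputingElement("cern-ce", 4),
                         ComputingElement("sinp-ce", 2)])
ticket = broker.submit_request(slots_needed=2)
broker.confirm_done(ticket, slots_released=2)
```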

  15. CERN-INTAS In 2001-2003 we had the CERN-INTAS grant 00-0440 (60 KEuro per year). Russian teams: SINP MSU, ITEP, IHEP and JINR. INTAS teams: CERN IT, IN2P3. The grant finished in July 2003; the final meeting was held in May at CERN. The new CERN-INTAS grant 03-52-4297 will start in April 2004. Main goal: on the basis of the LCG infrastructure, to study some key R&D problems (advanced algorithms for task dispatching, managing chaotic sets of analysis tasks, etc.). Russian teams: SINP MSU, JINR, ITEP, IHEP, BINP and PNPI. INTAS teams: CERN IT, INFN-Padua, FZK. The budget is again 60 KEuro per year. First working meeting – today (19.03.04) at CERN.

  16. EGEE Six Russian HEP institutes participate in the EGEE project (Enabling Grids for E-science in Europe, EU FP6 Contract 508833). EGEE will start in April 2004 and run for two years. This is an infrastructure project: a distributed ROC (24x7 service) – IHEP, JINR, ITEP, PNPI; a CIC (from the end of 2004) – SINP, plus some functions by JINR and RRC KI. The budget of the Russian institutes is 1 MEuro for two years. The major application is LHC computing (essentially 100%, at least in 2004). Thus, the CIC-ROC infrastructure will serve both EGEE and LCG.

  17. Distribution of Service Activities over Europe: • Operations Management at CERN; • Core Infrastructure Centres in the UK, France, Italy, Russia (PM12) and at CERN, responsible for managing the overall Grid infrastructure; • Regional Operations Centres, responsible for coordinating regional resources, regional deployment and support of services. Russia: CIC – SINP MSU, RRC KI (security and CA), JINR (monitoring); ROC – IHEP, PNPI, IMPB RAS; Dissemination & Outreach – JINR

  18. Financial situation. Main points: • 2003: Ministry of Atomic Energy (IHEP, ITEP and RRC KI) – 250 KEuro; local sources of the institutes – ~150 KEuro; CERN-INTAS – 60 KEuro; MoIST & JINR manpower contribution to LCG – 63 (42+21) KEuro. • 2004: EGEE – 500 KEuro (EU budget); MoAE – 200 KEuro (to match the EGEE EU budget); MoIST – 400 KEuro (to match the EGEE EU budget); CERN-INTAS – 60 KEuro (plus 35 KEuro on networking, the INTAS infrastructure project RuGNet); local sources of the institutes – ~150 KEuro; MoIST & JINR manpower contribution to LCG – 98 (65+33) KEuro. This level of financial support will allow us to continue participation in the DC04 Data Challenges and in LCG-2.

  19. Connectivity with CERN • International links for Russian science (RBNet): 4 STM1 (4x155 Mbps) links Moscow-Stockholm. • Today three of the 155 Mbps links operate: one for NaukaNet, 155 Mbps connectivity with StarLight in Chicago; a second 155 Mbps link for commodity Internet; 155 Mbps to GEANT. • The 4th 155 Mbps link is planned as a backup link to GEANT and to pan-European GRID projects. • Actual problem: getting few-to-few connectivity with GEANT (GRID motivation) – MPLS (the INTAS infrastructure project RuGNet). • RunNET (Moscow – St. Petersburg – Helsinki (NorduNET) – GEANT), 622 Mbps (soon 2.4 Gbps). Some bandwidth can be used for HEP (LCG/DC04) applications. • Some prospects: • Project GLORIAD (a global fiber-optic ring Chicago-Amsterdam-Moscow-Novosibirsk-Khabarovsk-Beijing-Japan-Chicago), initiated by the USA (NSF+DoE). The protocol was signed at official level by the USA, Russia and China. • In 2005: 10 Gbps – LHC needs are recognized as a major application! • Little GLORIAD started in January 2004 – a ring of 155 Mbps.

  20. Pan-European Multi-Gigabit Backbone (33 Countries), January 2004. Note the 10 Gbps connections to Poland, the Czech Republic and Hungary. Planning is underway for the "GEANT2" (GN2) multi-lambda backbone, to start in 2005.

  21. GLOBAL RING NETWORK FOR ADVANCED APPLICATIONS DEVELOPMENT: Russia-China-USA Science & Education Network

  22. ICFA SCIC Feb 2004, http://icfa-scic.web.cern.ch/ICFA-SCIC/ • S.E. Europe, Russia: Catching Up • Latin Am., Mid East, China: Keeping Up • India, Africa: Falling Behind

  23. REGIONAL CONNECTIVITY for RUSSIAN HEP • Moscow – 1 Gbps • IHEP – 8 Mbps (microwave); a 100 Mbps fiber-optic link under construction (Q2-Q3 2004?) • JINR – 45 Mbps; 100-155 Mbps (Q2 2004?); Gbps (2005?) • INR RAS – 2 Mbps + 2x4 Mbps (microwave) • BINP – 1 Mbps; 45 Mbps (2004?); … GLORIAD • PNPI – 512 Kbps (commodity Internet), plus a 34 Mbps fiber-optic link, but (!) the budget covers only 2 Mbps

  24. LHC Data Challenges. A typical example: transferring 100 GB of data from Moscow to CERN within one working day requires about 50 Mbps of bandwidth!
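
A back-of-envelope check of that figure, as a minimal sketch: the 8-hour working day and the ~55% effective link utilization below are illustrative assumptions, not numbers from the slide.

```python
# Back-of-envelope check of the bandwidth needed to move 100 GB in a day.
data_gbytes = 100
seconds = 8 * 3600                            # one 8-hour working day (assumed)
raw_mbps = data_gbytes * 8 * 1000 / seconds   # GB -> Gbit -> Mbit, per second
print(f"raw throughput needed: {raw_mbps:.1f} Mbps")   # ~27.8 Mbps

utilization = 0.55   # assumed realistic TCP/link efficiency
print(f"bandwidth to provision: {raw_mbps / utilization:.0f} Mbps")  # about 50 Mbps
```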
