
HEP GRID computing in Poland



Presentation Transcript


  1. HEP GRID computing in Poland. Henryk Palka, Institute of Nuclear Physics, PAN, Krakow, Poland. NEC’07 Varna, Sept 2007

  2. Topics:
  ● LHC data rates and computing model
  ● LCG: the LHC Computing GRID project
  ● Polish Grid infrastructure
  ● Sharing of Central Europe Grid resources
  ● ATLAS MC production and Data Challenges at ACC Cyfronet
  ● BalticGrid project
  NEC’07 Varna, Sept 2007

  3. LHC experiments data rates
  For LHC computing: 100M SpecInt2000, i.e. about 100K of ~3 GHz cores, is needed!
  For data storage: 20 Petabytes, i.e. about 100K disks/tapes per year, is needed!
  10^7 seconds/year of pp running from 2008 on (?) → ~10^9 events per experiment; 10^6 seconds/year of heavy-ion running.
  NEC’07 Varna, Sept 2007
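As a rough cross-check of these figures, the sketch below combines the running time quoted here with the trigger output rate from slide 27. The ~1 MB raw event size and the count of four experiments are assumptions, not numbers from the talk.

```python
# Back-of-the-envelope check of the LHC data-volume figures quoted above.
# Assumptions (not from the slide): ~1 MB raw event size, 4 experiments.
SECONDS_PER_YEAR_PP = 1e7      # pp running time per year (slide 3)
TRIGGER_OUTPUT_HZ = 100        # events recorded per second (slide 27)
EVENT_SIZE_MB = 1.0            # assumed: ~100 MB/s output / 100 Hz
N_EXPERIMENTS = 4              # assumed: ALICE, ATLAS, CMS, LHCb

events_per_experiment = TRIGGER_OUTPUT_HZ * SECONDS_PER_YEAR_PP      # ~1e9
raw_pb_per_experiment = events_per_experiment * EVENT_SIZE_MB / 1e9  # MB -> PB
raw_pb_total = raw_pb_per_experiment * N_EXPERIMENTS

print(f"events per experiment per year: {events_per_experiment:.1e}")
print(f"raw data per experiment:        {raw_pb_per_experiment:.1f} PB/year")
print(f"raw data, all experiments:      {raw_pb_total:.1f} PB/year")
# Derived, calibration and simulated data multiply this several times,
# which is how the ~20 PB/year quoted above is reached.
```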

  4. LHC Computing Model: organisation of WLCG NEC’07 Varna, Sept 2007

  5. LHC Computing Grid project - LCG
  Objectives: design, prototyping and implementation of the computing environment for the LHC experiments (Monte Carlo simulation, reconstruction and data analysis):
  - infrastructure (PC farms, networking)
  - middleware (based on EDG, VDT, gLite, ...)
  - operations (experiment VOs, operation and support centres)
  Schedule:
  - phase 1 (2002–2005; ~50 MCHF): R&D and prototyping (up to 30% of the final size)
  - phase 2 (2006–2008): preparation of a Technical Design Report, Memoranda of Understanding, deployment (2007)
  [Timeline figure, 2005–2008: cosmics, first beams, first physics, full physics run.]
  NEC’07 Varna, Sept 2007

  6. Planned sharing of capacity between CERN and Regional Centres in 2008. [Chart shown for tape capacity.] Preliminary planning data; requirements from the December 2004 computing model papers, reviewed by LHCC in Jan 05. NEC’07 Varna, Sept 2007

  7. LHC Computing: WLCG is based on a few scientific computing grids

  8. Polish Grid infrastructure - Networking: the PIONIER project
  [Map: PIONIER fibre network (2 x 10 Gb/s, 10 Gb/s per lambda) connecting the Polish MANs: Gdańsk, Koszalin, Olsztyn, Szczecin, Toruń, Bydgoszcz, Białystok, Gorzów, Poznań, Warszawa, Zielona Góra, Łódź, Radom, Wrocław, Częstochowa, Kielce, Puławy, Opole, Lublin, Katowice, Rzeszów, Kraków, Bielsko-Biała (1 Gb/s).]
  External links: GÉANT 10+10 Gb/s, GÉANT/TELIA 2 x 2.5 Gb/s, DFN 10 Gb/s, CBDF 10 Gb/s to CESNET and SANET, GTS 1.6 Gb/s, BASNET 34 Mb/s.
  Grid sites: Tier1 FZK Karlsruhe; Tier2 PCSS Poznań (HEP VLAN 1 Gb/s), Tier2 ICM Warszawa (HEP VLAN 1 Gb/s), Tier2 ACK Cyfronet Kraków.

  9. Polish Grid infrastructure - Tier2: ACC Cyfronet, ICM and PSNC
  Three computing centres contribute to the Polish Tier2 (as part of the EGEE/LCG ROC):
  - ACC Cyfronet Krakow: ~300 (450) Pentium 32-bit processors, connected to PSNC via a 1 Gb/s HEP VLAN
  - ICM Warsaw: ~180 (340) AMD-64 Opteron processors, connected to PSNC via a 1 Gb/s HEP VLAN
  - PSNC Poznan: ~240 Itanium IA-64 processors, connected to GÉANT and DFN at 10 Gb/s
  In the WLCG hierarchy the Polish Tier2 is connected to the Tier1 at FZK Karlsruhe.
  Tier3 centres are being built at IFJ PAN Krakow and IPJ/FP TU Warsaw.
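A quick tally of the processor counts listed above, assuming the parenthesised figures are expanded/target counts (my reading of the slide) and that the PSNC count stays unchanged:

```python
# Rough tally of the Polish Tier2 processors listed above.
# Assumption: parenthesised numbers are expanded/target counts;
# PSNC (no parenthesised figure) is kept at 240.
sites = {
    "ACC Cyfronet Krakow": (300, 450),  # Pentium, 32-bit
    "ICM Warsaw":          (180, 340),  # AMD-64 Opteron
    "PSNC Poznan":         (240, 240),  # Itanium IA-64
}

current = sum(now for now, _ in sites.values())
target = sum(later for _, later in sites.values())
print(f"Polish Tier2: ~{current} processors now, ~{target} after expansion")
```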

  10. Polish Grid infrastructure - Disk storage at ACC Cyfronet
  HP EVA 8000:
  - 8 GB cache
  - 8 FC shortwave ports
  - 240 FATA 500 GB 7200 rpm HDDs (120 TB)
  A second HP EVA with 90 TB is underway.

  11. Polish Grid infrastructure - Towards a Polish National Grid
  - Poland has been, and still is, involved in a number of EU Grid projects:
    - FP5: EUROGRID, GridLab, CrossGrid, GridStart, GRIP, ...
    - FP6: EGEE, EGEE2, K-WF Grid, BalticGrid, CoreGRID, ViroLab, Gredia, int.eu.grid, UNIGRIDS, EUChinaGrid, ...
  - Yearly Cracow Grid Workshops with about 150 participants (in 2007: W. Boch, F. Gagliardi, W. Gensch, K. Kasselman, D. Kranzmueller, T. Priol, P. Sloot, and others); this year's workshop, the 7th in a row, will take place on 15-18 October 2007.
  - In 2007 five major Polish computing centres (Krakow, Gdansk, Poznan, Warsaw and Wroclaw) signed an agreement to form the Polish National Grid, called PL-Grid.

  12. Sharing of CEGC resources NEC’07 Varna, Sept 2007

  13. Sharing of CEGC resources NEC’07 Varna, Sept 2007

  14. EGEE-CEGC computing resources usage by LHC experiments and other VOs. [Charts for Cyfronet Kraków, ICM Warszawa, PCSS Poznań and WCSS64 Wrocław.] NEC’07 Varna, Sept 2007

  15. ATLAS MC production at ACC Cyfronet
  ATLAS production at Cyfronet has been running very well and with high efficiency. ATLAS regularly gets its fair share, recently running constantly on more than 100 CPUs. NEC’07 Varna, Sept 2007

  16. ATLAS Data Challenges
  DC2 Phase I status: started in July, finished in October 2004; 3 Grids were used:
  - LCG (~70 sites, up to 7600 CPUs)
  - NorduGrid (22 sites, ~3280 CPUs (800), ~14 TB)
  - Grid3 (28 sites, ~2000 CPUs)
  Share of the production: LCG 41%, NorduGrid 30%, Grid3 29% (from L. Robertson at C-RRB 2004).
  Totals: ~1350 kSI2k.months, ~120,000 jobs, ~10 million events fully simulated (Geant4), ~27 TB.
  All 3 Grids have been proven to be usable for a real production; about 1% of the events were generated in Cracow.
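Splitting the quoted event total by the per-grid shares gives a rough picture of where the DC2 events were produced; this sketch only rearranges the numbers on the slide.

```python
# Rearranging the DC2 Phase I numbers quoted above: events per grid.
TOTAL_EVENTS = 10e6                 # ~10 million fully simulated events
shares = {"LCG": 0.41, "NorduGrid": 0.30, "Grid3": 0.29}
CRACOW_FRACTION = 0.01              # ~1% of all events produced in Cracow

for grid, share in shares.items():
    print(f"{grid:9s}: ~{TOTAL_EVENTS * share / 1e6:.1f} M events")
print(f"{'Cracow':9s}: ~{TOTAL_EVENTS * CRACOW_FRACTION / 1e6:.1f} M events")
```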

  17. Data transfer T0 → T1 → T2
  [Plot: FZK dCache transfer rates.]
  - T0 → T1 tests started in May, mostly with the FZK Tier1 involved.
  - End of May: proposal to include Tier2s from the FZK cloud; delayed due to a high rate of errors at FZK (even though nominal transfer rates had been achieved).
  - Mid June: T1 (FZK) → (cloud) T2 functional tests started. The DQ2 tool at FZK worked well.
  - CYFRONET and 4 other sites, out of 7 tested in total, had ~100% file transfer efficiency.
  - Transfer rates FZK → CYF as high as 60 MByte/s.
  NEC’07 Varna, Sept 2007
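For a sense of scale, the sketch below converts the quoted peak FZK → CYF rate into a transfer time; the 1 TB dataset size is an illustrative assumption, not a number from the slide.

```python
# How long a bulk transfer takes at the peak FZK -> CYFRONET rate quoted above.
# The 1 TB dataset size is an illustrative assumption.
RATE_MB_PER_S = 60.0
DATASET_TB = 1.0

dataset_mb = DATASET_TB * 1e6                 # TB -> MB (decimal units)
hours = dataset_mb / RATE_MB_PER_S / 3600.0
print(f"{DATASET_TB:.0f} TB at {RATE_MB_PER_S:.0f} MB/s takes ~{hours:.1f} hours")
```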

  18. Where is the digital divide in Europe? (courtesy of D. Foster)

  19. BalticGrid in One Slide
  - Started 1 Nov 2005 (duration 30 months)
  - Partners: 10 leading institutions in six countries in the Baltic region and Switzerland (CERN):
    - Estonian Educational and Research Network, EENet
    - Keemilise ja Bioloogilse Füüsika Instituut, NICPB
    - Inst. of Mathematics and Computer Science, IMCS UL
    - Riga Technical University, RTU
    - Vilnius University, VU
    - Institute of Theoretical Physics and Astronomy, ITPA
    - Poznan Supercomputing and Networking Center, PSNC
    - Instytut Fizyki Jadrowej im. H. Niewodniczanskiego, Polskiej Akademii Nauk, IFJ PAN
    - Parallelldatorcentrum, Kungl. Tek. Högskolan, KTH PDC
    - CERN
  - Budget: 3.0 M€
  - Coordinator: KTH PDC, Stockholm
  - Compute resources: 17 resource centres
  - Pilot applications: HEP, material science, biology, linguistics
  - Activities: SA - Specific Service Activities, NA - Networking Activities, JRA - Joint Research Activities

  20. Grid Operations (status at end of 2006). Activity Leader: Lauri Anton; Krakow Coordinator: Marcin Radecki

  21. BalticGrid resources at IFJ PAN
  The seed of a Tier3:
  - Development of local GRID installations
  - Access to the GRID from a local UI
  - Support for HEP users
  - Installation of experimental applications
  - Development and tests of user algorithms
  - Submitting jobs to the GRID - distributed analysis
  Mini cluster (blade technology):
  - Financed from an associated national project
  - 32 cores, 2 GB RAM/core, 2 TB of disk
  - To be extended in the future (local Tier3)

  22. Summary and conclusions
  The seemingly insurmountable problem of LHC computing appears to be solvable, thanks to rapid progress in IT technologies. The Polish Tier-2 LCG infrastructure and organisation are sound, and they are being developed further to meet the commitments for 2008. The HEP GRID community also plays an essential role in removing 'digital divides' and in bringing GRID technology to other branches of science. NEC’07 Varna, Sept 2007

  23. Acknowledgements
  The material used in this presentation comes from many sources: the LHC collider and LCG projects, the LHC experimental teams, and others. Special thanks go to Michal Turala, the spiritus movens of Polish GRID computing. I also thank my other Krakow colleagues: P. Lason, A. Olszewski, M. Radecki and M. Witek. NEC’07 Varna, Sept 2007

  24. Thank you for your attention NEC’07 Varna, Sept 2007

  25. Backup

  26. LHC Computing NEC’07 Varna, Sept 2007

  27. LHC experiments and data rates
  Data preselection in real time:
  - many different physics processes
  - several levels of filtering
  - high efficiency for events of interest
  - total reduction factor of about 10^7
  Trigger/DAQ chain:
  - Collisions: 40 MHz (~1000 TB/sec equivalent)
  - Level 1 (special hardware): 75 kHz (75 GB/sec), fully digitised
  - Level 2 (embedded processors/farm): 5 kHz (5 GB/sec)
  - Level 3 (farm of commodity CPUs): 100 Hz (100 MB/sec)
  - Data recording & offline analysis
  NEC’07 Varna, Sept 2007
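The per-level reduction factors implied by these numbers can be checked with a few lines of Python; the rates are taken directly from the slide, and the overall data-rate reduction comes out at the quoted ~10^7.

```python
# Reduction factors along the trigger chain quoted above.
# Tuples: (stage, event rate in Hz, data rate in MB/s), taken from the slide.
chain = [
    ("collisions", 40e6, 1000e6),  # 40 MHz, ~1000 TB/s
    ("Level 1",    75e3, 75e3),    # 75 kHz, 75 GB/s
    ("Level 2",    5e3,  5e3),     # 5 kHz, 5 GB/s
    ("Level 3",    100,  100),     # 100 Hz, 100 MB/s
]

for (prev, prev_hz, _), (stage, hz, _) in zip(chain, chain[1:]):
    print(f"{prev:10s} -> {stage:7s}: event rate reduced x{prev_hz / hz:.0f}")

data_reduction = chain[0][2] / chain[-1][2]
print(f"overall data-rate reduction: x{data_reduction:.0e}")  # ~1e7, as stated
```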

  28. ICFA Network Task Force (1998): required network bandwidth (Mbps)
  [Table of bandwidth requirements; a 100–1000x bandwidth increase foreseen for 1998-2005.]
  See the ICFA-NTF Requirements Report: http://l3www.cern.ch/~newman/icfareq98.html

  29. Progress in IT technology: performance per unit cost as a function of time (from R. Mount). NEC’07 Varna, Sept 2007

  30. Where is the digital divide in Europe?

  31. BalticGrid initiative
  - Estonian Educational and Research Network, EENet
  - Keemilise ja Bioloogilse Füüsika Instituut, NICPB
  - Institute of Mathematics and Computer Science, IMCS UL
  - Riga Technical University, RTU
  - Vilnius University, VU
  - Institute of Theoretical Physics and Astronomy, ITPA
  - Poznan Supercomputing and Networking Center, PSNC
  - Instytut Fizyki Jadrowej im. Henryka Niewodniczanskiego, Polskiej Akademii Nauk, IFJ PAN
  - Parallelldatorcentrum at Kungl Tekniska Högskolan, KTH PDC
  - CERN
  A proposal to the recent EU FP6 call (research infrastructure) has been submitted.

  32. BalticGrid Partners
  - Estonia: Tallinn, Tartu
  - Lithuania: Vilnius
  - Latvia: Riga
  - Poland: Kraków, Poznan
  - Switzerland: Geneva
  - Sweden: Stockholm
  Details at www.balticgrid.org
