Enabling e-Research over GridPP
Dan Tovey, University of Sheffield
ATLAS
• The Large Hadron Collider (LHC) is under construction at CERN in Geneva. When it commences operation in 2007 it will be the world's highest-energy collider.
• Sheffield is a key member of the ATLAS collaboration, which is building one of the two general-purpose detectors on the LHC ring.
• The main motivations for building the LHC and ATLAS are:
• Finding the Higgs boson
• Finding evidence for Supersymmetry, believed to be the next great discovery and layer in our understanding of the universe.
ATLAS @ Sheffield
• Sheffield leads the Supersymmetry (SUSY) searches at ATLAS, and also coordinates all ATLAS physics activities in the UK, including the Higgs and SUSY searches.
• Sheffield is responsible for building the ATLAS Semiconductor Tracker (SCT) detector and for writing event reconstruction software.
[Figure: simulated SUSY signal (= Nobel Prize) above the Standard Model (SM) background. NB: this is a simulation!]
Construction
Event Selection
[Figure: event selection, spanning 9 orders of magnitude.]
The Data Deluge
• Many events: ~10^9 events/experiment/year at >~1 MB/event of raw data, with several processing passes required.
• Worldwide LHC computing requirement (2007): 100 million SPECint2000 (= 100,000 of today's fastest processors) and 12-14 PetaBytes of data per year (= 100,000 of today's highest-capacity HDDs).
[Diagram: trigger/DAQ chain. Detectors (16 million channels, 40 MHz collision rate, 3 Gigacell buffers) feed charge/time/pattern data to the Level-1 trigger (100 kHz); 1 MegaByte events flow at 1 Terabit/s from 50,000 data channels into 200 GigaBytes of buffers and 500 readout memories; the event builder (500 Gigabit/s networks) feeds the event filter (20 TeraIPS), which passes data at Gigabit/s rates to the PetaByte-scale Grid computing service and archive service (300 TeraIPS LAN).]
• Understanding and interpreting the data requires numerically intensive simulation: e.g. one SUSY event (ATLAS Monte Carlo simulation) takes 20 mins and 3.5 MB on a 1 GHz PIII. A back-of-envelope check of these numbers is sketched below.
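A minimal sketch, using only the per-event figures quoted on this slide, shows why the totals land at the petabyte and 100,000-processor scale:

```python
# Back-of-envelope estimate of the LHC data and simulation volumes,
# using only the per-event figures quoted on this slide.

EVENTS_PER_YEAR = 1e9        # ~10^9 events per experiment per year
RAW_EVENT_SIZE_MB = 1.0      # >~1 MB of raw data per event

# Raw data volume for one experiment in one year (before reconstruction
# passes and before multiplying by the number of experiments).
raw_pb = EVENTS_PER_YEAR * RAW_EVENT_SIZE_MB / 1e9   # MB -> PB
print(f"Raw data per experiment per year: ~{raw_pb:.0f} PB")

# Cost of simulating the same number of events at the quoted rate of
# 20 minutes per event on a 1 GHz PIII.
SIM_MINUTES_PER_EVENT = 20
cpu_years = EVENTS_PER_YEAR * SIM_MINUTES_PER_EVENT / (60 * 24 * 365)
print(f"Simulating 10^9 events: ~{cpu_years:.0f} CPU-years (1 GHz PIII)")
```

Roughly a petabyte of raw data per experiment per year, and tens of thousands of CPU-years for a full simulation pass, before reconstruction passes and multiple experiments multiply everything further.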
LCG
• The aim is to use Grid techniques to solve this problem.
• The CERN LHC Computing Grid (LCG) project coordinates activities in Europe; similar projects exist in the US (Grid3/OSG) and the Nordic countries (NorduGrid).
• The LCG prototype went live in September 2003 in 12 countries, including the UK.
• It has been extensively tested by the LHC experiments. A sketch of how a job reaches the Grid follows below.
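For a flavour of how work was actually submitted in this era: an LCG-2 job is described in a small JDL (Job Description Language) file and handed to a resource broker via the edg-job-submit client. The sketch below wraps those command-line tools from Python; the JDL contents and the myjob.jdl filename are illustrative assumptions, and it presumes the LCG user-interface tools and a valid Grid proxy are already in place.

```python
# Illustrative sketch of LCG-2-era job submission. Assumes the EDG/LCG
# user-interface tools are installed and a valid Grid proxy exists;
# the JDL contents below are hypothetical examples.
import subprocess

# A minimal JDL (Job Description Language) description of a job.
jdl = """\
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
"""

with open("myjob.jdl", "w") as f:
    f.write(jdl)

# Hand the job to a resource broker, which matches it against resources
# advertised in the information system and dispatches it to a site.
subprocess.run(["edg-job-submit", "myjob.jdl"], check=True)
```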
What is GridPP?
• 19 UK Universities, CCLRC (RAL & Daresbury) and CERN
• Funded by the Particle Physics and Astronomy Research Council (PPARC)
• GridPP1 - 2001-2004 £17m "From Web to Grid"
• GridPP2 - 2004-2007 £16m "From Prototype to Production"
• UK contribution to LCG.
GridPP in Context
[Diagram, not to scale: GridPP sits within the UK Core e-Science Programme alongside EGEE and CERN's LCG. It links the experiments, the institutes, the Tier-1/A and Tier-2 centres, applications development and integration, middleware/security/networking work, and the Grid Support Centre.]
[Diagram: GridPP organisation and its interfaces to LCG, ARDA, EGEE and the experiments. The Collaboration Board (CB) and Project Management Board (PMB) oversee a Deployment Board (Tier-1/Tier-2 centres, testbeds, rollout; service specification & provision) and a User Board (requirements, application development, user feedback), plus middleware areas: metadata, storage, workload, network, security, and information & monitoring.]
Tier Structure
[Diagram: the tiered LCG computing model. Tier-0 at CERN feeds national Tier-1 centres (e.g. RAL, BNL, Lyon), which in turn serve regional Tier-2 centres such as ScotGrid, NorthGrid, SouthGrid and ULGrid.]
UK Tier-1/A Centre: Rutherford Appleton Laboratory
• High-quality data services
• National and international role
• UK focus for international Grid development
• Resources: 1400 CPUs, 80 TB disk, 60 TB tape (capacity 1 PB)
[Charts: 2004 CPU utilisation; Grid resource discovery time = 8 hours.]
UK Tier-2 Centres
• ScotGrid: Durham, Edinburgh, Glasgow
• NorthGrid: Daresbury, Lancaster, Liverpool, Manchester, Sheffield (WRG)
• SouthGrid: Birmingham, Bristol, Cambridge, Oxford, RAL PPD, Warwick
• LondonGrid: Brunel, Imperial, QMUL, RHUL, UCL
NorthGrid
• Tier-2 collaboration between Sheffield (WRG), Lancaster, Liverpool, Manchester and Daresbury Lab.
WRG & NorthGrid
• The White Rose Grid contributes to NorthGrid and GridPP with a new SRIF2-funded machine at Sheffield (Iceberg).
• The LCG component of Iceberg provides a base of 230 kSI2k, and on demand up to 340 kSI2k, using state-of-the-art 2.4 GHz Opteron CPUs (a rough conversion to CPU counts is sketched below).
• Delivered the 2nd-highest GridPP Tier-2 throughput for ATLAS in 2005.
http://lcg.shef.ac.uk/ganglia
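To put the kSI2k figures in perspective (SPECint2000, "SI2k", was the benchmark unit used for LCG capacity accounting): assuming, as a ballpark of our own rather than a number from the slide, that a 2.4 GHz Opteron rates at roughly 1.4 kSI2k, the quoted capacities correspond to on the order of 160-240 CPUs.

```python
# Rough conversion of Iceberg's quoted LCG capacity (kSI2k) into CPU
# counts. The per-CPU rating is an assumed ballpark figure for a
# 2.4 GHz Opteron, not a number taken from the slide.
ASSUMED_KSI2K_PER_OPTERON = 1.4   # assumption: ~1400 SPECint2000 per CPU

for label, ksi2k in [("base", 230), ("on demand", 340)]:
    cpus = ksi2k / ASSUMED_KSI2K_PER_OPTERON
    print(f"{label}: {ksi2k} kSI2k ~= {cpus:.0f} CPUs")
```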
GridPP Deployment Status
• GridPP deployment is part of LCG, currently the largest Grid in the world.
• Three Grids operate on a global scale in HEP, with similar functionality:

  Grid            Sites      CPUs
  LCG (GridPP)    228 (19)   17,820 (3,500)
  Grid3 [USA]     29         2,800
  NorduGrid       30         3,200
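The bracketed numbers are GridPP's contribution to the LCG totals; a quick check of the fractions (plain arithmetic on the table above):

```python
# GridPP's share of the LCG totals, taken from the table above.
lcg_sites, gridpp_sites = 228, 19
lcg_cpus, gridpp_cpus = 17820, 3500

print(f"Sites: {gridpp_sites / lcg_sites:.1%} of LCG")  # ~8.3%
print(f"CPUs:  {gridpp_cpus / lcg_cpus:.1%} of LCG")    # ~19.6%
```

The CPU share is consistent with the "UK ~20% of LCG" figure on the next slide.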
ATLAS Data Challenges
• Now in the Grid Production phase: LCG is reliably used for production.
• DC2 (2005): 7.7 M GEANT4 events and 22 TB; the UK provided ~20% of the LCG contribution.
• DC3/CSC (2006): >20 M G4 events; ongoing.
• Grid production carries the largest total computing requirement, yet is still a small fraction of what ATLAS needs.
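As a sanity check, the DC2 totals line up with the per-event cost quoted on the Data Deluge slide (20 mins and 3.5 MB per simulated event on a 1 GHz PIII); the short calculation below redoes that arithmetic.

```python
# Cross-check of the DC2 figures against the per-event simulation cost
# quoted earlier (20 minutes / 3.5 MB per event on a 1 GHz PIII).
DC2_EVENTS = 7.7e6

size_tb = DC2_EVENTS * 3.5 / 1e6               # MB -> TB
cpu_years = DC2_EVENTS * 20 / (60 * 24 * 365)  # minutes -> years

print(f"Estimated volume: ~{size_tb:.0f} TB (quoted: 22 TB)")
print(f"Estimated cost:   ~{cpu_years:.0f} 1 GHz PIII CPU-years")
```

Roughly 27 TB estimated against 22 TB quoted is the same ballpark (not every DC2 event is as heavy as a SUSY event), and ~300 CPU-years of serial computing makes clear why the production had to run on the Grid.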
Further Info
http://www.gridpp.ac.uk