The Particle Physics Computational Grid
Paul Jeffreys / CCLRC
System Managers Meeting
Financial Times front page, 7 March 2000
LHC Computing: Different from Previous Experiment Generations
• Geographical dispersion: of people and resources
• Complexity: the detector and the LHC environment
• Scale: petabytes per year of data
• (NB – for the purposes of this talk – mostly LHC-specific)
• ~5000 physicists, 250 institutes, ~50 countries
• Major challenges associated with:
  • Coordinated use of distributed computing resources
  • Remote software development and physics analysis
  • Communication and collaboration at a distance
• R&D: a new form of distributed system: the Data Grid
The LHC Computing Challenge – by example
• Consider a UK group searching for the Higgs particle in an LHC experiment
• Data flows off the detectors at 40 TB/s (30 million floppies per second)!
• A rejection factor of c. 5×10^5 is applied online before writing to media (see the back-of-envelope sketch after this slide)
  • But we have to be sure we are not throwing away the physics with the background
  • Need to simulate samples to exercise the rejection algorithms
  • Simulation samples will be created around the world; common access is required
• After 1 year, a 1 PB sample of experimental events is stored on media
  • The initial analysed sample will be at CERN, in due course elsewhere
• The UK has particular detector expertise (CMS: e-, e+, γ)
  • Apply our expertise: access the 1 PB of experimental data (located where?), re-analyse the e.m. signatures (where?) to select c. 1 in 10^4 Higgs candidates, though the S/N will be c. 1 to 20 (continuum background), and store the results (where?)
  • Also .. access some simulated samples (located where?), generate (where?) additional samples, store them (where?) – PHYSICS (where?)
• In addition .. strong competition
• Desire to implement the infrastructure in a generic way
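The rejection and storage figures above hang together as simple arithmetic. The short Python sketch below is only a back-of-envelope check: the 40 TB/s rate and the ~5×10^5 online rejection come from the slide, while the ~10^7 seconds of data taking per year is an assumed typical accelerator live time.

```python
# Back-of-envelope check of the LHC data-flow numbers quoted above.
# Detector rate and online rejection are taken from the slide; the
# effective data-taking time per year is an assumption (~1e7 s).

DETECTOR_RATE_TB_S = 40.0      # raw data rate off the detectors (TB/s)
ONLINE_REJECTION = 5e5         # online rejection factor before storage
LIVE_SECONDS_PER_YEAR = 1e7    # assumed effective data-taking time

rate_to_storage_mb_s = DETECTOR_RATE_TB_S * 1e6 / ONLINE_REJECTION
stored_per_year_pb = rate_to_storage_mb_s * LIVE_SECONDS_PER_YEAR / 1e9

print(f"Rate written to media: ~{rate_to_storage_mb_s:.0f} MB/s")
print(f"Stored per year:       ~{stored_per_year_pb:.1f} PB")
# -> ~80 MB/s and ~0.8 PB/year, consistent with the ~1 PB/year on the slide.
```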
Proposed Solution to the LHC Computing Challenge (?)
• A data analysis 'Grid' for High Energy Physics
• [Diagram: a tiered hierarchy with CERN at the centre, Tier 1 national centres, Tier 2 regional centres, and further Tier 3 / Tier 4 resources below]
Access Patterns
Typical particle physics experiment in 2000–2005: one year of acquisition and analysis of data.
Data samples and access rates (aggregate, average):
• Raw Data, ~1000 TB – 100 MBytes/s (2–5 physicists)
• Reconstructed (Reco-V1, Reco-V2), ~1000 TB each – 500 MBytes/s (5–10 physicists)
• ESD (V1.1, V1.2, V2.1, V2.2), ~100 TB each – 1000 MBytes/s (~50 physicists)
• AOD (multiple versions and streams), ~10 TB each – 2000 MBytes/s (~150 physicists)
(The implied per-physicist rates are worked out in the sketch below.)
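To make the access-rate figures concrete, the sketch below tabulates the slide's data classes and works out the implied bandwidth per physicist. The sizes and aggregate rates are the slide's; the pairing of each rate with a data class follows the order shown and should be treated as an assumption.

```python
# Data classes and aggregate access rates from the slide; the per-physicist
# figure is simply the aggregate rate divided by a typical number of
# concurrent users taken from the quoted ranges.
samples = [
    # (name, total size in TB, aggregate rate in MB/s, typical users)
    ("Raw Data",      1000,  100,   3),   # 2-5 physicists
    ("Reconstructed", 1000,  500,   8),   # 5-10 physicists
    ("ESD",            100, 1000,  50),
    ("AOD",             10, 2000, 150),
]

for name, size_tb, rate_mb_s, users in samples:
    per_user = rate_mb_s / users
    print(f"{name:<14} {size_tb:>5} TB  {rate_mb_s:>5} MB/s aggregate  "
          f"~{per_user:.0f} MB/s per physicist")
```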
Hierarchical Data Grid
• Physical
  • Efficient network/resource use: local > regional > national > transoceanic (see the sketch after this slide)
• Human
  • University/regional computing complements the national labs, which in turn complement the accelerator site
  • Easier to leverage resources, maintain control and assert priorities at the regional/local level
  • Effective involvement of scientists and students independently of location
• The 'challenge for UK particle physics' … how do we:
  • Go from today's maximum of a 200-PC99 farm to a 10,000-PC99 centre?
  • Connect to and participate in the European and world-wide particle physics grid?
  • Write the applications needed to operate within this hierarchical grid? AND
  • Ensure other disciplines are able to work with us, make our developments and applications available to others, exchange expertise, and enjoy fruitful collaboration with computer scientists and industry
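The "local > regional > national > transoceanic" ordering is essentially a replica-selection policy. The toy Python sketch below illustrates the idea of preferring the nearest tier that holds a copy of a dataset; the site names, dataset names and catalogue structure are hypothetical, not any real catalogue API.

```python
# Toy replica selection: prefer the "closest" tier that holds the dataset.
# Tier costs, sites and the replica catalogue are illustrative only.

TIER_COST = {"local": 0, "regional": 1, "national": 2, "transoceanic": 3}

# Hypothetical replica catalogue: dataset -> {site: tier relative to the user}
replicas = {
    "higgs-aod-v1": {"ral": "national", "cern": "transoceanic"},
    "sim-bkg-2000": {"campus-farm": "local", "ral": "national"},
}

def best_site(dataset):
    """Return the replica site with the lowest network 'distance'."""
    sites = replicas[dataset]
    return min(sites, key=lambda site: TIER_COST[sites[site]])

print(best_site("higgs-aod-v1"))   # -> ral
print(best_site("sim-bkg-2000"))   # -> campus-farm
```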
Quantitative Requirements
• Start with a typical experiment's Computing Model
• UK Tier-1 Regional Centre specification
• Then consider the implications for the UK Particle Physics Computational Grid
  • Over the years 2000, 2001, 2002, 2003
  • Joint Infrastructure Bid made for resources to cover this
  • Estimates of costs
• Look further ahead
Steering Committee
'Help establish the Particle Physics Grid activities in the UK'
a. An interim committee would be put in place.
b. Its immediate objectives would be to prepare for the presentation to John Taylor on 27 March 2000, and to co-ordinate the EU 'Work Package' activities for April 14.
c. After discharging these objectives, the membership would be reconsidered.
d. The committee's next action would be to refine the Terms of Reference (presented to the meeting on 15 March).
e. After that, the Steering Committee will be charged with commissioning a Project Team to co-ordinate the Grid technical work in the UK.
f. The interim membership is:
  • Chairman: Andy Halley
  • Secretary: Paul Jeffreys
  • Tier 2 reps: Themis Bowcock, Steve Playfer
  • CDF: Todd Hoffmann
  • D0: Ian Bertram
  • CMS: David Britton
  • BaBar: Alessandra Forti
  • CNAP: Steve Lloyd
• The 'labels' against the members are not official in any sense at this stage, but the members are intended to cover these areas approximately!
UK Project Team
• Need to really get underway!
• System Managers are crucial!
• PPARC needs to see genuine plans and genuine activities…
• Must coordinate our activities, and
  • Fit in with CERN activities
  • Meet the needs of the experiments (BaBar, CDF, D0, …)
• So … go through the range of options and then discuss…
EU Bid (1)
• A bid will be made to the EU to link national grids
• The "process" has become more than 'just a bid'
• We have almost reached the point where one has to be an active participant in the EU bid, and its associated activities, in order to access data from CERN in the future
• Decisions need to be taken today…
• Timescale:
  • March 7: Workshop at CERN to prepare the programme of work (RPM)
  • March 17: Editorial meeting to look for industrial partners
  • March 30: Outline of the paper used to obtain pre-commitment of partners
  • April 17: Finalise 'Work Packages' – see next slides
  • April 25: Final draft of the proposal
  • May 1: Final version of the proposal for signature
  • May 7: Submit
EU Bid (2)
• The bid was originally for 30 MECU, with a matching contribution from national funding organisations
• Now scaled down, possibly to 10 MECU
  • Possibly as a 'taster' before a follow-up bid?
  • EU funds for Grid activities in Framework VI are likely to be larger
• Work Packages have been defined
  • The objective is that countries (through named individuals) take responsibility for splitting up the work and defining deliverables within each package, to generate draft content for the EU bid
  • BUT without doubt the same people will be well positioned to lead the work in due course
  • .. and funds split accordingly??
  • Considerable manoeuvring!
• UK: we need to establish priorities and decide where to contribute…
Work Packages (with contact points)
Middleware
• 1. Grid Work Scheduling – Cristina Vistoli, INFN
• 2. Grid Data Management – Ben Segal, CERN
• 3. Grid Application Monitoring – Robin Middleton, UK
• 4. Fabric Management – Tim Smith, CERN
• 5. Mass Storage Management – Olof Barring, CERN
Infrastructure
• 6. Testbed and Demonstrators – François Etienne, IN2P3
• 7. Network Services – Christian Michau, CNRS
Applications
• 8. HEP Applications – Hans Hoffmann, 4 experiments
• 9. Earth Observation Applications – Luigi Fusco
• 10. Biology Applications – Christian Michau
Management
• Project Management – Fabrizio Gagliardi, CERN
Robin is a 'place-holder', holding the UK's interest (explanation in Open Session)
UK Participation in Work Packages
Middleware
• 1. Grid Work Scheduling
• 2. Grid Data Management: TONY DOYLE, Iain Bertram?
• 3. Grid Application Monitoring: ROBIN MIDDLETON, Chris Brew
• 4. Fabric Management
• 5. Mass Storage Management: JOHN GORDON
Infrastructure
• 6. Testbed and Demonstrators
• 7. Network Services: PETER CLARKE, Richard Hughes-Jones
Applications
• 8. HEP Applications
PPDG (Particle Physics Data Grid) – overview slides
LHCb Contribution to the EU Proposal (HEP Applications Work Package)
• Grid testbed in 2001, 2002
• Production of 10^6 simulated b → D*π events
• Create 10^8 events at the Liverpool MAP facility in 4 months
• Transfer 0.62 TB to RAL
• RAL dispatches AOD and TAG datasets to other sites
  • 0.02 TB to Lyon and CERN
• Then permit a study of all the various options for performing a distributed analysis in a Grid environment (the implied data rates are sketched below)
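The LHCb figures imply a per-event AOD/TAG size and a very modest sustained network rate to RAL. The sketch below simply divides the slide's numbers; the conversion of the 4-month production window into seconds is an assumption.

```python
# Implied sizes and rates for the LHCb Grid testbed exercise.
EVENTS = 1e8                  # events created at Liverpool MAP
TRANSFER_TB = 0.62            # volume shipped to RAL
WINDOW_S = 4 * 30 * 86400     # ~4 months, assumed ~1.04e7 seconds

per_event_kb = TRANSFER_TB * 1e9 / EVENTS      # TB -> KB is a factor 1e9
sustained_kb_s = TRANSFER_TB * 1e9 / WINDOW_S

print(f"~{per_event_kb:.1f} KB shipped per event")     # ~6.2 KB/event
print(f"~{sustained_kb_s:.0f} KB/s sustained to RAL")  # ~60 KB/s average
```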
American Activities
• Collaboration with Ian Foster
  • Transatlantic collaboration using GLOBUS
• Networking
  • QoS tests with SLAC
  • Also link in with GLOBUS?
• CDF and D0
  • A real challenge to 'export data' (see the capacity sketch below)
  • Have to implement a 4 Mbps connection
  • Have to set up a mini Grid
• BaBar
  • Distributed Linux farms etc. in the JIF bid
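A 4 Mbps transatlantic link puts a hard ceiling on how much CDF/D0 data can be exported. The sketch below is the simple capacity arithmetic, assuming the link could be kept fully loaded; real utilisation would of course be lower.

```python
# Capacity of the planned 4 Mbps connection, assuming 100% utilisation.
LINK_MBPS = 4.0

mb_per_s = LINK_MBPS / 8                 # megabits/s -> megabytes/s
gb_per_day = mb_per_s * 86400 / 1000
tb_per_month = gb_per_day * 30 / 1000

print(f"{mb_per_s:.2f} MB/s, ~{gb_per_day:.0f} GB/day, "
      f"~{tb_per_month:.1f} TB/month")
# -> 0.50 MB/s, ~43 GB/day, ~1.3 TB/month at best
```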
Networking Proposal (parts 1–4)
Pulling it together…
• Networking:
  • EU work package
  • Existing tests
  • Integration of the ICFA studies into the Grid
  • Will networking lead the non-experiment activities??
• Data Storage
  • EU work package
• Grid Application Monitoring
  • EU work package
• CDF, D0 and BaBar
  • Need to integrate these into the Grid activities
  • Best approach is to centre on the experiments
…Pulling it all together
• Experiment-driven
  • Like LHCb, meet specific objectives
• Middleware preparation
  • Set up GLOBUS? (QMW, RAL, DL ..?)
  • Authenticate
  • Get familiar with it
  • Try moving data between sites
  • Resource specification
  • Collect dynamic information (a toy sketch follows below)
  • Try with international collaborators
  • Learn about alternatives to GLOBUS
  • Understand what is missing
  • Exercise and measure the performance of distributed caching
• What do you think?
• Anyone like to work with Ian Foster for 3 months?!
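One concrete first step for the "resource specification / collect dynamic information" items is a simple site-status record that a scheduler or a physicist could query when deciding where to run. The sketch below is a toy illustration only: the site names, field names and numbers are hypothetical, and this is not the GLOBUS resource-specification language or information service itself.

```python
# Toy dynamic resource information for a handful of UK sites.
# Sites, field names and values are purely illustrative.
site_status = {
    "ral": {"free_cpus": 120, "free_disk_tb": 4.0, "load": 0.65},
    "qmw": {"free_cpus": 30,  "free_disk_tb": 0.5, "load": 0.20},
    "dl":  {"free_cpus": 60,  "free_disk_tb": 1.2, "load": 0.80},
}

def pick_site(min_cpus, min_disk_tb):
    """Return the least-loaded site that meets the job's requirements."""
    candidates = [
        (status["load"], name)
        for name, status in site_status.items()
        if status["free_cpus"] >= min_cpus
        and status["free_disk_tb"] >= min_disk_tb
    ]
    return min(candidates)[1] if candidates else None

print(pick_site(min_cpus=50, min_disk_tb=1.0))   # -> 'ral'
```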