Opening Remarks Regarding the DOE SC Exascale Requirements Review
Bethesda Hyatt, June 2015
Rob Roser
Overview
• ASCR expects exascale machines on the floor in the early 2020s
• We (HEP) were asked to discuss with ASCR, and among ourselves, the computational needs necessary to achieve the best science over the next decade
• HEP was the first DOE program office to discuss its needs with ASCR
• About 70 people were in attendance: ~30 from HEP and ~40 from ASCR
• Excellent turnout from both the research and facilities sides of ASCR
• HEP prepared a series of talks
• 9 white papers and a dozen case studies were submitted a week before the meeting
• Two-day agenda with the full group, plus a third morning to organize the report
The Organizing Committee
• Salman Habib, ANL
• Robert M. Roser, FNAL
• Lali Chatterjee, DOE HEP
• Barbara Helland, DOE ASCR
• Dave Goodwin, DOE ASCR
• Carolyn Lauzon, DOE ASCR
• Sudip Dosanjh, NERSC
• Katie Antypas, NERSC
• Richard Gerber, NERSC
• Paul Messina, ALCF
• Katherine Riley, ALCF
• Tim Williams, ALCF
• James J. Hack, OLCF
• Jack Wells, OLCF
• Tjerk Straatsma, OLCF
• Julia White, OLCF
Agenda – Day 1
• 9:00 – 9:40 Introduction and Welcome
• ASCR – Steve Binkley (15 min)
• HEP – Jim Siegrist (15 min)
• ASCR Facilities – Carolyn Lauzon (5 min)
• Comp HEP – Lali Chatterjee (5 min)
• 9:40 – 10:30 The P5 Science Drivers
• Short overviews of the physics landscape in 2020-2025 and the need for exascale computing in the context of the P5 Science Drivers
• Major accelerator experiments (LHC, LBNF) – Spentzouris (15)
• Cosmic surveys – Nugent (15)
• Theory – TBD (15)
• 10:30 – 10:45 Break (15 min)
• 10:45 – 11:00 Why and How the Meeting Is Organized – Habib/Roser (15)
• 11:00 – 11:30 Open Discussion (30)
• 11:30 – 12:15 Traditional HPC Needs (3x10 + Q/A) – Brower/Heitmann/Vay
• 12:15 – 1:30 LUNCH
• 1:30 – 2:15 Use of HPC by Experiments for Data-Intensive Tasks (20+15 + Q/A) – LeCompte/TBD (sky surveys, CMB/LSS)
• 2:15 – 3:00 Data Movement and Storage/Analytics (20+15 + Q/A) – Wurthwein/TBD
• 3:00 – 3:30 BREAK
• 3:30 – 4:15 HEP Science Workflow Use Case (15+15 + Q/A) – Wenaus/Petravick
• 4:15 – 5:00 ASCR and HEP Facilities and Their Co-Evolution (20+20 + Q/A) – TBD (ASCR)/Bauerdick
Agenda – Day 2
9:00 – 12:00 BREAKOUTS
• 1. Traditional HPC
• Lattice QCD
• Computational Cosmology
• Accelerator Modeling
• Science Cases and Next-Generation Architectures
• Do we need a better model for storing/dealing with HPC output?
• Are there data-intensive computing requirements at the HPC facility, or do we expect to do large-scale analytics elsewhere?
• Data persistency at the facility [Comment: should this be part of the "better model" above?]
• How to integrate a higher-level submission system with the HPC scheduler [Comment: this is pretty near-term already]
• 2. Use of HPC for Data-Intensive Tasks / Data Movement and Storage
• Data simulation (Geant4) and event generators
• Data reconstruction
• Staging of data and transfer rates
• Data analytics at the host vs. off site
• Need for a data facility
• OUTBRIEF
• 12:00 – 1:00 LUNCH
• Follow-up as needed…
• End-of-day outbrief
Some HEP-Specific Questions
• What changes in HEP computing practices would enable us to make the best use of ASCR resources?
• How should the ASCR and HEP computing/data systems co-evolve in order to optimize our combined resources and talent?
• How aggressive will HEP be in its use of GPUs and other co-processors?
• What is the nature of the required software environment for each application? (This will vary considerably.)
• What steps does HEP need to take now to be ready for this exciting new landscape, and how do we educate our workforce?
Desired Outcomes from the ASCR Perspective
• Gather the computing, storage, and HPC services required (at all scales) to support HEP research through 2020-2025
• Important to discuss application readiness for many-core/GPU architectures, and also application portability
• Workflows
• Compute and storage
• Mission needs and science drivers (this is largely done already, but needs to be included in the report)
• Collect a set of white papers laying out scientific goals and how HPC requirements support achieving those goals
• What scientists want to do with computing/storage in the future, and what we need to acquire to enable that
• As many specifics of the required architecture as possible: memory, network, disk, NVRAM, single-core performance, etc.
Connection to Networking – Take-Away Points from the Exascale Meeting
• Because of the scale of HEP computing tasks, ASCR and HEP should collaborate on developing the overall architecture of the ecosystem, the allocation process, security, and the back-end software at their facilities, and should perform the necessary R&D on topics related to data and networking (and perhaps others) to connect HPC facilities
• The exascale environment will be more complex than the current HPC computing model and should include data-intensive computing as part of the capability model
• Networking: compute models will have to change if ESnet bandwidth for HEP data does not keep pace with coming requirements