
Information Technology and Computing Infrastructure for U.S. LHC Physics


Presentation Transcript


  1. Information Technology and Computing Infrastructure for U.S. LHC Physics • Lothar A.T. Bauerdick, Fermilab • Project Manager, U.S. CMS Software and Computing

  2. LHC Discovery is Through Software and Computing + LHC Computing Unprecedented in Scale and Complexity

  3. Physics Discoveries by Researchers at U.S. Universities • U.S. LHC is Committed to Empower the Universities to do Research on LHC Physics Data • This is why we are interested in the Grid as an Enabling Technology

  4. Distributed Grid Computing and Data Model is the LHC Baseline Model • R&D and Testbeds: Prototyping Grid Infrastructure, Deploying Grid Tools, Developing Grid Applications • US ATLAS Grid Testbed

  5. Tiered System of Regional Centers • Developing the hierarchically organized fabric of “Grid Nodes”:
    • Tier 0: CERN Computer Center (> 20 TIPS), fed by the LHC Experiment Online System at 100-200 MBytes/s
    • Tier 1: national centers (USA, UK, Russia, Korea), connected to Tier 0 at 2.5-10 Gbits/s
    • Tier 2: regional Tier-2 centers, connected to Tier 1 at 2.5 Gbits/s
    • Tier 3: institute workgroup servers, connected at ~0.6 Gbits/s
    • Tier 4: physics caches, PCs, other portals, connected at 1 Gbits/s
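
  The link rates in this hierarchy translate directly into planning numbers. As a rough illustration, the sketch below is a back-of-the-envelope Python calculation of how long a 1 TB dataset would take to cross each link; the link labels and the assumption that the nominal rates are fully usable are ours, not from the slide.

```python
# Back-of-the-envelope sketch: time for a dataset to cross each link of the
# tier hierarchy above, assuming the nominal rates are fully usable.
LINKS_GBITS_PER_S = {
    "Online system -> Tier 0 (CERN)":        0.8,   # 100 MBytes/s, lower bound
    "Tier 0 -> Tier 1 (national centers)":   2.5,   # 2.5-10 Gbits/s, lower bound
    "Tier 1 -> Tier 2 (regional centers)":   2.5,
    "Tier 2 -> Tier 3 (institutes)":         0.6,
    "Tier 3 -> Tier 4 (physics cache, PCs)": 1.0,
}

def transfer_hours(dataset_tb, rate_gbps):
    """Hours to move `dataset_tb` terabytes over a link of `rate_gbps` gigabits/s."""
    bits = dataset_tb * 8.0e12                 # 1 TB = 8e12 bits
    return bits / (rate_gbps * 1.0e9) / 3600.0

if __name__ == "__main__":
    for link, rate in LINKS_GBITS_PER_S.items():
        print(f"{link:40s} 1 TB in {transfer_hours(1.0, rate):4.1f} h at {rate} Gbits/s")
```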

  6. Transition to Production-quality Grid • Centers taking part in LHC Grid 2003 • Production Service Around the World, Around the Clock!

  7. …towards Dynamic Workspaces for Scientists • To empower Communities of Scientists to analyze data and to work locally within a global context: Resource Management, Knowledge Systems, Human-Grid Interfaces, Collaboration Tools • Infrastructure to support sharing and consistency of Physics and Calibration Data + Schema, Data Provenance, Workflow, etc.

  8. The Goal: • Provide individual physicists and groups of scientists capabilities from the desktop that allow them • To participate as an equal in one or more “Analysis Communities” • Full representation in the Global Experiment Enterprise • To receive, on demand, whatever resources and information they need to explore their science interests, while respecting collaboration-wide priorities and needs • That is, provide massive computing, storage, and networking resources • Including “opportunistic” use of resources that are not LHC-owned! • Provide full access to dauntingly complex “meta-data” • That need to be kept consistent to make sense of the event data

  9. These Goals Require Substantial R&D • Global Access and Global Management of Massive and Complex Data • Location Transparency of Complex Processing Environments and of Massive Data Collections • Monitoring, Simulation, Scheduling and Optimization on a Heterogeneous Grid of Computing Facilities and Networks • Virtual Data, Workflow, Knowledge Management Technologies (illustrated in the sketch below) • End-to-End Networking Performance, Application Integration • Management of Virtual Organizations across the Grid • Technologies and Services for Security, Privacy, Accounting • Scientific Collaboration over Distance • Etc. … • Grids are the Enabling Technology • LHC Needs are Pushing the Limits • Technology and Architecture are still evolving • New IT and CS Research is required
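
  To make one of these R&D topics concrete, here is a toy illustration of the “virtual data” idea: a data product is returned if it is already materialized, otherwise it is re-derived on demand from its recorded transformation and inputs. The catalog layout and function names are illustrative only, not the actual GriPhyN/Chimera interfaces.

```python
# Toy illustration of the "virtual data" idea: return a product if it already
# exists, otherwise re-derive it from its recorded transformation and inputs.
# Catalog layout and names are illustrative, not the GriPhyN/Chimera APIs.
materialized = {}      # products that already exist: name -> data
derivations = {}       # provenance catalog: name -> (transformation, input names)

def register(product, transform, *inputs):
    """Record how `product` can be (re)derived from other products."""
    derivations[product] = (transform, inputs)

def get(product):
    """Return the product, deriving it on demand from its recorded inputs."""
    if product not in materialized:
        transform, inputs = derivations[product]          # provenance lookup
        materialized[product] = transform(*(get(p) for p in inputs))
    return materialized[product]

# Example: a selected sample is derived from a raw one by a simple cut.
materialized["raw_events"] = [3, 7, 1, 9, 4]
register("selected_events", lambda raw: [e for e in raw if e > 3], "raw_events")
print(get("selected_events"))                             # -> [7, 9, 4]
```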

  10. Start of the NSF LHC Research Program • Exciting opportunities with the start of the NSF Research Program funding this year! • US CMS Universities will profit in major ways • Develop the strong U.S. LHC environment for Physics Analysis • Address the core issues in U.S. LHC S&C: • Developing and implementing the distributed computing model central to the success of U.S. Universities’ participation • Focus on end-to-end services • Focus on distributed data access and management • Major injection of new R&D manpower + running Tier-2 centers, e.g. U.S. CMS: • At U.S. Universities for Architecture, Middleware and Physics support • Start of a pilot Tier-2, possibly a Tier-2-based PAC • Start Grid Operations R&D and support (2-3 FTE)

  11. U.S. LHC Grid Technology Cycles • “Rolling Prototypes”: evolution of the facility and data systems • Prototyping • Early roll-out • Emphasis on Quality, Documentation, Dissemination • Tracking of external “practices”

  12. Grid Testbeds and Production Grids (GriPhyN, PPDG, iVDGL; sites in Brazil, South Korea) • Grid Testbeds: Development and Dissemination! • LHC Grid Testbeds are the first real-life large Grid installations, now becoming production quality • Strong Partnership between Universities and Labs, with Grid Projects (iVDGL, GriPhyN, PPDG) and Middleware Projects (Condor, Globus) • Strong dissemination component, together with the Grid Projects • E.g. U.S. CMS Testbed: Caltech, UCSD, U. Florida, UW Madison, Fermilab, CERN • Expressions of interest: MIT, Rice, Minnesota, Iowa, Princeton

  13. Example: Monitoring and Information Services • MonALISA (Caltech), currently deployed in the Testbed environment • Dynamic information/resource discovery mechanism using intelligent agents • Java / Jini with interfaces to SNMP, MDS, Ganglia, and Hawkeye • WSDL / SOAP with UDDI • Aim to incorporate into a “Grid Control Room” Service
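
  As a rough illustration of how a client might consume such a monitoring service, the sketch below polls a status endpoint and prints per-node load. The URL, the XML schema, and the helper names are hypothetical placeholders; MonALISA’s actual interfaces (Jini, SNMP, WSDL/SOAP) are not reproduced here.

```python
# Minimal sketch of a monitoring poll loop -- NOT MonALISA's actual API.
# The endpoint URL and XML layout below are hypothetical placeholders.
import time
import urllib.request
import xml.etree.ElementTree as ET

MONITOR_URL = "http://monitor.example.org/farm-status"   # hypothetical endpoint

def poll_farm_status(url):
    """Fetch an XML status document and return {node name: load} for each worker."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        tree = ET.parse(resp)
    status = {}
    for node in tree.getroot().findall("node"):           # hypothetical schema
        status[node.get("name")] = float(node.findtext("load", default="0"))
    return status

if __name__ == "__main__":
    while True:                        # a "control room" view would refresh like this
        try:
            for name, load in sorted(poll_farm_status(MONITOR_URL).items()):
                print(f"{name:20s} load={load:5.2f}")
        except OSError as err:
            print("poll failed:", err)
        time.sleep(60)                 # poll once per minute
```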

  14. Distributed Collaborative Engineering • Projectization is essential for a Software and Computing Effort of this complexity • Physics and Detector Groups at Universities are the first to profit from this • Example: ATLAS Detector Geometry Description Databases • Idea and Concept • Geometry Modeller based on CDF [U. Pittsburgh] • Massive Development Effort • NOVA MySQL Database [BNL]: repository of persistent configuration information • NOVA Service [ANL]: retrieval of transient C++ objects from the NOVA Database • Conditions Database Service [ANL/Lisbon]: access to time-varying information based on type, time, version and key; used in conjunction with other persistency services (e.g. the NOVA Service) • Interval Of Validity Service [LBNL]: registration of clients; retrieval of updated information when validity expires; caching policy management • Release as scheduled to Detector and Physics Groups • Prototype at the Silicon alignment workshop in December 2002
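
  The interval-of-validity lookup at the heart of such a conditions service can be illustrated with a small relational query. The sketch below uses an in-memory SQLite table with a hypothetical schema (folder, version, key, validity range, payload reference); the real ATLAS services differ in detail.

```python
# Minimal sketch of an interval-of-validity (IOV) lookup against a hypothetical
# conditions table; the actual ATLAS Conditions Database Service differs in detail.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE conditions (
        folder      TEXT,     -- e.g. 'SCT/Align' (hypothetical folder name)
        version     TEXT,     -- tag of the calibration set
        key         TEXT,     -- channel / object identifier
        valid_from  INTEGER,  -- start of validity (run number or timestamp)
        valid_until INTEGER,  -- end of validity, exclusive
        payload     TEXT      -- reference to the stored object (e.g. a NOVA id)
    )""")
conn.execute("INSERT INTO conditions VALUES ('SCT/Align','v1','module42',1000,2000,'nova:812')")
conn.execute("INSERT INTO conditions VALUES ('SCT/Align','v1','module42',2000,3000,'nova:955')")

def lookup(folder, version, key, time):
    """Return the payload whose validity interval contains `time`, if any."""
    row = conn.execute(
        "SELECT payload FROM conditions "
        "WHERE folder=? AND version=? AND key=? AND valid_from<=? AND valid_until>?",
        (folder, version, key, time, time)).fetchone()
    return row[0] if row else None

print(lookup("SCT/Align", "v1", "module42", 2500))    # -> nova:955
```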

  15. Example: Detector Description • Geometry Modeller, Database, Visualization, Optimization • [Figures: detail from the Barrel Liquid Argon calorimeter (parameterized, 40 kB in memory); detail from the TRT]

  16. A Large International Effort • Grid Projects Directly Related to LHC: GriPhyN, PPDG, iVDGL

  17. Pieces for the LHC Computing Infostructure: GriPhyN, iVDGL, Blue Ribbon Panel on Cyberinfrastructure, ITR Proposals, US LHC S&C, CERN LCG

  18. Universities have an Enormous Impact on R&D&D&D • Inter-agency partnership between the NSF-funded Tier-2 and the DOE-funded Tier-1 efforts addresses a major part of the 24x7 support and Grid services issues • Software and Computing for LHC Discovery requires Research, Development, Deployment, Dissemination • And also Running Facilities and Services

  19. The U.S. LHC Mission is Physics Discovery at the Energy Frontier! • This model takes advantage of the significant strengths of U.S. universities in the area of CS and IT • Draw Strength and Exploit Synergy Between U.S. Universities and National Labs, Software Professionals and Physicists, Computer Scientists and High Energy Physicists • LHC is amongst the first to put a truly distributed “Cyberinfrastructure” in place, spearheading important innovations in how we do science
