Data Management at CERN’s Large Hadron Collider (LHC)
Dirk Düllmann, CERN IT/DB, Switzerland
http://cern.ch/db
http://pool.cern.ch
Outline
• Short Introduction to CERN & LHC
• Data Management Challenges
• The LHC Computing Grid (LCG)
• LCG Data Management Components
• Object Persistency and the POOL Project
• Connecting to the GRID – LCG Replica Location Service
CERN - The European Organisation for Nuclear Research
The European Laboratory for Particle Physics
• Fundamental research in particle physics
• Designs, builds & operates large accelerators
• Financed by 20 European countries (member states) + others (US, Canada, Russia, India, …)
• ~€650M budget - operation + new accelerators
• 2000 staff + 6000 users (researchers) from all over the world
• Next major research project - LHC, starting ~2007
• 4 LHC experiments, each with
  • 2000 physicists, 150 universities, apparatus costing ~€300M, computing ~€250M to set up, ~€60M/year to run
  • 10-15 year lifetime
[Aerial view of the CERN site: the 27 km LHC ring near Geneva, with the airport and the CERN Computer Centre marked]
The LHC machine
• Two counter-circulating proton beams
• Collision energy 7+7 TeV
• 27 km of magnets with a field of 8.4 Tesla
• Super-fluid helium cooled to 1.9 K
• The world's largest superconducting structure
Online system and multi-level trigger
• Filters out background and reduces the data volume from 40 TB/s to 500 MB/s
• Detector output: 40 MHz (40 TB/sec)
• Level 1 - special hardware: reduces to 75 kHz (75 GB/sec)
• Level 2 - embedded processors: reduces to 5 kHz (5 GB/sec)
• Level 3 - PCs: reduces to 100 Hz (500 MB/sec) for data recording & offline analysis
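A rough sanity check (my own arithmetic, not taken from the slides): the overall online reduction is

\[
\frac{40\ \mathrm{TB/s}}{500\ \mathrm{MB/s}} = \frac{4\times 10^{13}\ \mathrm{B/s}}{5\times 10^{8}\ \mathrm{B/s}} \approx 8\times 10^{4},
\qquad
\frac{40\ \mathrm{MHz}}{100\ \mathrm{Hz}} = 4\times 10^{5},
\]

i.e. roughly five orders of magnitude in both recorded bandwidth and event rate.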
LHC Data Challenges
• 4 large experiments, 10-15 year lifetime
• Data rates: 500 MB/s - 1.5 GB/s
• Total data volume: 12-14 PB/year
• Several hundred PB total!
• Analysed by thousands of users world-wide
• Data reduced from "raw data" to "analysis data" in a small number of well-defined steps
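A hedged back-of-the-envelope check (assuming roughly 10^7 seconds of data taking per year, a common rule of thumb that is not stated on the slide):

\[
1\ \mathrm{GB/s} \times 10^{7}\ \mathrm{s/year} = 10\ \mathrm{PB/year},
\]

consistent with the quoted 12-14 PB/year; over a 10-15 year lifetime, and including derived and simulated data, this accumulates to several hundred PB.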
CERN Data Handling and Computation for Physics Analysis
[Diagram: detector data passes through the event filter (selection & reconstruction) to raw data and event summary data; event reprocessing and event simulation feed processed data; batch physics analysis produces analysis objects (extracted by physics topic) for interactive physics analysis. Slide: les.robertson@cern.ch]
Planned capacity evolution at CERN
[Chart: projected CPU, disk and mass-storage capacity at CERN, LHC versus other experiments, compared with Moore's law]
Multi-Tiered Computing Models - Computing Grids
[Diagram: the LHC computing model - CERN at the centre, Tier 1 regional centres (e.g. UK, USA, France, Italy, Germany), Tier 2 centres at labs and universities, Tier 3 physics-department resources and desktops, organised into regional and physics groups. Slide: les.robertson@cern.ch]
LHC Data Models
[Diagram: example object network - an Event pointing to Tracker and Calorimeter objects, with a TrackList and HitList holding many Track and Hit objects]
• LHC data models are complex!
  • Typically hundreds (500-1000) of structure types (classes in OO)
  • Many relations between them
  • Different access patterns
• LHC experiments rely on OO technology
  • OO applications deal with networks of objects
  • Pointers (or references) are used to describe inter-object relations
• Need to support this navigational model in our data store (a minimal code sketch follows below)
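To make the navigational model concrete, here is a minimal C++ sketch of such an object network. All class and member names (Event, Tracker, Track, Hit, firstHitX) are invented for illustration and are not the actual experiment classes:

#include <vector>

// Illustrative event-data classes; real LHC models have hundreds of such types.
struct Hit {
    double x, y, z;                     // measured position
};

struct Track {
    std::vector<const Hit*> hits;       // references to the hits the track was fitted from
    double momentum = 0.0;
};

struct Tracker {
    std::vector<Hit>   hits;            // owned hit objects
    std::vector<Track> tracks;          // tracks referring back to those hits
};

struct Event {
    Tracker tracker;                    // a real Event would also hold calorimeter data, etc.
};

// Navigational access: follow references from the event down to individual hits.
double firstHitX(const Event& evt) {
    if (evt.tracker.tracks.empty() || evt.tracker.tracks.front().hits.empty())
        return 0.0;
    return evt.tracker.tracks.front().hits.front()->x;
}

A persistent store for this model must be able to write and later re-resolve exactly these inter-object references, across files and storage technologies.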
What is POOL?
• POOL is the common persistency framework for physics applications at the LHC
  • Pool Of persistent Objects for LHC
• Hybrid store - object streaming & relational database
  • e.g. ROOT I/O for object streaming: complex data, simple consistency model (write once)
  • e.g. an RDBMS for consistent metadata handling: simple data, transactional consistency
• Initiated in April 2002
  • Ramped up over the last year from 1.5 FTE to ~10 FTE
• Common effort between the LHC experiments and the CERN Database group
  • covering project scope, architecture and development
  • => rapid feedback cycles between the project and its users
• First larger data productions starting now!
Component Architecture
• POOL (like most other LCG software) is based on a strict component software approach
• Components provide technology-neutral APIs
  • Communicate with other components only via abstract component interfaces
• Goal: insulate the very large experiment software systems from the concrete implementation details and technologies used today
  • POOL user code does not depend on any implementation libraries
  • No link-time dependency on any implementation packages (e.g. MySQL, ROOT, Xerces-C)
  • Component implementations are loaded at runtime via a plug-in infrastructure (see the sketch below)
• The POOL framework consists of three major, weakly coupled domains
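A minimal sketch of what such a plug-in mechanism can look like in C++, using POSIX dlopen; the interface name IStorageSvc, the factory symbol createStorageSvc and the loading code are illustrative assumptions, not POOL's actual plug-in infrastructure:

#include <dlfcn.h>
#include <memory>
#include <stdexcept>
#include <string>

// Illustrative technology-neutral interface; the real POOL interfaces differ.
class IStorageSvc {
public:
    virtual ~IStorageSvc() = default;
    virtual void write(const std::string& container, const void* object) = 0;
};

// Client code depends only on the abstract interface. The concrete
// implementation (ROOT-based, RDBMS-based, ...) is loaded as a plug-in
// at runtime, so there is no link-time dependency on it.
std::unique_ptr<IStorageSvc> loadStorageSvc(const std::string& pluginPath) {
    void* handle = dlopen(pluginPath.c_str(), RTLD_NOW);
    if (!handle) throw std::runtime_error("cannot load plug-in: " + pluginPath);

    // The plug-in is assumed to export a factory function with a known C signature.
    using Factory = IStorageSvc* (*)();
    auto create = reinterpret_cast<Factory>(dlsym(handle, "createStorageSvc"));
    if (!create) throw std::runtime_error("plug-in has no factory symbol");

    return std::unique_ptr<IStorageSvc>(create());
}

Swapping storage technologies then only means pointing the loader at a different shared library, without recompiling or relinking the experiment code.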
POOL Components
[Diagram: the major POOL components and their relationships]
POOL Generic Storage Hierarchy
• An application may access databases (e.g. streaming files) from one or more file catalogs
• Each database is structured into containers of one specific technology (e.g. ROOT trees or RDBMS tables)
• POOL provides a "smart pointer" type, pool::Ref<UserClass> (sketched below), to
  • transparently load objects from the back end into a client-side cache
  • define persistent inter-object associations across file or technology boundaries
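A minimal, illustrative sketch of a lazy-loading reference type in the spirit of pool::Ref; the class shown here (Ref, its Loader callback and token string) is an assumption for illustration, not the real POOL implementation:

#include <functional>
#include <memory>
#include <string>

// Illustrative Ref<T>: holds a persistent token (think "file GUID + container +
// object id") and loads the object from the storage back end only when it is
// first dereferenced.
template <class T>
class Ref {
public:
    using Loader = std::function<std::shared_ptr<T>(const std::string& token)>;

    Ref(std::string token, Loader loader)
        : token_(std::move(token)), loader_(std::move(loader)) {}

    const T& operator*() const {
        if (!cached_) cached_ = loader_(token_);   // lazy load into the client-side cache
        return *cached_;
    }
    const T* operator->() const { return &**this; }

    const std::string& token() const { return token_; }   // persists across files/technologies

private:
    std::string token_;                     // persistent, technology-neutral reference
    Loader loader_;                         // pluggable storage back end
    mutable std::shared_ptr<T> cached_;     // transparent client-side object cache
};

Because only the token is stored persistently, an association can point into a different file or even a different storage technology, and the object is fetched on demand when the reference is followed.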
Data Dictionary & Storage
[Diagram: dictionary generation and data I/O - C++ headers (or abstract DDL) are parsed with GCC-XML; a code generator produces LCG dictionary code and, via a gateway, a CINT dictionary; the LCG dictionary provides reflection for the technology-dependent data I/O layer and other clients. An illustrative sketch of such reflection data follows below.]
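As an illustration of what the generated reflection information enables, here is a small C++ sketch of a hand-rolled class description; the structures (ClassInfo, FieldInfo, describeHit) are invented and do not reflect the actual LCG or CINT dictionary formats:

#include <cstddef>
#include <string>
#include <vector>

// Illustrative reflection information, as a dictionary might record it.
struct FieldInfo {
    std::string name;     // data member name
    std::string type;     // type name, e.g. "double" or "std::vector<Hit>"
    std::size_t offset;   // byte offset inside the object
};

struct ClassInfo {
    std::string            name;
    std::size_t            size;
    std::vector<FieldInfo> fields;
};

// With such information, a technology-dependent I/O layer can stream any
// user class generically, without compile-time knowledge of it.
struct Hit { double x, y, z; };

ClassInfo describeHit() {
    return { "Hit", sizeof(Hit),
             { { "x", "double", offsetof(Hit, x) },
               { "y", "double", offsetof(Hit, y) },
               { "z", "double", offsetof(Hit, z) } } };
}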
POOL File Catalog
[Diagram: the file catalog maps logical file names (LFNs) and object lookups, via the file identity and metadata, to physical file names (PFNs) plus their technology]
• Files are referred to inside POOL via a unique and immutable file identifier, generated by the system at file creation time
  • This provides stable inter-file references
• FileIDs are implemented as Globally Unique Identifiers (GUIDs)
  • Allows consistent sets of files with internal references to be created without requiring a central ID allocation service
  • Catalog fragments created independently can later be merged without modification to the corresponding data files (see the sketch below)
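A minimal sketch of a GUID-keyed catalog and of why merging independently created fragments is straightforward; the data structures and the mergeFragment helper are illustrative assumptions, not the actual POOL catalog implementation:

#include <map>
#include <string>
#include <vector>

// Illustrative in-memory file catalog keyed by GUID (the real POOL catalogs
// are XML, MySQL or RLS based).
struct PhysicalFile {
    std::string pfn;          // physical file name
    std::string technology;   // e.g. "ROOT_Tree"
};

struct CatalogEntry {
    std::vector<std::string>  lfns;      // logical names (aliases)
    std::vector<PhysicalFile> replicas;  // physical locations
};

using FileCatalog = std::map<std::string, CatalogEntry>;   // GUID -> entry

// Because GUIDs are globally unique, independently created catalog fragments
// can simply be merged: no identifier ever needs to be renumbered and the
// data files themselves are untouched.
void mergeFragment(FileCatalog& target, const FileCatalog& fragment) {
    for (const auto& [guid, entry] : fragment) {
        auto& dst = target[guid];
        dst.lfns.insert(dst.lfns.end(), entry.lfns.begin(), entry.lfns.end());
        dst.replicas.insert(dst.replicas.end(), entry.replicas.begin(), entry.replicas.end());
    }
}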
EDG Replica Location Services - Basic Functionality
• Each file has a unique GUID; the locations corresponding to the GUID are kept in the Replica Location Service
• Users may assign aliases to the GUIDs; these are kept in the Replica Metadata Catalog
• Files have replicas stored at many Grid sites on Storage Elements
• The Replica Manager provides atomicity for file operations, assuring consistency of Storage Element and catalog contents
• (a minimal sketch of these mappings follows below)
[Diagram: the Replica Manager coordinating the Replica Metadata Catalog, the Replica Location Service and the Storage Elements. Slide: james.casey@cern.ch]
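A minimal data-structure view of the two catalogs, sketched in C++; the ReplicaCatalogs type and resolve function are invented for illustration and are not the EDG client API:

#include <map>
#include <string>
#include <vector>

// Illustrative view of the two mappings:
//   Replica Metadata Catalog : user-visible alias -> GUID
//   Replica Location Service : GUID -> replica locations on Storage Elements
struct ReplicaCatalogs {
    std::map<std::string, std::string>              aliasToGuid;   // Replica Metadata Catalog
    std::map<std::string, std::vector<std::string>> guidToSurls;   // Replica Location Service

    // Resolve a user alias to all physical replicas, roughly what the
    // Replica Manager does on behalf of an application.
    std::vector<std::string> resolve(const std::string& alias) const {
        auto a = aliasToGuid.find(alias);
        if (a == aliasToGuid.end()) return {};
        auto g = guidToSurls.find(a->second);
        return g == guidToSurls.end() ? std::vector<std::string>{} : g->second;
    }
};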
Interactions with other Grid Middleware Components
• Applications and users interface to data through the Replica Manager, either directly or through the Resource Broker
[Diagram: a user interface or worker node and the Resource Broker interact with the Replica Manager, which uses the Replica Metadata Catalog, the Replica Location Service, the Replica Optimization Service, the Virtual Organization Membership Service, the Information Service, SE and network monitors, and the Storage Elements. Slide: james.casey@cern.ch]
RLS Service Goals
• To offer production-quality services for LCG-1, meeting the requirements of forthcoming (and current!) data challenges
  • e.g. CMS PCP/DC04, ALICE PDC-3, ATLAS DC2, LHCb CDC'04
• To provide distribution kits, scripts and documentation to assist other sites in offering production services
• To leverage the many years' experience in running such services at CERN and other institutes
  • Monitoring, backup & recovery, tuning, capacity planning, …
• To understand the experiments' requirements for how these services should be established and extended, and to clarify current limitations
• Not targeting small or medium scale DB applications that need to be run and administered locally (close to the user)
Conclusions
• Data management at the LHC remains a significant challenge because of the data volume, the project lifetime and the complexity of the software and hardware setups
• The LHC Computing Grid (LCG) approach builds on middleware projects such as EDG and Globus and uses a strict component approach for physics application software
• The LCG POOL project has developed a technology-neutral persistency framework which is currently being integrated into the experiments' production systems
• In conjunction with POOL, a data catalog production service is provided to support several upcoming data productions in the hundreds-of-terabytes range
LHC Software Challenges
• Experiment software systems are large and complex
  • Developed by teams of expert developers
  • Permanent evolution and improvement for years…
• Analysis is performed by many end-user developers
  • Often participating only for a short time
  • Usually without a strong computer science background
  • Need a simple and stable software environment
• Need to manage change over a long project lifetime
  • Migration to new software, implementation languages
  • New computing platforms, storage media
  • New computing paradigms???
• The data management system needs to be designed to confine the impact of unavoidable change during the project