
Status and plans for online database setups in ALICE, ATLAS, CMS and LHCb

This mini workshop presents the current status and future plans for the online database setups in ALICE, ATLAS, CMS and LHCb. Topics include hardware ordering, online-to-offline transfer, replication, configuration, the recording of conditions data, and the database technologies in use.



Presentation Transcript


1. Status and plans for online database setups in ALICE, ATLAS, CMS and LHCb
   Database mini workshop, Friday 26.01.07, Frank Glege

2. ALICE status and plans
• Currently ordering hardware for a 6-node, 25 TB RAC to be installed in the ALICE cavern.
• Online-to-offline transfer using custom software (sketched below).
• Redo logs will be stored on the file servers and DB backups should go to Castor.
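The custom transfer software itself is not shown in the slides. Purely as an illustration, a minimal Python sketch of such an online-to-offline copy job could look like the following; the connection strings, the conditions table and its columns are hypothetical:

    # Minimal sketch of a custom online-to-offline transfer job.
    # Connection strings, schema and table names are hypothetical.
    import cx_Oracle

    def transfer_new_conditions(last_transferred_id):
        """Copy conditions rows newer than last_transferred_id
        from the online RAC to the offline database."""
        online = cx_Oracle.connect("reader/secret@alice_online")
        offline = cx_Oracle.connect("writer/secret@alice_offline")
        try:
            src = online.cursor()
            dst = offline.cursor()
            src.execute(
                "SELECT cond_id, channel, since, until, payload "
                "FROM conditions WHERE cond_id > :last_id "
                "ORDER BY cond_id",
                last_id=last_transferred_id)
            rows = src.fetchall()
            if rows:
                dst.executemany(
                    "INSERT INTO conditions "
                    "(cond_id, channel, since, until, payload) "
                    "VALUES (:1, :2, :3, :4, :5)", rows)
                offline.commit()
            # Return the highest transferred id so the next run can resume.
            return rows[-1][0] if rows else last_transferred_id
        finally:
            online.close()
            offline.close()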

3. Databases and technologies used in ATLAS
• Oracle is the primary source of information
  • In ATLAS online as well as ATLAS offline
  • Replication with Oracle Streams
• Online databases implemented with Oracle
  • COOL, plus its POOL payloads, is the most important information channel between ATLAS online and offline (apart from event data...); the interval-of-validity lookup it provides is illustrated below
  • Additional relational databases for the trigger, some detectors and geometry, all accessed through CORAL
  • And of course the PVSS database of the detector control system (DCS)
• Usage of the online databases
  • Configuration of detector front-ends and readout, and of all trigger levels
  • Recording of conditions data from DCS and other sources
• Several specific database technologies are used online to interface online Oracle efficiently with the large number of clients
  • OKS, an in-memory OODB for the basic configuration of all online infrastructure
  • DbProxy, a hierarchical DB caching system mainly used as a fan-out to high-level trigger algorithms, internally using MySQL protocols
  • An offline (Athena) based server for configuring algorithms used in the readout crates of some detectors
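COOL itself is a C++/Python conditions database library; the sketch below is not the COOL API but a minimal plain-Python illustration, with made-up data, of the interval-of-validity (IOV) lookup that a COOL folder provides:

    # Conceptual illustration only: NOT the COOL API, just plain Python
    # showing an interval-of-validity (IOV) lookup for one channel of a folder.
    import bisect
    from collections import namedtuple

    IOV = namedtuple("IOV", "since until payload")  # payload would be a record or POOL reference

    class ConditionsFolder:
        """IOVs for one channel, sorted by 'since' and non-overlapping."""
        def __init__(self, iovs):
            self.iovs = sorted(iovs, key=lambda i: i.since)
            self.starts = [i.since for i in self.iovs]

        def find(self, time):
            """Return the payload valid at 'time', or None if no IOV covers it."""
            pos = bisect.bisect_right(self.starts, time) - 1
            if pos >= 0 and self.iovs[pos].until > time:
                return self.iovs[pos].payload
            return None

    # Example usage with invented calibration payloads:
    folder = ConditionsFolder([IOV(0, 100, "calib_v1"), IOV(100, 500, "calib_v2")])
    print(folder.find(250))   # -> "calib_v2"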

4. Synopsis of Online + Offline DBs in ATLAS
   [Architecture diagram. It shows the online Oracle DB (6-node RAC) on the ATLAS Online Control Network at Point 1, extending to the CERN computer centre; the offline master Oracle DB (6 nodes); and Oracle Streams replication between the two and onwards to the Tier-0 reconstruction replica (Tier-0 farm), roughly ten Tier-1 replicas for offline calibration, and a diagnostics replica. Online clients include dataflow, detectors and HLT farms (via DbProxy), the Athena server, DCS-to-COOL transfer, CORAL, OKS, diagnostics and remote sites.]

5. Status and tests of online DBs in ATLAS
• Servers, and replication online to offline, are in place
  • Online RAC just upgraded to 6 nodes (thanks, IT)
• Recent test of the higher-level trigger
  • Could use O(1000) nodes provided by IT (thanks)
  • Configuration using COOL, the special trigger configuration DB and the geometry DB, all interfaced via DbProxy, plus POOL files
  • Successfully configured and ran all trigger slices on many nodes, both Level 2 and Event Filter, using MC events
• Activities in 2007 involving online DBs
  • Data streaming tests (referring to the RAW data streams) involve online and offline dataflow components, with production of ESD, AOD and TAG data
  • The Final Dress Rehearsal extrapolates this to large scale, with MC input worth ~1 LHC fill
  • Cosmics data taking with the combined ATLAS detectors, producing major volumes of COOL data online, used by offline reconstruction and analysis, leading into 900 GeV running

6. [CMS online/offline database architecture diagram. Detector and online sources (Pixel, Strips, ECAL, HCAL, RPC, DT, ES, CSC, Trigger, DAQ, DCS, LHC data, EMDB, DDD, Logbook) feed the Online Master Data Storage, an ORACLE RAC of 6 nodes with 15 TB written through PVSS and XDAQ, holding configurations and conditions for production, validation and development. A "create rec conDB data set" step, together with calibration and poolification, fills the Offline Reconstruction Conditions DB online subset, which is transferred via ORACLE streams to Bat 513 / Tier 0, where the Offline Reconstruction Conditions DB offline subset holds the master copy of configuration and conditions.]

7. Technologies used online
• DB: ORACLE
• DB interfaces
  • PVSS with its built-in DB access
  • XDAQ (the CMS DAQ SW framework) with its DB interface
  • CMSSW (the CMS offline SW framework) using POOL
  • ORACLE Portal for standard human access
  • SQL*Plus, Benthic, TOra, etc. for experts
• Data transfer
  • Custom applications for "poolification" (sketched below)
  • ORACLE streams for online to offline
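The poolification applications are not shown in the slides. As a conceptual sketch only: the real tools write C++ payload objects into POOL/ROOT files, whereas the hypothetical Python below uses a pickle file as a stand-in for the POOL store, with an invented connection string, table and columns:

    # Conceptual sketch of "poolification": read conditions from the online DB
    # and persist them as a payload keyed by its interval of validity.
    # A pickle file stands in for the real POOL/ROOT store; all names are hypothetical.
    import pickle
    import cx_Oracle

    def poolify_pedestals(run_number, outfile="pedestals.pkl"):
        conn = cx_Oracle.connect("reader/secret@cms_omds")
        try:
            cur = conn.cursor()
            cur.execute(
                "SELECT channel_id, pedestal, noise FROM ecal_pedestals "
                "WHERE run_number = :run", run=run_number)
            payload = {chan: (ped, noise) for chan, ped, noise in cur}
        finally:
            conn.close()
        with open(outfile, "wb") as f:
            pickle.dump({"iov_since": run_number, "payload": payload}, f)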

8. Status and plans
• Currently one DB server (1 TB) online, containing all online DBs
• The DB model was tested successfully in cosmic data taking last August
• The order of a 6-node RAC system has been initiated
• Realistic performance tests are still to be done (one possible starting point is sketched below)
• Installation of the CMS online system has started; DB needs will grow with the system until it is fully installed in autumn
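The slides do not say how the performance tests will be carried out. Purely as an illustration, a first measurement could time bulk inserts against a throwaway table, as in this hypothetical sketch (connection string and table are invented):

    # Illustrative only: time bulk inserts to get a first rows-per-second figure.
    import time
    import cx_Oracle

    def measure_insert_rate(n_rows=100000, batch=1000):
        conn = cx_Oracle.connect("tester/secret@cms_online_test")
        cur = conn.cursor()
        cur.execute("CREATE TABLE perf_test (id NUMBER, payload VARCHAR2(100))")
        rows = [(i, "x" * 100) for i in range(n_rows)]
        start = time.time()
        for i in range(0, n_rows, batch):
            cur.executemany("INSERT INTO perf_test (id, payload) VALUES (:1, :2)",
                            rows[i:i + batch])
        conn.commit()
        elapsed = time.time() - start
        cur.execute("DROP TABLE perf_test")
        conn.close()
        print("%d rows in %.1f s (%.0f rows/s)" % (n_rows, elapsed, n_rows / elapsed))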

9. Databases in LHCb Online
• Foreseen databases
  • PVSS archive (main diagnostic and archiving DB)
  • Configuration database
  • Conditions database (COOL) for reconstruction/physics analysis
    • Extracted from PVSS (one possible extraction is sketched below)
    • Replicated/streamed to/from Tier-0
    • Replicated further to/from Tier-1
  • Others (smaller)
    • Histogram database
    • Run/file database
    • Shift database
    • …
• DBMS
  • Oracle + Oracle RAC
  • Operated at the LHCb pit
• Hardware
  • Part of the LHCb Online Storage System
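How the PVSS-to-COOL extraction will be implemented is not described in the slides. The sketch below only illustrates the idea, collapsing archived DCS readings into intervals of validity; the archive table, its columns and the connection string are hypothetical, and the real extraction would write into COOL rather than return a Python list:

    # Illustrative sketch: turn PVSS-archived DCS values into conditions IOVs.
    # Table, columns and connection string are hypothetical.
    import cx_Oracle

    def extract_hv_conditions(t_start, t_end):
        """Read archived high-voltage readings and collapse consecutive
        identical values into (since, until, value) intervals."""
        conn = cx_Oracle.connect("reader/secret@lhcb_pvss_archive")
        cur = conn.cursor()
        cur.execute(
            "SELECT ts, value FROM dcs_hv_archive "
            "WHERE ts BETWEEN :t0 AND :t1 ORDER BY ts",
            t0=t_start, t1=t_end)
        iovs, current = [], None
        for ts, value in cur:
            if current is None or value != current[2]:
                if current is not None:
                    iovs.append((current[0], ts, current[2]))
                current = (ts, None, value)
        if current is not None:
            iovs.append((current[0], t_end, current[2]))
        conn.close()
        return iovs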

10. Logical view
    [Diagram: the online Oracle DB on the LHCb Controls Network (local to Point 8) hosts the PVSS archive, COOL, the configuration DB and others; Oracle Streams connect it to the offline master Oracle DB in the CERN centre, which serves offline calibration.]

11. Hardware view
    [Diagram relating the online database hardware to the Tier-0 Oracle service and Tier-0 storage.]

12. Service/support from IT
• During installation and setup
  • Advice on setting up
  • Advice on performance tuning
  • Advice on backup and logging strategies
• During operation
  • Interfacing with Oracle
  • Problems, updates, patches, etc.
  • Support with restore operations from backups in case a database is destroyed

13. DB administration
• HW maintenance is covered by the computing people
• Efficient, continuous 24/7 running of an ORACLE RAC system requires 4 to 5 well-trained people
• Currently
  • ALICE: 1 person trained, until next year
  • ATLAS: 2 DBAs
  • CMS: 5 people trained, 1 DBA job opening
  • LHCb: 3 people trained

14. DB administration evolution
• Manpower-intensive periods lie ahead for all experiments and will occur more or less at the same time:
  • Installation and commissioning of the RACs
  • Integration and optimization of the applications
• Steady-state running can be expected towards 2009

15. Summary
• The design of the database models is mostly done in all experiments and tests have been made.
• The purchase of the final hardware is imminent or already done.
• Database administration requires IT support, at least for non-standard operations.
