

The HERA-B database services: detector configuration, calibration, alignment, slow control, data classification. A. Amorim, Vasco Amaral, Umberto Marconi, Tome Pessegueiro, Stefan Steinbeck, Antonio Tome, Vincenzo Vagnoni and Helmut Wolters.


Presentation Transcript


  1. The HERA-B database services: detector configuration, calibration, alignment, slow control, data classification. A. Amorim, Vasco Amaral, Umberto Marconi, Tome Pessegueiro, Stefan Steinbeck, Antonio Tome, Vincenzo Vagnoni and Helmut Wolters. Outline: • The HERA-B detector • The database problem • The Architecture • The Berkeley-DB DBMS • The client/server integration • The domains and solutions • Conclusions and Outlook

  2. The HERA-B Experiment: B/B̄ tagging, B0/B̄0 → J/ψ K_S • Vertex Detector: Si strips, 12 μm resolution • MUON (μ/h): tube, pad and gas pixel chambers • RICH (π/K): multianode PMTs, C4F10 radiator • ECAL (γ+e/h): W/Pb scintillator shashlik • TRD (e/h): straw tubes + thin fibers • HiPt trigger: pad/gas pixel • Magnet: 2 Tm • Tracking: ITR (<20 cm): MSGC-GEM; OTR (>20 cm): 5+10 mm drift cells

  3. The main challenge: Selecting

  4. HERA-B DAQ (block diagram): detector front-end electronics • FCS • 1000 SHARC DSPs • Event control • DSP switches feeding trigger PCs (SLT/TLT) • switch to 4LT PCs and a 4LT logger PC • L2 farm: 240 PCs • L4 farm: 200 PCs

  5. The HERA-B database problem: to provide persistence services (including online-offline replication) to: • Detector configuration (a commonly accepted schema) • Calibration and alignment (distributing information to the reconstruction and trigger farms; associating each event with the corresponding database information) • Slow control (managing updates without data redundancy) • Data set and event classification • Online bookkeeping

  6. Characterizing the context

  7. Keys, objects and client/server • Key = name + version • Machine-independent blob of DATA • /PM/ Descrip. field1; field 2; ... • Db: /RICH/HV/ .2 .5 -.1 56892 ... versions • client/server at the SDB level + RPM, a UDP-based communication package
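The key scheme described on this slide can be sketched as follows. This is an illustrative Python stand-in, not the HERA-B code: the key name `/RICH/HV/` and the sample values come from the slide, while the helper names and the fixed-endian float packing are assumptions chosen to show what "machine-independent blob" means in practice.

```python
import struct

def make_key(name, version):
    """Compose a database key from an object name plus a version number."""
    return f"{name}:{version}".encode("ascii")

def pack_blob(values):
    """Pack floats into a fixed-endian (machine-independent) blob."""
    return struct.pack(f"<{len(values)}f", *values)

def unpack_blob(blob):
    """Recover the float list from a packed blob."""
    n = len(blob) // 4
    return list(struct.unpack(f"<{n}f", blob))

db = {}  # stand-in for the real key/value store
db[make_key("/RICH/HV/", 2)] = pack_blob([0.2, 0.5, -0.1])

restored = unpack_blob(db[make_key("/RICH/HV/", 2)])
```

Because the byte order is fixed in the format string, the same blob decodes identically on any client architecture.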

  8. The Berkeley DB (see http://www.sleepycat.com/) • Embedded transactional store with logging, locking, commit and rollback, disaster recovery • Intended for high-concurrency read/write workloads, transactions and recoverability • Cursors to speed access from many clients • Open Source policy: the license is free for non-commercial purposes, with rather good support • No client/server support is provided
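Berkeley DB itself is a C library and is not sketched here; the usage pattern of an embedded key/value store (open a file, put, get, scan keys) can be illustrated with Python's stdlib `dbm` module as a rough stand-in. The keys reuse the name:version scheme from the previous slide; the file path is invented for the example.

```python
import dbm
import os
import tempfile

# Open (create) an embedded key/value database file -- no server process,
# the library runs inside the client, as with Berkeley DB.
path = os.path.join(tempfile.mkdtemp(), "calib.db")
with dbm.open(path, "c") as db:
    db[b"/RICH/HV/:1"] = b"0.2 0.5 -0.1"
    db[b"/RICH/HV/:2"] = b"0.3 0.6 -0.2"
    # cursor-style scan over all stored keys
    keys = sorted(db.keys())
```

The real Berkeley DB adds what `dbm` lacks: transactions, locking, logging and recovery, and B-tree cursors for ordered range scans.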

  9. Slow Control Interface (diagram): a metadata object references per-parameter data objects (Pmt1000, Pmt1003, Pmt2000); successive updates append new values along the time axis, enabling optimized queries.

  10. Associations to Events (diagram): index objects, referenced by events and created in active servers, carry the event-to-database associations; a revision field distinguishes 0 - online calibrating from 1 - offline • Active server interface • Client/server • Dynamic associations
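One way to picture the index-object mechanism: each event carries a reference to an index object, and the index object resolves each detector name to the revision of the calibration object that applies. A minimal sketch, with all identifiers (`idx-42`, the event ids, the revision numbers) invented for illustration:

```python
# Index objects map detector/calibration names to a revision number.
index_objects = {
    "idx-42": {"/RICH/HV/": 2, "/ECAL/gain/": 5},
}

# Events reference an index object instead of carrying calibration data.
events = [
    {"event_id": 1001, "index": "idx-42"},
    {"event_id": 1002, "index": "idx-42"},
]

def calibration_key(event, name):
    """Resolve which versioned calibration object applies to this event."""
    revision = index_objects[event["index"]][name]
    return f"{name}:{revision}"
```

Because many events share one index object, re-calibrating a run means publishing one new index object rather than touching every event.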

  11. Basic n-n associations (LEDA) • Associations are navigated with iterators • Using hash tables • Keys as OIDs with the scope of classes • Explicitly loaded or saved (as containers) • LEDA object manager (hash-table-implemented associations) • Active server interface • Key objects (referenced by events) • Client/server
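A hash-table-backed n-to-n association navigated with iterators, in the spirit of the LEDA object manager on this slide, can be sketched as two dictionaries (forward and backward). The class and the key strings are illustrative, not the LEDA API:

```python
from collections import defaultdict

class Association:
    """Many-to-many association between key objects, hash-table backed."""

    def __init__(self):
        self.forward = defaultdict(set)   # a -> set of related b
        self.backward = defaultdict(set)  # b -> set of related a

    def relate(self, a, b):
        """Record that a and b are associated (both directions)."""
        self.forward[a].add(b)
        self.backward[b].add(a)

    def targets(self, a):
        """Iterate over objects associated with a."""
        return iter(sorted(self.forward[a]))

    def sources(self, b):
        """Iterate over objects associated with b."""
        return iter(sorted(self.backward[b]))

assoc = Association()
assoc.relate("run-17", "calib-A")
assoc.relate("run-17", "calib-B")
assoc.relate("run-18", "calib-B")
```

Saving such an association "as a container" then amounts to serializing the two hash tables of OID keys.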

  12. GUI for editing and drawing • From R&D: Java, Tcl/Tk, GTK • Reusing and extending widgets • Data hidden from Tcl/Tk • ROOT database binding • Socket: client/server

  13. General Architecture: 10^9 Evt./y

  14. Conclusions • ONLINE: • Large number of clients => gigabytes per update • broadcast simultaneously to the SLT • tree of cache database servers for the 4LT • Correlates (dynamically) each event with the database objects • 600 k SLC parameters using data and update objects • parameter history is re-clustered on the database servers • The online database system has been successfully commissioned • OFFLINE: • Replication mechanism decouples online from offline • also provides incremental backup of the data • TCP/IP gateways and proxies • "data warehousing" for data-set classification -> MySQL • Relation to the event tag under evaluation • Also providing persistency to ROOT objects • Using Open Source external packages has been extremely useful.
