Presentation Transcript


  1. High Level Trigger Online Calibration framework in ALICE CHEP 2007 – Victoria (02.09. – 07.09.2007) Sebastian Robert Bablok (Department of Physics and Technology, University of Bergen, Norway) for the ALICE HLT Collaboration (see last slide for complete list of authors)

  2. Outline • High Level Trigger (HLT) system • HLT interfaces: to the Experiment Control System (ECS), to Offline, to the Detector Control System (DCS), to event visualization (AliEve) • Synchronization timeline • Summary & Outlook

  3. High Level Trigger system • Purpose: online event reconstruction and analysis at a 1 kHz event rate (minimum-bias Pb-Pb, pp) • providing trigger decisions • selection of regions of interest within an event • performance monitoring of the ALICE detectors • lossless compression of event data • online production of calibration data [Diagram: raw event data flows into the HLT; analyzed events / trigger decisions are sent to DAQ and mass storage; ECS, DCS and Trigger are connected to the HLT.]

  4. High Level Trigger system • Hardware architecture: • PC cluster with up to 500 computing nodes • AMD Opterons 2 GHz (dual board, dual core) • 8 GByte RAM • 2 x 1 GBit Ethernet • InfiniBand backbone • infrastructure nodes for maintaining the cluster • 8 TByte AFS file server • dedicated portal nodes for interaction with the other ALICE systems • cluster organization matches the structure of the ALICE detectors

  5. High Level Trigger system • Software architecture: • dynamic software data transport framework (publisher-subscriber; a minimal sketch follows below) • cluster management (TaskManager) • analysis software (AliRoot)
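
The data transport framework follows a publisher-subscriber pattern: analysis components subscribe to the data blocks that upstream components publish, and the framework fans each block out along the processing chain. The following minimal C++ sketch only illustrates that pattern; the class and method names (Publisher, Subscribe, Publish, DataBlock) are invented for this example and are not the actual HLT PubSub framework API.

```cpp
// Minimal publish-subscribe sketch (illustrative only, not the HLT PubSub API).
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct DataBlock {           // stand-in for an HLT data block
  std::string origin;        // e.g. detector or producing component
  std::vector<char> payload; // raw or processed event fragment
};

class Publisher {
 public:
  using Handler = std::function<void(const DataBlock&)>;
  void Subscribe(Handler h) { subscribers_.push_back(std::move(h)); }
  void Publish(const DataBlock& block) {   // fan the block out to all subscribers
    for (auto& h : subscribers_) h(block);
  }
 private:
  std::vector<Handler> subscribers_;
};

int main() {
  Publisher tpcClusters;
  tpcClusters.Subscribe([](const DataBlock& b) {   // e.g. a tracking component
    std::cout << "tracker received " << b.payload.size()
              << " bytes from " << b.origin << "\n";
  });
  tpcClusters.Publish({"TPC", std::vector<char>(1024)});
}
```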

  6. HLT interfaces [Diagram: the HLT cluster and its connections: the ECS-proxy receives control from the ECS (Experiment Control System); the DCS-portal (Pendolino, FED) exchanges DCS values and calculated values with the DCS (Detector Control System); the OFFLINE-portal (Shuttle, Taxi) exchanges calibration data with the OCDB (Offline Conditions DB); event data from the FEE (Front-End Electronics) enters the FEP nodes via DDL; processed events and trigger decisions are sent to the DAQ (Data Acquisition); online event monitoring goes via HOMER to AliEve (ALICE Event Monitor).] FEP = Front-End Processor; DDL = Detector Data Link; HOMER = HLT Online Monitoring Environment including ROOT

  7. HLT ↔ ECS interface • ECS Proxy: • steering of the HLT via the Experiment Control System (ECS) • synchronizes the HLT with the other online and offline systems in ALICE • provides the HLT with the upcoming run setup (run number, type of collision, trigger classes, …) • consists of a state machine (implemented in SMI++) with well-defined states and transition commands, see the sketch below [Diagram: the ECS sends states and transition commands to the ECS-proxy, which controls the HLT cluster via the TaskManager.]
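
For illustration, the sketch below models the ECS-proxy states and transition commands listed on the backup slide (OFF, INITIALIZED, CONFIGURED, READY, RUNNING plus the intermediate states) as a plain C++ transition table. This is a simplified illustration only; the real proxy is an SMI++ state machine, not C++ code.

```cpp
// Simplified sketch of the ECS-proxy state machine (illustrative; the real
// implementation uses SMI++). States and commands follow the backup slide.
#include <iostream>
#include <map>
#include <string>
#include <utility>

enum class State { OFF, INITIALIZED, CONFIGURED, READY, RUNNING };

// Transition table: (current stable state, ECS command) -> next stable state.
// Intermediate states (INITIALIZING, CONFIGURING, ENGAGING, COMPLETING, ...)
// are left implicit; the proxy passes through them via implicit transitions.
static const std::map<std::pair<State, std::string>, State> kTransitions = {
  {{State::OFF,         "INITIALIZE"}, State::INITIALIZED},
  {{State::INITIALIZED, "CONFIGURE"},  State::CONFIGURED},  // carries run parameters
  {{State::CONFIGURED,  "ENGAGE"},     State::READY},
  {{State::READY,       "START"},      State::RUNNING},
  {{State::RUNNING,     "STOP"},       State::READY},       // via COMPLETING
  {{State::READY,       "DISENGAGE"},  State::CONFIGURED},
  {{State::CONFIGURED,  "RESET"},      State::INITIALIZED},
  {{State::INITIALIZED, "SHUTDOWN"},   State::OFF},
};

bool HandleCommand(State& current, const std::string& cmd) {
  auto it = kTransitions.find({current, cmd});
  if (it == kTransitions.end()) return false;  // command not allowed in this state
  current = it->second;
  return true;
}

int main() {
  State s = State::OFF;
  for (const char* cmd : {"INITIALIZE", "CONFIGURE", "ENGAGE", "START", "STOP"})
    std::cout << cmd << (HandleCommand(s, cmd) ? " ok\n" : " rejected\n");
}
```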

  8. OFFLINE → HLT interface (Taxi) • Taxi portal: • fetches the latest available calibration settings from the OCDB • calibration objects pre-produced or calculated in former runs • enveloped in ROOT files (can be used directly by the ALICE analysis framework AliRoot; its CDB access classes give analysis code transparent access whether it runs online or offline, see the sketch below) • caches the calibration data locally in the HLT (HLT Conditions DataBase, HCDB) • retrieval is performed asynchronously to the run • HCDB distributed to the nodes running the Detector Algorithms (DAs) • copied locally to the DA nodes before each run → fixed version per run
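
Transparent access through the AliRoot CDB classes means a detector algorithm reads a calibration object the same way whether the storage behind it is the Grid OCDB or the locally cached HCDB. A minimal sketch of such an access, assuming AliRoot's AliCDBManager/AliCDBEntry interface; the storage URIs and the calibration path are hypothetical examples:

```cpp
// Minimal sketch of transparent condition access via the AliRoot CDB classes.
// Only the storage location changes between online (local HCDB cache) and
// offline (Grid OCDB); the access code is identical. The paths are examples.
#include "AliCDBManager.h"
#include "AliCDBEntry.h"
#include "TObject.h"

TObject* GetCalibrationObject(Int_t runNumber, Bool_t online)
{
  AliCDBManager* man = AliCDBManager::Instance();
  man->SetDefaultStorage(online ? "local:///opt/HLT/HCDB"            // hypothetical HCDB path
                                : "alien://folder=/alice/data/OCDB"); // Grid OCDB
  man->SetRun(runNumber);

  AliCDBEntry* entry = man->Get("TPC/Calib/Temperature");  // example calibration path
  return entry ? entry->GetObject() : 0;
}
```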

  9. OFFLINE → HLT interface (Taxi) [Diagram: the Taxi portal fetches calibration objects from the OFFLINE OCDB into the Taxi-HCDB; the ECS-proxy passes the current run number to the TaskManager, which updates the DA_HCDB copies on the cluster analysis nodes before each run; there the Detector Algorithms (DAs) access the objects through the AliRoot CDB access classes. The boundary between HLT framework components (intern) and detector responsibility (extern) is indicated. DA = Detector Algorithm]

  10. DCS → HLT interface (Pendolino) • Purpose: • requests current run conditions (like temperature, voltages, …) • permanently measured by the Detector Control System (DCS) • frequent requests for the "current" values (three different time intervals) • provides the latest available values to the Detector Algorithms (DAs) • preparation / enveloping of the data • transparent access to the data for the DAs (online and offline); a sketch of the fetch loop follows below
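
The Pendolino idea in a sketch: at a fixed interval, ask the DCS archive for the latest value of each subscribed data point and repack it into the HCDB for the DAs. Everything here (the fetch and store functions, the data-point names) is hypothetical glue code, not the actual Pendolino implementation.

```cpp
// Hypothetical sketch of the Pendolino loop: periodically pull "current"
// condition values from the DCS archive and envelope them for the DAs.
// The fetch/store functions are stubs standing in for the real interfaces.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

double FetchLatestFromDCSArchive(const std::string& dataPoint) {
  return 19.5;  // stub: would query the DCS archive DB for this data point
}

void StoreInHCDB(const std::string& dataPoint, double value) {
  std::cout << "HCDB update: " << dataPoint << " = " << value << "\n";
}

void RunPendolino(const std::vector<std::string>& dataPoints,
                  std::chrono::seconds interval, int cycles) {
  for (int i = 0; i < cycles; ++i) {       // in reality: for the duration of the run
    for (const auto& dp : dataPoints)
      StoreInHCDB(dp, FetchLatestFromDCSArchive(dp));
    std::this_thread::sleep_for(interval); // the three Pendolinos use different intervals
  }
}

int main() {
  RunPendolino({"TPC/Temperature/Sensor042", "TPC/HV/AnodeVoltage"},  // example data points
               std::chrono::seconds(1), 3);
}
```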

  11. DCS → HLT interface (Pendolino) [Diagram: the Pendolino portal pulls condition values from the DCS Archive DB (filled by PVSS), prepares / envelopes them and writes them into the Pendolino HCDB; the DA_HCDB on the cluster analysis nodes is updated frequently during the run for the Detector Algorithms. The boundary between HLT framework components (intern) and detector responsibility (extern) is indicated. DA = Detector Algorithm]

  12. HLT → DCS interface (FED) • FED portal: • collects data produced inside the HLT during the run • writes data back to the DCS (TPC drift velocity, …) • uses the Front-End-Device (FED) API • common among all detectors in ALICE • PVSS panels integrate the data into the DCS; a sketch of the write-back path follows below [Diagram: DAs on the cluster analysis nodes deliver values to the FED-portal, which forwards them through the FED API to PVSS and the DCS Archive DB. DA = Detector Algorithm]
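
A sketch of the write-back path: a value computed online (here a TPC drift velocity) is handed to the FED portal, which publishes it towards the DCS. The interface shown is a hypothetical wrapper; the actual FED API calls, service names and values differ.

```cpp
// Hypothetical sketch of publishing an HLT-computed value back to the DCS
// through the FED portal. PublishToDCS stands in for the real FED API call;
// the service name and value are illustrative examples.
#include <iostream>
#include <string>

void PublishToDCS(const std::string& service, double value) {
  // In the real system this goes through the common FED API and ends up in
  // PVSS / the DCS archive; here we only print it.
  std::cout << "DCS <- " << service << " = " << value << "\n";
}

int main() {
  double driftVelocity = 2.65;  // cm/us, example value computed by a TPC DA
  PublishToDCS("TPC/DriftVelocity", driftVelocity);
}
```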

  13. HLT → OFFLINE interface (Shuttle) • Purpose: • collects the calibration settings calculated online by the DAs inside the HLT • the DAs notify the portal when collecting is finished ("collecting finished") • calibration objects are stored in a File EXchange Server (FXS) • meta data are stored in a MySQL DB (run number, detector, timestamp, …), see the sketch below • the new calibration data are fed back to the OCDB by the OFFLINE Shuttle (synchronisation via ECS) [Diagram: DAs on the cluster nodes deliver objects to the portal-shuttle (FXS-Subscriber); TaskManager and ECS-proxy synchronise with the ECS; the OFFLINE Shuttle reads the FXS and the MySQL DB and writes to the OCDB.]
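
The hand-off to Offline is essentially "file in the FXS plus one metadata row in MySQL". The sketch below shows the kind of record the portal registers for each calibration object; the struct layout, function and values are illustrative, not the actual FXS schema or API.

```cpp
// Illustrative sketch of the metadata the Shuttle portal registers for each
// calibration object placed in the File EXchange Server (FXS). The struct and
// the registration function are hypothetical, not the real schema/API.
#include <cstdint>
#include <iostream>
#include <string>

struct FxsEntry {
  uint32_t    runNumber;   // run the object was produced in
  std::string detector;    // e.g. "TPC"
  std::string fileId;      // identifier the OFFLINE Shuttle asks for
  std::string filePath;    // location of the ROOT file inside the FXS
  uint64_t    timestamp;   // when the object was stored (seconds since epoch)
};

void RegisterInFxsDB(const FxsEntry& e) {
  // In the real portal this becomes an INSERT into the MySQL metadata DB.
  std::cout << "register run=" << e.runNumber << " det=" << e.detector
            << " id=" << e.fileId << " path=" << e.filePath << "\n";
}

int main() {
  RegisterInFxsDB({12345, "TPC", "TPCDriftVelocity",
                   "/fxs/TPC/run12345/drift.root", 1188700000ULL});  // example values
}
```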

  14. HLT → AliEve interface • integrates the HLT into the existing visualization infrastructure of AliEve (M. Tadel) • connection to the HLT from anywhere within the CERN network • ONE visualization infrastructure for all detectors; see the sketch below [Diagram: DAs feed HOMER (HLT Online Monitoring Environment including ROOT), which serves AliEve for the visualization of results.]
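
Monitoring clients such as AliEve connect to the cluster through HOMER and pull reconstructed event blocks over the network. The sketch below only illustrates that pull-style access pattern with a hypothetical reader class; it does not reproduce the actual HOMER classes, and the endpoint is invented.

```cpp
// Hypothetical sketch of the pull-style access a monitoring client (e.g.
// AliEve) has through HOMER: connect to an HLT portal node, fetch the next
// processed event, iterate over its data blocks. All names are illustrative.
#include <iostream>
#include <string>
#include <vector>

struct MonitoringBlock { std::string origin; std::vector<char> data; };

class HomerLikeReader {            // stand-in for the real HOMER reader
 public:
  HomerLikeReader(const std::string& host, int port) {
    std::cout << "connecting to " << host << ":" << port << "\n";
  }
  std::vector<MonitoringBlock> NextEvent() {
    return {{"TPC tracks", std::vector<char>(512)}};  // dummy payload
  }
};

int main() {
  HomerLikeReader reader("hlt-portal.example.cern.ch", 49152);  // hypothetical endpoint
  for (const auto& block : reader.NextEvent())
    std::cout << block.origin << ": " << block.data.size() << " bytes\n";
}
```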

  15. Synchronization timeline • Sequence: [Timeline diagram: phases Before Run, At Start-of-Run (SoR), During Run, After End-of-Run (EoR); the ECS-proxy initializes the cluster and establishes contact with the various other systems in ALICE; the Offline Taxi runs asynchronously before the run; the HCDB is distributed at SoR; during the run the HLT analyzes events / calculates calibration data while DCS-Pendolino, DCS-FED, FEE data, AliEve-HOMER and the outstream to DAQ are active; after EoR data is collected for the Offline Shuttle until Complete.]

  16. Summary & Outlook • The HLT consists of a large computing farm matching the structure of the ALICE detectors. • Interfaces with the other systems of ALICE (online and offline) enable data exchange with the connected parts. • The HLT allows for online detector performance monitoring, physics monitoring and calibration. • With the presented mechanism the HLT will be able to provide all required conditions and calibration settings to the analysis software for first physics in ALICE in 2008.

  17. ALICE HLT Collaboration Sebastian Robert Bablok, Oystein Djuvsland, Kalliopi Kanaki, Joakim Nystrand, Matthias Richter, Dieter Roehrich, Kyrre Skjerdal, Kjetil Ullaland, Gaute Ovrebekk, Dag Larsen, Johan Alme (Department of Physics and Technology, University of Bergen, Norway) Torsten Alt, Volker Lindenstruth, Timm M. Steinbeck, Jochen Thaeder, Udo Kebschull, Stefan Boettger, Sebastian Kalcher, Camilo Lara, Ralf Panse (Kirchhoff Institute of Physics, Ruprecht-Karls-University Heidelberg, Germany) Harald Appelshaeuser, Mateusz Ploskon (Institute for Nuclear Physics, University of Frankfurt, Germany) Haavard Helstrup, Kirstin F. Hetland, Oystein Haaland, Ketil Roed, Torstein Thingnaes (Faculty of Engineering, Bergen University College, Norway) Kenneth Aamodt, Per Thomas Hille, Gunnar Lovhoiden, Bernhard Skaali, Trine Tveter (Department of Physics, University of Oslo, Norway) Indranil Das, Sukalyan Chattopadhyay (Saha Institute of Nuclear Physics, Kolkata, India) Bruce Becker, Corrado Cicalo, Davide Marras, Sabyasachi Siddhanta (I.N.F.N. Sezione di Cagliari, Cittadella Universitaria, Cagliari, Italy) Jean Cleymans, Artur Szostak, Roger Fearick, Gareth de Vaux, Zeblon Vilakazi (UCT-CERN, Department of Physics, University of Cape Town, South Africa)

  18. Backup slides

  19. HLT interfaces • ECS: • controls the HLT via well-defined states (SMI++) • provides general experiment settings (type of collision, run number, …) • DCS: • provides the HLT with current detector parameters (voltages, temperature, …) → Pendolino (DCS → HLT) • provides the DCS with processed data from the HLT (TPC drift velocity, …) → FED-portal (HLT → DCS) • OFFLINE: • interface to fetch calibration settings from the OCDB → Taxi (OFFLINE → HLT) • feeding back new calibration data to the OCDB → Shuttle-portal (HLT → OFFLINE) • HOMER: • HLT interface to AliEve for online (event) monitoring • FEE: • stream-in of raw event data via H-RORCs • DAQ: • stream-out of processed data and trigger decisions to the DAQ-LDCs

  20. Synchronization timeline (1) • Before Run (continuously): • the Taxi requests the latest available calibration settings from the OCDB (asynchronously to the actual run) • At Start-of-Run (SoR), initial settings: • ECS → HLT: initialization of the HLT cluster (run number, beam type, trigger classes, …) • fixing the HCDB for the run, distribution of the HCDB to the DA nodes [Timeline diagram as on slide 15]

  21. Synchronization timeline (2) • During Run (after SoR): • DCS → HLT: • current environment/condition values fetched via the Pendolino • update of the HCDB on the DA nodes with the new DCS data • HLT → DCS: processed, DCS-relevant values are written back via the FED-portal (e.g. TPC drift velocity) • HLT → DAQ: analyzed events go to permanent storage (includes a certain time period after the End-of-Run) • HLT → AliEve: online event monitoring in the ALICE Control Room • After End-of-Run (EoR): • HLT → OFFLINE: • collection of freshly produced calibration objects in the FXS of the Shuttle portal • the OFFLINE Shuttle fetches these objects from the HLT (synchronization via ECS)

  22. ECS Proxy [State diagram of the ECS interface: OFF → (INITIALIZE) → INITIALIZING (intermediate state, preparing the HCDB for the DAs) → INITIALIZED → (CONFIGURE + params) → CONFIGURING (intermediate state) → CONFIGURED → (ENGAGE) → ENGAGING (intermediate state) → READY → (START) → RUNNING (event processing; the Pendolino fetches DCS data for the DAs) → (STOP) → COMPLETING (collecting data for the Shuttle portal; the Offline Shuttle can fetch the data) → READY; (DISENGAGE) → DISENGAGING (intermediate state) → CONFIGURED; (RESET) → INITIALIZED; (SHUTDOWN) → DEINITIALIZING (intermediate state) → OFF. Implicit transitions lead out of the intermediate states.]

  23. [Detailed dataflow diagram: the TAXI portal fetches the OCDB from AliEn into the TAXI-HCDB on the AFS file server and releases the TAXI-HCDB before SoR; the ECS-proxy / TaskManager run an Init-HCDB script that sets links from the TAXI-HCDB (read-only) to the HCDB used by the DAs; during the run the Pendolino portal pulls values from the DCS Archive DB via Amanda, runs them through a prediction processor, updates the Pendolino HCDB, releases the HCDB after each update, notifies PubSub of the new release and ships the update info through the chain to the DAs.]

  24. Introducing the HLT system [Photo: HLT counting room in the ALICE pit]

  25. HLT overview • ALICE data rates (example: TPC) • event selection based on software trigger • efficient data compression • central Pb+Pb collisions • event rates: ~200 Hz (past/future protected) • event sizes: ~75 MByte (after zero suppression) • data rates: ~15 GByte/s • The TPC data rate alone exceeds by far the total DAQ bandwidth of 1.25 GByte/s. The TPC is the largest data source with 570132 channels, 512 timebins and 10-bit ADC values. [Diagram: Detectors → HLT → DAQ → mass storage]
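
The quoted figures are consistent with a simple back-of-the-envelope check; note that the ~365 MByte raw-event estimate below is derived here from the quoted channel count and is not a number from the slide (the ~75 MByte event size is after zero suppression):

```latex
% Raw TPC data volume per event, from the quoted channel count:
\[
  570132\ \text{channels} \times 512\ \text{timebins} \times 10\,\mathrm{bit}
  \approx 2.9\times10^{9}\,\mathrm{bit} \approx 365\,\mathrm{MByte}
\]
% Sustained rate into the HLT with ~75 MByte zero-suppressed events at 200 Hz:
\[
  75\,\mathrm{MByte} \times 200\,\mathrm{Hz} = 15\,\mathrm{GByte/s}
  \;\gg\; 1.25\,\mathrm{GByte/s}\ \text{(total DAQ bandwidth)}
\]
```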

  26. Architecture

  27. Architecture [Diagram: raw event data from the Trigger and the detectors enters the HLT; analyzed events / trigger decisions are sent to the DAQ and mass storage; ECS and DCS are connected to the HLT.]
