HLT – DCS interfaces (ALICE week – DCS workshop, 10-10-2006)
D. Röhrich, M. Richter, S. Bablok (IFT, University of Bergen)
T. Steinbeck, V. Lindenstruth (KIP, University of Heidelberg)
M. Ploskon (IKF, University of Frankfurt)
for the HLT Collaboration
TOC
• HLT interfaces
• HLT → DCS dataflow
• DCS → HLT dataflow
• Open issues
• HLT calibration (use case)
HLT interfaces
• ECS:
  • Controls the HLT via well-defined states (SMI)
  • Provides general experiment settings (type of collision, run number, …)
• FEE:
  • Event data from the Front-End Electronics as direct copies of the DAQ data (via DDLs)
• DCS:
  • Provides current detector parameters (voltages, temperatures, …)
  • Retrieves DCS-related processed data from the HLT (TPC drift velocity, …)
• DAQ:
  • Stores processed and analyzed event data (over DDL connections to the LDCs)
• OFFLINE:
  • Interface to fetch data from the OCDB (OFFLINE → HLT)
  • Provides OFFLINE with calculated calibration data (HLT → OFFLINE)
HLT – Interfaces
[Block diagram: the HLT cluster (FEP nodes, DCS portal, OCDB portal) and its connections — event data from the FEE via DDL, processed events to DAQ, DCS values from PVSS / the Archive DB via the Pendolino, control (run number, …) from ECS, and calibration data exchanged with the OFFLINE Shuttle and the OCDB (conditions).]
DCS portal
[Component diagram of the DCS portal node: on the DCS side the PVSS system, the Archive DB and the Pendolino; on the HLT side the DIM subscriber and Pendolino-PubSub interface (PubSub – FED API [DIM]), the Taxi / Taxi-PubSub interface (PubSub – Pendolino [AliRoot]) with a local cache, and the cluster-internal chain of FEP nodes, RORC publishers, detector algorithms (DAs), the HLT-specific data sink (subscriber), the MySQL DB and the FES. The legend distinguishes detector responsibility, framework components and the HLT components that interface the DCS.]
HLT → DCS dataflow
• Purpose:
  • Storing of DCS-related data in the DCS Archive DB (HLT-cluster monitoring [HLT]; TPC drift velocity, … [detector specific])
• HLT side:
  • One node providing a special PubSub framework component that (partly) implements the FED API (DCS portal node)
• DCS side:
  • Different PVSS panels:
    • A dedicated panel for HLT cluster monitoring
    • Integration of detector-specific processed data into the PVSS panels of the corresponding detector
HLT → DCS dataflow (design model)
[Diagram: the HLT cluster connected via DIM to the DCS PVSS systems of the individual detectors (TPC, TRD, PHOS, …) and to a dedicated HLT panel; ECS, DAQ, TRG and FEE shown for context.]
• DIM – HLT PVSS connection: provides monitored properties of the HLT cluster that can be relevant for the DCS. The receiving PVSS panel belongs only to the HLT and does not correspond to any particular detector. The interface uses the FED API of the DCS.
• DIM – Detector PVSS connection: carries detector-specific processed data from the HLT cluster. It uses DIM connection points to send the data to the PVSS of the corresponding detector (here the TPC). This data can be the TPC drift velocity or other detector-specific HLT data relevant for the DCS Archive DB. Integration into the DCS is performed via the PVSS panels of each participating detector.
HLT → DCS dataflow
• The HLT cluster has a dedicated DCS portal node
• The DCS portal acts as a DIM server
  • DIM channels to the detectors' PVSS panels and to the HLT PVSS panels (DIM clients)
  • Implements a FedServer (DIM server) [partly]:
    • "ConfigureFeeCom" command channel (setting the log level)
    • Service channels (single and grouped service channels, message channel, ACK channel)
  • All located in one DIM-DNS domain
• 2 DCS PCs for the HLT PVSS panels
  • Worker node: PVSS panels to receive the monitored cluster data
    • This node also connects to the Pendolino (vice-versa connection, see below)
  • Operator node: PVSS to watch the data (counting room and outside world)
• HLT cluster-internal data is transported via the PubSub system
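The FedServer channels listed above are ordinary DIM services and commands. The following is a minimal sketch of such a DIM server, assuming the standard DIM C++ API (dis.hxx); the service name, the published drift-velocity value, the command name and the payload format are illustrative assumptions, not the actual HLT DCS-portal implementation.

```cpp
// Minimal DIM-server sketch (not the production DCS-portal code): one service
// publishing a detector-specific value and one command channel, loosely
// modelled on the FedServer channels described above.
#include <dis.hxx>    // DIM server classes: DimService, DimCommand, DimServer
#include <cstdio>
#include <unistd.h>

// Command channel; the name and "C" (string) format are illustrative only.
class ConfigureCommand : public DimCommand {
public:
  ConfigureCommand() : DimCommand("HLT/ConfigureFeeCom", "C") {}
  void commandHandler() override {
    // e.g. interpret the received string as a new log level
    const char *request = getString();
    std::printf("Received configuration request: %s\n", request);
  }
};

int main() {
  float driftVelocity = 0.0f;   // in reality filled by the HLT analysis chain
  DimService velocityService("HLT/TPC/DRIFT_VELOCITY", driftVelocity);
  ConfigureCommand configCmd;

  DimServer::start("HLT-DCS-PORTAL");   // register within the DIM-DNS domain

  while (true) {
    driftVelocity = 2.65f;               // placeholder for a PubSub data-sink value
    velocityService.updateService();     // push the new value to DIM clients (PVSS)
    sleep(10);
  }
  return 0;
}
```

In the real portal the published value would come from the PubSub data sink, and the command and service channels would follow the FED API conventions referred to above.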
HLT → DCS dataflow (hardware + software components)
[Diagram: common detector–DCS integration (over PVSS). The ordinary detector DCS nodes (TPC, TRD, …) connect to the HLT portal in addition to their normal tasks; the HLT cluster (cluster monitoring, online calibration) publishes through the FedServer (DIM) of the HLT DCS portal into one DIM-DNS domain and is received by the FEDClient (PVSS) panels.]
• DCS services of one detector are offered in a single and/or grouped service channel and can be requested by the PVSS of the corresponding detector via DIM.
• 2 HLT DCS nodes (located in the DCS counting room):
  • Worker node: PVSS panels to receive the monitored data
  • Operator node: PVSS panels to watch the data (remotely: counting room and outside world)
• Pub-Sub connections: the connections inside the cluster are based on the PubSub framework.
DCS → HLT dataflow
The HLT needs DCS data (temperatures, current configuration, …) for:
• Online analysis
• Calibration data processing
The required data can be acquired from the DCS Archive DB:
• Retrieval via AMANDA (Pendolino)
  • Requests data in regular time intervals
  • About three Pendolinos with different frequencies are foreseen (three different frequencies, requesting different types of data); see the polling sketch below
• HLT-internal data is distributed via the PubSub framework
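As a rough illustration of the "three Pendolinos with different frequencies", here is a generic polling skeleton. The data-point aliases, intervals and the fetchFromArchive() helper are hypothetical placeholders; the real Pendolinos request their data from the DCS Archive DB through the AMANDA protocol.

```cpp
// Hypothetical polling skeleton; fetchFromArchive() stands in for the actual
// AMANDA client request and is not part of any real API.
#include <chrono>
#include <cstdio>
#include <string>
#include <thread>
#include <vector>

struct PendolinoConfig {
  std::string name;
  std::chrono::seconds interval;
  std::vector<std::string> aliases;   // DCS data-point aliases to request (made up)
};

// Placeholder for the AMANDA request towards the DCS Archive DB.
void fetchFromArchive(const PendolinoConfig &cfg) {
  for (const auto &alias : cfg.aliases)
    std::printf("[%s] requesting %s\n", cfg.name.c_str(), alias.c_str());
}

int main() {
  // Three Pendolinos with different frequencies, as foreseen in the slides.
  std::vector<PendolinoConfig> pendolinos = {
    {"fast",   std::chrono::seconds(30),  {"TPC/TEMPERATURE"}},
    {"normal", std::chrono::seconds(120), {"TPC/HV_SETTINGS"}},
    {"slow",   std::chrono::seconds(600), {"TPC/GAS_COMPOSITION"}},
  };
  // Simplified: each Pendolino polls in its own thread with its own period.
  std::vector<std::thread> workers;
  for (const auto &cfg : pendolinos)
    workers.emplace_back([cfg] {
      while (true) {
        fetchFromArchive(cfg);
        std::this_thread::sleep_for(cfg.interval);
      }
    });
  for (auto &w : workers) w.join();
  return 0;
}
```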
DCS → HLT dataflow (hardware + software components)
[Diagram: the AMANDA Pendolinos on an HLT worker node (wn) send data requests to the DCS Archive via AMANDA (PVSS DataManager) and receive the response; the HLT DCS portal and the rest of the cluster are connected through PubSub. The AMANDA server for the HLT runs on the worker node; the connections inside the cluster are based on the PubSub framework.]
DCS → HLT dataflow
Pendolino details:
• Three different frequencies:
  • Fast Pendolino: 10 s – 1 min
  • Normal Pendolino: 1 min – 5 min
  • Slow Pendolino: over 5 min
• Response time:
  • ~13000 values per second
  • E.g. if a Pendolino needs data for N channels over a period of X seconds and the channels change at a rate of Y Hz (with Y smaller than 1 Hz!), it takes (N·X·Y) / 13000 seconds to read back the data (figure given by Peter Chochula).
• Remark:
  • The requested values can be up to 2 min old (this is the time that can pass until the data is shifted from the DCS PVSS to the DCS Archive DB).
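To make the estimate concrete, a small helper that evaluates the (N·X·Y)/13000 formula quoted above; the example numbers are made up for illustration only.

```cpp
// Illustration of the response-time estimate: time = (N * X * Y) / 13000,
// with N channels, X seconds of history and a change rate Y in Hz (Y < 1 Hz).
#include <cstdio>

double responseTimeSeconds(double channels, double periodSeconds, double rateHz) {
  return (channels * periodSeconds * rateHz) / 13000.0;
}

int main() {
  // e.g. 500 channels, 10 minutes of history, values changing every 10 s (0.1 Hz)
  double t = responseTimeSeconds(500, 600, 0.1);
  std::printf("estimated read-back time: %.1f s\n", t);   // ~2.3 s
  return 0;
}
```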
DCS data → HLT
The detectors need to define which values are required:
DCS data → HLT
Remarks:
• The AMANDA Pendolino can only be used to request data included in the DCS Archive DB
• Requests for values at a higher frequency than ~0.1 Hz need a different connection
• Data from the Condition DB (OCDB) will be requested via the OFFLINE interface
• Requests for large amounts of data will be served via the FES of the DCS
Open issues
• Faster retrieval for certain services / values
  • TPC anode current (required for calculating the space charge; the request frequency of about 100 ms (= 10 Hz) is too fast for the Pendolino)
• HLT ↔ DCS File Exchange Server (FES)
  • Additional (optional) interface to transfer large amounts of data between DCS and HLT
  • Details have to be discussed (will be similar to the OFFLINE FES)
HLT dataflow / use case
• Framework for data exchange
• Initial settings (before Start-of-Run (SoR)):
  • ECS → HLT (over SMI: run number, beam type, mode, etc.)
  • OFFLINE → HLT (run and experiment conditions from the OCDB; local caching)
• During the run (after SoR):
  • Event data from the FEEs (as duplicates of the DAQ LDC data)
  • DCS → HLT (current environment values via the AMANDA Pendolino)
  • HLT → DCS (processed data via DIM/PVSS; e.g. drift velocity)
  • Processed data back to DAQ (also for a certain period after End-of-Run)
• After the run (after End-of-Run (EoR)):
  • HLT → OFFLINE (OFFLINE Shuttle requests over the MySQL DB and the File Exchange Server (FES))
A schematic sketch of this ordering follows below.
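The ordering of these exchanges can be summarised schematically; every function below is a placeholder for one of the interfaces listed above, not a real AliRoot, PubSub or DIM call, and the concurrent in-run activities are shown serialised for brevity.

```cpp
// Schematic run-lifecycle sketch; all functions are placeholders.
#include <cstdio>

void fetchConditionsFromOCDB() { std::printf("OFFLINE -> HLT: cache OCDB conditions\n"); }
void startPendolinos()         { std::printf("DCS -> HLT: start AMANDA Pendolinos\n"); }
void processEvents()           { std::printf("FEE -> HLT -> DAQ: process and ship events\n"); }
void publishToDCS()            { std::printf("HLT -> DCS: publish drift velocity via DIM/PVSS\n"); }
void exposeForShuttle()        { std::printf("HLT -> OFFLINE: expose results via MySQL + FES\n"); }

int main() {
  // Before Start-of-Run: ECS settings arrive via SMI, conditions are cached locally.
  fetchConditionsFromOCDB();
  // After Start-of-Run: event processing and DCS exchange run in parallel (serialised here).
  startPendolinos();
  processEvents();
  publishToDCS();
  // After End-of-Run: the OFFLINE Shuttle collects the results.
  exposeForShuttle();
  return 0;
}
```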
Data flow in the HLT (internal / external)
[Diagram: numbered data flows (1–4b) through the HLT cluster — the OCDB/Taxi path into the local cache (1), the FEE/RORC-publisher and DCS/Pendolino inputs (2a/2b), processing in the detector algorithms (DAs) and PubSub chain (3a/3b), and the outputs towards the DCS portal and towards OFFLINE via MySQL, the FES, the Shuttle, AliEn and the OFFLINE storage (4a/4b).]
Timing diagram
[Diagram: timeline from Init through SoR to EoR across ECS, DAQ, DCS, HLT and OFFLINE, with the pre-processing and SHUTTLE activities after EoR.]
HLT dataflow / remarks
• Goal: the framework components shall be independent of the data
  • Definitions can be changed later without changing the model design or the framework implementation
  • Use of already proven technologies (AMANDA, PVSS, DIM, AliRoot, PubSub framework)
• Detectors / detector algorithms can define the required data later on
  • BUT: they have to make sure that the requested data is available in the connected systems (OCDB, DCS Archive, event data stream (from FEE))
  • They should limit their requests to the actually required amount of data (performance)