Data Acquisition Development at JLAB David Abbott - Jefferson Lab DAQ group
Data Acquisition Status • The DAQ Group now stands at 5 members. • Recent experiments have begun to test the limits of the current CODA distribution (v 2.5). • Aging technologies (software and hardware) are being retired and replaced, both to support the 6 GeV program and to develop for 12 GeV. • We continue to use open standards and to minimize the use of commercial software while maximizing the use of commercial hardware. • We continue to focus on a "migration" path from CODA 2 to CODA 3, starting with the v 2.6 release.
General DAQ Issues… • Front-end hardware is evolving: real-time intelligence is moving from the CPU into FPGAs, and old hardware technologies (e.g. FASTBUS) are no longer commercially supported. • CPU-based readout on a per-event basis limits the maximum accepted L1 trigger rate to ~10 kHz. • The 32-crate limit of the trigger distribution system is nearly reached in Hall B. • Event transport limitations in the current CODA architecture are already visible in moderately complex systems. • Advances in computing platforms and operating systems (multi-core CPUs, more memory, 64-bit systems, etc.) are not being exploited. • Aging software technologies and reliance on third-party packages make code portability and upkeep difficult. • Monitoring and control of large numbers of distributed objects are not handled in a consistent way (too many protocols). • "Slow" controls are only minimally supported.
CODA3 - Requirements/Goals • Pipelined Electronics (FADC, TDC) • Dead-timeless system • Replacement for obsolete electronics • Eliminate large numbers of delay cables • Integrated L1/L2 Trigger and Trigger Distribution System • Support L1 trigger rates up to 200 kHz • Use FADC data for L1 trigger input • Support 100+ crates • Parallel/Staged Event Building • Handle ~100 input data streams • Scalable (>1 GByte/s) aggregate data throughput • L3 Online Farm • Online reduction (up to ×10) of data sent to permanent storage • Integrated Experiment Control • DAQ RunControl + "slow" control/monitoring • Distributed, scalable, and "intelligent"
Current DAQ Projects • Components: • CODA Objects • CODA ROC • CODA EMU (EB/ER/ANA) • Run Control • Software Tools: • cMsg • ET • EVIO • Config and Display GUIs • Hardware: • FADC/F1TDC • Trigger Interface (VME/PCI) • Trigger/Clock Distribution • Commercial Module Support • R&D: • Embedded Linux • Experiment Control • Staged/Parallel Event Building • 200 kHz Trigger/Readout • Clock Distribution • L3 Farm
Front-End Systems • R&D to support fully pipelined crates capable of 200 kHz trigger rates • [Diagram: a VME crate with Flash ADC and F1 TDC pipeline modules; a VME CPU running the CODA ROC (MV6100 PPC, GigE, vxWorks, or GE V7865 Intel, GigE, Linux) reading out at ~160-200 MB/s; and a Trigger Interface (V3) handling the pipeline trigger, event blocking, clock distribution, and event ID/bank info]
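To make the event-blocking idea concrete, here is a minimal C sketch of a single readout cycle that drains a whole block of pipelined events at once instead of fielding one interrupt per trigger; the helper routines (trigger_block_ready, read_pipeline_event, ship_to_event_builder) and the block size are hypothetical stand-ins, not the actual CODA ROC or board-library API.

    #include <stdint.h>
    #include <stddef.h>

    #define BLOCK_SIZE   40        /* events read out per block (illustrative) */
    #define MAX_EV_WORDS 256       /* worst-case payload per event (illustrative) */

    /* Hypothetical board-access helpers -- stand-ins for real driver calls. */
    extern int    trigger_block_ready(void);                       /* full block of triggers queued? */
    extern size_t read_pipeline_event(uint32_t *buf, size_t max);  /* one event from the FADC/TDC pipelines */
    extern void   ship_to_event_builder(const uint32_t *buf, size_t nwords);

    /* One readout cycle: drain BLOCK_SIZE pipelined events in a single pass,
     * amortizing the per-event CPU overhead that limits per-event readout
     * to roughly 10 kHz. */
    void readout_block(void)
    {
        static uint32_t block_buf[BLOCK_SIZE * MAX_EV_WORDS];
        size_t total = 0;

        if (!trigger_block_ready())
            return;                               /* nothing queued yet */

        for (int ev = 0; ev < BLOCK_SIZE; ev++)
            total += read_pipeline_event(block_buf + total, MAX_EV_WORDS);

        ship_to_event_builder(block_buf, total);  /* one transfer per block, not per event */
    }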
VXS - L1 Trigger • Use the VXS high-speed serial backplane (P0) to collect energy-sum and hit data from the FADCs • [Diagram: Flash ADCs pass sums/hits over P0 to switch-slot Sum and Trigger Distribution Modules (VXS), which collect the sums/hits and pass the data to the L1 master; a VME CPU (model TBD; Intel, GigE, Linux) running the CODA ROC reads out the event data over VME; clock and trigger distribution are also delivered to the crate]
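As a rough illustration of the energy-sum trigger input, the C function below forms a pedestal-subtracted sum over a sample window and compares it to a threshold; in the real system this logic lives in FPGA firmware on the FADC and the VXS sum/trigger modules, and the window length, pedestal handling, and threshold here are invented placeholders.

    #include <stdint.h>

    #define WINDOW_SAMPLES 8   /* illustrative sample window */

    /* Return 1 if the pedestal-subtracted sum over the window exceeds the
     * programmable threshold, i.e. the channel contributes a hit/sum to
     * the L1 trigger decision. */
    int l1_energy_sum(const uint16_t samples[WINDOW_SAMPLES],
                      uint32_t pedestal_per_sample, uint32_t threshold)
    {
        uint32_t sum = 0;
        for (int i = 0; i < WINDOW_SAMPLES; i++)
            sum += samples[i];

        uint32_t ped    = pedestal_per_sample * WINDOW_SAMPLES;
        uint32_t energy = (sum > ped) ? sum - ped : 0;

        return energy >= threshold;
    }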
Building a DAQ System • [Diagram: several CODA ROCs feed an EMU process, whose output can go to another EMU, to a file, to an ET system, or to a user process]
Event Distribution • ET provides efficient transport of data for event building and flexible user access. • EMU provides easy configuration and user-specific processing options.
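The flexible user access can be pictured as a non-blocking monitoring consumer that samples events only when one is free, so it never back-pressures the building and recording path. The sketch below uses invented names (ring_open, ring_sample) as stand-ins; the real ET library has its own et_* C API, which is not reproduced here.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct evring evring_t;   /* opaque handle standing in for an ET attachment */

    extern evring_t *ring_open(const char *name);                          /* attach to the shared event ring */
    extern size_t    ring_sample(evring_t *r, uint32_t *buf, size_t max);  /* non-blocking: 0 if no event to spare */

    /* Monitoring consumer: process whatever events the system can spare. */
    void monitor_events(const char *ring_name,
                        void (*process)(const uint32_t *evt, size_t nwords))
    {
        evring_t *r = ring_open(ring_name);
        uint32_t  evt[4096];

        for (;;) {
            size_t n = ring_sample(r, evt, sizeof evt / sizeof evt[0]);
            if (n == 0)
                continue;          /* nothing to spare; builder/recorder keep priority */
            process(evt, n);       /* user-specific processing (histograms, checks, ...) */
        }
    }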
Staged/Parallel Event Building • Divide the total throughput into N streams (1 GB/s -> N streams of x MB/s each). • Two stages: Data Concentration -> Event Building. • Each EMU is a software component running on a separate host.
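A minimal sketch of the routing that keeps the streams independent: if every fragment of a given event block goes to the same second-stage builder, N builders can run in parallel with no cross-talk, and aggregate throughput scales roughly with N. The names and the modulo routing rule below are illustrative assumptions, not the actual EMU implementation.

    #include <stdint.h>

    #define N_STREAMS 4   /* number of parallel building streams (illustrative) */

    extern void send_to_builder(unsigned stream, const uint32_t *frag, uint32_t nwords);

    /* Stage 1 (data concentration): forward each fragment block to the
     * stage-2 builder that owns its event-block number, so complete events
     * are always assembled on a single stream. */
    void concentrate(uint64_t event_block_number, const uint32_t *frag, uint32_t nwords)
    {
        unsigned stream = (unsigned)(event_block_number % N_STREAMS);
        send_to_builder(stream, frag, nwords);
    }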
AFECS - Integrated Experiment Control • FIPA-compliant, Java-based (v 1.5) "intelligent" agents. • Extensions provide runtime "distributed" containers (JVMs). • Agents provide customizable intelligence (a state machine) and communication (cMsg, CA, SNMP, etc.) with external processes. • Many independent "logical" control systems can operate within the platform. • The system is scalable: agents can migrate to JVM containers on different nodes at runtime. • System tested: 3 containers on different hosts with 1000 agents controlling 1000 physical components distributed over 20 other nodes. • ~40% CPU and 200 MB of memory used by each JVM.
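The "customizable intelligence (state machine)" of an agent can be sketched in C as a small transition table over the usual CODA run-control flow (download -> prestart -> go -> end); the table, state names, and structure here are only an illustration of the idea, not the AFECS API.

    #include <stdio.h>
    #include <stddef.h>

    typedef enum { BOOTED, DOWNLOADED, PRESTARTED, ACTIVE } state_t;
    typedef enum { DOWNLOAD, PRESTART, GO, END } transition_t;

    static const struct { state_t from; transition_t on; state_t to; } rules[] = {
        { BOOTED,     DOWNLOAD, DOWNLOADED },
        { DOWNLOADED, PRESTART, PRESTARTED },
        { PRESTARTED, GO,       ACTIVE     },
        { ACTIVE,     END,      DOWNLOADED },
    };

    /* Apply a run-control transition; if it is illegal from the current
     * state, complain and stay put (an agent would report this upward). */
    state_t apply_transition(state_t current, transition_t t)
    {
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (rules[i].from == current && rules[i].on == t)
                return rules[i].to;

        fprintf(stderr, "illegal transition %d from state %d\n", (int)t, (int)current);
        return current;
    }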
Hierarchy of Control • [Diagram: the AFECS platform (Java 1.5+) hosts a grand supervisor over supervisor agents (S) and normative agents (A), which control the physical components - CODA ROCs, CODA EMUs, front-end crates, EPICS IOCs, an EPICS CA gateway, hardware and software triggers, and the online analysis - via IPC and web interfaces]
CODA Evolves • [Diagram: CODA 3 architecture built around AFECS and cMsg]
CODA 2.6 Features • Integrates the new AFECS Run Control system. • cMsg IPC replaces the RC<->(ROC,EB,ER) communication as well as CMLOG message logging. • Support for newer operating systems and compilers • vxWorks 5.5, 6+ • RHEL 4, Solaris 10, OS X • New and updated tools • ET upgraded, 64-bit compliant • Db2cool • EVIO package • Support for new CODA3 objects and components. • Integration of long-standing bug fixes, new driver libraries, and feature enhancements.
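The switch to cMsg IPC amounts to a publish/subscribe pattern: one connection per component carries run-control commands, status reports, and log messages. The sketch below shows that pattern with invented placeholder names (bus_connect, bus_subscribe, bus_publish) and a made-up UDL string; the real cMsg package has its own C and Java APIs with different signatures.

    #include <stdio.h>

    typedef struct bus bus_t;
    typedef void (*handler_t)(const char *subject, const char *type, const char *text);

    /* Placeholder transport layer -- stand-ins, not the cMsg API. */
    extern bus_t *bus_connect(const char *udl, const char *component);
    extern int    bus_subscribe(bus_t *b, const char *subject, const char *type, handler_t h);
    extern int    bus_publish(bus_t *b, const char *subject, const char *type, const char *text);

    static void on_transition(const char *subject, const char *type, const char *text)
    {
        printf("run control -> %s/%s: %s\n", subject, type, text);   /* e.g. prestart, go, end */
    }

    void roc_messaging_example(void)
    {
        bus_t *b = bus_connect("cMsg://rcServer/hallB", "roc17");     /* UDL and names are made up */

        bus_subscribe(b, "roc17", "transition", on_transition);       /* run-control commands in    */
        bus_publish(b, "runcontrol", "status", "roc17 downloaded");   /* status back to run control */
        bus_publish(b, "logger", "info", "replaces CMLOG messages");  /* message logging            */
    }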
Summary • The DAQ Group must support ALL experimental programs at JLAB; the current group must grow by at least 2 FTEs soon to keep to current timelines. • CODA 2.6 is available now and will provide an integration path for CODA 3 technologies. • Much of the DAQ software development depends on custom hardware development in order to satisfy many 12 GeV requirements. • Current DAQ projects reflect the philosophy that we can support the physics of the 12 GeV program through an evolution of the existing, proven system.