This presentation describes the Data Acquisition Backbone Core (DABC) used in the FutureDAQ project, including its components, functionality, and driving forces. It covers the data-flow engine and its control, the event-building network, performance results, and remaining work.
Data Acquisition Backbone Core
J.Adamczewski, H.G.Essel, N.Kurz, S.Linev
GSI, Experiment Electronics, Data Processing group
• Motivation
• Data-flow engine, control
• Event building network
• Performance
• To do
Work supported by EU RP6 project JRA1 FutureDAQ RII3-CT-2004-506078
CBM data acquisition (W.F.J.Müller, 2004)
• Detectors / FEE: deliver time-stamped data; ~50000 FEE chips, ~1000 collectors
• CNet: collects data into buffers; ~1000 active buffers, data dispatchers
• TNet: time distribution
• BNet: sorts time-stamped data; switching network with ~1000 links à 1 GB/s, ~10 dispatchers → subfarm, event dispatchers
• PNet: processes events, level 1 & 2 selection; ~100 subfarms, ~100 processing nodes per subfarm
• HNet: high-level selection; to high-level computing and archiving, ~1 GB/s output
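Taking the numbers in this scheme at face value, a rough estimate (not stated on the slide itself) of the aggregate bandwidth into the event-building network and the implied online reduction is:

\[
B_{\text{in}} \approx 1000 \times 1\,\text{GB/s} = 1\,\text{TB/s},
\qquad
\frac{B_{\text{in}}}{B_{\text{out}}} \approx \frac{1\,\text{TB/s}}{1\,\text{GB/s}} = 1000,
\]

i.e. the level 1/2 and high-level selections together have to reduce the data volume by roughly three orders of magnitude before archiving.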
Use case example: front-end components test
• FE: Frontend board - sampling ADCs, clock distribution
• ABB: Active Buffer board* - PCI Express card in a PC
• 2.5 Gbit/s bi-directional (optical) link between FE and ABB: data, clock
• The goal: detector tests, FEE tests, data-flow tests
* A.Kugel, G.Marcus, W.Gao, Mannheim University Informatics V
Use case example: middle-size setup
• FE: Frontend board - sampling ADCs, clock distribution
• DCB: 8 Data combiner boards (DC), clock distribution to FE; ~625 Mb/s bi-directional (optical) links to the FE: data, clock
• ABB: Active Buffer board (PCI Express card), 4 boards; ~2.5 Gb/s data links from the data combiners
• 8-20 dual/quad PCs, connected by a GE (Gigabit Ethernet) switch and an IB (InfiniBand) switch; MBS: Multi Branch System
• The goal: investigate critical technology, detector tests, replace existing DAQ
• Scales up to 10k channels, 160 CPUs
Driving forces and motivation for DABC
Requirements:
• connect (nearly) any front-ends
• handle triggered or self-triggered front-ends
• process time-stamped data streams
• build events over fast networks
• provide data-flow control (back to the front-ends)
• offer interfaces to plug in application code
• connect MBS readout or collector nodes
• be controllable by several controls frameworks
History:
• 1996 → MBS and its future: 50 installations at GSI, 50 external (http://daq.gsi.de)
• 2004 → EU RP6 project JRA1 FutureDAQ (RII3-CT-2004-506078)
• 2004 → CBM FutureDAQ for FAIR
• 2005 → FOPI DAQ upgrade (skipped)
• 2007 → NUSTAR DAQ intermediate demonstrator
Intended uses: detector tests, FE equipment tests, data transport, time distribution, switched event building, software evaluation, MBS event builder, general-purpose DAQ.
DABC key components
Data-flow engine:
• memory and buffer management
• thread and event management
• data-processing modules with ports, parameters, timers
• transport and device classes, file I/O
• back-pressure mechanism
Slow control and configuration:
• component setup on each node
• parameter monitoring and changing
• command execution
• state machine logic
• user interface
Data flow engine
A module processes data of one or several data streams. Data streams are propagated through ports, which are connected by transports.
[Figure: two DABC modules, each with a process and ports; their ports are connected locally by a transport with a queue]
Data flow engine over the network
A module processes data of one or several data streams. Data streams are propagated through ports, which are connected by transports and devices.
[Figure: two DABC modules on different nodes; their ports are connected by net transports with queues, handled by a device on each node]
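A minimal sketch of how such a module might look in code (class and method names are purely illustrative, not the actual DABC API): a module owns input and output ports, and the framework invokes a processing callback whenever a buffer arrives on an input port.

// Illustrative sketch only -- not the real DABC interface.
#include <memory>
#include <queue>
#include <vector>

struct Buffer { std::vector<char> data; };              // data taken from a memory pool

struct Port {                                            // end point of a transport queue
   std::queue<std::shared_ptr<Buffer>> queue;
   bool CanRecv() const { return !queue.empty(); }
   std::shared_ptr<Buffer> Recv() { auto b = queue.front(); queue.pop(); return b; }
   void Send(std::shared_ptr<Buffer> b) { queue.push(std::move(b)); }
};

class Module {                                           // base class: user code overrides ProcessInput
public:
   Port input;
   Port output;
   virtual void ProcessInput() = 0;
   virtual ~Module() = default;
};

class RepackModule : public Module {                     // example: forward every buffer downstream
public:
   void ProcessInput() override {
      while (input.CanRecv())
         output.Send(input.Recv());                      // only the buffer reference moves
   }
};

int main() {
   RepackModule m;
   m.input.Send(std::make_shared<Buffer>());             // normally done by a transport
   m.ProcessInput();                                      // normally triggered by the event manager
}

In the real framework the transports behind the ports are either local queues (previous slide) or network devices (this slide); the module code stays the same in both cases.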
Memory management
Main features:
• all memory used by modules for transport is organized in memory pools
• a memory pool consists of one or several blocks of memory, divided into equal pieces - the buffers
• each buffer in a memory pool can be referenced once for writing and any number of times for reading
• several references may be combined in a gather list
Use for transport:
• only data from memory pools can be transported via ports
• each module port is associated with exactly one memory pool
• transport between two modules in the same application is done via pointers
• zero-copy network transport where supported (InfiniBand)
• gather lists are supported for all kinds of transports
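The buffer and reference scheme can be pictured with a small sketch (illustrative only, not the DABC classes): a pool hands out equal-size buffers, a buffer is written once, and any number of read references may point to it until the last reference is released and the buffer returns to the pool.

// Illustrative memory-pool sketch: equal-size buffers, one writer, many readers.
#include <cstddef>
#include <memory>
#include <stdexcept>
#include <vector>

class MemoryPool {
public:
   MemoryPool(std::size_t nbuffers, std::size_t bufsize)
      : fBufSize(bufsize), fStorage(nbuffers * bufsize) {
      for (std::size_t i = 0; i < nbuffers; ++i)
         fFree.push_back(fStorage.data() + i * bufsize);
   }

   // Take a buffer for writing; the shared_ptr use count plays the role of the
   // read references, and the custom deleter returns the memory to the pool.
   std::shared_ptr<char> TakeBuffer() {
      if (fFree.empty())
         throw std::runtime_error("pool exhausted -> back pressure");
      char* mem = fFree.back();
      fFree.pop_back();
      return std::shared_ptr<char>(mem, [this](char* p) { fFree.push_back(p); });
   }

   std::size_t BufferSize() const { return fBufSize; }

private:
   std::size_t fBufSize;
   std::vector<char> fStorage;          // one contiguous block, split into equal buffers
   std::vector<char*> fFree;            // buffers currently not referenced
};

int main() {
   MemoryPool pool(10, 65536);          // 10 buffers of 64 kB each
   auto write_ref = pool.TakeBuffer();  // the single writer reference
   auto read_ref1 = write_ref;          // additional read references share the same buffer
   auto read_ref2 = write_ref;          // (a gather list would hold several such references)
}                                       // last reference gone -> buffer goes back to the pool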
Threads and event management
A device (e.g. a socket device) controls several ports (transports). Once a queue buffer is filled, the transport signals a "data ready" event to the event manager, which in turn calls the processInput function of the associated module. Commands are synchronized with the data flow in the same way: the manager thread posts a command event to the event manager, which dispatches it to the module's processCommand.
[Figure: device thread with transport and queue; module thread with DABC modules A and B (processInput, processCommand); manager thread issuing command events; all coupled through the event manager]
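The interplay of transport, event manager and module can be sketched as one event queue served by the module thread (names again illustrative, not the DABC code): transports post "data ready" events, the manager thread posts command events, and both are executed in order on the module thread, which is why commands end up synchronized with the data flow.

// Illustrative event-dispatch sketch: one event queue served by the module thread.
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class EventManager {
public:
   void Fire(std::function<void()> ev) {               // called by transports and the command layer
      { std::lock_guard<std::mutex> lk(fMutex); fEvents.push(std::move(ev)); }
      fCond.notify_one();
   }
   void Stop() { Fire([this] { fRunning = false; }); } // posted like any other event
   void Run() {                                        // executed by the module thread
      fRunning = true;
      while (fRunning) {
         std::unique_lock<std::mutex> lk(fMutex);
         fCond.wait(lk, [this] { return !fEvents.empty(); });
         auto ev = std::move(fEvents.front());
         fEvents.pop();
         lk.unlock();
         ev();                                         // e.g. module->ProcessInput() or a command
      }
   }
private:
   std::mutex fMutex;
   std::condition_variable fCond;
   std::queue<std::function<void()>> fEvents;
   bool fRunning = true;
};

int main() {
   EventManager mgr;
   std::thread module_thread([&] { mgr.Run(); });
   mgr.Fire([] { std::puts("data ready -> processInput"); });      // posted by a transport
   mgr.Fire([] { std::puts("command event -> processCommand"); }); // posted by the manager thread
   mgr.Stop();
   module_thread.join();
}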
Back pressure mechanism
Basic idea:
• the sender is allowed to send packets only after the receiver has confirmed (with a special acknowledge message) that it has enough resources to receive them
• implemented on the transport layer (not visible to the user)
Impact on module code:
• no additional effort is required
• can be enabled/disabled for any port
• to block a connection, simply stop reading packets from it
Pro: easy method for traffic control in a small system.
Con: easy way to block the complete network when a single node is hanging.
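A common way to realize such an acknowledge scheme is credit-based flow control in the transport layer. The sketch below illustrates the idea and is not the DABC implementation: the sender consumes one credit per packet and stops when the credits are exhausted; the receiver returns credits with an acknowledge message once it has freed receive buffers.

// Illustrative credit-based flow control: sender side of a transport.
#include <atomic>

class SenderTransport {
public:
   explicit SenderTransport(int initial_credits) : fCredits(initial_credits) {}

   // Called when a module wants to push a packet to the network.
   bool TrySend(/* const Buffer& buf */) {
      int c = fCredits.load();
      while (c > 0) {
         if (fCredits.compare_exchange_weak(c, c - 1)) {
            // ... actually post the packet to the network here ...
            return true;
         }
      }
      return false;                     // no credits: back pressure propagates to the module
   }

   // Called when an acknowledge message arrives from the receiver,
   // announcing that n receive buffers have been freed.
   void OnAcknowledge(int n) { fCredits.fetch_add(n); }

private:
   std::atomic<int> fCredits;           // packets the receiver has agreed to accept
};

int main() {
   SenderTransport tr(2);               // receiver initially grants two packets
   tr.TrySend();
   tr.TrySend();
   bool blocked = !tr.TrySend();        // true: must wait for an acknowledge
   tr.OnAcknowledge(1);                 // receiver freed one buffer
   blocked = !tr.TrySend();             // false: sending may continue
   (void)blocked;
}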
Class diagram as currently implemented
[Figure: class diagram with Module, Port, PCIboard, Bnet, Socket and InfiniBand classes]
Slow control
Tools for control:
• configuration via XML files
• state machines, Infospace, message/error loggers, monitoring
• communication: web server, SOAP, DIM
• connectivity through DIM to LabView, EPICS, Java, any DIM client/server
• Java GUI with NetBeans (Matisse GUI builder), maybe soon in Eclipse
• front-end controls?
• mix of cooperating control systems
• first LabView and Java GUIs operable
Controls & monitoring communication
[Figure: an XDAQ Executive (process, address space; GRIDCC) contains the XDAQ application with state machine and Infospace* on top of the DABC data flow (modules, command queue); it is reached by a web browser via the web server (SOAP), by a Java GUI via a SOAP client, and by Java, LabView and EPICS GUIs via DIM clients talking to a DIM server]
* Infospace - remotely accessible parameters
LabView-DIM control GUI*
Generic construction of a fixed parameter table from DIM servers.
* Dietrich Beck, EE
Java-DIM control
Generic construction of the GUI from the commands and parameters offered by the DABC DIM servers; applications create and publish the commands and parameters. Rate meters, trending and statistics displays.
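For illustration, a minimal DABC-style DIM server could look as follows. This is a sketch assuming the standard DIM C++ server bindings (dis.hxx); the service and command names are invented here and do not follow the actual DABC naming scheme.

// Minimal DIM server sketch (assumes the standard DIM C++ bindings, dis.hxx).
// Service and command names are invented for illustration only.
#include <dis.hxx>
#include <cstdio>
#include <unistd.h>

class StartCommand : public DimCommand {
public:
   StartCommand() : DimCommand("DABC/EXAMPLE/DoStart", "C") {}      // "C" = string argument
   void commandHandler() override {
      std::printf("received command argument: %s\n", getString());  // sent e.g. by the Java GUI
   }
};

int main() {
   float rate = 0.f;                                        // e.g. an event rate in Hz
   DimService rateService("DABC/EXAMPLE/EventRate", rate);  // publish the parameter
   StartCommand startCmd;                                   // publish the command
   DimServer::start("DABC_EXAMPLE");                        // register with the DIM name server

   for (;;) {
      rate += 1.f;                                          // stand-in for a real measurement
      rateService.updateService();                          // push the new value to all clients
      sleep(1);
   }
}

A generic GUI can then build its parameter table and command buttons from the services and commands the server offers, which is the "generic construction" described above.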
Generic Java DIM GUI controls
Event building network (BNet)
[Figure: the CBM data-acquisition scheme shown above; BNet sorts the time-stamped data over the ~1000-link, 1 GB/s switching network between the data dispatchers of CNet and the event dispatchers of the processing subfarms]
DABC BNet dataflow – bidirectional approach
[Figure: Linux nodes each run a Sender and a Receiver. Front ends (data dispatcher, MBS readout, other) feed the senders, which perform collecting, sorting and tagging; the data travel over GE or IB to the receivers, which perform building, filtering and analysis and write to the archive]
GE: Gigabit Ethernet, IB: InfiniBand
BNet – modules view (6 different modules; * = optional)
• M sender nodes: Readout (input plugin*) → SubeventCombiner → DataSender with N outputs into the N×M network
• N receiver nodes: DataReceiver with M inputs → EventBuilder → EventFilter → Analysis / Storage (plugin*)
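The central routing decision in this N×M scheme is which receiver builds which event (or time slice). A simple illustrative rule, not necessarily the one used in the BNet prototype, is round-robin on the event or time-slice number, so that all M senders ship the subevents of a given event to the same receiver:

// Illustrative N x M event-building routing: all senders ship the subevents of
// event 'event_id' to the same receiver, which can then build the full event.
#include <cstdint>
#include <cstdio>

unsigned ReceiverFor(std::uint64_t event_id, unsigned n_receivers) {
   return static_cast<unsigned>(event_id % n_receivers);   // round-robin by event/time-slice number
}

int main() {
   const unsigned N = 4;                                    // receiver (event builder) nodes
   for (std::uint64_t id = 0; id < 8; ++id)                 // every sender applies the same rule
      std::printf("event %llu -> receiver %u\n",
                  static_cast<unsigned long long>(id), ReceiverFor(id, N));
}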
InfiniBand – testbench for BNet
InfiniBand: reliable, low-latency, zero-copy, high-speed data transport.
Aim: proof of principle as an event-building network candidate for CBM.
Tests last year (point-to-point tests and a BNet prototype at GSI, 4 nodes):
• GSI cluster - 4 nodes, SDR
• Forschungszentrum Karlsruhe* (March 2007) - 23 nodes, DDR
• Uni Mainz** (August 2007) - 110 nodes, DDR
SDR - single data rate (10 Gb/s), DDR - double data rate (up to 20 Gb/s)
* thanks to Frank Schmitz, Ivan Kondov and the Project CampusGrid at FZK
** thanks to Klaus Merle and Markus Tacke at the Zentrum für Datenverarbeitung, Uni Mainz
Scaling of asynchronous traffic
Chaotic (async.) versus scheduled (sync.) traffic
DABC further tasks
Achieved:
• control infrastructure: setup, configuration, communication, a very first Java GUI
• data-flow engine: multi-threading, InfiniBand, sockets (Gigabit Ethernet), first PCI Express board, good performance, back pressure
To do:
• data formats
• error handling/recovery
• MBS event building
• time-stamped data
• final API definitions
• documentation