
Concepts and Technologies used in Contemporary Data Acquisition Systems


Presentation Transcript


  1. Concepts and Technologies used in Contemporary Data Acquisition Systems - Martin L. Purschke, Brookhaven National Laboratory

  2. The Evolution
  • Particle detectors have come a long way
  • Many new detector developments (silicon, TPC, …)
  • but also vintage technology: calorimeters, drift chambers, scintillators, ...
  • Revolutionary changes in readout technology: FPGAs, ASICs, PC farms, high-speed networks
  • 10 MB/s was once considered an insurmountable hurdle
  • It's nostalgic reading TDRs from the '80s. Let's briefly review how it used to be...

  3. Historical Technological Trends, '80s-'90s
  • Minicomputers; CAMAC and NIM electronics emerge as the de-facto standards for instrumentation
  • VME and Fastbus, VAX-VMS; data acquisition hardware built from standard building blocks
  • VME processors with CAMAC and Fastbus; ECL gains acceptance; custom solutions for (by then-standards) high-bandwidth DAQs
  • Single VME crates are no longer big enough: VME Crate Interconnect, VME Taxi, standard networks for the interconnect
  [Timeline spanning 1970-1990]

  4. There is a name... There are many manufacturers (CAEN, …), but this era of physics instrumentation is really associated with… Walter LeCroy.
  • Even today, many modern detector elements and prototypes see their first test using LeCroy electronics
  • "I did it in Blue Logic", referring to the LeCroy blue color
  • Also, a prestigious series of "Electronics for Future Colliders" conferences was held at LeCroy's headquarters in the early '90s

  5. Forward to 2000
  • Board design moves into ASICs and FPGAs: firmware replaces hardware board designs, faster development cycles
  • "The decade of communication": formerly expensive computing gadgets are now a commodity
  • PCs and Linux replace Unix workstations; "the ATX motherboard is the new crate"
  • Networks replace VME and other buses; Gigabit networks become standard
  • Proprietary processor boards get replaced with standard PCs
  • Data rates and event sizes grow dramatically

  6. Let's look back at what we need to accomplish
  A 1990-vintage generic fixed-target experiment: trigger detectors near the target send their signals along the shortest possible path to the trigger, while the detector signals travel through delay cables that buy time for the trigger decision before they reach the DAQ.
  Delays are short and expensive: signals propagate at roughly 2/3 c in coax, so 100 m of cable buys about 500 ns, per channel. The WA80 experiment, for example, had about 160 km of delay cables.

  7. Pipelines replace delay cables
  • The delay cables and their electronics are too expensive, they put unwanted material in the detector (scattering, shadowing), and there is no space in the central region for tens of thousands of cables.
  • A couple of hundred nanoseconds is also no longer enough, so delay cables are no longer an option.
  • What is needed is some kind of storage that buys time for a trigger decision without cables: electronic pipelines.
  • At each bunch crossing, store the signal in the next "memory" location (storage cell). When the trigger decision is ready, after the trigger latency, fetch the stored value that corresponds to that crossing, as sketched below.
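
  To make the mechanism concrete, here is a minimal sketch of such a storage pipeline as a ring buffer in C++. The pipeline depth, the data type, and expressing the trigger latency in crossings are illustrative assumptions, not any experiment's actual front-end design (which lives in ASICs/FPGAs anyway); the sketch only shows the indexing.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

constexpr std::size_t kDepth = 64;   // assumed pipeline depth (storage cells)

class StoragePipeline {
public:
  // Called every bunch crossing: store the sample in the next cell,
  // overwriting whatever is kDepth crossings old.
  void store(uint16_t adc_sample) {
    cells_[write_pos_] = adc_sample;
    write_pos_ = (write_pos_ + 1) % kDepth;
  }

  // Called when the L1 decision arrives: fetch the cell that corresponds to
  // the crossing of interest. latency_in_crossings counts crossings since
  // that sample was stored (1 = most recent, must be <= kDepth).
  uint16_t readout(std::size_t latency_in_crossings) const {
    std::size_t pos = (write_pos_ + kDepth - latency_in_crossings) % kDepth;
    return cells_[pos];
  }

private:
  std::array<uint16_t, kDepth> cells_{};
  std::size_t write_pos_ = 0;
};
```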

  8. Storage Pipelines
  Storage can be either analog ("AMU") or digital ("Flash ADC"):
  • Digital: at each bunch crossing the signal is digitized and stored in digital memory; if selected by the trigger, the proper memory location is picked.
  • Analog: some pulse shaping is applied and the charge of the signal is stored in a capacitor; if selected by the trigger, the capacitor is later connected to an ADC and digitized.
  This is where the (Level-1) trigger comes in. The "40 MHz" refers to the LHC crossing rate; this is taken from CMS.

  9. Analog/AMU Example: PHENIX EmCal
  • 64 AMU cells per channel
  • At each accepted L1, the charge is digitized twice, high gain and low gain, with a 12-bit ADC
  • Another, off-signal cell is also digitized to get a pedestal (sample high/low, pedestal high/low)
  • Drawbacks: subject to noise pickup; the signal decays over time (leakage currents, etc.); potential cell-dependent variations, which can be a nightmare for offline
  • But it is fast enough, and 12-bit FADCs weren't available when PHENIX was designed: a matter of price, power, speed, and space
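
  As a small illustration of the readout arithmetic described above, the sketch below subtracts the pedestal sample and chooses between the high- and low-gain digitizations. The field names, the gain ratio, and the saturation handling are assumptions for illustration, not the actual PHENIX values.

```cpp
// Hypothetical channel readout after an accepted L1: the signal cell and an
// off-signal (pedestal) cell, each digitized with high and low gain (12-bit).
struct ChannelReadout {
  int sample_high, sample_low;   // signal cell, high/low gain
  int ped_high, ped_low;         // off-signal (pedestal) cell, high/low gain
};

// Pedestal-subtracted amplitude, expressed in low-gain ADC counts.
int amplitude(const ChannelReadout& c) {
  const int kAdcMax = 4095;      // 12-bit ADC full scale
  const int kGainRatio = 16;     // assumed high/low gain ratio
  if (c.sample_high < kAdcMax)   // high gain not saturated: better resolution
    return (c.sample_high - c.ped_high + kGainRatio / 2) / kGainRatio;
  return c.sample_low - c.ped_low;
}
```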

  10. Types of Pipelines - CMS

  11. L1-triggered data
  • Needs a very fast trigger decision (within pipeline depth × crossing period)
  • "Keyhole view": L1 doesn't have the scope to look at cross-detector signals
  • Times between crossings keep getting shorter:
  • LEP 22 µs
  • Tevatron 3.5 µs
  • Tevatron Run II 396 ns
  • RHIC 106 ns
  • LHC 25 ns
  • --> The L1 rejection rate is limited (see the back-of-the-envelope sketch below).
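
  A back-of-the-envelope calculation makes the constraint explicit: the L1 decision must arrive before the pipeline cell is overwritten, and the required rejection follows from the ratio of crossing rate to the rate the downstream system can absorb. The pipeline depth and accept rate below are assumed round numbers, not any experiment's specification.

```cpp
#include <cstdio>

int main() {
  const double crossing_period_ns = 25.0;   // LHC-style 25 ns crossings
  const int    pipeline_depth     = 128;    // assumed number of storage cells
  const double l1_output_khz      = 100.0;  // assumed maximum L1 accept rate

  // The trigger decision must arrive within depth x crossing period.
  double latency_budget_us = pipeline_depth * crossing_period_ns / 1000.0;
  // Everything above the downstream rate must be rejected at L1.
  double crossing_rate_mhz = 1000.0 / crossing_period_ns;
  double rejection = crossing_rate_mhz * 1000.0 / l1_output_khz;

  std::printf("L1 latency budget : %.1f us\n", latency_budget_us);   // 3.2 us
  std::printf("required rejection: %.0f : 1\n", rejection);          // 400:1
  return 0;
}
```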

  12. ATLAS Higher-Level Triggers
  We need to filter the amount of data down to a manageable size. The most common setup has 3 levels: L1 constructs Regions of Interest, which L2 looks at; finally, L3 looks at full events to cherry-pick 10% or so of the surviving events. In this 3-level group: CDF II, D0, ATLAS, LHCb, BTeV, BaBar, ...

  13. Another 3-level example: BaBar

  14. The number of trigger levels varies
  • Belle has another "2.5" level: [Diagram: Belle's current event builder - ~1000 COPPER readout boards feed an L2/size-reduction stage, an L2.5 stage, ~10 event-building farms, and L3 before mass storage]
  • ALICE needs 4 levels to achieve enough reduction: [Diagram: ALICE High-Level Trigger - H-RORCs (ReadOut Receiver Cards, FPGA co-processors) receive the TPC sectors over optical links and perform local pattern recognition; ~50 readout PCs do track finding on the sector level; a transfer network feeds ~10 L3 farms for detector/sector merging and track fitting; the trigger decision reduces the data from ~1 GB/s through ~500 MB/s to ~250 MB/s, with event sizes going from 200-300 KB/ev to ~100 KB/ev]

  15. ATLAS vs. CMS: CMS has adopted another design
  [Diagram: both chains go detectors -> Lvl-1 -> front-end pipelines -> readout buffers -> switching network -> processor farms; ATLAS inserts a Lvl-2 stage, CMS goes straight to the HLT farm]
  • ATLAS: 3 physical levels, with "Regions of Interest"; CMS: only 2 physical levels
  • Rates: ATLAS 40 MHz -> 100 kHz (Lvl-1) -> ~1 kHz (Lvl-2) -> ~100 Hz (Lvl-3); CMS 40 MHz -> 100 kHz (Lvl-1) -> ~100 Hz (HLT)
  • CMS' switching network needs to perform at much higher throughput (1000 GB/s vs. about 1/100 of that for 3 levels)
  • But the CMS processor farm has the full view of all L1 events

  16. Data archiving rates (all in MB/s, all approximate)
  [Chart: archiving rates for PHENIX, ATLAS, LHCb, CMS, and ALICE - values span roughly ~25 and ~40 through ~100-150 and ~300 (600) up to ~1250, with a line marking the current "scare threshold"]
  Heavy-ion experiments are out at the high end: the multiplicity is high, and the trigger rejection in heavy-ion running is somewhat limited. 400-600 MB/s are not so Sci-Fi these days.

  17. Event building (just to mention it)
  Event fragments from the individual readout sources are combined into fully assembled events. This is where most of the final filtering takes place.
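
  A minimal sketch of the bookkeeping involved, assuming fragments carry an event number and the builder knows how many sources contribute; the class and field names are made up for illustration, and the structure is single-threaded for brevity. In a real event builder the fragments arrive over the network in parallel and incomplete events are timed out; only the assembly logic is shown.

```cpp
#include <cstdint>
#include <map>
#include <vector>

struct Fragment {
  int source_id;                        // which readout source sent it
  std::vector<uint8_t> payload;         // the raw sub-event data
};

class EventBuilder {
public:
  explicit EventBuilder(int n_sources) : n_sources_(n_sources) {}

  // Returns true (and fills *assembled) when the last fragment of an event
  // arrives; until then the fragments are parked under their event number.
  bool add(uint64_t event_number, Fragment frag,
           std::vector<Fragment>* assembled) {
    auto& partial = pending_[event_number];
    partial.push_back(std::move(frag));
    if (static_cast<int>(partial.size()) < n_sources_) return false;
    *assembled = std::move(partial);    // complete: hand on to filtering
    pending_.erase(event_number);
    return true;
  }

private:
  int n_sources_;
  std::map<uint64_t, std::vector<Fragment>> pending_;
};
```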

  18. What's at work where?
  • Level 1: done in ASICs, FPGAs, hardwired processors - massive pipelined logic; low-level data transfers (G-Link, S-Link, etc.)
  • Level 2: CPUs on highly specialized hardware (e.g. CDF), commercial VME processor boards (typically VxWorks as OS), standard Linux PCs; standard (Gigabit Ethernet) or specialized networks (e.g. Myrinet)
  • Level 3+: standard Linux processor farms, standard networks
  • Virtually every experiment has a Linux-based processor farm for event building and filtering
  • Some level of merging of offline software with High-Level Trigger software (often not quite straightforward)
  • Linux and C++ have pretty much taken over; Java typically in the GUI/visualization department
  • Networks have largely replaced standard or proprietary buses (now it's the PCI bus…)
  • Triggering and DAQ/event building have, to a large extent, merged

  19. Data Compression (PHENIX)
  An old fantasy of mine… After all your zero-suppression, thresholds, and other data reduction techniques have been applied, one usually finds that the resulting raw data files are still compressible by utilities such as gzip, often dramatically so. Why not also apply compression in the DAQ?
  It used to be too slow - orders of magnitude too slow. After many years, we pulled it off in PHENIX. Typical compression factors are ~50%: a 100 MB buffer shrinks to 50 MB.
  Obviously, you cannot work on files: you want to compress on the fly before writing and uncompress on the fly when reading. We have had such a compressed raw data format for years, but...
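
  A quick way to see the effect is to run a raw-data buffer through zlib's compress2() (the gzip-family routine referred to on the next slide) and look at the size ratio. The stand-in buffer and the choice of Z_BEST_SPEED are placeholders for illustration, not the PHENIX setup.

```cpp
#include <cstdio>
#include <vector>
#include <zlib.h>   // link with -lz

// Compressed size divided by original size, for one raw-data buffer.
double compression_ratio(const std::vector<unsigned char>& raw) {
  uLongf out_len = compressBound(raw.size());
  std::vector<unsigned char> out(out_len);
  int rc = compress2(out.data(), &out_len, raw.data(), raw.size(),
                     Z_BEST_SPEED);              // speed matters in a DAQ
  if (rc != Z_OK) return 1.0;                    // treat failure as "no gain"
  return static_cast<double>(out_len) / raw.size();
}

int main() {
  // Stand-in buffer; in practice this would be a buffer of real raw data.
  std::vector<unsigned char> buffer(100 * 1024 * 1024, 0);
  std::printf("compressed to %.0f%% of the original size\n",
              100.0 * compression_ratio(buffer));
  return 0;
}
```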

  20. Getting up to speed by load sharing
  [Diagram: PHENIX Event Builder - SEBs and ATPs connected through a Gigabit switch, feeding the loggers]
  While used for "slow" data, such as calibrations, compression used to be about 2 orders of magnitude too slow to be usable in the actual DAQ. We replaced the "compress2" algorithm of gzip fame with a faster one from the "LZO" family and gained a factor of 4 (still a factor of 25 off). We then distributed the compression CPU load over the Event Builder CPUs and easily got that factor of 25: the "Assembly and Trigger Processors" (ATPs) compress the output they send. Applying the compression after the trigger sidesteps all issues with compressed data access.
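
  A minimal sketch of the load-sharing idea, assuming each builder node simply compresses whole buffers independently before logging; std::async stands in for the actual distribution over ATP nodes, and zlib is used here in place of the LZO routine.

```cpp
#include <future>
#include <vector>
#include <zlib.h>   // link with -lz

// Compress one buffer; on failure, fall back to passing the raw buffer on.
std::vector<unsigned char> compress_buffer(std::vector<unsigned char> in) {
  uLongf out_len = compressBound(in.size());
  std::vector<unsigned char> out(out_len);
  if (compress2(out.data(), &out_len, in.data(), in.size(),
                Z_BEST_SPEED) == Z_OK)
    out.resize(out_len);
  else
    out = std::move(in);
  return out;
}

// Spread the compression of many buffers over several CPUs, then "log" them.
void compress_and_log(std::vector<std::vector<unsigned char>> buffers) {
  std::vector<std::future<std::vector<unsigned char>>> jobs;
  for (auto& b : buffers)               // one asynchronous task per buffer
    jobs.push_back(std::async(std::launch::async, compress_buffer,
                              std::move(b)));
  for (auto& j : jobs) {
    auto compressed = j.get();          // here: write to mass storage
    (void)compressed;
  }
}
```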

  21. And it worked... At the time, we actually had 1.5 billion events archived; without the compression, it would have been about half that.

  22. Common frameworks??
  • We have heard that the DAQ concepts of many experiments are similar these days, yet the software/firmware is all home-grown and experiment-specific.
  • There have been some attempts to share an offline framework between experiments (e.g. BaBar, CDF; also STAR, PHENIX), but they are still in their infancy or have fizzled. In the online world, this has worked even worse so far.
  • We are not used to this - it's all so special, so experiment-specific. But that was the situation in business software in the '70s: each company wrote its own accounting software on some mainframe, highly specialized… unimaginable these days, where everyone uses something off-the-shelf, SAP or something similar.
  • Current candidates: XDAQ in CMS provides a uniform DAQ "builder" for all test beams, etc. It seems to work OK, but it's not designed to be "CMS-free" for others to use. MIDAS is a smaller-scale generic DAQ framework by Stefan Ritt and many contributors (TRIUMF etc.). NA60 is using ALICE's DATE DAQ framework.
  • Maybe in another decade we will see a consolidation of DAQ frameworks? It would certainly save a lot of work.

  23. Endangered Technologies in HEP
  In large-scale experiments, the better is the enemy of the good… and the cheaper is the enemy of the just-as-good. It is hard to justify an optimal solution if it's 10x as expensive as the second-best that also works; commodity pricing, and also maturity, is hard to beat.
  • ATM: great networking, but definitely not a commodity these days - Gigabit Ethernet won. ATM is just too expensive, both the switches and the NICs, and open-source drivers are not mainstream.
  • SCI: Scalable Coherent Interface. In use in LHCb. Was the Next Big Thing once.
  • Windows: another Next Big Thing in HEP that wasn't. PHENIX had a Windows-based Event Builder… no more. Linux it is.
  • UNIX: there's virtually nothing that doesn't run better (or cheaper) on PC/Linux. Solaris will probably survive for a while as a server and perhaps as a niche development system (e.g. for VxWorks).

  24. And things still going strong... After three decades, the CAMAC standard is still in use, and new readout controllers keep appearing on the market. There is a wealth of existing CAMAC hardware out there in most labs. A lot of VME electronics is also appearing commercially, alongside in-house developments...

  25. Summary
  • FPGA and ASIC designs replace traditional hardware logic - most functionality is now in firmware
  • The decade of the (Gigabit) network: it has replaced the majority of buses (VME, etc.)
  • Linux farms rule! Virtually every experiment has one or more; DAQ, trigger, and filter farms are integral parts of "the online system"
  • C++ is the dominant programming language
  • Typically 3, but also 2 (CMS) or more (ALICE) trigger levels
  • Mix-and-match of online and offline code bases - offline reconstruction code is used in the HLT
  • 500 MB/s archiving rates and 500 GB/s event-builder rates don't really scare us anymore (but what about 1000+?)
  • Data compression techniques can augment standard data reduction techniques (it's worth taking a raw data file and seeing what gzip gives you)
  • Hints of emerging standard DAQ frameworks (MIDAS, XDAQ)
  • Early test setups still use CAMAC and VME - it's still good to know what cdreg and cssa mean, and what an address modifier is
