
Summary Computing and DAQ



Presentation Transcript


  1. Summary Computing and DAQ. Walter F.J. Müller, GSI, Darmstadt. 5th CBM Collaboration Meeting, GSI, March 9-12, 2005

  2. Computing and DAQ Session. Thursday 14:00 – 17:00, Theory Seminar Room
     • Handle 20 PB a year
     • CBM Grid: first steps
     • Controls: not an afterthought this time
     • Network & processing: p-p, p-A at >10^8 interactions/sec

  3. Computing and DAQ Session

  4. slide from D. Rohrich – Data rates
     • Data rates into HLPS (arithmetic spelled out in the sketch below):
       • Open charm: 10 kHz x 168 kbyte = 1.7 Gbyte/sec
       • Low-mass di-lepton pairs: 25 kHz x 84 kbyte = 2.1 Gbyte/sec
     • Data volume per year, with no HLPS action: 10 Pbyte/year
     • ALICE = 10 Pbyte/year: 25% raw, 25% reconstructed, 50% simulated
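A quick back-of-the-envelope check of these numbers. The two stream rates follow directly from the slide; the "implied beam time" at the end is derived here and is not on the slide:

```python
# Sanity check of the HLPS input rates quoted on the slide (illustrative only).
open_charm_rate = 10e3 * 168e3      # 10 kHz * 168 kbyte  -> bytes/sec
dilepton_rate   = 25e3 *  84e3      # 25 kHz *  84 kbyte  -> bytes/sec

total = open_charm_rate + dilepton_rate
print(f"open charm : {open_charm_rate / 1e9:.2f} GB/s")   # ~1.7 GB/s
print(f"di-leptons : {dilepton_rate / 1e9:.2f} GB/s")     # ~2.1 GB/s
print(f"combined   : {total / 1e9:.2f} GB/s")             # ~3.8 GB/s

# Effective running time implied by the quoted 10 Pbyte/year (derived, not on the slide):
year_volume = 10e15                                        # 10 Pbyte in bytes
print(f"implied beam time: {year_volume / total / 1e6:.1f} x 10^6 s")  # ~2.6e6 s
```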

  5. slide from D. Rohrich – Processing concept
     • HLPS' tasks:
       • Event reconstruction with offline quality
       • Sharpen open charm selection criteria, reduce the event rate further
       • Create compressed ESDs
       • Create AODs
     • No offline re-processing:
       • Same amount of CPU time is needed for unpacking and dissemination of data as for reconstruction
       • RAW -> ESD: never
       • ESD -> ESD': only exceptionally

  6. slide from D. Rohrich – Data compression scenarios
     • Lossless data compression:
       • Run-length encoding (standard technique)
       • Entropy coder (Huffman)
       • Lempel-Ziv
     • Lossy data compression:
       • Compress 10-bit ADC values into 8 bits using a logarithmic transfer function (standard technique; see the sketch below)
       • Vector quantization
       • Data modeling
     • Perform all of the above wherever possible
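A minimal sketch of the lossy 10-bit to 8-bit step, assuming a simple log1p transfer function; the actual transfer function used in practice may differ:

```python
import numpy as np

def compress_10to8(adc):
    """Map 10-bit ADC values (0..1023) onto 8 bits (0..255) with a logarithmic
    transfer function, so small amplitudes keep their relative precision."""
    adc = np.asarray(adc, dtype=np.float64)
    return np.rint(255.0 * np.log1p(adc) / np.log1p(1023.0)).astype(np.uint8)

def expand_8to10(code):
    """Approximate inverse, used when unpacking the data again."""
    code = np.asarray(code, dtype=np.float64)
    return np.rint(np.expm1(code / 255.0 * np.log1p(1023.0))).astype(np.uint16)

if __name__ == "__main__":
    raw = np.array([0, 3, 10, 100, 512, 1023])
    packed = compress_10to8(raw)
    print(packed, expand_8to10(packed))   # small amplitudes round-trip almost exactly
```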

  7. slide from D. Rohrich – Offline and online issues
     • Requirements on the software: offline code = online code
     • Emphasis on:
       • Run-time performance
       • Clear interfaces
       • Fault tolerance and error recovery
       • Alignment
       • Calibration
       • "Prussian" programming

  8. slide from D. Rohrich – Storage concept
     • Main challenge of processing heavy-ion data: logistics
     • No archival of raw data
     • Storage of ESDs:
       • Advanced compression techniques: 10-20%
       • Only one pass
     • Multiple versions of AODs

  9. slide from V. Ivanov – Dubna educational and scientific network, Dubna-Grid Project (2004)
     • Participants: Laboratory of Information Technologies, JINR; University "Dubna"; Directorate of the programme for development of the science city Dubna; University of Chicago, USA; University of Lund, Sweden
     • Creation of a Grid testbed on the basis of resources of Dubna scientific and educational establishments, in particular JINR Laboratories, International University "Dubna", secondary schools and other organizations
     • More than 1000 CPUs

  10. slide from K. Schwarz – Summary (middlewares)
     • LCG-2 (GSI, Dubna) – pro: large distribution, support; contra: difficult to set up, no distributed analysis
     • AliEn (GSI, Dubna, Bergen) – pro: in production since 2001; contra: uncertain future, no support
     • Globus 2 (GSI, Dubna, Bergen?) – pro/contra: simple but functioning (no RB, no FC, no support)
     • gLite/GT4 (new on the market) – pro/contra: nobody has production experience yet (gLite)

  11. CBM Grid – Status
     • CBM VO server setup
     • First certificate in work
     • To be used for MC transport production this summer
     • Initial participants: Bergen, Dubna, GSI, ITEP
     • Initial middleware: AliEn (available on all 4 sites, a good workhorse)

  12. ECS (Experiment Control System)
     • Definition of the functionality of ECS and DCS
     • Draft of URD (user requirements document)
     • Constitute ECS working group

  13. FEE – DAQ Interface
     • On the FEE side diversity is inevitable; common interfaces are indispensable
     • FEE modules in the cave connect through a concentrator or read-out controller towards the shack
     • 3 logical interfaces, covered by 3 specs (Time, DCS, DAQ), summarised in the sketch below:
       • Clock and Time (in only)
       • Control / DCS (bidirectional)
       • Hit Data / DAQ (out only)
     • First drafts ready for the fall 2005 CBM TB Meeting
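A compact way to write down the three logical interfaces and their directions. The Python representation and the field names are illustrative only, not a proposed specification; the interface names, directions and the three spec labels are taken from the slide:

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    TO_FEE = "in only"          # clock and time distribution into the front-end
    FROM_FEE = "out only"       # hit data towards DAQ
    BOTH = "bidirectional"      # slow-control / DCS traffic

@dataclass(frozen=True)
class LogicalInterface:
    name: str
    direction: Direction
    spec: str                   # which of the three planned specs covers it

FEE_DAQ_INTERFACES = (
    LogicalInterface("Clock and Time", Direction.TO_FEE,   "Time spec"),
    LogicalInterface("Control",        Direction.BOTH,     "DCS spec"),
    LogicalInterface("Hit Data",       Direction.FROM_FEE, "DAQ spec"),
)

for i in FEE_DAQ_INTERFACES:
    print(f"{i.name:<15} {i.direction.value:<14} {i.spec}")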

  14. slide from H. Essel – BNet structure (block diagram)
     • Currently investigated structure: n = 4 (16x16), with n * (n - 1) / 2 bidirectional inter-switch connections; n x n switches with n - 1 ports towards the other switches (see the connection-count one-liner below)
     • Legend: H = histogrammer, TG = event tagger, HC = histogram collector, BC = scheduler (BNet controller), DD = data dispatcher, ED = event dispatcher; active buffers attach via CNet and PNet
     Hans G. Essel, Sergey Linev: CBM - DAQ BNet
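The n * (n - 1) / 2 figure is simply the number of unordered switch pairs in a full mesh; the n = 10 case below matches the simulation setup on the next slide:

```python
def bnet_bidirectional_links(n_switches: int) -> int:
    """Fully meshed BNet: one bidirectional link per pair of switches."""
    return n_switches * (n_switches - 1) // 2

print(bnet_bidirectional_links(4))    # 6  inter-switch links for n = 4
print(bnet_bidirectional_links(10))   # 45 links for the 10-switch simulation setup
```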

  15. slide from H. Essel – Simulation with SystemC
     • Modules: event generator, data dispatcher (sender), histogram collector, tag generator, BNet controller (schedule), event dispatcher (receiver), transmitter (data rate, latency), switches (buffer capacity, max. package queue, 4K)
     • Running with 10 switches and 100 end nodes
     • Simulation takes 1.5 x 10^5 times longer than the simulated time (see the estimate below)
     • Various statistics (traffic, network load, etc.)
     Hans G. Essel, Sergey Linev: CBM - DAQ BNet
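To get a feeling for what the 1.5 x 10^5 slowdown means in practice; the 1 ms example is added here for illustration and is not from the slides:

```python
SLOWDOWN = 1.5e5   # wall-clock time / simulated time, as quoted on the slide

def wall_clock_seconds(simulated_seconds: float) -> float:
    """Rough wall-clock cost of simulating a given stretch of BNet traffic."""
    return simulated_seconds * SLOWDOWN

print(wall_clock_seconds(1e-3))   # simulating 1 ms of traffic costs roughly 150 s
```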

  16. slide from H. Essel – Some statistics examples (single buffers excluded!)
     [plots shown on the slide are not reproduced in the transcript]
     Hans G. Essel, Sergey Linev: CBM - DAQ BNet

  17. slide from H. Essel – Topics for investigations
     • Event shaping
     • Separate meta-data transfer system
     • Addressing/routing schemes
     • Broadcast
     • Synchronization
     • Determinism
     • Fault tolerance
     • Real test bed
     Hans G. Essel, Sergey Linev: CBM - DAQ BNet

  18. slide from J. Gläß – Overview of Processing Architecture
     • Processing resources:
       • Hardware processors: L1/FPGA
       • Software processors: L1/CPU
       • Active buffers
     • Sub-farm network: PNet
     Joachim Gläß, Univ. Mannheim, Institute of Computer Engineering

  19. slide from J. Gläß – Architecture of R&D Prototype
     [board block diagram: XC2VPX20 FPGA with 2 x ZBT SRAM, 2 x DDR SDRAM, PPC, Flash, RS232, Ethernet, SFP, microcontroller running Linux; zeroXT 10GB SMT connector and backplane]
     • Communication via backplane: 4 boards, all-to-all, different length of traces, up to 10 Gbit/s serial => FR4, Rogers
     • FPGA with MGTs, up to 10 Gbit/s serial => XC2VPX20 (8 x MGT), XC2VPX70 (20 x MGT)
     • Externals: 2 x ZBT SRAM, 2 x DDR SDRAM; for the PPC: Flash, Ethernet, ...
     • Initialization and control: standalone board/system, microcontroller running Linux
     Joachim Gläß, Univ. Mannheim, Institute of Computer Engineering

  20. slide from J. Gläß – Conclusion
     • R&D prototype to learn:
       • physical layer of communication, 2.5 Gbit/s up to 10 Gbit/s
       • chip-to-chip and board-to-board (-> connectors, backplane)
       • PCB layout, impedances
       • PCB material (FR4, Rogers, ...)
     • Next step:
       • communication protocols
       • more resources needed => XC2VPX70?, Virtex4? (availability?)
       • external memories: fast controllers for ZBT and DDR RAM
       • PCB layout, termination, ...
     Joachim Gläß, Univ. Mannheim, Institute of Computer Engineering

  21. DAQ Challenge
     • Incredibly small (unknown) cross-section: pp -> U + X at 90 GeV beam energy
     • Q = 13.4 - 9.5 - 1.0 - 1.0 = 1.9 GeV (near threshold)
     • What is the theoretical limit for the hardware and DAQ?
     • How can one improve the sensitivity by a clever algorithm?
     • More questions than answers.

  22. Algorithms
     • Performance of L1 feature extraction algorithms is essential; critical in CBM: STS tracking + vertex reconstruction, TRD tracking and PID
     • Look for algorithms which allow massively parallel implementation:
       • Hough transform tracker: needs lots of bit-level operations, well suited for FPGA (toy sketch below)
       • Cellular automaton tracker
     • Co-develop tracking detectors and tracking algorithms
     • L1 tracking is necessarily speed optimized (>10^9 tracks/sec) -> possibly more detector granularity and redundancy needed
     • Aim for CBM: validate the final hardware design with at least 2 trackers suitable for L1
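For illustration only, a toy 2-D Hough transform for straight-line track candidates. CBM's L1 trackers work on real detector geometry in a magnetic field, so this is just to show why the voting step, being pure integer arithmetic over an accumulator, maps well onto FPGA hardware:

```python
import numpy as np

def hough_vote(hits, n_theta=64, n_rho=64, rho_max=50.0):
    """Toy Hough transform: each (x, y) hit votes for all (theta, rho) lines
    it could lie on; peaks in the accumulator are straight-track candidates."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=np.uint32)
    for x, y in hits:
        rho = x * np.cos(thetas) + y * np.sin(thetas)                 # one rho per theta bin
        rbin = np.rint((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (rbin >= 0) & (rbin < n_rho)
        acc[np.nonzero(ok)[0], rbin[ok]] += 1                          # integer accumulator update
    return acc, thetas

if __name__ == "__main__":
    # Hits roughly on the line y = 0.5 * x + 2, plus two noise hits.
    xs = np.arange(0, 20, dtype=float)
    hits = list(zip(xs, 0.5 * xs + 2.0)) + [(3.0, 17.0), (12.0, 1.0)]
    acc, thetas = hough_vote(hits)
    t, r = np.unravel_index(acc.argmax(), acc.shape)
    print(f"best line candidate: theta bin {t}, rho bin {r}, votes {acc[t, r]}")
```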
