
One scenario for the CBM Trigger Architecture



Presentation Transcript


  1. One scenario for the CBM Trigger Architecture. Ivan Kisel, Kirchhoff-Institut für Physik, Uni-Heidelberg (KIP). FutureDAQ Workshop, München, March 25-26, 2004.

  2. Particle Multiplicities and Data Rates

  Multiplicities for a central Au+Au collision at 25 A GeV (UrQMD), not including background and link overhead:

     n_char   n_neut
     980      1080
     680      700
     1000     700

  Data rates for 10^7 interactions/sec:

     Sub-system   GByte/sec
     STS          60
     RICH         90
     TRD          130
     RPC          10
     ECAL         50
     Total        350

  (W. Müller)
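As a quick sanity check (derived here, not from the slide), the itemized sub-system rates can be related to the quoted total and the implied average event size in a few lines of Python. Note that the itemized rows sum to 340 GByte/sec; the remaining ~10 GByte/sec of the quoted 350 presumably comes from contributions not itemized above.

```python
# Sanity check on the data rates quoted on the slide
# (Au+Au 25 A GeV, 10^7 interactions/sec).
rates_gb_s = {"STS": 60, "RICH": 90, "TRD": 130, "RPC": 10, "ECAL": 50}

listed_sum = sum(rates_gb_s.values())
print(listed_sum)  # 340 (the slide quotes 350 GByte/sec in total)

# Average event size implied by the quoted total rate:
total_gb_s = 350
interactions_per_s = 1e7
event_size_kb = total_gb_s * 1e9 / interactions_per_s / 1e3
print(event_size_kb)  # 35.0 kB per interaction
```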

  3. DAQ-Trigger Scenario (from the LoI)

     Hit Level Processing                  bandwidth 1 TByte/sec
     Local Level Processing                buffering; event association
     Regional Level Pre-Processing         3-10 MHz event rate
     First Level Trigger (FPGA, DSP, PC)   300 kHz event rate
     Second Level Trigger (PC farm)        20 kHz event rate; bandwidth 1 GByte/sec

  (W. Müller)
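The event rates above imply the rejection factor each trigger level must deliver; a small derived check (taking 10 MHz as the regional input rate):

```python
# Rejection factors implied by the event rates in the LoI scenario.
rates_hz = {"regional": 10e6, "L1": 300e3, "L2": 20e3}

l1_rejection = rates_hz["regional"] / rates_hz["L1"]
l2_rejection = rates_hz["L1"] / rates_hz["L2"]
print(round(l1_rejection, 1))  # 33.3 at the first level trigger
print(l2_rejection)            # 15.0 at the second level trigger
print(rates_hz["regional"] / rates_hz["L2"])  # 500.0 overall
```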

  4. Modular Structure of DAQ

  Each detector (MAPS, STS, RICH, TRD, ECAL) is read out through Readout Units (RU) at 50 kB/ev and 10^7 ev/s. A Time-Slice Builder Network (N x M) assembles 100 ev/slice (5 MB/slice, 10^5 slices/s), and a Scheduler directs each time slice, with the data of all detectors (SFn Dt), to an available sub-farm of the Trigger/Offline PC farm. A Farm Control System supervises the sub-farms.
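The time-slice building step can be sketched as grouping the event stream into fixed-size slices; a minimal illustration (the helper name `build_slices` is hypothetical), together with a consistency check of the quoted figures:

```python
# Hypothetical sketch of time-slice building: group a stream of events
# into fixed-size slices, as on the slide (100 ev/slice, 50 kB/ev).
EVENTS_PER_SLICE = 100
EVENT_SIZE_KB = 50

def build_slices(event_ids):
    """Group consecutive event ids into time slices of 100 events."""
    return [event_ids[i:i + EVENTS_PER_SLICE]
            for i in range(0, len(event_ids), EVENTS_PER_SLICE)]

slices = build_slices(list(range(1000)))
print(len(slices))  # 10 slices from 1000 events

# Consistency with the quoted figures: 10^7 ev/s at 100 ev/slice
# gives 10^5 slices/s; 100 x 50 kB = 5 MB/slice.
print(1e7 / EVENTS_PER_SLICE)                  # 100000.0 slices/s
print(EVENTS_PER_SLICE * EVENT_SIZE_KB / 1000) # 5.0 MB/slice
```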

  5. Scheduled Data Transfer

  The scheduler core applies its scheduling discipline between the event source (IN) and the transmitters (TX/OUT) to destinations 1, 2, 3, ... For each new event entry it leaves the idle state, obtains a destination, and produces a tag. In this way the Scheduler assigns time slices to sub-farms. (Deyan Atanasov, Scheduled Data Transfer, CBM DAQ, HD 18.03.04)
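The idle / obtain-destination / produce-tag cycle can be sketched as follows. This is a minimal illustration, not the actual design: the class and method names are hypothetical, and round-robin stands in for whatever scheduling discipline the core really implements.

```python
# Minimal sketch of the scheduler cycle described on the slide:
# for each new time-slice entry, obtain a destination sub-farm and
# produce a tag that routes the slice there.
from itertools import count

class Scheduler:
    def __init__(self, destinations):
        self.destinations = list(destinations)  # available sub-farms
        self.next_dest = 0                      # round-robin cursor
        self.tags = count()                     # monotonically increasing tags

    def schedule(self, time_slice):
        """Obtain a destination and produce a tag for one time slice."""
        dest = self.destinations[self.next_dest]
        self.next_dest = (self.next_dest + 1) % len(self.destinations)
        return {"slice": time_slice, "dest": dest, "tag": next(self.tags)}

sched = Scheduler(["SF1", "SF2", "SF3"])
assignments = [sched.schedule(s) for s in range(5)]
print([a["dest"] for a in assignments])  # ['SF1', 'SF2', 'SF3', 'SF1', 'SF2']
```

In the real system the destination choice would also consult sub-farm availability ("SFn available" on slide 4) rather than a fixed rotation.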

  6. FCS - Farm Control System

  • Cluster Interface Agent (CIA): every node can be (re)configured and turned on/off remotely via Ethernet. It can save hardware components such as display cards and floppy disk drives, and it monitors the whole cluster, every host regardless of its current state, with additional monitoring possibilities independent of the operating system.
  • The farm implements its own control network.
  • Control over every node is done via a redundant hardware unit attached to the node (on the host's PCI bus).
  • The interface between the FCS and the Experiment Control System (ECS) is done via a dedicated node.

  (Ralf Panse, Farm Control System, CBM DAQ, HD 18.03.04)

  7. Distributed Local Mass Storage

  At 1 MB/sec/PC and 10 TB/PC: 10 TB/PC = 10,000,000 MB/PC, so 10,000,000 sec ≈ 120 days ≈ 4 months of data taking. Data taking: 3 months. Data analysis: 9 months. Data reduction: 1000:1 and 100:1. (Arne Wiebalck, ClusterRAID, CBM DAQ, HD 18.03.04; Lord Hess, ClusterRAID1 Prototype, CBM DAQ, HD 18.03.04)
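The slide's storage arithmetic, written out explicitly:

```python
# Local mass storage budget per PC, as quoted on the slide.
disk_tb_per_pc = 10   # 10 TB of local disk per PC
rate_mb_per_s = 1     # 1 MB/sec written per PC

seconds = disk_tb_per_pc * 1_000_000 / rate_mb_per_s  # 10 TB = 10^7 MB
days = seconds / 86_400
print(int(seconds))  # 10000000 seconds
print(round(days))   # 116 days, i.e. the ~4 months quoted (slide rounds to 120)
```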

  8. PC Sub-Farm

  Each sub-farm combines FPGA boards with PCs; input data arrive via the Scheduler, and the Farm Control System supervises the sub-farms. Properties: various CPU power; shared PCs; reconfigurable; fault tolerant; offline runs in the background.

  9. FPGA: Pre-process/L1

  Each FPGA board buffers data from the network (via the NIC, 500 MB/s/FPGA in and out) in free/busy memory buffers holding local time slices 1-100, each with the MAPS, STS, RICH, TRD and ECAL data of that slice. Local memory/DSP units (LM/DSP) pre-process slices i, i+1, ... and pass them to the Pre-process/L1 PCs at 10,000 ev/s (100 µs/ev).

  10. CPU: L2/Offline

  Each CPU buffers updated local time slices 1-100 from the FPGAs (500 MB/s/CPU via the NIC) in free/busy memory buffers, again per detector (MAPS, STS, RICH, TRD, ECAL). L2/Offline processing runs at 300 ev/s (3 ms/ev) and writes 1 MB/s/CPU to storage.
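The per-event time budgets for the two processing stages follow directly from the quoted rates (per processing pipeline; the farm provides many in parallel):

```python
# Per-event time budgets implied by the quoted processing rates.
l1_rate = 10_000  # ev/s into the FPGA pre-processor/L1 stage
l2_rate = 300     # ev/s into the CPU L2/Offline stage

l1_budget_us = 1e6 / l1_rate
l2_budget_ms = 1e3 / l2_rate
print(l1_budget_us)            # 100.0 microseconds per event
print(round(l2_budget_ms, 1))  # 3.3 ms per event, i.e. the quoted ~3 ms/ev
```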

  11. FPGA 4D Pre-processor/L1 Trigger

  • Reconstruct the primary vertex (STS)
  • Select detached secondary D tracks (STS)
  • Select RoIs of secondary J/ψ tracks (ECAL, TRD, RICH, STS)
  • Fit secondary tracks (STS)
  • Fit secondary vertices (STS)

  12. CPU 4D L2 Trigger

  Improve (m)any of the L1 results:
  • Reconstruct the primary vertex (STS)
  • Select detached secondary D tracks (STS)
  • Select RoIs of secondary J/ψ tracks (ECAL, TRD, RICH, STS)
  • Fit secondary tracks (ECAL, TRD, RICH, STS)
  • Fit secondary vertices (STS)

  13. Offline Analysis

  Improve (m)any of the L2 results, now with all detectors (STS, RICH, TRD, ECAL) available at every step:
  • Reconstruct the primary vertex
  • Select detached secondary D tracks
  • Select RoIs of secondary J/ψ tracks
  • Fit secondary tracks
  • Fit secondary vertices

  14. Algorithms

  Candidate algorithms: Hough Transform, Cellular Automaton, Elastic Net, Kalman Filter. Desired properties: simple, local, parallel, fast.
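To illustrate the role the Kalman Filter plays in track fitting, here is a self-contained scalar sketch (not the CBM implementation): sequentially filtering noisy hit positions to estimate the intercept and slope of a straight track y = intercept + slope · x.

```python
# Illustrative Kalman filter for fitting a straight track from hits.
# State is [intercept, slope]; each hit (x, y) is processed in turn.
def kalman_line_fit(xs, ys, meas_var):
    """Estimate (intercept, slope) by sequentially filtering hits."""
    state = [0.0, 0.0]                   # vague initial guess
    cov = [[1e6, 0.0], [0.0, 1e6]]       # large prior covariance
    for x, y in zip(xs, ys):
        h = [1.0, x]                     # measurement model: y = a + b*x
        ph = [cov[0][0] * h[0] + cov[0][1] * h[1],   # P * h^T
              cov[1][0] * h[0] + cov[1][1] * h[1]]
        s = h[0] * ph[0] + h[1] * ph[1] + meas_var   # innovation variance
        k = [ph[0] / s, ph[1] / s]                   # Kalman gain
        resid = y - (state[0] + state[1] * x)        # innovation
        state = [state[0] + k[0] * resid, state[1] + k[1] * resid]
        # Covariance update: P <- P - K (H P)
        cov = [[cov[0][0] - k[0] * ph[0], cov[0][1] - k[0] * ph[1]],
               [cov[1][0] - k[1] * ph[0], cov[1][1] - k[1] * ph[1]]]
    return state

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0 + 2.0 * x for x in xs]         # hits on the line y = 1 + 2x
intercept, slope = kalman_line_fit(xs, ys, meas_var=0.01)
print(round(intercept, 2), round(slope, 2))  # close to the true 1.0 and 2.0
```

The filter is local (one hit at a time) and needs no matrix inversion beyond a scalar division, which is what makes it attractive for FPGA/DSP pipelines.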

  15. Trigger Simulation

  16. Summary

  • Modular structure of DAQ
  • Online reconfigurable farm
  • Commercial hardware
  • Modular software
  • „All-in-one“ sub-farm
  • No canonical event building
  • Access to all data at any stage
  • 4D event reconstruction
  • Flexible L1/L2/Offline definition
  • No need to re-process L1 at L2
  • Offline can run/test L1 and L2
  • All data on local mass storage
  • Online alignment
  • Online database update
