Status FEE and DAQ
Walter F.J. Müller, GSI, Darmstadt
5th CBM Collaboration Meeting, GSI, March 9-12, 2005
CBM Trigger Requirements (reminder)
Assume archive rate: few GB/sec, 20 kevents/sec
• In-medium modifications of hadrons: onset of chiral symmetry restoration at high ρB
  measure: ρ, ω, φ → e+e- (offline); open charm (D0, D±) → trigger on displaced vertex (drives the FEE/DAQ architecture)
• Strangeness in matter: enhanced strangeness production
  measure: K, Λ, Σ, Ξ, Ω → offline
• Indications for deconfinement at high ρB: anomalous charmonium suppression?
  measure: D0, D±; J/ψ → e+e- → trigger on high-pt e+e- pair
• Critical point: event-by-event fluctuations
  measure: π, K → offline
CBM DAQ Requirements Profile
• The D and J/ψ signals drive the rate capability requirements
• The D signal drives the FEE and DAQ/Trigger requirements
• Adopted approach: displaced-vertex 'trigger' in the first level
• Additional problem: DC beam → interactions at random times → time stamps with ns precision needed → explicit event association needed
• Current design for FEE and DAQ/Trigger:
  • self-triggered FEE
  • data-push architecture
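To make "explicit event association" concrete, here is a minimal sketch of grouping self-triggered hits into events by time correlation; the Hit fields, the ns units, and the gap criterion are illustrative assumptions, not the actual CBM data format.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical self-triggered hit record: every hit carries an
    // absolute time stamp instead of referring to a trigger signal.
    struct Hit {
        uint64_t timestamp_ns;  // absolute time, ns precision (assumed unit)
        uint32_t channel_id;    // detector channel that fired
        uint16_t amplitude;     // digitized pulse height
    };

    // Group hits into events by time correlation: hits separated by less
    // than window_ns are assumed to belong to the same interaction.
    std::vector<std::vector<Hit>> associate(std::vector<Hit> hits,
                                            uint64_t window_ns) {
        std::sort(hits.begin(), hits.end(),
                  [](const Hit& a, const Hit& b) {
                      return a.timestamp_ns < b.timestamp_ns;
                  });
        std::vector<std::vector<Hit>> events;
        for (const Hit& h : hits) {
            if (events.empty() ||
                h.timestamp_ns - events.back().back().timestamp_ns > window_ns)
                events.emplace_back();  // time gap found: start a new event
            events.back().push_back(h);
        }
        return events;
    }

With a DC beam there is no fixed readout gate; the simple time-gap criterion above is one stand-in for the correlation step done later in the chain.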
Self-triggered FEE – Data Push DAQ
[Diagram: detector → self-triggered FEE (cave) → DAQ buffers (shack) → L1 Select → L2 Select (special hardware) → archive; clock distribution fclock to the FEE]
• Self-triggered front-end: autonomous hit detection
• No dedicated trigger connectivity: all detectors can contribute to L1
• Large buffer depth available: the system is throughput-limited, not latency-limited
• High bandwidth
• Modular design: few multi-purpose rather than many special-purpose modules
• Use the term: Event Selection
Front-End for Data Push Architecture
• Each channel autonomously detects all hits
  • hit finding and sparsification needed in the FEE
• An absolute time stamp, precise to a fraction of the sampling period, is associated with each hit
  • a time distribution system is needed (it replaces the trigger distribution)
• All hits are shipped to the next layer (usually concentrators)
  • de-randomization buffers needed
  • large output bandwidth
• Association of hits with events is done later using time correlation
• Typical parameters (see the sketch below):
  • with 1% occupancy and a 10^7 interactions/sec rate: some 100 kHz channel hit rate, few MByte/sec per channel
  • whole CBM detector: ~1 TByte/sec
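The quoted rates follow from straightforward arithmetic. The sketch below reproduces them; the hit record size and channel count are assumed values, chosen only to land on the slide's orders of magnitude.

    #include <cstdio>

    int main() {
        const double interaction_rate = 1e7;   // 10^7 interactions/sec (slide)
        const double occupancy        = 0.01;  // 1% channel occupancy (slide)
        const double bytes_per_hit    = 30;    // assumed hit record size
        const double n_channels       = 4e5;   // assumed channel count

        const double hit_rate = interaction_rate * occupancy;  // per channel
        const double chan_bw  = hit_rate * bytes_per_hit;      // bytes/sec
        const double total_bw = chan_bw * n_channels;          // bytes/sec

        std::printf("channel hit rate: %.0f kHz\n", hit_rate / 1e3);   // 100 kHz
        std::printf("per-channel rate: %.1f MB/s\n", chan_bw / 1e6);   // 3 MB/s
        std::printf("whole detector:   %.1f TB/s\n", total_bw / 1e12); // ~1 TB/s
        return 0;
    }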
FEE – DAQ Interface
[Diagram: several diverse FEE modules in the cave feed a concentrator or read-out controller; three logical interfaces cross to the shack: Time, DAQ, DCS]
• FEE diversity is inevitable; common interfaces are indispensable
• 3 logical interfaces, 3 specs:
  • Clock and Time (in only)
  • Control (bidirectional)
  • Hit Data (out only)
• First drafts ready for the fall 2005 CBM TB Meeting
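One way to picture the three specs is as abstract interfaces; the sketch below is purely illustrative, and none of the names or signatures come from a CBM draft.

    #include <cstdint>
    #include <vector>

    struct ClockTimeIface {        // "in only": clock and absolute time
        virtual void onClockTick(uint64_t time_ns) = 0;
        virtual ~ClockTimeIface() = default;
    };

    struct ControlIface {          // bidirectional: DCS configuration/status
        virtual void writeRegister(uint32_t addr, uint32_t value) = 0;
        virtual uint32_t readRegister(uint32_t addr) = 0;
        virtual ~ControlIface() = default;
    };

    struct HitDataIface {          // "out only": hit stream to concentrator
        virtual void shipHits(const std::vector<uint8_t>& payload) = 0;
        virtual ~HitDataIface() = default;
    };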
FEE Working Group Session
Friday 9:00 – 13:00 – Theory Seminar Room (check for last-minute changes in the WG program)
Topics:
• RPC front-end
• Time measurement
• Time distribution
• TRD/RICH requirements
• Straw TRD FEE solution
• Si strip FEE
• Building blocks for a multi-purpose FEE ASIC architecture
FEE Summary
• Several FEE developments have started
  • define interfaces (Time, DAQ, DCS)
  • FAIR FEE workshop this summer
  • communicate with PANDA, NUSTAR, ...
• Concrete R&D on FEE building blocks has started
  • MPWs planned in 2005 covering essential blocks for RICH, TRD, and RPC (possibly also ECAL)
• Some subsystems are still at an early conceptual stage:
  • hybrid pixel FEE
  • strip detector FEE
• Some requirements are not yet finalized:
  • RICH: PMT or MWPC
  • RICH (PMT) and ECAL: desired vs. required timing resolution
CBM DAQ and Online Event Selection
• More than 50% of the total data volume is relevant for first-level event selection
  • STS, TRD, and ECAL data are used in first-level event selection: STS needed for D, TRD needed for J/ψ, ECAL useful for J/ψ
• Aim for simplicity
• Simple two-layer approach: 1. event building, 2. event processing
Logical Data Flow
[Diagram, top to bottom:]
• Concentrators: multiplex channels onto high-speed links
• Time distribution
• Buffers
• Build network
• Processing resources for first-level event selection, structured in small farms
• Connection to 'high-level' selection processing
Bandwidth Requirements
• Data flow from the detector: ~1 TB/sec (Gilder helps)
• 1st-level selection: ~10^14-10^15 operations/sec (Moore helps)
• Data flow after selection: few 10 GB/sec
• To archive: few GB/sec → ~20 PByte/year → ~100 Gevents/year
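These figures are mutually consistent under the customary assumption of roughly 10^7 seconds of effective beam time per year: 2 GB/sec × 10^7 sec ≈ 20 PByte, and the ~20 kevents/sec archive rate quoted earlier gives 2 × 10^4 evt/sec × 10^7 sec ≈ 2 × 10^11 events, i.e. of order 100 Gevents/year.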
Event Building – Alternatives
• Straight event-by-event approach:
  • data arrives on ~1000 links
  • ~100 bytes per event and link
  • 10^10 packets/sec to handle...
• Handle time intervals or event intervals (see the sketch below):
  • 10 μs or 100 events seems reasonable
  • event slices allow background suppression
  • time slices also work for p-p and p-A
• Very regular and fully controlled traffic pattern:
  • data traffic can be scheduled to avoid network congestion
  • a large fraction of the switch bandwidth can be used
  • aim at a COTS solution for the BNet (→ Ethernet)
  • discrete event simulations under way
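Why slicing tames the packet rate: every data source can compute, from a hit's time stamp alone, which build node owns the corresponding slice, so one packet per source and slice suffices. A minimal sketch; the round-robin ownership and all names are assumptions, only the 10 μs slice length is from the slide.

    #include <cstdint>

    const uint64_t kSliceNs = 10000;  // 10 us time slice (slide value)

    // Build node that owns the slice containing a given time stamp.
    uint32_t targetNode(uint64_t timestamp_ns, uint32_t n_nodes) {
        const uint64_t slice = timestamp_ns / kSliceNs;   // slice index
        return static_cast<uint32_t>(slice % n_nodes);    // round-robin
    }

Because the mapping is static and known to all senders in advance, transfers can be scheduled deterministically, which is what makes congestion avoidance and a high usable fraction of the switch bandwidth possible.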
Network Characteristics
• Data push: datagrams; errors are marked but not recovered
• Request/response and data push: transactions; errors are recovered
L1 Event Selection Farm Layout
• Current working hypothesis: CPU + FPGA hybrid system
• Use programmable logic for the cores of the algorithms
• Use CPUs for the non-parallelizable parts
• Use a serial connection fabric (links and switches)
• Modular design (only a few board types)
Algorithms
• The performance of the L1 feature extraction algorithms is essential
  • critical in CBM: STS tracking + vertex reconstruction, TRD tracking and PID
• Look for algorithms that allow a massively parallel implementation:
  • Hough Transform Tracker: needs lots of bit-level operations, well suited for FPGAs (see the sketch below)
  • Cellular Automaton Tracker
• Co-develop tracking detectors and tracking algorithms
  • L1 tracking is necessarily speed-optimized (>10^9 tracks/sec) → possibly more detector granularity and redundancy needed
• Aim for CBM: validate the final hardware design with at least 2 trackers suitable for L1
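To illustrate why a Hough Transform Tracker suits programmable logic, here is a sketch of the voting step for straight-line track candidates; the geometry, binning, and all names are illustrative assumptions.

    #include <array>
    #include <cstdint>
    #include <vector>

    constexpr int kSlopeBins = 64, kInterceptBins = 64;

    struct SpacePoint { float z, x; };  // position along beam (z), bend plane (x)

    using Accumulator =
        std::array<std::array<uint16_t, kInterceptBins>, kSlopeBins>;

    // For every discretized slope, compute the intercept implied by the
    // hit (x = slope*z + intercept) and increment that accumulator cell.
    // Local maxima in the accumulator are track candidates.
    void vote(const std::vector<SpacePoint>& hits, Accumulator& acc,
              float slope_min, float slope_step,
              float icpt_min, float icpt_step) {
        for (const SpacePoint& h : hits) {
            for (int s = 0; s < kSlopeBins; ++s) {
                const float slope = slope_min + s * slope_step;
                const float icpt  = h.x - slope * h.z;
                const int b = static_cast<int>((icpt - icpt_min) / icpt_step);
                if (b >= 0 && b < kInterceptBins) ++acc[s][b];
            }
        }
    }

Each vote touches an independent cell, so all slope bins can be evaluated concurrently; in an FPGA the loop over s unrolls into parallel hardware, which is the property that makes this class of algorithm attractive for L1.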
Design Iterations
[Diagram: Detector Design → Trigger Performance (rejection ratio) → Signal Performance (S/N ratio) → Physics Performance → back to Detector Design]
• The process has now successfully started
• Still some pieces missing in CVS
• Many parameter studies needed: generic, parametrizable HitProducers required
Computing and DAQ Session
Thursday 14:00 – 17:00 – Theory Seminar Room (check for last-minute changes in the WG program)
Topics:
• Handling 20 PB a year
• CBM Grid: first steps
• Controls: not an afterthought this time
• Network & processing
• p-p, p-A: >10^8 interactions/sec
DAQ Summary
• The event definition has changed:
  • now based on time stamps and time correlation
  • copes with the >100 MHz interaction rate in p-p and p-A
• The role of the DAQ has changed:
  • the DAQ is simply responsible for transporting data from producers to consumers
• The role of the 'trigger' has changed:
  • it filters events delivered by the DAQ
  • 'online event selection' is the better term
• Online event selection algorithms:
  • the design iteration process has now started (the detector may need to change...)
• System aspects:
  • the 'online' vs. 'offline' boundary blurs
  • next: work on the data model and compute model
The End
Thanks for your attention. Dinner time ... finally.