HLT - data compression vs event rejection



  1. HLT - data compression vs event rejection

  2. Assumptions
  • Need for an online rudimentary event reconstruction for monitoring
  • Detector readout rate (e.g. TPC) >> DAQ bandwidth ≥ mass storage bandwidth
  • Some physics observables require running detectors at maximum rate (e.g. quarkonium spectroscopy: TPC/TRD dielectrons; jets in p+p: TPC tracking)
  • Online combination of different detectors can increase selectivity of triggers (e.g. jet quenching: PHOS/TPC high-pT γ-jet events)

  3. Data volume and event rate bandwidth
  • TPC detector: data volume = 300 Mbyte/event, data rate = 200 Hz
  • data flow: TPC detector → 60 Gbyte/sec → front-end electronics → 15 Gbyte/sec → Level-3 system → < 2 Gbyte/sec → DAQ event building → < 1.2 Gbyte/sec → permanent storage system
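
  A quick back-of-the-envelope check of this bandwidth chain, using only the numbers quoted on the slide (300 Mbyte/event at 200 Hz, 15 Gbyte/s after the front-end, and the < 2 / < 1.2 Gbyte/s DAQ and storage limits). It prints the combined volume/rate reduction factor the Level-3 system has to deliver; the decimal unit convention (1 Gbyte = 1000 Mbyte) is an assumption.

    #include <cstdio>

    int main() {
        // Numbers taken from the slide (decimal units assumed: 1 Gbyte = 1000 Mbyte).
        const double event_size_MB     = 300.0;   // TPC data volume per event
        const double event_rate_Hz     = 200.0;   // maximum TPC gating rate
        const double frontend_GBps     = 15.0;    // into the Level-3 system
        const double daq_limit_GBps    = 2.0;     // DAQ event-building bandwidth
        const double mass_storage_GBps = 1.2;     // permanent storage bandwidth

        const double raw_GBps = event_size_MB * event_rate_Hz / 1000.0;  // 60 Gbyte/s
        printf("raw TPC rate             : %5.1f Gbyte/s\n", raw_GBps);
        printf("after front-end          : %5.1f Gbyte/s\n", frontend_GBps);

        // Combined volume-reduction x rate-reduction factor Level-3 must provide
        printf("needed factor to DAQ     : %5.1f\n", frontend_GBps / daq_limit_GBps);
        printf("needed factor to storage : %5.1f\n", frontend_GBps / mass_storage_GBps);
        return 0;
    }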

  4. HLT tasks
  • Online (sub)-event reconstruction
    • optimization and monitoring of detector performance
    • monitoring of trigger selectivity
    • fast check of physics program
  • Data rate reduction
    • data volume reduction: regions-of-interest and partial readout; data compression
    • event rate reduction: (sub)-event reconstruction and event rejection
    • p+p program: pile-up removal, charged particle jet trigger, etc.

  5. Data rate reduction
  • Volume reduction
    • regions-of-interest and partial readout
    • data compression: entropy coder, vector quantization, TPC-data modeling
  • Rate reduction
    • (sub)-event reconstruction and event rejection before event building

  6. TPC event (only about 1% is shown)

  7. Regions-of-interest and partial readout • Example: selection of TPC sector and η-slice based on TRD track candidate
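
  A minimal sketch of how a TRD track candidate could steer partial TPC readout. The 36-fold sector segmentation is real TPC geometry, but the slice granularity, the acceptance limit and the TrackCandidate struct are illustrative assumptions, not the actual Level-3 interface.

    #include <cmath>
    #include <cstdio>
    #include <set>
    #include <utility>

    // Hypothetical TRD track candidate: azimuth and pseudorapidity at the TPC.
    struct TrackCandidate { double phi; double eta; };

    // Map a candidate onto (sector, eta-slice) indices so that only the matching
    // part of the TPC needs to be shipped to the Level-3 farm.
    std::pair<int, int> regionOfInterest(const TrackCandidate& c,
                                         int nSectors = 36, int nEtaSlices = 10,
                                         double etaMax = 0.9) {
        const double kTwoPi = 6.28318530717958648;
        double phi = std::fmod(c.phi + kTwoPi, kTwoPi);
        int sector = static_cast<int>(phi / (kTwoPi / nSectors));
        int slice  = static_cast<int>((c.eta + etaMax) / (2 * etaMax) * nEtaSlices);
        return {sector, slice};
    }

    int main() {
        std::set<std::pair<int, int>> readoutList;       // which regions to read out
        TrackCandidate dielectronLeg{1.3, -0.25};        // toy candidate
        readoutList.insert(regionOfInterest(dielectronLeg));
        for (const auto& roi : readoutList)
            printf("read out TPC sector %d, eta-slice %d\n", roi.first, roi.second);
        return 0;
    }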

  8. Data compression: Entropy coder
  • Probability distribution of 8-bit TPC data
  • Variable Length Coding: short codes for frequent values, long codes for infrequent values
  • Results: NA49: compressed event size = 72%; ALICE: = 65% (Arne Wiebalck, diploma thesis, Heidelberg)
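
  The entropy coder exploits the steep probability distribution of the 8-bit ADC values. A minimal Huffman-style sketch (not the actual implementation from the thesis): build a variable-length code from a measured value histogram and report the mean code length, i.e. the achievable compressed size relative to the 8-bit raw data. The toy histogram is an assumption.

    #include <cstdint>
    #include <cstdio>
    #include <queue>
    #include <vector>

    // Build Huffman code lengths for 8-bit ADC values from a histogram and
    // return the mean code length in bits (compressed size = mean/8 of raw).
    double meanHuffmanCodeLength(const std::vector<uint64_t>& histo) {
        using Node = std::pair<uint64_t, std::vector<int>>;  // (count, symbols in subtree)
        auto cmp = [](const Node& a, const Node& b) { return a.first > b.first; };
        std::priority_queue<Node, std::vector<Node>, decltype(cmp)> pq(cmp);

        std::vector<int> depth(histo.size(), 0);
        for (int v = 0; v < (int)histo.size(); ++v)
            if (histo[v] > 0) pq.push({histo[v], {v}});

        while (pq.size() > 1) {                     // standard Huffman merging
            Node a = pq.top(); pq.pop();
            Node b = pq.top(); pq.pop();
            for (int s : a.second) ++depth[s];      // merged symbols move one level deeper
            for (int s : b.second) ++depth[s];
            a.second.insert(a.second.end(), b.second.begin(), b.second.end());
            pq.push({a.first + b.first, a.second});
        }

        uint64_t total = 0, bits = 0;
        for (int v = 0; v < (int)histo.size(); ++v) { total += histo[v]; bits += histo[v] * depth[v]; }
        return total ? double(bits) / total : 0.0;
    }

    int main() {
        // Toy histogram: small ADC values dominate, as in the slide's distribution.
        std::vector<uint64_t> histo(256, 1);
        for (int v = 0; v < 16; ++v) histo[v] = 100000 >> v;
        double bitsPerValue = meanHuffmanCodeLength(histo);
        printf("mean code length: %.2f bits -> compressed size %.0f%% of 8-bit data\n",
               bitsPerValue, 100.0 * bitsPerValue / 8.0);
        return 0;
    }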

  9. Data compression: Vector quantization
  • Sequence of ADC-values on a pad = vector
  • Vector quantization = transformation of vectors into code book entries (compare each vector to the code book)
  • Quantization error
  • Results: NA49: compressed event size = 29%; ALICE: = 48%-64% (Arne Wiebalck, diploma thesis, Heidelberg)
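
  A minimal sketch of the vector-quantization idea, assuming a pre-trained code book: the ADC sequence on a pad is treated as one vector and replaced by the index of its nearest code-book entry; only that index is stored, and the remaining distance is the quantization error. The toy code book and vector length are assumptions.

    #include <cstdio>
    #include <vector>

    using Vec = std::vector<float>;   // ADC time sequence on one pad

    // Squared Euclidean distance between a pad vector and a code-book entry.
    static float dist2(const Vec& a, const Vec& b) {
        float d = 0;
        for (size_t i = 0; i < a.size(); ++i) { float e = a[i] - b[i]; d += e * e; }
        return d;
    }

    // Vector quantization: replace each pad vector by the index of the nearest
    // code-book entry (the code book itself would be trained offline, e.g. k-means).
    int quantize(const Vec& pad, const std::vector<Vec>& codebook, float* err2 = nullptr) {
        int best = 0;
        float bestD = dist2(pad, codebook[0]);
        for (size_t k = 1; k < codebook.size(); ++k) {
            float d = dist2(pad, codebook[k]);
            if (d < bestD) { bestD = d; best = (int)k; }
        }
        if (err2) *err2 = bestD;   // quantization error
        return best;
    }

    int main() {
        // Toy code book with three pulse shapes; a real one with up to 256 entries
        // would compress each pad sequence into a single byte.
        std::vector<Vec> codebook = {{0, 1, 3, 1, 0}, {0, 2, 6, 2, 0}, {1, 4, 9, 4, 1}};
        Vec pad = {0, 2, 5, 2, 1};
        float err2;
        int idx = quantize(pad, codebook, &err2);
        printf("pad mapped to code-book entry %d (squared error %.1f)\n", idx, err2);
        return 0;
    }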

  10. Data compression: TPC-data modeling
  • Fast local pattern recognition: simple local track model (e.g. helix) → track parameters
  • Track and cluster modeling: comparison to raw data using local track parameters and an analytical cluster model; quantization of deviations from track and cluster model
  • Result: NA49: compressed event size = 7%
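
  A highly simplified sketch of the data-modeling idea: once a local track model describes the clusters, only the track parameters plus coarsely quantized deviations of each cluster from the model prediction need to be stored. The sketch uses a straight-line local model instead of a real helix and an invented quantization step, purely to illustrate the principle.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Cluster { float pad; float time; };    // cluster centroid on one padrow

    // Local track model across a few padrows; a real implementation would use a
    // helix (circle in the bending plane), a straight line keeps the sketch short.
    struct LocalTrack { float pad0, dPad; float time0, dTime; };

    // Store only the quantized deviation of each cluster from the model prediction.
    // step = quantization granularity in pad/time-bin units (an assumed value).
    std::vector<int8_t> encodeResiduals(const LocalTrack& t,
                                        const std::vector<Cluster>& clusters,
                                        float step = 0.05f) {
        std::vector<int8_t> out;
        for (size_t row = 0; row < clusters.size(); ++row) {
            float padPred  = t.pad0  + t.dPad  * row;
            float timePred = t.time0 + t.dTime * row;
            out.push_back((int8_t)std::lround((clusters[row].pad  - padPred)  / step));
            out.push_back((int8_t)std::lround((clusters[row].time - timePred) / step));
        }
        return out;   // 2 bytes per cluster instead of the full raw ADC data
    }

    int main() {
        LocalTrack trk{10.0f, 0.30f, 50.0f, -0.10f};
        std::vector<Cluster> clusters = {{10.02f, 49.97f}, {10.33f, 49.88f}, {10.58f, 49.82f}};
        auto res = encodeResiduals(trk, clusters);
        printf("encoded %zu clusters into %zu residual bytes\n", clusters.size(), res.size());
        return 0;
    }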

  11. Fast pattern recognition
  Essential part of the Level-3 system:
  • crude complete event reconstruction → monitoring
  • redundant local tracklet finder for cluster evaluation → efficient data compression
  • selection of (η, φ, pT)-slices → ROI
  • high precision tracking for selected track candidates (jets, dielectrons, ...)

  12. Fast pattern recognition
  • Sequential approach
    • cluster finder, vertex finder and track follower
    • STAR code adapted to the ALICE TPC
    • reconstruction efficiency
    • timing results
  • Iterative feature extraction
    • tracklet finder on raw data and cluster evaluation
    • Hough transform

  13. Fast cluster finder (1) • timing: 5 ms per padrow
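
  A minimal sketch of a one-padrow cluster finder of the kind timed here: group zero-suppressed ADC signals that are adjacent in (pad, time) and above threshold, then take the charge-weighted centroid. The data layout and threshold are illustrative assumptions, and no deconvolution of overlapping clusters is attempted, matching the caveat on slide 23.

    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    struct PadSignal { int pad; int timeBin; int adc; };    // zero-suppressed digit
    struct Cluster   { float padMean; float timeMean; int charge; };

    // Very simple cluster finder on one padrow: merge neighbouring signals above
    // threshold into clusters and compute the charge-weighted centroid.
    std::vector<Cluster> findClusters(const std::vector<PadSignal>& digits, int threshold = 3) {
        std::vector<Cluster> clusters;
        std::vector<bool> used(digits.size(), false);
        for (size_t i = 0; i < digits.size(); ++i) {
            if (used[i] || digits[i].adc < threshold) continue;
            std::vector<size_t> member = {i};      // grow cluster by absorbing neighbours
            used[i] = true;
            for (size_t m = 0; m < member.size(); ++m)
                for (size_t j = 0; j < digits.size(); ++j)
                    if (!used[j] && digits[j].adc >= threshold &&
                        std::abs(digits[j].pad - digits[member[m]].pad) <= 1 &&
                        std::abs(digits[j].timeBin - digits[member[m]].timeBin) <= 1) {
                        used[j] = true;
                        member.push_back(j);
                    }
            Cluster c{0, 0, 0};
            for (size_t m : member) {
                c.charge   += digits[m].adc;
                c.padMean  += digits[m].adc * digits[m].pad;
                c.timeMean += digits[m].adc * digits[m].timeBin;
            }
            c.padMean /= c.charge; c.timeMean /= c.charge;
            clusters.push_back(c);
        }
        return clusters;
    }

    int main() {
        std::vector<PadSignal> digits = {{40, 100, 5}, {41, 100, 12}, {42, 100, 6},
                                         {41, 101, 8}, {80, 200, 7},  {81, 200, 9}};
        for (const Cluster& c : findClusters(digits))
            printf("cluster at pad %.2f, time %.2f, charge %d\n", c.padMean, c.timeMean, c.charge);
        return 0;
    }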

  14. Fast cluster finder (2)

  15. Fast cluster finder (3) • Efficiency • Offline efficiency

  16. Fast vertex finder • Resolution • Timing result: 19 ms on DEC Alpha (667 MHz)
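
  The slides only quote resolution and timing, not the algorithm. A plausible sketch, assuming the common histogramming approach: extrapolate local track segments to the beam line and take the peak of the resulting z distribution as the primary vertex. Segment definition, histogram range and bin width are all assumptions.

    #include <cstdio>
    #include <vector>

    // Straight-line segment near the beam axis: z(r) = z0 + slope * r.
    struct Segment { double z0; double slope; };

    // Histogramming vertex finder: fill the z positions at r = 0 into a histogram
    // and return the centre of the most populated bin.
    double findVertexZ(const std::vector<Segment>& segs,
                       double zRange = 30.0, double binWidth = 0.5) {
        int nBins = (int)(2 * zRange / binWidth);
        std::vector<int> histo(nBins, 0);
        for (const Segment& s : segs) {
            int bin = (int)((s.z0 + zRange) / binWidth);
            if (bin >= 0 && bin < nBins) ++histo[bin];
        }
        int best = 0;
        for (int b = 1; b < nBins; ++b) if (histo[b] > histo[best]) best = b;
        return -zRange + (best + 0.5) * binWidth;
    }

    int main() {
        // Toy segments clustered around z = 4.2 cm plus one outlier.
        std::vector<Segment> segs = {{4.1, 0.3}, {4.3, -0.1}, {4.2, 0.2}, {-11.0, 0.5}};
        printf("estimated primary vertex z = %.2f cm\n", findVertexZ(segs));
        return 0;
    }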

  17. Fast track finder • Tracking efficiency

  18. Fast track finder • Timing results
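
  Slides 17-18 show only efficiency and timing; the method (slide 12) is a track follower adapted from STAR. A heavily simplified sketch of the follower idea: seed on the outermost padrows and extend inward, on each row picking the cluster closest to a linear extrapolation of the previously assigned clusters. The search window, the linear extrapolation and the data layout are illustrative assumptions, not the Bergen tracker itself.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Point { double pad; double time; };               // cluster on one padrow

    // Follow a seed from the outer rows inward: on each row take the cluster
    // closest to the extrapolation of the last two assigned clusters, provided
    // it lies inside the search window.
    std::vector<int> followTrack(const std::vector<std::vector<Point>>& rows,
                                 int seedOuter, int seedNext, double window = 2.0) {
        std::vector<int> assigned = {seedOuter, seedNext};   // cluster index per row
        for (size_t row = 2; row < rows.size(); ++row) {
            const Point& p1 = rows[row - 2][assigned[row - 2]];
            const Point& p2 = rows[row - 1][assigned[row - 1]];
            Point pred{2 * p2.pad - p1.pad, 2 * p2.time - p1.time};  // linear extrapolation
            int best = -1; double bestD = window;
            for (size_t i = 0; i < rows[row].size(); ++i) {
                double d = std::hypot(rows[row][i].pad - pred.pad,
                                      rows[row][i].time - pred.time);
                if (d < bestD) { bestD = d; best = (int)i; }
            }
            if (best < 0) break;                             // track lost
            assigned.push_back(best);
        }
        return assigned;
    }

    int main() {
        // Rows ordered from outside in; one clean toy track plus noise clusters.
        std::vector<std::vector<Point>> rows = {
            {{10.0, 50.0}}, {{10.5, 49.8}, {30.0, 20.0}},
            {{11.0, 49.6}}, {{11.6, 49.5}, {5.0, 5.0}}};
        std::vector<int> trk = followTrack(rows, 0, 0);
        printf("track follower assigned clusters on %zu padrows\n", trk.size());
        return 0;
    }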

  19. Hough transform (1) • Data flow

  20. Hough transform (2) • η-slices

  21. Hough transform (3) • Transformation and maxima search
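
  A minimal sketch of the transformation and maxima-search step for one η-slice, assuming tracks originate from the primary vertex: each space point votes along a curve in a (emission angle ψ, curvature κ) parameter plane via κ = 2·sin(φ − ψ)/r, and maxima in the accumulator correspond to track candidates. The binning, ranges and the simple global-maximum search are assumptions.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct SpacePoint { double r; double phi; };   // point in one eta-slice, polar coordinates

    // Circle-through-vertex Hough transform: a track with emission angle psi and
    // curvature kappa satisfies kappa = 2 * sin(phi - psi) / r for each of its points.
    int houghTransform(const std::vector<SpacePoint>& points,
                       int nPsi = 180, int nKappa = 100, double kappaMax = 0.02) {
        const double kPi = 3.14159265358979323846;
        std::vector<int> acc(nPsi * nKappa, 0);
        for (const SpacePoint& p : points)
            for (int ip = 0; ip < nPsi; ++ip) {
                double psi   = -kPi / 2 + kPi * ip / nPsi;   // +- 90 degrees
                double kappa = 2.0 * std::sin(p.phi - psi) / p.r;
                int ik = (int)((kappa + kappaMax) / (2 * kappaMax) * nKappa);
                if (ik >= 0 && ik < nKappa) ++acc[ip * nKappa + ik];
            }
        // Maxima search: here simply the globally highest accumulator bin.
        int best = 0;
        for (size_t i = 1; i < acc.size(); ++i) if (acc[i] > acc[best]) best = (int)i;
        printf("best bin: psi index %d, kappa index %d, %d entries\n",
               best / nKappa, best % nKappa, acc[best]);
        return acc[best];
    }

    int main() {
        // Toy track: points on a gentle circle through the origin (kappa = 0.005).
        std::vector<SpacePoint> pts;
        for (double r = 90; r <= 240; r += 15)
            pts.push_back({r, 0.4 + std::asin(0.005 * r / 2.0)});  // phi = psi + asin(kappa*r/2)
        houghTransform(pts);
        return 0;
    }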

  22. Level-3 system architecture
  • detector inputs: TPC sector #1 ... TPC sector #36, TRD, ITS, XYZ; ROI
  • local processing (subsector/sector): data compression
  • global processing I (2 x 18 sectors): Level-3 trigger, momentum filter
  • global processing II (detector merging): event rejection
  • global processing III (event reconstruction): monitoring

  23. TPC on-line tracking
  • Assumptions:
    • Bergen fast tracker
    • DEC Alpha 667 MHz
    • fast cluster finder excluding cluster deconvolution
    • Note: this cluster finder is suboptimal for the inner sectors and additional work is required here. To obtain an estimate, the computation requirements were based on the outer padrows; the deconvolution that may be necessary in the inner padrows could require comparably more CPU cycles.
  • TPC L3 tracking estimate:
    • cluster finder on a padrow of the outer sector: 5 ms
    • tracking of all (Monte Carlo) space points for one TPC sector: 600 ms
    • Note: this data may not include realistic noise; tracking is to first order linear in the number of tracks provided there are few overlaps; one ideal processor is assumed below
    • cluster finder on one sector (145 padrows): 725 ms
    • process complete sector: 1.325 s
    • process complete TPC: 47.7 s
    • running at maximum TPC rate (200 Hz), January 2000: 9540 CPUs
    • assuming 20% overhead (parallel computation, network transfer, inner sector additional overhead, sector merging etc.): 11500 CPUs
    • Moore's law (60%/a) → @ 2006, minus 1 a commissioning: ×10.5 → 1095 CPUs
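
  The slide's CPU estimate reproduced as a short calculation, using exactly the inputs quoted above (5 ms per outer padrow, 600 ms tracking per sector, 145 padrows, 36 sectors, 200 Hz, 20% overhead, Moore's law at 60% per year with one year of commissioning before 2006). Small differences to the slide come only from its rounding (11500 instead of 11448, ×10.5 instead of 1.6^5 ≈ 10.49).

    #include <cmath>
    #include <cstdio>

    int main() {
        // Inputs quoted on the slide.
        const double clusterFinderPerPadrow = 0.005;  // s, outer-sector padrow
        const int    padrowsPerSector       = 145;
        const double trackingPerSector      = 0.600;  // s
        const int    sectors                = 36;
        const double tpcRate                = 200.0;  // Hz
        const double overhead               = 0.20;   // network, merging, inner sectors, ...
        const double mooreGainPerYear       = 1.60;   // "60%/a"
        const int    yearsOfScaling         = 5;      // 2000 -> 2006, minus 1 a commissioning

        double clusterPerSector = clusterFinderPerPadrow * padrowsPerSector;  // 0.725 s
        double perSector        = clusterPerSector + trackingPerSector;       // 1.325 s
        double perEvent         = perSector * sectors;                        // 47.7 s
        double cpus2000         = perEvent * tpcRate;                         // 9540
        double cpusWithOverhead = cpus2000 * (1.0 + overhead);                // ~11450 (slide: 11500)
        double mooreFactor      = std::pow(mooreGainPerYear, yearsOfScaling); // ~10.5
        double cpus2006         = cpusWithOverhead / mooreFactor;             // ~1090 (slide: 1095)

        printf("cluster finder per sector : %6.3f s\n", clusterPerSector);
        printf("complete sector           : %6.3f s\n", perSector);
        printf("complete TPC              : %6.1f s\n", perEvent);
        printf("CPUs @ 200 Hz (2000)      : %6.0f\n", cpus2000);
        printf("with 20%% overhead         : %6.0f\n", cpusWithOverhead);
        printf("scaled to 2006 (x%.1f)    : %6.0f\n", mooreFactor, cpus2006);
        return 0;
    }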
