Status of the LLTT (TPU) project
G. Punzi (Pisa), on behalf of the LLTT group
Meeting with INFN referees, 20/3/2014
Brief recap
- Feb. 2013: LHCb Trigger workshop: first LHCb presentation
- June 2013: Feasibility study presented at the LLT workshop; received an official list of questions from management
- 4 Oct 2013: LLT workshop: answers to the questions + simulation of a baseline system; asked to be reviewed to become a project for the upgrade
- The LHCb outlook for the online evolves considerably
- December 2013: Presentation at LHCb week
- 1-2 Feb. 2014: Presentations at the LHCb Trigger Workshop
- 1 Mar 2014: Talk at an international instrumentation conference (INSTR-14)
- 10 Mar 2014: Internal note presented for review
- 18 Mar 2014: Presentation to the Technical Board meeting
- TODAY: presentation to the INFN referees
- 31 Mar 2014: Presentation to the LHCb external review committee
Effects of changes in readout design
• DAQ structure evolved to a bi-directional EB
• Baseline readout has moved the FPGA cards into the EB
• LLT hardware shrunk; most (or all) functions moved into the EB CPUs ("software LLT")
• BIG strategy choice: push investments upward
• But the HLT wants to keep a "safety net" (LLT)
• The LLTT now requires more substantial hardware → Track Processing Unit (TPU), connected to the EB
• Regarded by the trigger group mostly as an HLT co-processor, or pre-processor
• Raises the bar considerably!
• Can still fuel a software-LLTT functionality
• Safety net as well...
Architecture
[Block diagram: tracking layers → custom switching network → blocks of cellular processors (engines) → fitter → to DAQ. Separate trigger-DAQ path; the custom switching network delivers hits to the appropriate cells; data are organized by cell coordinates; the cellular blocks perform track finding and parameter determination.]
Basic principles
For each cellular unit in the parameter space (u,v), calculate a weighted response by summing over all hits and all layers. Tracks are peaking structures in the parameter space; find a track as a cluster of excited cells. (Slide: Trigger & Tracking Workshop, M.J. Morello)
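A minimal single-threaded sketch of this response computation, to make the principle concrete. Everything numeric here is an illustrative assumption, not the project's configuration: a Gaussian weighting kernel, straight-line reference tracks x = u·z, y = v·z, toy grid sizes, layer positions and hits.

```cpp
// Sketch of the retina response: each cell (u,v) accumulates a weight
// from every hit on every layer; a track shows up as a peak of excited cells.
#include <cmath>
#include <cstdio>
#include <vector>

struct Hit { int layer; double x, y; };

int main() {
    const int NU = 64, NV = 64;                    // toy cell grid in (u,v)
    const double uMin = -0.35, uMax = 0.35;
    const double vMin = -0.35, vMax = 0.35;
    const double sigma = 1e-3;                     // kernel width (illustrative)
    std::vector<double> zLayer = {0.1, 0.2, 0.3};  // toy layer positions [m]
    std::vector<Hit> hits = {{0, 0.0101, 0.0052},  // fake event: one track
                             {1, 0.0199, 0.0101},
                             {2, 0.0302, 0.0149}};

    std::vector<double> R(NU * NV, 0.0);           // accumulated cell responses
    for (int iu = 0; iu < NU; ++iu)
        for (int iv = 0; iv < NV; ++iv) {
            double u = uMin + (uMax - uMin) * (iu + 0.5) / NU;
            double v = vMin + (vMax - vMin) * (iv + 0.5) / NV;
            double r = 0.0;
            for (const Hit& h : hits) {
                // Intersection of this cell's reference track with the hit's layer
                double xi = u * zLayer[h.layer];
                double yi = v * zLayer[h.layer];
                double d2 = (h.x - xi) * (h.x - xi) + (h.y - yi) * (h.y - yi);
                r += std::exp(-d2 / (2 * sigma * sigma));  // weighted response
            }
            R[iu * NV + iv] = r;
        }

    // A track candidate is a cluster of excited cells around a local maximum.
    int best = 0;
    for (int i = 1; i < NU * NV; ++i) if (R[i] > R[best]) best = i;
    std::printf("peak cell: iu=%d iv=%d R=%.3f\n", best / NV, best % NV, R[best]);
    return 0;
}
```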
Hit delivery by the switching logic
Hits must be delivered only to the cells that need them (there can be more than one). The switch network "knows" where to deliver hits: all information about the pattern of connections is embedded in the network via distributed LUTs.
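A minimal sketch of LUT-based hit routing. The assumption (ours, not from the slides) is that the LUT maps a coarse (layer, position-bin) key to the list of destination cells; names, granularity and the single collapsed table are illustrative, not the actual distributed firmware format.

```cpp
// One incoming hit fans out to every cell whose receptive field covers it.
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

using CellId = int;
using Key = std::pair<int, int>;  // (layer, coarse position bin)

int main() {
    // Distributed LUT, here collapsed into one table for illustration.
    std::map<Key, std::vector<CellId>> lut = {
        {{0, 12}, {101, 102}},  // a hit in layer 0, bin 12 excites cells 101, 102
        {{1, 12}, {102}},
    };

    int layer = 0, bin = 12;       // incoming hit, already binned
    auto it = lut.find({layer, bin});
    if (it != lut.end())
        for (CellId c : it->second)
            std::printf("deliver hit to cell %d\n", c);  // one copy per cell
    return 0;
}
```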
Cellular engine
Performs the calculation of weights for a hit falling into a cell; deals with the surrounding cells as well; handles the time skew between events. In a second stage it performs local clustering in parallel and queues the results to the output.
Track parameter estimation by cluster Center-of-Mass
[Diagram: the outputs of blocks of 12 engines are multiplexed (REQ/ACK/DATA handshake, 16-bit data) into a CoM unit; the nine neighbour registers (offsets -1, 0, +1 in each of the two cell coordinates) feed a pipelined divider that computes the centre of mass.]
- Due to the data reduction at the engine output, a 1:12 multiplexing ratio is sufficient to keep up with the data flow
- Final parameter determination can be done on EB CPUs to achieve full "offline compliance"
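A minimal sketch of the 3x3 centre-of-mass estimate around a peak cell. The plain weighted average over the 3x3 neighbourhood is our assumption of what the CoM unit computes; grid pitch, responses and peak location are toy values.

```cpp
// Interpolate the track parameters from the responses of the 3x3
// neighbourhood of a local maximum.
#include <cstdio>

int main() {
    // Toy 3x3 block of cell responses R around a local maximum (centre cell).
    double R[3][3] = {{0.2, 0.5, 0.1},
                      {0.6, 2.0, 0.4},
                      {0.1, 0.7, 0.2}};
    double du = 1.0, dv = 1.0;           // cell pitch in (u,v), toy units
    double u0 = 10.0, v0 = 20.0;         // coordinates of the peak cell

    double sum = 0.0, su = 0.0, sv = 0.0;
    for (int i = -1; i <= 1; ++i)        // offsets -1, 0, +1 in u
        for (int j = -1; j <= 1; ++j) {  // offsets -1, 0, +1 in v
            double w = R[i + 1][j + 1];
            sum += w;
            su += w * i;                 // first moments
            sv += w * j;
        }
    // The single division per cluster is what the pipelined divider
    // does in the firmware version.
    double u = u0 + du * su / sum;
    double v = v0 + dv * sv / sum;
    std::printf("track parameters: u=%.3f v=%.3f\n", u, v);
    return 0;
}
```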
Studies of Stratix-V capacity
- TPU completely implemented in firmware
- All main components (switch, engines, CoM) implemented in VHDL and placed in the FPGA
• Fit ~750 engines/chip on a Stratix-V
• The exact number depends on details (time-ordering of pixel data, etc.)
• Arria 10 allows double the logic at the same price, with lower power consumption
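A back-of-envelope cross-check (our arithmetic, not on the slide): the ~45,000-cell configuration used in the full-event simulation below would correspond to roughly 45,000 / 750 ≈ 60 Stratix-V chips at this engine density.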
Simulation and timing
[Post-synthesis timing simulation of a cell engine; traced signals include Ready, Hit_data, layer/X/Y address, Intersect_x_y, d_x, d_y, d_x^2, d_y^2, sum_square, weight.]
- Exceeds a 350 MHz clock frequency
- 40 MHz throughput
- Total latency < 1 µs (not accounting for I/O)!
- Much better than AM (associative memories); originally intended as a "Low Level Track Trigger"
Tracking configuration: VELO+UT, 16+2 layers
- Split into two separate telescopes for ease of cabling
- Covers "long-able" tracks (tracks that can become LHCb long tracks)
Integration in the DAQ: TPU to Event Builder
- ATC40 scheme (not the baseline anymore)
- [Diagram shows the map of connections]
- Additional optical links needed to copy data to the TPU cards
TPU integrated in the EB
The TPU appears to the EB as an additional "virtual detector" producing tracks.
Data flow inside the EB
[Diagram: tracks available both pre-EB and post-EB; only small extra flows in the TPU boxes.]
- TPU behaves as a virtual "track detector"
- Local CPUs can be used to refine the FPGA output
- Availability of TRACKS in the Event Builder
• Can control the rate by confirming an LLT muon (hadron) with a stiff track
• In the "partial reconstruction" scheme, HLT1 could run inside the EB
Lab test with TEL62
- We have a plan for testing the retina algorithm with real FPGAs (Stratix-3), in a simplified configuration.
- This is lower speed, but helps us demonstrate that we can put together and operate a complete system.
- We exploit TEL62 boards, which are compatible with the current LHCb DAQ and can be easily inserted in the system (agreement with local DAQ experts).
- TEL62 boards were ordered together with the NA62 order and will arrive soon (~1 month).
- Pisa has lab space for a bench test. The TEL62 is used both for sequence generation and for the "retina" implementation.
- Work ongoing in Pisa on connection boards and logistics.
Preparations for a test on a silicon telescope @ Milano
- Second stage of testing planned with cosmic rays (CR) in a Si telescope being built in Milano
- Details in the UT talk yesterday
Simulation
Some details on the LHCb simulation used:
- Ebeam = 7 TeV; nu = 7.6 (L = 2x10^33) and nu = 11.4 (L = 3x10^33); 25 ns bunch crossing, with spillover
- Geometry: DDDB: dddb-20131025, CONDDB: sim-20130830-vc-md10
- VeloUT offline reconstruction: Brunel v44r9 with default settings
Track requirements:
• At least 3 VELO hits in the last 8 VELO stations
• At least 1 hit in each axial UT layer
• Fiducial region: |u| < 0.35, |v| < 0.35 (about theta < 50 mrad); |z| < 15 cm
• Electron rejection
Performance evaluated on a small-angle telescope with 8 VELO + 2 UT layers (a code sketch of these requirements follows below).
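A minimal sketch of the track requirements above as a selection predicate. The struct fields and the helper function are hypothetical names of ours; only the cut values come from the slide, and the interpretation of z as the track-origin coordinate is an assumption.

```cpp
// Apply the simulation-study track requirements to one candidate.
#include <cmath>
#include <cstdio>

struct TrackInfo {
    int veloHitsLast8;     // VELO hits in the last 8 VELO stations
    int axialUTLayersHit;  // number of axial UT layers with a hit (max 2)
    double u, v;           // track parameters in retina space
    double z;              // z of the track origin [cm] (assumed meaning)
    bool isElectron;
};

bool passesSelection(const TrackInfo& t) {
    if (t.veloHitsLast8 < 3) return false;      // >= 3 VELO hits in last 8 stations
    if (t.axialUTLayersHit < 2) return false;   // a hit in each axial UT layer
    if (std::fabs(t.u) >= 0.35) return false;   // fiducial region in u
    if (std::fabs(t.v) >= 0.35) return false;   // fiducial region in v
    if (std::fabs(t.z) >= 15.0) return false;   // |z| < 15 cm
    if (t.isElectron) return false;             // electron rejection
    return true;
}

int main() {
    TrackInfo t{5, 2, 0.10, 0.05, 1.2, false};
    std::printf("selected: %s\n", passesSelection(t) ? "yes" : "no");
    return 0;
}
```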
Mapping of the detector to the receptor cell array
The intersection of "base tracks" with the detectors gives a map of "nerve endings". This encodes the information about the geometry. Every hit on the detector produces a signal on nearby receptors, depending on distance (I skip over several subtleties; for instance, effective operation requires the distribution to be non-uniform). Not unlike the distribution of photoreceptors in the visual system, but in our case it is all virtual, i.e. implemented in the internal LUTs of the system.
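A minimal sketch of how such a routing LUT could be built offline: intersect each cell's "base track" with each detector layer and record which cells a given (layer, position bin) should feed. Geometry, 1-D binning, the uniform grid and the straight-line track model are illustrative assumptions, not the actual LHCb mapping (which, as noted above, is non-uniform); in practice neighbouring bins within the kernel width would be registered as well.

```cpp
// Build a (layer, position-bin) -> destination-cells LUT from base tracks.
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

int main() {
    const int NU = 64, NV = 64;
    const double uMin = -0.35, uMax = 0.35;
    const std::vector<double> zLayer = {0.1, 0.2, 0.3};  // toy layer z [m]
    const int NBINS = 256;     // coarse position bins per layer
    const double xMax = 0.12;  // toy half-aperture [m]

    std::map<std::pair<int, int>, std::vector<int>> lut;  // (layer,bin) -> cells
    for (int iu = 0; iu < NU; ++iu)
        for (int iv = 0; iv < NV; ++iv) {
            double u = uMin + (uMax - uMin) * (iu + 0.5) / NU;
            for (size_t l = 0; l < zLayer.size(); ++l) {
                double x = u * zLayer[l];  // base-track intersection (x only,
                                           // for simplicity of the sketch)
                int bin = int((x + xMax) / (2 * xMax) * NBINS);
                if (bin >= 0 && bin < NBINS)
                    lut[{int(l), bin}].push_back(iu * NV + iv);
            }
        }
    std::printf("LUT entries: %zu\n", lut.size());
    return 0;
}
```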
Simulated full LHCb events (µ = 7.6)
[Plot: generated tracks vs. tracks out of acceptance.]
- Used ~45,000 cell engines
- C++ code; can be inserted in standard analysis code
Efficiency/Uniformity vs. p, pT
Efficiency/Uniformity vs. z, IP
Efficiency/Uniformity vs. (u,v) ~ (θx, θy)
Momentum resolution: σk = 0.0126 and σk = 0.0102 [two plot panels]
Detailed cost estimate from the online group
• Estimated at current prices: 940 kCHF
• Does not account for savings from moving to Arria 10
• Assumes using boxes identical to those of the EB, for simplicity
• Some further savings still possible
Is the TPU cost effective, TODAY?
- Timing of the piece of code yielding the performance we have been comparing to: 3.8 ms (standalone CPU, 2012)
- It was later understood that this piece of code performs further extra work (backward VELO layers)
- Various estimates of the TPU-equivalent part:
• (16/26) * 2.3 + 1.5 = 2.9 ms (%GEC)
• 60% * 3.8 = 2.3 ms
• 3.8 ms - (VELO10) = 2.4 ms
- We have no piece of code doing exactly the TPU work, on the same sample, with the same performance
- Cost of (naked) CPU: ~120 CHF/core → 1 ms @ 40 MHz = 4.8 MCHF
- CPU equivalent of the TPU work: 10-15 MCHF
→ The TPU is clearly a cost-effective solution at the present time
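As a back-of-envelope check of the quoted scaling (our arithmetic on the numbers above, not on the slide): 1 ms of CPU time per event at a 40 MHz event rate keeps

```latex
N_{\mathrm{cores}} = 40\times10^{6}\,\tfrac{\mathrm{ev}}{\mathrm{s}} \times 10^{-3}\,\tfrac{\mathrm{s}}{\mathrm{ev}} = 4\times10^{4}
\quad\Rightarrow\quad
\mathrm{cost} = 4\times10^{4} \times 120\,\mathrm{CHF} \approx 4.8\,\mathrm{MCHF}
```

cores busy; the 2.3-2.9 ms estimates then scale to roughly 11-14 MCHF, consistent with the quoted 10-15 MCHF range.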
Projections to 2020 “It is always difficult to make predictions, especially about the future” Yogi Berra
How the HLT group plans to use the TPU
Does not include:
- Multicore inefficiency
- Data/MC effects
Assumptions behind the 8 ms
- TPU price in 2020 = 2014 price
- CPU price drops 16x
- No inefficiency factor for 400 jobs/node
- Additional 2x to the CPU for other uses
- Full cost of the TPU vs. "scalable" cost of the CPU
Summary
• We designed a system capable of track reconstruction at 40 MHz, with offline-like performance and ~1 µs latency.
• The cost of the TPU is an order of magnitude smaller than that of today's CPU solutions.
• Projections to the upgrade era made by the HLT and online groups predict that the CPU solution will become more convenient, based on some assumptions.
• The TB recommended a CPU-only solution as the baseline for the TDR.
People
Many thanks to all the people who contributed to the development of this design:
• A. Abba (MI) • F. Bedeschi (PI) • F. Caponio (MI) • M. Citterio (MI) • D. Corbino (CERN) • A. Cusimano (MI) • A. Geraci (MI) • S. Leo (PI) • F. Lionetto (PI) • P. Marino (PI) • M.J. Morello (PI) • N. Neri (MI) • A. Piucci (PI) • G. Punzi (PI) • L. Ristori (PI) • F. Ruffini (PI) • F. Spinella (PI) • S. Stracka (PI) • D. Tonelli (CERN) • J. Walsh (PI)
Basic principle
We inject the real hits (x_r, y_r)_k of the detector layers. For each cellular unit i in the parameter space (u,v), calculate the response R_i by summing over all hits and all layers. Tracks are peaking structures in the parameter space; find a track as a cluster of excited cells. (Trigger & Tracking Workshop, M.J. Morello, 3/17/14)
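The explicit form of R_i is not written out on the slide; one natural choice, consistent with the weighted response described earlier, is a Gaussian kernel (the notation below is ours: σ is the receptor width and (x̂_{i,k}, ŷ_{i,k}) is the intersection of cell i's reference track with layer k):

```latex
% Sketch (assumed Gaussian kernel) of the cell response summed over
% layers k and hits r:
R_i \;=\; \sum_{k}\sum_{r}
  \exp\!\left(-\,\frac{(x_{r,k}-\hat{x}_{i,k})^{2}+(y_{r,k}-\hat{y}_{i,k})^{2}}
                      {2\sigma^{2}}\right)
```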
Tracking efficiency
Reconstructed "offline" VELO+UT tracks using official LHCb-MC Bs→φφ with µ = 7.6.
Requirements on offline-reconstructed tracks:
• p > 3 GeV/c
• pT > 500 MeV/c
• geometrical acceptance (retina acceptance): 20 < theta < 60 mrad
• All VELO and UT hits sent to the LLTT without any requirements.
Found that ~95% of offline tracks have a compatible match within the geometric acceptance of our track processor.
Simulated full LHCb events (µ = 7.6)
[Plot: full LHCb-MC; generated tracks vs. tracks out of acceptance.]
NB: the system is simulable with 100% accuracy; C++ code available to users.
Software simulation
• 4/10/13: benchmark study on 6 VELOPIX layers + 2 UT planes
• Used 36,000 cell units in (r,φ) parameters
• (r,φ): polar coordinates on a virtual plane of track intersection
• Mapping using LHCb-MC ParticleGun
• Tracks from official-production Bs→φφ LHCb-MC
• L = 2 × 10^33 cm^-2 s^-1, sqrt(s) = 14 TeV, µ = 7.6
• DDDBtag = "dddb-20130408"
• CondDBtag = "simcond-20121001-vc-md100"
• No kinematic cuts applied
• No requirement on hits