Global Track Trigger status
C.Youngman

Items reviewed:
• reminder of what the GTT is,
• who is working on what,
• GTT rate and latency measurement status,
• STT interface status,
• algorithm status, and
• conclusions.
Reminder of what the GTT is
• What is the GTT: the MVD second level trigger (SLT), receiving:
  • MVD data (cluster information),
  • CTD SLT data (axial + stereo, but no z-by-time), and
  • STT SLT data (in preparation).
• MVD, CTD and STT are interfaced via PowerPC VME CPUs.
• GTT PC farm provided by Yale (12 dual 1 GHz PCs).
• Data from a single event are processed by a reconstruction algorithm on one farm PC.
• Data and decisions are transferred point-to-point over TCP/Ethernet through switches (a minimal transfer sketch follows this slide):
  • Fast Ethernet (PowerPCs, the EVB event sink and GSLT decision receive), and
  • Gigabit Ethernet (all farm connections).
• The GTT decision sent to the GSLT should improve:
  • basic track parameter resolutions and the primary vertex z resolution, and
  • at a later stage, possibly, tagging of events with secondary vertices.
• GTT latency must be comparable with the current CTD latency: ~12 ms mean with a tail extending to 25-30 ms at GFLT accept rates ≤ 500 Hz.
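The point-to-point TCP pushes named above can be pictured with a short sketch. The C fragment below only illustrates the framing idea (a length word followed by the event payload); the names gtt_connect and gtt_push_event, the 4-byte length header and the buffer handling are assumptions for this example, not the actual GTT code.

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Open a point-to-point TCP connection to a peer (e.g. a farm PC). */
static int gtt_connect(const char *host, uint16_t port)
{
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_port = htons(port);
    if (inet_pton(AF_INET, host, &a.sin_addr) != 1)
        return -1;
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;
    if (connect(s, (struct sockaddr *)&a, sizeof a) < 0) {
        close(s);
        return -1;
    }
    return s;
}

/* Push one event buffer, prefixed by its length in network byte order. */
static int gtt_push_event(int sock, const void *buf, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (write(sock, &hdr, sizeof hdr) != (ssize_t)sizeof hdr)
        return -1;
    const char *p = buf;
    while (len > 0) {                   /* write() may return a partial count on TCP */
        ssize_t n = write(sock, p, len);
        if (n <= 0)
            return -1;
        p += n;
        len -= (uint32_t)n;
    }
    return 0;
}

A receiver would read the 4-byte length word first and then loop until that many payload bytes have arrived; the same pattern serves the PPC-to-farm, farm-to-EVB and GSLT-decision connections.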
All GTT hardware is in the ZEUS hall and connected:
• PPC pushing CTD-SLT event data,
• PPC pushing STT-SLT event data,
• 3 PPCs pushing MVD cluster event data,
• 12 farm PCs running processing algorithms,
• network switches,
• PC sending MVD and GTT data to the EVB,
• PPC sending the GTT result to the GSLT, and
• PC receiving the GSLT result from the EVB.
Who is working on what
• MVD readout, EVB and GSLT interface: A.Polini.
• CTD+MVD tracking algorithm: M.Sutton, B.West*, R.Hall-Wilton and B.Straub (geometry).
• CTD SLT interface: S.Topp-Jørgenssen, A.Polini and M.Sutton.
• STT tracking algorithm: M.Soares.
• STT SLT interface: A.Stiefoutkin, A.Polini and H.-P.Jakob.
• GTT farm and network hardware: S.Dhawan.
• Run control, histogramming, etc.: M.Hayes* and C.Youngman.
* now at ….
GTT rates and latency status
• The GTT participated successfully in the January low-rate cosmic runs.
• No systematic rate and latency measurements have been made since the last collaboration meeting (excuses: increased MVD workload, availability of the CTD in the DAQ chain, cleanup of MVD/GTT software, etc.).
• Results presented Oct. 2001 were obtained with:
  • di-jet MC or real events with MVD hits added along offline-reconstructed tracks,
  • event data stored in and read out from CTD and MVD front-end buffers on RATETEST triggers, and
  • GSLT rejection of 1:10 with 6 GTT algorithms running.
• Latency at the GSLT is acceptable, i.e. ≤ the CTD GSLT latency.
GTT rates and latency status
• Are those results still OK? Check by comparing measurement runs with fixed-length, MVD-ADC-only input data:
  • previous (2001) measurement vs. new measurement (plots shown in the slides).
• Inference: the throughput rate has increased from ~475 to 690 Hz (!?). Why?
  • MVD ADC readout with a faster PPC,
  • each PPC now connected directly to the switch, and
  • tidy-up of software (SLT result formatting on the GTT rather than the GSLT interface, etc.).
• Conclude that the rate/latency with MVD+CTD input should still be “almost” OK (a bookkeeping sketch follows this slide).
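The quoted rates and latencies come from per-event bookkeeping of the kind sketched below. This is a hypothetical helper for illustration only (the names lat_add/lat_report and the use of gettimeofday are assumptions); it shows how a mean latency, a tail (maximum) and a throughput figure such as the 475 vs. 690 Hz numbers could be accumulated.

#include <stdio.h>
#include <sys/time.h>

/* Current wall-clock time in milliseconds. */
static double now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e3 + tv.tv_usec * 1e-3;
}

struct lat_stats { double sum, max, t0; long n; };

/* Record one event given its accept timestamp (ms, same clock as now_ms). */
static void lat_add(struct lat_stats *s, double t_accept_ms)
{
    double dt = now_ms() - t_accept_ms;   /* accept -> decision latency */
    if (s->n == 0)
        s->t0 = t_accept_ms;              /* remember start of the run */
    s->sum += dt;
    if (dt > s->max)
        s->max = dt;
    s->n++;
}

/* Print mean/max latency and the throughput rate over the run so far. */
static void lat_report(const struct lat_stats *s)
{
    double span_s = (now_ms() - s->t0) * 1e-3;
    if (s->n == 0 || span_s <= 0.0)
        return;
    printf("events=%ld  mean=%.1f ms  max=%.1f ms  rate=%.0f Hz\n",
           s->n, s->sum / s->n, s->max, (double)s->n / span_s);
}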
STT interface status
A video conference with Bonn, reviewing STT interfacing to the GTT, was held on 18/01/02. Decisions made:
• Hardware interface:
  • The 9 cables (8 links, 1 reset) would be laid from the STT SLT to the GTT interface rack. (done)
  • A VME crate, plus a PowerPC and a Nikhef 2TP board, would be installed at the interface rack and connected to the STT input cables. (done)
  • Test the cable/2TP connections using worm. (done)
• Software interface:
  • The identical handshake mechanism used by the GTT-CTD interface would be implemented at the 2TP TPM (H.-P.Jakob + A.Polini, by end of Feb.); a sketch follows this slide.
  • A GTT-specific STT bootable would be produced (H.-P.Jakob, done).
• Data format sent on the link:
  • Phase 1: offline format; if this proves too big (high latency), then
  • Phase 2: send only the data needed by the STT algorithm.
  • Where to strip data remains contentious: bulk stripping cannot be done at the PPC, which polls the TPM, whereas on-the-fly stripping in the TP network is attractive.
• Algorithm:
  • M.Soares will continue work on the algorithm within the test environment framework (standalone/debugging and GTT-embedded/rate and latency measurements).
  • A.Antonov agreed to provide MC datasets (incl. CTD, MVD and STT) for use with the test environment (needed in March; a first file is now available).
• Conclude: work is underway, but an algorithm will not appear overnight.
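The handshake mentioned under the software interface is, in essence, a poll-and-release protocol between the readout PPC and the 2TP transputer module (TPM). The sketch below is an assumption-laden illustration: the mailbox layout, field names and buffer size are invented for this example and do not describe the real 2TP registers.

#include <stdint.h>

struct tpm_mailbox {                    /* hypothetical VME-mapped layout */
    volatile uint32_t ready;            /* 1 = event waiting, 0 = buffer free */
    volatile uint32_t length;           /* event length in bytes */
    volatile uint8_t  data[4096];       /* event payload */
};

/* Poll until the TPM flags an event, copy it out and release the buffer.
 * Returns the event length in bytes. */
static uint32_t tpm_read_event(struct tpm_mailbox *mb, uint8_t *dst)
{
    while (mb->ready == 0)
        ;                               /* busy-wait; real code would bound this */
    uint32_t len = mb->length;
    for (uint32_t i = 0; i < len; i++)  /* copy payload out of the mailbox */
        dst[i] = mb->data[i];
    mb->ready = 0;                      /* handshake: hand the buffer back */
    return len;
}

The sketch also makes the stripping argument concrete: the polling PPC spends its time waiting and copying, so any bulk per-event data reduction there adds directly to the latency, whereas on-the-fly stripping inside the TP network happens before the PPC ever sees the data.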
GTT algorithm status
Extensive algorithm status was presented at the last collaboration meeting. Minor changes made since then:
• Memory allocation of buffer space is now done once only (a sketch of the pattern follows this slide).
• Geometry files modified to include constants derived by Bruce.
• More DQM histograms and status information added.
• Minor online/offline compatibility issues resolved.
The “day one” algorithm is ready (but which day is day one?).
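The “allocate once” item refers to the standard pattern of obtaining event buffers at configuration time and reusing them in the event loop, rather than calling malloc/free per event. A minimal sketch, with invented names and sizes (GTT_MAX_EVENT, gtt_algo_init, gtt_algo_event), not the actual GTT algorithm code:

#include <stdlib.h>

#define GTT_MAX_EVENT (512 * 1024)      /* assumed upper bound on event size */

static unsigned char *event_buf;        /* allocated once, reused per event */

/* Called once at configure time: the only allocation in the algorithm's life. */
int gtt_algo_init(void)
{
    event_buf = malloc(GTT_MAX_EVENT);
    return event_buf ? 0 : -1;
}

/* Called per event: works entirely inside the preallocated buffer. */
void gtt_algo_event(const unsigned char *data, size_t len)
{
    (void)data;
    (void)len;
    /* reconstruction would unpack into event_buf here; no per-event malloc */
}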
Conclusions
• The GTT with the “day one” CTD and MVD algorithm has been ready for some time.
• The GTT participated successfully in the cosmic runs.
• Need 2-3 months of stable data taking to:
  • prove GTT rates and latencies under real conditions,
  • get a first look at how the algorithm performs, and
  • get some understanding of how the results can be used at the GSLT.
• STT interfacing and algorithm preparation has begun.