Explore the use of GPU-accelerated triggers for underwater neutrino astronomy: the NEMO Phase II Tower, its DAQ system, and proposed parallel trigger methods, with muon-trigger tests in GPUs and a trigger data-handling proposal for the KM3-NEMO telescope.
Parallel Neutrino Triggers using GPUs for an Underwater Telescope KM3-NEMO
Bachir Bouhadef, Mauro Morganti, G. Terreni (KM3 Collaboration)
GPUs in High Energy Physics Workshop 2014
INFN Pisa & Physics Department of Pisa
Outline
• Underwater telescope for neutrino astronomy
• NEMO Phase II Tower
• DAQ system for the NEMO Tower
• A possible CPU-GPU DAQ for online muon-track selection
• A method for parallelizing the online trigger software of the NEMO-II Tower
• Muon trigger tests in GPUs
• A proposal for KM3-NEMO Tower trigger data handling
Underwater telescope for neutrino astronomy
• Muon tracks are detected via their Cherenkov light.
• Background: 40K decay (40K → 40Ca + e-, beta decay) at a rate of ~55 kHz per 10-inch PMT, plus light from bioluminescent organisms.
• Neutrino-induced muon tracks: (4-80) × 10^-5 Hz on top of the 55 kHz background.
NEMO Phase II Tower site
• 8-floor tower, ~500 m tall, deployed ~3.5 km deep off Portopalo di Capo Passero, Sicily (Italian peninsula), with a ~100 km electro-optical cable to shore.
• 4 PMTs per floor, 32 PMTs in total; 8 m floor arms, 40 m between floors.
NEMO Tower Phase II vs. KM3-NEMO Tower
• NEMO Phase II: 8 floors, 4 PMTs/floor, 8 m floor arms, 32 PMTs in total.
• KM3-NEMO: 14 floors (Floor 1 ... Floor 14), 6 PMTs/floor, 6 m floor arms, 84 PMTs in total.
• 32 PMTs × 55 kHz ≈ 1.7 Mhit/s.
Trigger and Data Acquisition System (NEMO Phase II), T. Chiarusi & F. Simeone, VLVNT 2013
Trigger and Data Acquisition System (NEMO Phase II), onshore
• The onshore Hit Managers (HM 0, HM 1) dispatch Time Slices (TS 0, TS 1) over a Gbit switch to the Trigger CPUs (TCPU 0, TCPU 1).
• A TS (Time Slice) covers 200 ms.
• The trigger in the TCPU: time-sorting of the PMT hits, then a charge threshold and time coincidences (simple coincidence, floor coincidence).
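The TCPU stages above (time-sort, charge threshold, simple coincidence) can be sketched as follows. This is a minimal illustration, not the collaboration's code: the hit layout, the 4-PMTs-per-floor mapping, and all threshold values are assumptions.

```python
def trigger_time_slice(hits, q_thr=0.5, coinc_ns=20, pmts_per_floor=4):
    """hits: list of (time_ns, pmt_id, charge) for one 200 ms Time Slice.
    Returns the simple-coincidence PMT pairs found."""
    # Stage 1: apply the charge threshold, then time-sort the surviving hits.
    sel = sorted((h for h in hits if h[2] > q_thr), key=lambda h: h[0])
    # Stage 2: simple coincidence = two distinct PMTs on the same floor
    # firing within coinc_ns of each other.
    pairs = []
    for i, (t1, p1, _) in enumerate(sel):
        for t2, p2, _ in sel[i + 1:]:
            if t2 - t1 > coinc_ns:
                break  # hits are time-ordered, no later hit can match
            if p1 != p2 and p1 // pmts_per_floor == p2 // pmts_per_floor:
                pairs.append((p1, p2))
    return pairs

hits = [(100, 0, 1.0), (110, 1, 1.2),  # same floor, 10 ns apart -> SC
        (500, 5, 0.2),                 # below the charge threshold
        (900, 8, 0.9)]                 # isolated hit
print(trigger_time_slice(hits))        # -> [(0, 1)]
```

The early `break` relies on the time ordering from stage 1, which keeps the coincidence search close to linear at background rates.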
Why GPU? A scalable programming model: a GPU organizes parallel work into blocks of threads.
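The block/thread model can be shown in miniature: a grid of blocks, each with the same number of threads, where every (block, thread) pair maps to one global work item. This is a pure-Python illustration of the indexing scheme, not real GPU code.

```python
def global_indices(n_blocks, threads_per_block):
    """CUDA-style flattening: global index = block * blockDim + thread."""
    return [b * threads_per_block + t
            for b in range(n_blocks)
            for t in range(threads_per_block)]

# 2 blocks of 4 threads cover 8 work items (e.g. 8 PMT hit buffers).
print(global_indices(2, 4))  # -> [0, 1, 2, 3, 4, 5, 6, 7]
```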
Triggers for muon detection
• After space, time, and charge calibrations, triggers are the first step in searching for candidate muon tracks.
• Most triggers are based on the arrival times of hits and a charge threshold, and treat all PMTs as equivalent.
• In addition to time-coincidence and charge triggers, is it possible to use a parallel trigger based only on hit times rather than on charge information, or one that uses the charge information in a safe way? One motivation: PMT charges may not be well calibrated.
A new parallel trigger for muon detection based on the time differences of muon hits
• Most of a muon track's hits fall within a fixed time window.
• At least 5 hits on different PMTs are needed to reconstruct a muon track.
• We therefore look for a number of hits N within a fixed time window.
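The time-only trigger just described can be sketched as a sliding window over the time-sorted hits, flagging any window that contains at least N hits on distinct PMTs. Function names and the default values (1000 ns, 5 hits) are illustrative assumptions.

```python
def n_hits_in_window(times, pmt_ids, tw_ns=1000, n_min=5):
    """times: sorted hit times; pmt_ids: matching PMT ids.
    True if some window of width tw_ns holds >= n_min hits on distinct PMTs."""
    lo = 0
    for hi in range(len(times)):
        # Shrink the window from the left until it fits within tw_ns.
        while times[hi] - times[lo] > tw_ns:
            lo += 1
        # Count distinct PMTs, since 5 hits on one PMT fix no track.
        if len(set(pmt_ids[lo:hi + 1])) >= n_min:
            return True
    return False

times = [0, 100, 200, 300, 400, 90000]
pmts  = [1, 2, 3, 4, 5, 6]
print(n_hits_in_window(times, pmts))  # -> True: 5 distinct PMTs within 400 ns
```

Because each window is independent of the others, this search maps naturally onto one GPU thread per time interval, which is the parallelization proposed in the following slides.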
We propose a DAQ system using a CPU-GPU architecture (onshore: Gbit switch, TGPU-CPUs, storage unit).
The TCPU is replaced by a TGPU-CPU; every second, each TGPU-CPU receives 5 Time Slices of 200 ms each.
CPU work
Step 1: 5 TTS (TTS0-TTS4) arrive from the network thread; 5 CPU threads place the PMT hits into the correct time intervals, each thread time-ordering its hits.
Step 2: Since the number of hits is not fixed, a new fixed-size structure must be prepared for the GPU:
• the number of threads must be a multiple of 5 and of 32;
• since the number of hits per thread cannot be predicted, a maximum number of hits per thread is fixed at 3 or 6 times the nominal rate;
• edge effects between threads and between TTSs are taken into account;
• threads with only a few hits should be avoided by choosing the optimal thread time interval.
The structure is then ready to be processed in the GPU.
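The repacking in Step 2 can be sketched as bucketing one second of time-sorted hits into fixed-size per-thread buffers, so the GPU sees a regular structure. The cap and padding values are illustrative; a real implementation would also duplicate hits near interval boundaries to handle the edge effects mentioned above.

```python
def pack_for_gpu(hits, total_ns, n_threads, max_hits):
    """hits: time-sorted (time_ns, pmt_id) tuples.
    Returns n_threads buffers, each padded to exactly max_hits slots."""
    dt = total_ns / n_threads                 # thread time interval
    buffers = [[] for _ in range(n_threads)]
    for t, pmt in hits:
        idx = min(int(t // dt), n_threads - 1)
        if len(buffers[idx]) < max_hits:      # hits beyond the cap are dropped
            buffers[idx].append((t, pmt))
    # Pad with sentinel entries so every buffer has the same fixed size.
    for buf in buffers:
        buf.extend([(-1, -1)] * (max_hits - len(buf)))
    return buffers

bufs = pack_for_gpu([(10, 0), (20, 1), (60, 2)], total_ns=100,
                    n_threads=2, max_hits=4)
print(len(bufs), len(bufs[0]))  # -> 2 4
```

Fixed-size buffers trade some wasted memory (the padding) for coalesced, predictable GPU memory accesses, which is why the cap is set from the nominal rate rather than the observed maximum.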
GPU work
Step 1: Sort all PMT hits of each TTS (TTS0-TTS3) using a classical algorithm (shell sort). The structure is then ready for trigger tagging.
Step 2: Trigger tagging. In L1, all possible triggers can be implemented; according to the L0+L1 efficiencies, the best is chosen to tag the events to be saved:
• 1 (L0): N7TW1000
• (L0): SC or AFC
• 2 (L1): N7TW1000 & SC
• 3 (L1): N7TW1000 & AFC
• 4 (L2): N7TW1000 & SC & AFC
• 5 (L2): N7TW1000 & ((SC & AFC) || (SC>1) || (AFC>1))
• 6 (L2): N7TW1000 & (SC || AFC)
• 7 (L0): Charge > Charge_THRHD
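One way to encode the trigger menu above: compute per-event flags (N7TW1000, SC and AFC counts, maximum charge) and tag the event with every trigger it fires. The flag names follow the slide; the function shape, the charge threshold value, and the tag strings are our illustrative assumptions.

```python
def tag_event(n7tw1000, n_sc, n_afc, max_charge, q_thr=2.5):
    """Tag an event with the L0/L1/L2 triggers it satisfies.
    n7tw1000: >= 7 hits in a 1000 ns window; n_sc / n_afc: coincidence counts."""
    sc, afc = n_sc > 0, n_afc > 0
    tags = []
    if n7tw1000:            tags.append("L0:N7TW1000")
    if sc or afc:           tags.append("L0:SC|AFC")
    if max_charge > q_thr:  tags.append("L0:Q>THR")
    if n7tw1000 and sc:     tags.append("L1:N7TW1000&SC")
    if n7tw1000 and afc:    tags.append("L1:N7TW1000&AFC")
    if n7tw1000 and sc and afc:
        tags.append("L2:N7TW1000&SC&AFC")
    if n7tw1000 and ((sc and afc) or n_sc > 1 or n_afc > 1):
        tags.append("L2:N7TW1000&(SC&AFC|SC>1|AFC>1)")
    if n7tw1000 and (sc or afc):
        tags.append("L2:N7TW1000&(SC|AFC)")
    return tags

# An event with a 7-hit cluster and 2 simple coincidences, no AFC:
print(tag_event(True, 2, 0, 1.0))
```

Tagging events with every fired trigger, rather than picking one, is what allows the L0+L1 efficiencies to be compared offline before the best combination is selected.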
Muon trigger tests in GPUs: the GPU cards used.
Muon trigger tests in GPUs
Trigger execution times for one tower of 32 PMTs at ~55 kHz: using 2 different GPU cards and one second of raw data for 32 PMTs at a background rate of ~55 kHz, applying all triggers. The black times are measured with the CPU idle; the red times are measured while the CPU executes other processes.
Muon trigger tests in GPUs
Using the same two GPU cards, we use one second of data from 84 PMTs (6 PMTs × 14 floors) at a background rate of 55 kHz, applying all triggers. The black numbers are the times measured with the CPU idle; the red numbers are the times measured while the CPU executes other processes.
Proposed CPU-GPU DAQ for the 8 KM3-Ita Towers
Conclusion and future work
• GPUs can also find a place in optical neutrino telescopes.
• More selective online algorithms can still be applied.
• Muon track filtering is the next step.
• Both the Tesla C2050 and the GTX TITAN can be a good choice.
Thank you for your attention
BACKUP SLIDES
@ 57 kHz of background