ATLAS – preliminary selection of LHC events
dr hab. Krzysztof Korcyl, Department XIV, ATLAS Experiment
The LHC accelerator at CERN
The first truly world-wide accelerator project
pp 7 TeV x 7 TeV; L_nom ~ 10^34 cm^-2 s^-1
2835 bunches of 10^11 particles each
I_b = 0.53 A, beam lifetime ~10 hours
(Diagram: the LHC ring with the SPS injector at the CERN site in Meyrin.)
The LHC experiments
CERN, the European Organisation for Nuclear Research; Large Hadron Collider experiments: ATLAS, CMS, ALICE, LHCb
The ATLAS experiment
ATLAS shown to scale against the 5-storey Building 40 at CERN
ATLAS – event size

Inner Detector      Channels    Fragment size (kB)
  Pixels            1.4x10^8    60
  SCT               6.2x10^6    110
  TRT               3.7x10^5    307

Muon Spectrometer   Channels    Fragment size (kB)
  MDT               3.7x10^5    154
  CSC               6.7x10^4    256
  RPC               3.5x10^5    12
  TGC               4.4x10^5    6

Calorimetry         Channels    Fragment size (kB)
  LAr               1.8x10^5    576
  Tile              10^4        48

Trigger             Channels    Fragment size (kB)
  LVL1              -           28

Total ATLAS event size: ~1.5 MB from 140 million channels.
At the bunch-crossing rate: 40 MHz x 1.5 MB = 60 TB/s.
The online system must reduce the data by a factor of 5x10^-6, from 60 TB/s down to 300 MB/s; ~300 MB/s is affordable (still 3 PB/year to store), so ATLAS will record at ~200 Hz.
This calls for a fast, highly selective and yet efficient trigger system.
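The slide's arithmetic can be checked in a few lines of Python (all numbers are taken directly from the slide):

```python
# 140 million channels give a ~1.5 MB event at the 40 MHz bunch-crossing rate.
bunch_crossing_hz = 40e6
event_size_bytes = 1.5e6

raw_rate = bunch_crossing_hz * event_size_bytes
print(raw_rate / 1e12)                   # prints: 60.0  (TB/s before selection)

storage_rate = 300e6                     # ~300 MB/s is affordable
print(storage_rate / raw_rate)           # prints: 5e-06 (online reduction factor)
print(storage_rate / event_size_bytes)   # prints: 200.0 (Hz recording rate)
```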
ATLAS – trigger system organisation

Rate-reduction chain: 40 MHz bunch crossing -> 100 kHz Level 1 accept -> 1-2 kHz Level 2 accept -> 100-200 Hz storage rate.

• At LHC energies interesting events are rare: 1 in 10^7 - 10^9 (except bb)
• At the 40 MHz bunch-crossing rate the event rate is beyond current offline processing and storage capabilities

Level 1 (hardware)
• calorimeter and muon trigger chambers
• counts multiplicities of clusters, jets, and muon tracks, and compares them to various threshold levels
• synchronous with the LHC; latency: 2.5 μs

Level 2 (software)
• access to full-granularity detector data
• uses only regions around Level 1 trigger objects as seeds for reconstruction (~10% of the detector)
• asynchronous; ~10 ms per event

Event Filter (software)
• accesses complete detector data (after events are fully built)
• asynchronous; runs offline algorithms (~1 s per event)
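The rate chain above implies a rejection factor at each level; a minimal sketch (using the upper end, 2 kHz, of the Level 2 accept range):

```python
# Rejection factor at each trigger level, from the rates quoted on the slide.
rates_hz = [("bunch crossing", 40e6), ("LVL1 accept", 100e3),
            ("LVL2 accept", 2e3), ("storage", 200.0)]

for (prev, r1), (cur, r2) in zip(rates_hz, rates_hz[1:]):
    print(f"{prev} -> {cur}: reduction x{r1 / r2:g}")
print(f"overall: x{rates_hz[0][1] / rates_hz[-1][1]:g}")
```

Level 1 carries the largest single reduction (x400), while the software levels refine the selection with progressively more detector data per event.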
First level of the trigger system

• Synchronous with the LHC; latency 2.5 μs
• Hardware based
• Calorimeters and muons only
• Coarse-granularity detector data
• Output rate up to ~75 kHz
• Output data: the LVL1 accept decision, a 256-bit pattern, and RoIs

(Diagram: Trigger/DAQ dataflow. Detector RODs feed the ROBs at 40 MHz (~1 PB/s) while data wait in 2.5 μs pipelines; the LVL1 calorimeter and muon triggers and the CTP issue LVL1 accepts at 75 kHz (~120 GB/s); the RoI Builder (ROIB) and LVL2 supervisor (L2SV) pass RoIs to the LVL2 processors (L2P), which request RoI data from the ROSs over the LVL2 network (~3 GB/s, ~10 ms per event); the Event Builder assembles accepted events at ~2 kHz and the Event Filter (~1 s per event) reduces them to ~200 Hz, writing ~1.5 MB events at ~300 MB/s.)
Level-1 Calorimeter EM/Tau Trigger
The algorithm is based on a sliding 4x4 window of TriggerTowers (~7200 objects with 0.1x0.1 granularity: analogue sums of calorimeter cells). An EM/Tau RoI is produced if the window satisfies all of the following conditions:
• The central 2x2 "core" cluster (EM+had) is a local ET maximum. This ensures that overlapping clusters cannot both produce RoIs.
• The most energetic of the four 2-tower EM clusters exceeds the EM cluster threshold (EM trigger), OR the sum of the most energetic EM cluster plus the central 2x2 hadronic towers exceeds the Tau cluster threshold (Tau trigger).
• The summed ET in the outer ring of 12 EM towers is less than or equal to the EM isolation threshold.
• The summed ET in the outer ring of 12 hadronic towers is less than or equal to the hadronic isolation threshold.
• The summed ET in the central 2x2 hadronic towers is less than or equal to the hadronic veto threshold (EM trigger only).
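The conditions above can be sketched as a toy implementation on a small tower grid. All threshold values are invented for illustration (they are not the real trigger menu), and the local-maximum test is simplified: a real implementation compares the core against neighbouring windows, while here the core only has to dominate its own window.

```python
import numpy as np

EM_THR, TAU_THR = 10.0, 15.0                 # illustrative cluster thresholds (GeV)
EM_ISOL, HAD_ISOL, HAD_VETO = 4.0, 3.0, 2.0  # illustrative isolation/veto (GeV)

def em_tau_roi(em, had, i, j):
    """Evaluate the 4x4 window whose top-left tower is (i, j).
    em/had are 2-D arrays of tower ET; returns "EM", "Tau" or None."""
    w_em, w_had = em[i:i+4, j:j+4], had[i:i+4, j:j+4]
    core_em, core_had = w_em[1:3, 1:3], w_had[1:3, 1:3]
    core_et = core_em.sum() + core_had.sum()

    # condition 1: the central 2x2 core is a local ET maximum (simplified)
    if core_et < (w_em.sum() + w_had.sum()) - core_et:
        return None

    # condition 2: the most energetic of the four 2-tower EM clusters
    best = max(core_em[0, 0] + core_em[0, 1], core_em[1, 0] + core_em[1, 1],
               core_em[0, 0] + core_em[1, 0], core_em[0, 1] + core_em[1, 1])
    is_em = best > EM_THR                        # EM trigger
    is_tau = best + core_had.sum() > TAU_THR     # Tau trigger

    # conditions 3-4: the outer rings of 12 towers must be isolated
    if (w_em.sum() - core_em.sum() > EM_ISOL or
            w_had.sum() - core_had.sum() > HAD_ISOL):
        return None

    # condition 5: hadronic core veto, applied to the EM trigger only
    if core_had.sum() > HAD_VETO:
        is_em = False

    if is_em:
        return "EM"
    return "Tau" if is_tau else None

em = np.zeros((8, 8))
had = np.zeros((8, 8))
em[3, 3] = 12.0                    # a single isolated 12 GeV EM tower
print(em_tau_roi(em, had, 2, 2))   # prints: EM
```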
Level-1 Calorimeter Jet Trigger
• 2x2 groups of towers are summed to form JetElements
• a sliding 4x4 window of JetElements (0.8 x 0.8) is used
• a particular window produces a Jet RoI if two conditions are met:
  - the central 2x2 JetElement "core" cluster is a local ET maximum
  - the total ET in the jet cluster is greater than the jet threshold
• 8 central and 4 forward jet thresholds
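The same sliding-window idea can be sketched for jets. The jet threshold is illustrative, and the local-maximum test is simplified as in the EM/Tau case (the core only has to dominate its own window):

```python
import numpy as np

JET_THR = 25.0   # illustrative jet ET threshold (GeV)

def jet_elements(towers):
    """Sum each non-overlapping 2x2 group of towers into one JetElement."""
    n, m = towers.shape
    return towers.reshape(n // 2, 2, m // 2, 2).sum(axis=(1, 3))

def jet_roi(je, i, j):
    """Does the 4x4 JetElement window at (i, j) produce a Jet RoI?"""
    window = je[i:i+4, j:j+4]
    core = window[1:3, 1:3].sum()
    if core < window.sum() - core:   # central 2x2 core is not a local maximum
        return False
    return bool(window.sum() > JET_THR)

towers = np.zeros((16, 16))
towers[6:8, 6:8] = 8.0                       # 32 GeV deposit in one 2x2 group
print(jet_roi(jet_elements(towers), 2, 2))   # prints: True
```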
Level-1 Calorimeter Missing ET and Total ET Triggers • for Missing ET, an (optional) threshold is applied to each JetElement, to provide further noise suppression. The phi coordinate of the JetElement is used to convert its ET to Ex and Ey components, which are then summed. Finally the global sum is compared with a set of trigger thresholds. • for Total ET, an (optional) threshold is applied to each JetElement, independent of that for the missing ET trigger. The total ET in all JetElements above this threshold is then summed and compared with a set of trigger thresholds. • The TriggerMenu may contain up to 8 missing ET thresholds and up to 4 total ET thresholds. The main output of EnergyTrigger is a transient EnergyRoI object, which contains the 3 RoI words produced by the trigger hardware, summarising the Ex, Ey and ET sums as well as the trigger thresholds passed.
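The missing-ET sum described above can be sketched directly: each JetElement's ET is resolved into Ex and Ey using its phi coordinate, the components are summed globally, and the magnitude is what gets compared with the trigger thresholds. The JetElement list and noise threshold below are invented for illustration.

```python
import math

def missing_et(jet_els, je_threshold=0.0):
    """jet_els: iterable of (et, phi) pairs; returns the missing-ET magnitude."""
    ex = sum(et * math.cos(phi) for et, phi in jet_els if et > je_threshold)
    ey = sum(et * math.sin(phi) for et, phi in jet_els if et > je_threshold)
    # the missing-ET vector is -(ex, ey); only its magnitude is thresholded
    return math.hypot(ex, ey)

jes = [(30.0, 0.0), (10.0, math.pi)]   # 30 GeV at phi=0, 10 GeV back-to-back
print(missing_et(jes))                 # prints: 20.0
```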
ATLAS Level-1 Muon Trigger
• Looks for coincidences in chamber layers within programmable roads (road width related to momentum)
• 6 programmable coincidence windows determine the momentum threshold (using the B-field deflection)
• Dedicated muon chambers with good timing resolution:
  - Barrel: Resistive Plate Chambers (RPC)
  - Endcaps: Thin Gap Chambers (TGC)
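A minimal illustration of the programmable-road idea: bending in the magnetic field displaces a muon's hit in the confirm layer relative to a straight-line extrapolation from the pivot layer; higher-pT muons bend less, so narrower roads select higher momentum thresholds. The road half-widths below are invented for illustration (the real system has 6 programmable windows).

```python
ROADS_MM = {6: 120.0, 10: 60.0, 20: 25.0}   # pT threshold (GeV) -> road half-width

def passed_thresholds(pivot_hit_mm, confirm_hit_mm):
    """Return the pT thresholds whose road contains the confirm-layer hit."""
    deflection = abs(confirm_hit_mm - pivot_hit_mm)
    return sorted(pt for pt, half_width in ROADS_MM.items()
                  if deflection <= half_width)

print(passed_thresholds(500.0, 540.0))   # prints: [6, 10]
```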
Key features of the ATLAS trigger strategy
• Regions of Interest
  - HLT uses Regions of Interest
  - reduces data bandwidth at LVL2
  - reduces processing time
• Early rejection
  - three-level trigger
  - steps within LVL2 and EF
  - reduces processing time
  - reduces decision latency
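A rough illustration of why RoIs reduce the LVL2 bandwidth: compare reading out full events at the LVL1 accept rate with reading only RoI data (~10% of the detector, per the slides). The 10% figure is an upper bound; stepwise early rejection lowers the real demand further, toward the ~3 GB/s quoted for the LVL2 network.

```python
lvl1_rate_hz = 75e3     # LVL1 accept rate
event_size_b = 1.5e6    # full event size in bytes

full_readout = lvl1_rate_hz * event_size_b   # full events: ~112.5 GB/s
roi_readout = full_readout * 0.10            # RoI data only: ~11 GB/s upper bound
print(full_readout / 1e9, roi_readout / 1e9)
```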
Second level of the trigger system

• Asynchronous, software based
• ~500 nodes (dual four-core 2 GHz CPUs)
• Full detector granularity within the Regions of Interest (RoIs) seeded by LVL1
• Fast reconstruction; average execution time ~10 ms
• Output rate up to ~3.5 kHz
• Event Builder: ~100 dual-CPU 2 GHz nodes

(Diagram: the Trigger/DAQ dataflow with LVL2 highlighted: the L2SV assigns RoIs to the L2Ps, which pull RoI data from the ROSs over the LVL2 network (~3 GB/s, ~10 ms per event); LVL2 accepts go to the Event Builder at 3.5 kHz and on to the Event Filter, which writes ~1.5 MB events at ~200 Hz (~300 MB/s).)
Modelling the LVL2 architecture
Switch-based vs bus-based readout; the model addresses:
• scalability (ROBIN)
• reliability
• number and type of switches
• granularity of network traffic
• potential congestion points
Testbed setup (combined)
• up to 18 ROSs (ROS01-ROS24 range; emulated S-link input)
• up to 16 SFIs
• up to 12 L2PUs (L2P01-L2P14 range)
• up to 6 L2SVs, plus DFM and pROS
• switches: Foundry FastIron 800, BATM T6, Foundry EI
Verification – testbed simulations
XXX-th IEEE-SPIE Joint Symposium, Wilga 2012
Higher levels of the trigger system

LVL2:
• asynchronous, software based
• ~500 nodes (dual four-core 2 GHz CPUs)
• full detector granularity within the Regions of Interest (RoIs) seeded by LVL1
• fast reconstruction; average execution time ~10 ms
• output rate up to ~3.5 kHz

Event Filter (EF):
• ~1600 nodes (dual four-core 2 GHz CPUs)
• seeded by Level 2
• full detector granularity; potential full-event access
• offline algorithms; average execution time ~1 s
• output rate up to ~200 Hz

Event Builder: ~100 dual-CPU 2 GHz nodes

(Diagram: full Trigger/DAQ dataflow, as on the previous slides; event size ~1.5 MB, storage at ~300 MB/s.)
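The farm sizes above can be cross-checked against the quoted rates and per-event times (essentially Little's law: busy CPUs ~ input rate x mean service time). The 1.5x headroom factor is an assumption, added here to absorb rate and processing-time fluctuations.

```python
import math

def cpus_needed(rate_hz, time_per_event_s, headroom=1.5):
    """Minimum number of cores to sustain the given input rate."""
    return math.ceil(rate_hz * time_per_event_s * headroom)

print(cpus_needed(75e3, 10e-3))   # LVL2: 75 kHz x 10 ms -> 1125 cores
print(cpus_needed(3.5e3, 1.0))    # EF: 3.5 kHz x 1 s -> 5250 cores
```

Both figures are consistent in order of magnitude with the ~500 LVL2 nodes and ~1600 EF nodes quoted, given several cores per node.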
Model and the real system
Plots compare the model with measurements:
• L2PU data-collection time from the ROSs (run 191190)
• evolution of the L2 decision time (no algorithms)
• evolution of the event-building time
• event-building time distribution (run 191190)
Remote processing of ATLAS trigger-system data

(Diagram: SFIs at CERN feed, via a Back-End Network and a packet-switched WAN (GEANT lightpath), remote processing farms (PF) in Copenhagen, Edmonton, Kraków and Manchester, alongside the local event-processing farms; accepted events go through the SFOs to mass storage in the CERN Computing Centre.)
Tests of the network infrastructure between CERN and Kraków

(Diagram: an SFI and EFD at CERN connected over GEANT and PIONIER to the CYFRONET-Kraków farm, with a Resource Broker (RB), worker nodes (WN) and a SiteManager (SM). Manchester, Copenhagen and Edmonton connect over the same GEANT infrastructure.)
Using Grid resources in ATLAS data processing

(Diagram: an SFI streams events (Ev x, Ev y, Ev z) into a SharedHeap on Grid worker nodes (WN) running PT RemoteWorker tasks; an EFD with PT/PTIO processes forwards accepted events to the SFO. The Grid farms are managed by a SiteManager and a RealTimeDispatcher, coordinated with the ResourceBroker by the ATLAS operator at CERN.)