ECFA - TDOC • ECFA HL-LHC workshop Trigger/DAQ/Offline/Computing preparatory group • Wesley H. Smith, U. Wisconsin - Madison • David Rousseau, LAL-Orsay • May 22, 2013
Membership • ALICE: Pierre Vande Vyvre, Thorsten Kollegger, Predrag Buncic • ATLAS: David Rousseau, Benedetto Gorini, Nikos Konstantinidis • CMS: Wesley Smith, Christoph Schwick, Ian Fisk, Peter Elmer • LHCb: Renaud Legac, Niko Neufeld • First meeting: yesterday, May 21.
Mandate • From draft of 2nd May: • « The overall goals are similar to those defined for the detector systems. The group should assess the requirements for the trigger and subsequent data processing, and the benefit from track triggering (especially for ATLAS and CMS) and higher rate at the input and output of the High Level Trigger. It should assess the availability and potential for technical solutions on the time scale of the projects, including cost considerations. It will propose future actions, possibly common to all experiments. »
Documentation • Relevant documents identified so far • ALICE Upgrade LoI: LHCC-2012-012 • ATLAS Phase 2 Upgrade LoI: LHCC-2012-022 • CMS draft Phase 2 upgrade document available over the summer of 2013 • LHCb Framework Upgrade TDR: LHCC-2012-007 • LHCC common document on computing resources for Run 2 being prepared for summer 2013. • For technology forecasting: document from Bernd Panzer (CERN-IT), regularly updated: • https://espace.cern.ch/WLCG-document-repository/Technical_Documents/Technology_Market_Cost_Trends_2012_v23.pdf
Topics I • Resource estimates • Parallelisation & vectorisation • Using the full power of available processors • Use of heterogeneous resources, in particular GPU & ARM processors • Both online and offline • Maintainability (no code rewrite for each new processor generation) • Merging of High Level Trigger and offline software development • Future of server PC architecture • Database evolution • Production tools evolution • Cloud technologies • In particular, use of online (HLT plus Tier-0) resources for offline reconstruction during down time, even short ones
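To make the parallelisation & vectorisation item above concrete, here is a toy sketch (not from the slides; the quantities and selection are invented for illustration): the same energy sum written as a scalar Python loop and as a NumPy vectorised expression, which compiled array kernels can execute far more efficiently on modern processors.

```python
import numpy as np

# Toy example (illustrative only): sum transverse energies of objects
# in a central |eta| region, scalar loop vs. vectorised form.

def sum_et_scalar(et, eta, eta_max=2.5):
    """Plain Python loop: one object at a time."""
    total = 0.0
    for e, h in zip(et, eta):
        if abs(h) < eta_max:      # central-region selection
            total += e
    return total

def sum_et_vectorised(et, eta, eta_max=2.5):
    """Same computation expressed on whole arrays at once."""
    et, eta = np.asarray(et), np.asarray(eta)
    return float(et[np.abs(eta) < eta_max].sum())

et  = [10.0, 25.0, 5.0, 40.0]
eta = [0.3, -1.2, 3.1, 2.0]
assert sum_et_scalar(et, eta) == sum_et_vectorised(et, eta)  # 75.0
```

The two forms give identical results; the vectorised one delegates the loop to optimized, SIMD-friendly compiled code, which is the kind of gain the topic list is pointing at.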
Topics II • Increase of rate from LVL0 to HLT to readout • L1 complexity vs. HLT input rates • L1 trigger latency • L1 track triggers • Impact of higher-bandwidth links & denser optical interconnects in the L1 trigger • Use of FPGAs in the L1 trigger • Event building architectures • HLT specialized track processing • Depends on resources available: CPU but also link speed • Simulation of HLT • Use of GPUs in HLT • Impact of detector timing improvements (~100 ps) • e.g. crystal calorimeters (CMS: PbWO4 has ~150 ps, LYSO < 100 ps)
Common Tool Developments • Identified so far (not meant to be complete): • GaudiHive • ATLAS + LHCb • Also the concurrency working group, chaired by Pere Mato • CVMFS: all 4 experiments • Frontier/Squid: ATLAS + CMS • XRootD federation common development • FTS common development • Grid monitoring tools • Production tools, some commonalities: • E.g. CMS is benchmarking PanDA (ATLAS) for analysis jobs • Grid middleware: how will it evolve? • External SW with experiments' participation: • ROOT, Geant4, FastJet • Other areas to be identified
Comments & Questions • Need a commonly agreed scenario on LHC running conditions • For example: centre-of-mass energy 14 TeV, luminosity L = 5E34, bunch spacing 25 ns, pileup average 140, peak 192, effective running time 10^7 s per year • Support for physics studies for the workshop? • What do they need? • We need to provide feedback on luminosity levelling & the length of the luminous region (longer makes the track trigger easier, but not so long as to hurt acceptance); also insist on 25 ns vs. 50 ns • Organisation of the 10th June workshop?
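As a cross-check of the example running-conditions scenario above, a minimal sketch of the standard pileup relation ⟨μ⟩ = L · σ_inel / (n_b · f_rev). The inelastic pp cross section (~85 mb at 14 TeV) and the number of colliding bunches (~2808 at 25 ns spacing) are assumptions not stated on the slide; with them, L = 5E34 reproduces roughly the quoted average pileup of 140.

```python
# Hedged sketch: estimate average pileup from luminosity.
# sigma_inel ~ 85 mb and n_b ~ 2808 are ASSUMED values, not from the slides.

SIGMA_INEL_MB = 85.0     # assumed inelastic pp cross section at 14 TeV [mb]
MB_TO_CM2 = 1e-27        # 1 mb = 1e-27 cm^2
N_BUNCHES = 2808         # assumed colliding bunches at 25 ns spacing
F_REV = 11245.0          # LHC revolution frequency [Hz]

def average_pileup(lumi, sigma_mb=SIGMA_INEL_MB, n_b=N_BUNCHES, f_rev=F_REV):
    """<mu> = L * sigma_inel / (n_b * f_rev), with L in cm^-2 s^-1."""
    return lumi * sigma_mb * MB_TO_CM2 / (n_b * f_rev)

mu = average_pileup(5e34)
print(f"average pileup ~ {mu:.0f}")   # ~135, consistent with the quoted 140
```

Under these assumptions the scenario is internally consistent; a slightly larger cross section or bunch-by-bunch variations would account for the quoted peak value of 192.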