
LHCb Computing Status Report: DAQ, ECS, Software, Facilities



Presentation Transcript


  1. LHCb Computing Status Report: DAQ, ECS, Software, Facilities
  John Harvey, CERN/EP
  Meeting with LHCC Referees, 27 November 2000

  2. LHC-B Detector: Data Rates
  [Diagram: LHCb Trigger/DAQ/ECS architecture. The sub-detectors (VDET, TRACK, ECAL, HCAL, MUON, RICH) are read out at the 40 MHz bunch-crossing rate (40 TB/s) into the Level-0 front-end electronics. The Level-0 trigger (fixed latency 4.0 µs) reduces the rate to 1 MHz (1 TB/s); the Level-1 trigger (variable latency < 1 ms) reduces it to 40 kHz. Front-End Multiplexers (FEM) and front-end links (6 GB/s) feed the Read-out Units (RU), which send data over the Read-out Network (RN, 6 GB/s) to the Sub-Farm Controllers (SFC). The Level-2 (~10 ms) and Level-3 (~200 ms) event filter runs on the CPU farm and writes to storage at 50 MB/s. Timing & Fast Control, throttle signals, and Control & Monitoring complete the system. A back-of-envelope check of these rates follows below.]
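As a quick consistency check of the numbers in the diagram, dividing each quoted bandwidth by the corresponding event rate gives the implied average event (or crossing) size at each stage. This is only a back-of-envelope sketch; the derived sizes are not stated on the slide.

```cpp
// Back-of-envelope check of the rates quoted in the architecture diagram:
// implied event size = bandwidth / event rate at each stage.
#include <cstdio>

int main() {
    struct Stage { const char* name; double bandwidthBps; double rateHz; };
    const Stage stages[] = {
        {"Detector -> L0 front-end (40 TB/s at 40 MHz)", 40e12, 40e6},
        {"After Level-0            (1 TB/s at 1 MHz)",    1e12,  1e6},
        {"After Level-1 / RU links (6 GB/s at 40 kHz)",   6e9,  40e3},
    };
    for (const Stage& s : stages)
        std::printf("%-48s ~%.0f kB per event\n",
                    s.name, s.bandwidthBps / s.rateHz / 1e3);
    return 0;
}
```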

  3. LHCb TFC System
  [Diagram: TFC architecture, showing the LHC clock fanout (BC and BCR), the L0 trigger, a local trigger, an optional L1 trigger, the Readout Supervisors, the programmable TFC switch and the L0/L1 throttle switches, TTC distribution (TTCtx modules, optical couplers, TTCrx receivers) to the front-end electronics (L0 and L1 electronics, ADCs, DSPs, L1 buffers, throttle ORs, control), and on to the DAQ.]
  • Readout and Throttle Switches
    • Programmable for 'partitioning' (see the sketch below)
    • Design reviewed in October '00
    • Prototypes in February '01
  • Readout Supervisor
    • Functional specification done
    • Design in progress
    • Review scheduled for April '01
    • Prototype scheduled for October '01
  • TFC system test
    • Acquire components
    • Test L1 broadcast via channel B
    • Start June '01
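The 'partitioning' that the programmable switches provide means a subset of sub-detectors can be driven by its own Readout Supervisor, e.g. for stand-alone test or calibration runs, independently of the rest of the detector. The following is a purely conceptual C++ sketch of that idea; the names, the bitmask representation and the number of switch outputs are illustrative assumptions, not the actual switch programming model.

```cpp
// Conceptual model of TFC partitioning: a partition is a set of switch
// outputs (sub-detector readout branches) owned by one Readout Supervisor.
#include <bitset>
#include <iostream>
#include <string>

constexpr std::size_t kSwitchOutputs = 16;   // illustrative number of outputs

struct Partition {
    std::string readoutSupervisor;           // supervisor driving the partition
    std::bitset<kSwitchOutputs> members;     // which switch outputs it owns
};

int main() {
    Partition veloTest{"RS-2", {}};          // hypothetical VELO test partition
    veloTest.members.set(0).set(1).set(2);   // three readout branches

    Partition physics{"RS-1", {}};           // whole-detector physics partition
    physics.members.set();

    // Two partitions can run concurrently only if they share no outputs.
    bool independent = (veloTest.members & physics.members).none();
    std::cout << "VELO test partition:  " << veloTest.members << '\n'
              << "Physics partition:    " << physics.members  << '\n'
              << "Can run concurrently: " << independent << '\n';   // prints 0
    return 0;
}
```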

  4. LHCb Readout Unit
  • New version of the RU design (v2): fewer chips, fewer layers, lower cost
  • First prototype in the first week of December '00
  • Programming of the FPGA code implementing the readout protocols is underway
  • Working and tested modules expected by end of January '01
  • Integration tests will start March '01

  5. LHCb Event Builder
  [Plot: NIC-to-NIC throughput (bytes/µs) vs. frame size (bytes, 1 to 10000 on a log scale), showing measured data, a fit, and an extrapolation without the minimum-frame-size limit. Test setup: two PC/Linux machines with Gigabit Ethernet NICs on PCI, connected over the CERN network.]
  • Studied Myrinet (buffers needed); now focusing on Gigabit Ethernet
  • Test setup between 2 PCs
    • Use >95% of nominal bandwidth for frames >512 bytes (512 bytes -> ~230 kHz; see the check below)
    • Can send out frames at frequencies up to 1.4 MHz for 64-byte frames
  • Implement event building in the NIC
    • Frequency of ~100 kHz demonstrated
    • Event building at Gbit speeds demonstrated for frames > ~1 kB
  • Tested 1-on-1 event building over a switch in the CMS test bed
    • Fixed protocol with the RU
    • Results presented at DAQ2000
  • Now studying flow control in the switch and preparing a full-scale test on the CMS test stand (16 on 16)
  • More detailed simulation of a full-scale GbE readout network to be done
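The frame-rate figures above follow roughly from the raw Gigabit Ethernet line rate. A short check, assuming (this is not stated on the slide) that the quoted frame size is the Ethernet frame itself and that each frame additionally occupies 20 bytes of preamble and inter-frame gap on the wire:

```cpp
// Maximum frame rate on Gigabit Ethernet: 125 MB/s divided by the bytes each
// frame occupies on the wire (frame + 8 B preamble/SFD + 12 B inter-frame gap).
#include <cstdio>

int main() {
    const double wireBytesPerSec = 1.0e9 / 8.0;   // nominal GbE: 125 MB/s
    const double overheadBytes   = 8 + 12;        // preamble/SFD + IFG
    for (double frame : {64.0, 512.0, 1024.0}) {
        double maxRateHz = wireBytesPerSec / (frame + overheadBytes);
        std::printf("%4.0f-byte frames: up to ~%.0f kHz\n", frame, maxRateHz / 1e3);
    }
    return 0;
}
// 512-byte frames come out near the ~230 kHz quoted on the slide; the
// 64-byte ceiling of ~1.49 MHz brackets the 1.4 MHz the NIC was measured
// to sustain.
```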

  6. ECS Interface to Electronics Modules
  [Diagram: ECS interface options: a PC connected over Ethernet to a credit-card PC mounted on the electronics module, providing JTAG, I2C and parallel-bus access; alternative serial and parallel master connections from a PC to I2C/JTAG slaves on the module.]
  • Select a reduced number of solutions
  • Support (HW and SW) for the integration of the selected solutions
  • Ethernet to credit-card PC: for use in the counting room
    • Test board being developed
    • Study the interface of the CC-PC to the parallel bus, I2C and JTAG (an illustrative access sketch follows below)
    • Test functionality (RESET) and measure noise
    • Prototype expected in January '01
    • Results end of March '01
  • N.B. two other solutions considered for use in high-radiation areas:
    • SPAC + long-distance I2C/JTAG
    • CMS tracker CCU
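For the credit-card PC option, register access to the front-end electronics looks like ordinary bus access from a small embedded Linux host. As a rough illustration of the kind of operation involved (not the actual LHCb CC-PC software), here is a minimal sketch using the standard Linux i2c-dev interface; the bus device, slave address and register values are placeholders.

```cpp
// Write one register of an I2C slave from a Linux host, e.g. to issue a
// RESET command to a front-end chip; addresses below are placeholders.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>
#include <cstdio>

int main() {
    int fd = open("/dev/i2c-0", O_RDWR);                // I2C bus on the CC-PC
    if (fd < 0) { std::perror("open"); return 1; }

    const int slaveAddr = 0x40;                          // placeholder FE chip address
    if (ioctl(fd, I2C_SLAVE, slaveAddr) < 0) { std::perror("ioctl"); return 1; }

    unsigned char cmd[2] = {0x01, 0xA5};                 // register 0x01 := 0xA5
    if (write(fd, cmd, 2) != 2) { std::perror("write"); return 1; }

    close(fd);
    return 0;
}
```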

  7. ECS Control System Prototype
  • Aim is to distribute with the SCADA license a framework in which users can easily implement sub-systems
  • First prototype comprises:
    • Configuration rules and conventions (naming, colors, etc.)
    • Tools for device integration
    • Hierarchical control & partitioning, based on finite state machines and SCADA (a conceptual sketch follows below)
    • Automatic UI generation, based on SCADA
  • Plan is to use the prototype in the LHCb test beam
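To make the hierarchical FSM idea concrete: commands flow down a tree of control units, leaf device units change state, and each parent derives its own state from its children. The sketch below is only conceptual; the real prototype builds this on SCADA, and the state and command names here are invented.

```cpp
// Toy hierarchical finite-state-machine control tree: a command propagates
// down to the leaves, and a parent's state is summarised from its children.
#include <iostream>
#include <string>
#include <vector>

enum class State { NotReady, Ready, Running, Error };

struct ControlUnit {
    std::string name;
    State state = State::NotReady;
    std::vector<ControlUnit*> children;

    void command(const std::string& cmd) {
        for (ControlUnit* c : children) c->command(cmd);       // propagate down
        if (children.empty()) {                                 // leaf device unit
            if      (cmd == "configure" && state == State::NotReady) state = State::Ready;
            else if (cmd == "start"     && state == State::Ready)    state = State::Running;
            else if (cmd == "stop"      && state == State::Running)  state = State::Ready;
        } else {                                                // derive parent state
            state = children.front()->state;
            for (ControlUnit* c : children)
                if (c->state != state) { state = State::Error; break; }
        }
    }
};

int main() {
    ControlUnit velo{"VELO"}, rich{"RICH"}, lhcb{"LHCb"};
    lhcb.children = {&velo, &rich};
    lhcb.command("configure");
    lhcb.command("start");
    std::cout << "LHCb top node running: "
              << (lhcb.state == State::Running) << '\n';        // prints 1
    return 0;
}
```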

  8. Online Team Members
  DAQ:
  • Beat Jost – project leader (CERN staff)
  • Jean-Pierre Dufey – Readout Network (CERN staff)
  • Marianna Zuin – Readout Network (technical student)
  • Richard Jacobsson – TFC (CERN staff)
  • Niko Neufeld – Readout Network (CERN fellow)
  • EP/ED group – Readout Unit (CERN staff)
  • Zbigniew Guzik – engineer (Warsaw)
  ECS:
  • Clara Gaspar – project leader (CERN staff)
  • Wolfgang Tejessy – JCOP (CERN staff)
  • Richard Beneyton – SCADA in test beam (cooperant)
  • Sascha Schmeling – SCADA in test beam (CERN fellow)
  (Slide legend: members leaving in 2000 / arriving in 2000.)

  9. Software Framework - GAUDI
  • GAUDI v6 released on November 10th
    • Enhanced features, e.g. event tag collections, XML browser, detector geometry and event display
    • 110,000 lines of code, 460 classes
  • Good collaboration with ATLAS
    • New services by ATLAS – auditors, histograms in ROOT, HepMC
    • Moving to an experiment-independent repository
  • The most urgent tasks for the next release are:
    • Event model – with sub-detector groups
    • Detector description – with sub-detector groups
    • Conditions database – with CERN/IT
    • Consolidation and enhancements (code documentation with Doxygen)
    • Further contributions expected from ATLAS
    • Scripting language for interactive work
  • HARP, GLAST and OPERA are also users of GAUDI
  (A sketch of the shape of a GAUDI algorithm follows below.)
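For context, a user algorithm in GAUDI is a class derived from the framework's Algorithm base class, with initialize/execute/finalize hooks that the framework calls around the job and for every event. The skeleton below is a minimal sketch of that shape; header paths, the message-stream idiom and the component registration macros (omitted here) have varied between GAUDI versions, so it should not be read as the exact v6 API.

```cpp
// Minimal GAUDI-style algorithm skeleton: the framework drives the
// initialize/execute/finalize cycle, calling execute() once per event.
#include "GaudiKernel/Algorithm.h"
#include "GaudiKernel/MsgStream.h"

class CountEvents : public Algorithm {
public:
  CountEvents(const std::string& name, ISvcLocator* svcLoc)
    : Algorithm(name, svcLoc), m_nEvents(0) {}

  StatusCode initialize() override {
    MsgStream log(msgSvc(), name());
    log << MSG::INFO << "starting up" << endmsg;
    return StatusCode::SUCCESS;
  }

  StatusCode execute() override {          // called once per event
    ++m_nEvents;
    return StatusCode::SUCCESS;
  }

  StatusCode finalize() override {
    MsgStream log(msgSvc(), name());
    log << MSG::INFO << "processed " << m_nEvents << " events" << endmsg;
    return StatusCode::SUCCESS;
  }

private:
  long m_nEvents;
};
```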

  10. Software Applications
  • GAUDI-based event reconstruction (BRUNEL)
    • BRUNEL v1r5 released this month
    • Physics functionality entirely based on wrapped FORTRAN code (see the linkage sketch below)
    • First public release of the C++ track fit integrated & tested
    • Pile-up implemented and spill-over being implemented
    • Available for production tests
  • Migration of detector software to C++
    • Progress in all areas: digitisation, geometry description, …
    • Tracking: digitisation and pattern recognition almost ready for public release
    • e.g. VELO, CAL: event model and geometry description ~complete
  • Current activities reflect the TDR schedule
    • VELO, MUON (and Tracking) now on hold until after the TDRs are produced
    • RICH and CAL: new software getting higher priority
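The "wrapped FORTRAN" approach means the legacy reconstruction code stays in FORTRAN and is called from the C++ framework through C linkage. Below is a self-contained sketch of the mechanism; the routine name TRFIT and its arguments are invented for illustration, and the C++ stub at the end merely stands in for the compiled FORTRAN library so the example runs on its own.

```cpp
// Calling a legacy FORTRAN routine from C++.  Hypothetical FORTRAN side:
//   SUBROUTINE TRFIT(NHITS, HITS, CHI2)
// With g77/gfortran name mangling the symbol is lower case with a trailing
// underscore, and every argument is passed by reference.
#include <iostream>
#include <vector>

extern "C" void trfit_(const int* nhits, const double* hits, double* chi2);

double fitTrack(const std::vector<double>& hits) {
    const int nhits = static_cast<int>(hits.size());
    double chi2 = 0.0;
    trfit_(&nhits, hits.data(), &chi2);          // call into the "FORTRAN" routine
    return chi2;
}

// Stand-in body so this sketch links and runs on its own; in a real wrapper
// the symbol comes from the compiled FORTRAN library instead.
extern "C" void trfit_(const int* nhits, const double* /*hits*/, double* chi2) {
    *chi2 = *nhits * 1.0;                        // dummy result
}

int main() {
    std::cout << "chi2 = " << fitTrack({0.1, 0.2, 0.3}) << '\n';
    return 0;
}
```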

  11. Software Applications – GEANT4
  • Developing an interface to G4 for GAUDI applications (GiGa)
    • Isolates G4 code from GAUDI (see the interface sketch below)
    • Provides the way to input detector geometry and kinematics to G4
    • Handles passing of commands to G4 and retrieval of events from G4
  • More generally, Geant4 physics is being tested
    • Now by BaBar, ATLAS, ALICE and space applications
    • Some disagreements with data and with G3 seen and being studied
  • Plans in LHCb
    • Calorimeter: simulation of shower production in the prototype and comparison with existing simulations (G3) and with test-beam data
    • RICH: study production and detection of Cherenkov photons in RICH1 using the TDR geometry and compare results
    • Integration of these developments in GAUDI using GiGa
    • Measure performance
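The point of GiGa is that GAUDI-side code never touches Geant4 classes directly; it only talks to an abstract "simulate this event" interface, behind which the Geant4 machinery (run manager, geometry conversion, primary generation) is hidden. The interface below is a hypothetical illustration of that separation, not the real GiGa API; the types and the dummy implementation are invented.

```cpp
// Illustrative separation between the framework and the simulation engine:
// GAUDI-side code sees only an abstract interface, never Geant4 headers.
#include <iostream>
#include <string>
#include <vector>

struct Particle { int pdgId; double px, py, pz, e; };     // generator-level input
struct SimHit   { std::string detector; double edep; };   // simulated output

class IG4Interface {
public:
    virtual ~IG4Interface() = default;
    virtual void setGeometry(const std::string& detectorDescription) = 0;
    virtual void setPrimaries(const std::vector<Particle>& event) = 0;
    virtual std::vector<SimHit> simulateEvent() = 0;
};

// Trivial stand-in so the sketch runs; a real implementation would drive
// Geant4 (build the geometry, inject the primaries, run the event, and
// translate the Geant4 hits back into framework objects).
class FakeG4 : public IG4Interface {
public:
    void setGeometry(const std::string&) override {}
    void setPrimaries(const std::vector<Particle>& event) override { m_n = event.size(); }
    std::vector<SimHit> simulateEvent() override {
        return std::vector<SimHit>(m_n, SimHit{"ECAL", 1.0});   // dummy hits
    }
private:
    std::size_t m_n = 0;
};

int main() {
    FakeG4 g4;
    g4.setGeometry("<detector description>");
    g4.setPrimaries({{211, 0.0, 0.0, 5.0, 5.0}});               // one fake pion
    std::cout << "simulated hits: " << g4.simulateEvent().size() << '\n';
    return 0;
}
```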

  12. LHCb CORE Software Team
  • Pere Mato – project leader (CERN staff)
  • Florence Ranjard – code librarian (CERN staff)
  • Marco Cattaneo – BRUNEL (CERN staff)
  • Agnieszka Jacholowska – SICb (Orsay)
  • Markus Frank – GAUDI (CERN staff)
  • Pavel Binko – GAUDI (CERN staff)
  • Rado Chytracek – GAUDI (doctoral student)
  • Gonzalo Gracia – GEANT4 (CERN fellow)
  • Stefan Probst – GAUDI (technical student)
  • Gloria Corti – GAUDI (CERN fellow)
  • Sebastien Ponce – GAUDI (doctoral student)
  • Ivan Belyaev (0.5) – GAUDI/GEANT4 (ITEP)
  • Guy Barrand (0.5) – event display (Orsay)
  (Slide legend: members who left in 2000 / arrived in 2000.)

  13. Computing Facilities
  • Estimates of computing requirements updated and submitted to the Hoffmann LHC Computing Review
  • NT farms being decommissioned at CERN and at RAL
    • Migrating production tools to Linux now
  • Starting production of 2M B-inclusive events at Liverpool
  • Farm of 15 PCs for LHCb use at Bologna early 2001
    • Long-term planning in INFN going on (location, sharing, etc.)
  • Farm of 10 PCs to be set up early 2001 at NIKHEF for LHCb use
    • Developing an overall NIKHEF strategy for LHC computing / Grids
  • Grid computing in LHCb
    • Participation in the EU DataGrid project – starts January 2001 (3 years)
    • Deploy grid middleware (Globus) and develop the production application
    • Started a mini-project between Liverpool, RAL and CERN to test remote production of simulation data and transfer between sites

  14. LHCb Computing Infrastructure Team
  • Frank Harris – coordination (Oxford)
  • Eric van Herwijnen – MC production / Grid (CERN staff)
  • Joel Closier – system support / bookkeeping (CERN staff)
