
Simulations and Prototyping of the LHCb L1 and HLT triggers




  1. Simulations and Prototyping of the LHCb L1 and HLT triggers Tara Shears For A. Barczyk, J.P. Dufey, B. Jost, T. Kechadi, R. McNulty, N. Neufeld, T. Shears On behalf of the LHCb Collaboration

  2. Overview • Introduction to LHCb trigger and data transfer requirements • Network simulation studies • Evaluating performance in software • Evaluating performance in hardware • Conclusions

  3. LHCb Trigger System Must reduce the 10 MHz visible interaction rate to a 200 Hz output rate → 3-level trigger using hardware + software: L0 → 1 MHz (~4 ms), L1 → 40 kHz (~50 ms), HLT → 200 Hz. At L1, 125 kB must be received every 25 ms by one of the destinations
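The rates above can be cross-checked against the bandwidth figures on the readout-network diagram of the next slide. A quick back-of-envelope check (illustrative Python, using only numbers quoted in the talk):

```python
# Cross-check of the data-rate figures quoted in the talk:
# Level-1 traffic arrives at 44 kHz, with ~125 kB of event data per trigger.
event_size_b = 125_000        # 125 kB per Level-1 event
l1_traffic_rate_hz = 44_000   # Level-1 traffic rate from the network diagram

bandwidth_gb_s = event_size_b * l1_traffic_rate_hz / 1e9
print(f"Aggregate Level-1 bandwidth: {bandwidth_gb_s:.1f} GB/s")  # → 5.5 GB/s
```

This reproduces the 5.5 GB/s Level-1 figure shown on the diagram, so the per-event size and trigger rate are mutually consistent.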

  4. [Readout network diagram: front-end electronics (TRM + FEs) feed a multiplexing layer (29 switches for the 1 MHz HLT traffic, 62 switches for the Level-1 traffic); 323 links at 4 kHz carry 1.6 GB/s of HLT traffic and 126 links at 44 kHz carry 5.5 GB/s of Level-1 traffic into the readout network (94 links, 7.1 GB/s), which serves 94 SFCs and an ~1800-CPU L1/HLT farm over Gb Ethernet; the TFC system, L1-Decision Sorter and storage system ("big disk") complete the data flow from the Level 0 trigger; Level-1, HLT and mixed traffic are distinguished in the diagram]

  5. Methods of evaluating trigger networks Software simulation: • Parametrise dataflow and switch response → predicted system behaviour • Two approaches: custom code (models a specific network topology) and Ptolemy Hardware simulation: • Evaluate network/switching response in situ • Can examine microscopic or macroscopic properties of the system

  6. Software Simulation Studies Objectives: Simulate performance of the switch + CPU network arrangement; predict data loss, latency and network robustness; provide a facility for testing alternative network arrangements Approach: Parametrise data packing/transfer in the specified network to study its response in a custom simulation; develop an alternative, flexible simulation that allows other networks to be studied (Ptolemy)

  7. Simulation: custom code Existing custom code: Implemented in C. LHCb TDR network configuration modelled and its response studied (LHCb_2003_079). Network simulated by parametrising data packing/queuing. [Diagram: sources → multiplexors → crossbar switches → destinations]
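The core idea of the custom code, parametrising data packing and queuing rather than simulating the fabric in detail, can be sketched as a toy single-server queue. This is not the LHCb C code, just a minimal illustration of the modelling style (the arrival spacing and service-time distribution below are invented for the example):

```python
import random

def simulate_queue(n_events, inter_arrival_us, service_us_fn, seed=1):
    """Toy single-server FIFO queue: events arrive at a fixed spacing and
    are drained with a parametrised service time, mirroring how the custom
    simulation treats data packing/queuing (illustrative only)."""
    random.seed(seed)
    t_free = 0.0            # time at which the server next becomes free
    latencies = []
    for i in range(n_events):
        t_arrive = i * inter_arrival_us
        t_start = max(t_arrive, t_free)      # wait if the server is busy
        t_free = t_start + service_us_fn()   # occupy the server
        latencies.append(t_free - t_arrive)  # queuing + service latency
    return latencies

# Example: one arrival per microsecond, exponential service, mean 0.8 us
lat = simulate_queue(10_000, 1.0, lambda: random.expovariate(1 / 0.8))
print(f"mean latency {sum(lat) / len(lat):.2f} us, max {max(lat):.2f} us")
```

Swapping in measured packing and switch-response parameters for the toy distributions is what turns a sketch like this into a predictive model of latency and loss.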

  8. Custom simulation results • L1 latency for events in a subfarm • TDR network configuration modelled • 25 events per MEP • Simulated processing times cut off at 50 ms • → If the processing time is cut at 30 ms, < 0.5% of events are lost
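The loss fraction quoted above depends on the tail of the simulated processing-time distribution. As a purely illustrative sketch of how a latency cut maps to an event-loss fraction, assume an exponential tail with a 5 ms mean (this distribution is an assumption for the example, not the simulation's output):

```python
import random

# Illustrative only: the real processing-time distribution is an output of
# the custom simulation; an exponential with a 5 ms mean is assumed here
# purely to show how a latency cut translates into an event-loss fraction.
random.seed(42)
MEAN_MS = 5.0
times = [random.expovariate(1 / MEAN_MS) for _ in range(100_000)]

loss = {cut: sum(t > cut for t in times) / len(times) for cut in (30.0, 50.0)}
for cut, frac in loss.items():
    print(f"cut at {cut:.0f} ms: {100 * frac:.3f}% of events lost")
```

With this assumed tail, tightening the cut from 50 ms to 30 ms still loses well under 0.5% of events, the same qualitative behaviour the simulation reports for the TDR configuration.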

  9. Simulation: Ptolemy Ptolemy: Freeware, Java-based package. Uses 'actors': self-contained units that execute simple tasks when called upon. The program then depends on the way in which these actors are linked together in the graphical editor. [Diagram: multiplexors and crossbar switches built from actors]
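The actor idea, self-contained units wired together, can be illustrated with a few lines of Python. Ptolemy II itself is a Java framework with a graphical editor; this sketch only shows the programming model (all class names here are invented for the example):

```python
from collections import deque

class Actor:
    """Minimal actor in the Ptolemy spirit: a self-contained unit that
    receives tokens and emits tokens to whatever it is connected to."""
    def __init__(self):
        self.out = []                  # downstream actors
    def connect(self, other):
        self.out.append(other)
    def send(self, token):
        for a in self.out:
            a.receive(token)

class Source(Actor):
    def fire(self, n):                 # emit n tokens downstream
        for i in range(n):
            self.send(i)

class Multiplexor(Actor):
    def receive(self, token):          # pass-through; a real one arbitrates
        self.send(token)

class Sink(Actor):
    def __init__(self):
        super().__init__()
        self.tokens = deque()
    def receive(self, token):
        self.tokens.append(token)

src, mux, dst = Source(), Multiplexor(), Sink()
src.connect(mux)
mux.connect(dst)                       # the "graphical editor" step: wiring
src.fire(5)
print(list(dst.tokens))                # → [0, 1, 2, 3, 4]
```

The behaviour of the whole model is determined entirely by the wiring between actors, which is why rearranging a network topology in Ptolemy does not require rewriting the actors themselves.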

  10. Ptolemy simulation results Investigate buffer occupancies: • Change inter-trigger delays • Change data routing (source → destination) • Essentially the L1 data flow • Results of custom code and Ptolemy identical so far → Facility available to test any network configuration [Plot: example buffer occupancy, custom vs. Ptolemy]

  11. Hardware simulation studies Switch fabric and logic are hard to simulate; another approach is to characterise performance in hardware • Can evaluate each network component separately • Or evaluate the performance of a given network configuration (CPU + Ethernet link + switch) • Results: • parameters for input to the software simulation • error/data-loss studies for LHCb traffic patterns • performance characterisation in its own right

  12. Testing switch behaviour [Diagram: two SRCs → SW → DST, 125 MB/s links] • Data source: • PCI card with NP4GS3 network processor, 3 GbE ports per card • Programmable data patterns • Up to 24 sources available • Synchronisation: • frame rate up to ~75 kHz • source synchronisation O(100 ns) • Software control: • Python interface • RH9 + 2.6.0 kernel • Connect a ≥24-port switch to the data sources to test switch characteristics
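The ~75 kHz frame-rate ceiling is close to what Gigabit Ethernet allows for full-size frames. A quick sanity check (the 38 B overhead figure is standard Ethernet framing: preamble + MAC header + CRC + inter-frame gap, stated here as background, not from the slides):

```python
# Sanity check of the quoted ~75 kHz frame rate against Gigabit Ethernet:
# a 1 Gb/s link moves 125 MB/s, and each full-size frame occupies the wire
# for its 1500 B payload plus ~38 B of framing overhead
# (preamble + MAC header + CRC + inter-frame gap).
LINK_B_PER_S = 125e6
FRAME_ON_WIRE_B = 1500 + 38

max_frame_rate_khz = LINK_B_PER_S / FRAME_ON_WIRE_B / 1e3
print(f"max full-size frame rate: {max_frame_rate_khz:.0f} kHz")
```

This gives roughly 81 kHz as the theoretical maximum for full-size frames, so sources sustaining ~75 kHz are operating close to line rate.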

  13. Results • Check buffer occupancies, switch latencies, packet loss and error rates • E.g. buffer occupancy: 4 MB (as expected) for this switch • E.g. switch latency: linear, switching time negligible w.r.t. L1 latency for this switch • Studies ongoing

  14. Large-scale network behaviour Hardware: 100 × 3 GHz CPUs (source or destination), 500 Mb/s CPU Ethernet links, 168-port switch (1 Gb/s links) Software: Synchronise data sends/receives with MPI; RH9, 2.4.24 kernel Tests: Measure data transfer rate; investigate different traffic patterns
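The measurement pattern, release all sources at once, then time until every destination has its data, can be emulated with the standard library. The real testbed synchronises 100 nodes with MPI; this sketch uses threads and a barrier purely to illustrate the pattern (all sizes and counts below are illustrative):

```python
import threading, queue, time

# The real testbed synchronises sends with MPI across 100 nodes; this
# stdlib sketch only illustrates the measurement pattern: all sources
# start together, and we time until every destination has its data.
N_SOURCES, MSG_B = 4, 1024
inboxes = [queue.Queue() for _ in range(N_SOURCES)]
start = threading.Barrier(N_SOURCES + 1)   # sources + the timing thread

def source(i):
    start.wait()                   # synchronised start (MPI barrier analogue)
    inboxes[i].put(b"x" * MSG_B)   # "send" 1 kB to destination i

threads = [threading.Thread(target=source, args=(i,)) for i in range(N_SOURCES)]
for t in threads:
    t.start()
t0 = time.perf_counter()
start.wait()                             # release all sources at once
received = [q.get() for q in inboxes]    # block until every destination has data
elapsed = time.perf_counter() - t0
print(len(received), all(len(m) == MSG_B for m in received))
```

In the hardware test the same barrier-then-timestamp structure is what lets a single elapsed time characterise the whole 50-source transfer.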

  15. Results 50 sources send synchronously to each destination in turn • Find the median time for data to be received at all 50 destinations • Each experiment repeated 1000 times • Different data sizes tested Results: • Measurements limited by system bandwidth • Data transfer rates of ~2.8 GB/s achieved at LHCb L1 timings (1 kB/25 ms per source) • At ~LHCb L1 rate, 1 kB sent per source takes 1.25 ms for all 50 destinations to receive • Studies ongoing [Plot: median receive time for 50 destinations [s] vs. source data size [B]]
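Quoting the median over 1000 repeats, rather than the mean, is the right choice on a shared farm: occasional transfers are delayed by unrelated activity, and the median is robust to that tail. A small sketch with synthetic timings (the numbers below are invented, not testbed data):

```python
import random, statistics

# Synthetic timings, illustrative only: most repeats cluster near 1.25 ms,
# but every 100th transfer is delayed 10x by unrelated farm activity.
random.seed(7)
timings_ms = [1.25 + random.gauss(0, 0.02) for _ in range(1000)]
timings_ms[::100] = [t * 10 for t in timings_ms[::100]]   # inject outliers

print(f"mean   = {statistics.mean(timings_ms):.2f} ms")   # pulled up by outliers
print(f"median = {statistics.median(timings_ms):.2f} ms") # stays near 1.25 ms
```

The median stays near the underlying 1.25 ms transfer time while the mean is dragged upward, which is why the plotted quantity is the median receive time.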

  16. Conclusions • Software simulation: • Simulations of the LHCb trigger network developed to evaluate response/performance • LHCb TDR network parametrised and studied in the custom simulation • Alternative Ptolemy-based simulation developed • Hardware simulation: • Allows characterisation of network response where details of the fabric are unknown • Testbed developed to analyse switch characteristics • Large-scale testbed devised to study general network response

  17. Backup

  18. Example: Building a switch • Implementation • Routing • Output queuing • Timing • Buffer occupancy • Scalable
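The ingredients listed on this slide, routing, output queuing, timing and buffer occupancy, fit together in a few lines. A sketch of an output-queued switch model (illustrative, not the simulation's actual implementation):

```python
from collections import deque

class OutputQueuedSwitch:
    """Toy output-queued switch: route each incoming frame to an output
    queue, drain one frame per output port per time step, and expose the
    per-port buffer occupancy. Illustrative model only."""
    def __init__(self, n_ports):
        self.queues = [deque() for _ in range(n_ports)]
    def route(self, frame, out_port):      # routing + output queuing
        self.queues[out_port].append(frame)
    def step(self):                        # timing: one frame per port per step
        return [q.popleft() if q else None for q in self.queues]
    def occupancy(self):                   # buffer occupancy per output port
        return [len(q) for q in self.queues]

sw = OutputQueuedSwitch(n_ports=2)
for frame in ("a", "b", "c"):
    sw.route(frame, out_port=0)            # contention on port 0
sw.route("d", out_port=1)
print(sw.occupancy())   # → [3, 1]
print(sw.step())        # → ['a', 'd']
print(sw.occupancy())   # → [2, 0]
```

Contention on one output port shows up directly as growing buffer occupancy, which is exactly the quantity the simulations and the hardware tests both monitor.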

  19. Banyan Network Topology [Diagram: sources → multiplexors → crossbars → destinations]
