
High Throughput Performance for e-VLBI in Europe: Multi-Gigabit over GÉANT2

This talk discusses the performance of high throughput data flows for e-VLBI in Europe using multi-gigabit connections over the GÉANT2 network.


Presentation Transcript


  1. The Performance of High Throughput Data Flows for e-VLBI in Europe: Multi-Gigabit over GÉANT2 • Richard Hughes-Jones, The University of Manchester • www.hep.man.ac.uk/~rich/ then “Talks” • EVN-NREN Meeting, 19 September 2007

  2. But will 10 Gigabit Ethernet work on a PC?

  3. High-end Server PCs for 10 Gigabit • Boston/Supermicro X7DBE • Two dual-core Intel Xeon Woodcrest 5130 @ 2 GHz • Independent 1.33 GHz front-side buses • 530 MHz fully buffered (serial) memory, parallel access to 4 banks • Chipsets: Intel 5000P MCH (PCIe & memory); ESB2 (PCI-X, GE, etc.) • PCI: 3 × 8-lane PCIe buses; 3 × 133 MHz PCI-X • 2 × Gigabit Ethernet • SATA

  4. 10 GigE Back2Back: UDP Latency • Motherboard: Supermicro X7DBE • Chipset: Intel 5000P MCH • CPU: 2 × dual-core Intel Xeon 5130 @ 2 GHz with 4096 kB L2 cache • Mem bus: 2 independent 1.33 GHz • PCIe: 8 lane • Linux kernel 2.6.20-web100_pktd-plus • Myricom NIC: 10G-PCIE-8A-R fibre • myri10ge v1.2.0 + firmware v1.4.10 • rx-usecs=0, coalescence OFF • MSI=1 • Checksums ON • tx_boundary=4096 • MTU 9000 bytes • Latency 22 µs and very well behaved; histogram FWHM ~1-2 µs • Latency slope 0.0028 µs/byte • B2B expect 0.00268 µs/byte: Mem 0.0004 + PCIe 0.00054 + 10GigE 0.0008 + PCIe 0.00054 + Mem 0.0004
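
The expected back-to-back slope is simply the sum of the per-byte transfer costs of each element the packet crosses. A minimal sketch of that budget (Python; the per-component values are the ones quoted on the slide):

```python
# Per-byte cost of each element in the back-to-back path, in us/byte
# (values taken from the slide above).
components = {
    "send memory": 0.0004,
    "send PCIe":   0.00054,
    "10GigE link": 0.0008,
    "recv PCIe":   0.00054,
    "recv memory": 0.0004,
}
expected = sum(components.values())
print(f"expected slope: {expected:.5f} us/byte")  # 0.00268
print("measured slope: 0.00280 us/byte")          # from the latency fit
```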

  5. 10 GigE Back2Back: UDP Throughput • Kernel 2.6.20-web100_pktd-plus • Myricom 10G-PCIE-8A-R fibre • rx-usecs=25, coalescence ON • MTU 9000 bytes • Max throughput 9.4 Gbit/s • Notice the rate for 8972-byte packets • ~0.002% packet loss in 10M packets in the receiving host • Sending host: 3 CPUs idle; for packets spaced < 8 µs apart, 1 CPU is >90% in kernel mode, inc. ~10% soft int • Receiving host: 3 CPUs idle; for packets spaced < 8 µs apart, 1 CPU is 70-80% in kernel mode, inc. ~15% soft int
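
For context, the wire-limited user-data rate for 8972-byte UDP payloads can be estimated from the standard Ethernet/IP/UDP overheads. A rough check (my own arithmetic, not from the talk):

```python
# Wire-rate estimate for 8972-byte UDP payloads on 10 GigE with a 9000-byte MTU.
payload = 8972                 # udpmon user data per packet (bytes)
udp_ip  = 8 + 20               # UDP + IPv4 headers -> 9000-byte IP packet
eth     = 14 + 4               # Ethernet header + FCS
gap     = 8 + 12               # preamble + inter-frame gap
on_wire = payload + udp_ip + eth + gap        # 9038 bytes per packet on the wire
user_rate = 10e9 * payload / on_wire
print(f"wire-limited user rate: {user_rate/1e9:.2f} Gbit/s")  # ~9.93
# The measured 9.4 Gbit/s sits below this, i.e. the limit is in the
# host/PCIe, not the Ethernet wire rate.
```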

  6. 10 GigE UDP Throughput vs Packet Size • Motherboard: Supermicro X7DBE • Linux kernel 2.6.20-web100_pktd-plus • Myricom NIC: 10G-PCIE-8A-R fibre • myri10ge v1.2.0 + firmware v1.4.10 • rx-usecs=0, coalescence ON • MSI=1 • Checksums ON • tx_boundary=4096 • Steps at 4060 and 8160 bytes, within 36 bytes of 2^n boundaries • Model the data-transfer time as t = C + m × Bytes, where C includes the time to set up transfers • Fit is reasonable: C = 1.67 µs, m = 5.4e-4 µs/byte • Steps consistent with C increasing by 0.6 µs • The Myricom driver segments the transfers, limiting the DMA to 4096 bytes – PCIe chipset dependent!
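
A minimal sketch of the fit described above, assuming simple linear regression; the arrays are illustrative placeholders, not the measured data from the talk:

```python
# Fit the transfer-time model t = C + m*Bytes with a straight line.
import numpy as np

sizes = np.array([1000, 2000, 4000, 6000, 8000])   # packet sizes in bytes (example)
times = 1.67 + 5.4e-4 * sizes                      # transfer times in us (example)

m, C = np.polyfit(sizes, times, 1)                 # slope (us/byte), intercept (us)
print(f"C = {C:.2f} us, m = {m:.1e} us/byte")
# A ~0.6 us jump in C at the 4096-byte boundary is consistent with the
# driver splitting each transfer into 4096-byte DMA segments.
```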

  7. 10 GigE X7DBEX7DBE: TCP iperf Web100 plots of TCP parameters • No packet loss • MTU 9000 • TCP buffer 256k BDP=~330k • Cwnd • SlowStart then slow growth • Limited by sender ! • Duplicate ACKs • One event of 3 DupACKs • Packets Re-Transmitted • Iperf TCP throughput 7.77 Gbit/s EVN-NREN Meeting 19 September 2007, R. Hughes-Jones Manchester

  8. OK, so it works!!! UDP Performance on the 4 Gbit Light Path

  9. ESLEA-FABRIC: 4 Gbit Flows over GÉANT2 • Set up a 4 Gigabit lightpath between GÉANT2 PoPs • Collaboration with DANTE • GÉANT2 testbed London – Prague – London • PCs in the DANTE London PoP with 10 Gigabit NICs • VLBI tests: UDP performance (throughput, jitter, packet loss, 1-way delay, stability); continuous (days) data flows – VLBI_UDP and udpmon; multi-gigabit TCP performance with current kernels; multi-gigabit CBR over TCP/IP; experience for FPGA Ethernet packet systems • DANTE interests: multi-gigabit TCP performance; the effect of the (Alcatel 1678 MCC 10GE port) buffer size on bursty TCP using bandwidth-limited lightpaths

  10. The GÉANT2 Testbed • 10 Gigabit SDH backbone • Alcatel 1678 MCCs • GE and 10GE client interfaces • Node locations: London, Amsterdam, Paris, Prague, Frankfurt • Can do lightpath routing, so can make paths of different RTT • Locate the PCs in London

  11. Photos at the PoP: the test-bed SDH equipment, production SDH, the 10 GE production router, and the optical transport

  12. Provisioning the Lightpath on the Alcatel MCCs • Some jiggery-pokery needed with the NMS to force a “looped back” lightpath London – Prague – London • Manual cross-connects (using the element manager) possible but hard work: 196 needed, plus other operations! • Instead used the RM to create two parallel VC-4-28v (single-ended) Ethernet private line (EPL) paths, constrained to transit DE • Then manually joined the paths in CZ • Only 28 manually created cross-connects required!!

  13. Provisioning the Lightpath on the Alcatel MCCs • Paths come up; (transient) alarms clear • Result: provisioned a path of 28 virtually concatenated VC-4s, UK-NL-DE-NL-UK • Optical path ~4150 km; with dispersion compensation ~4900 km • RTT 46.7 ms
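
The measured RTT can be sanity-checked against the fibre length. A rough estimate (my own, assuming a typical group index of ~1.47 for single-mode fibre; not a figure from the talk):

```python
# Propagation delay over the looped London-Prague-London lightpath.
c = 299_792.458            # speed of light in vacuum, km/s
n = 1.47                   # assumed group index of the fibre
path_km = 4900             # optical path incl. dispersion compensation
one_way = path_km / (c / n)            # one traversal of the looped path
print(f"one-way ~{one_way*1e3:.1f} ms, RTT ~{2*one_way*1e3:.1f} ms")
# ~24 ms one-way, ~48 ms RTT: consistent with the measured 46.7 ms.
```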

  14. 4 Gig Flows on GÉANT: UDP Throughput • Kernel 2.6.20-web100_pktd-plus • Myricom 10G-PCIE-8A-R fibre • rx-usecs=25, coalescence ON • MTU 9000 bytes • Max throughput 4.199 Gbit/s • Sending host: 3 CPUs idle; for packets spaced < 8 µs apart, 1 CPU is >90% in kernel mode, inc. ~10% soft int • Receiving host: 3 CPUs idle; for packets spaced < 8 µs apart, 1 CPU is ~37% in kernel mode, inc. ~9% soft int
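
The maximum throughput is close to the capacity of the provisioned path: a VC-4-28v circuit carries 28 C-4 payloads of 149.76 Mbit/s each (a standard SDH figure, not from the talk). A quick check:

```python
# Capacity of a VC-4-28v virtually concatenated SDH path.
vc4_payload_mbit = 149.76   # C-4 payload rate per VC-4, Mbit/s
n = 28
print(f"VC-4-{n}v capacity: {n * vc4_payload_mbit / 1e3:.3f} Gbit/s")
# -> 4.193 Gbit/s, close to the measured 4.199 Gbit/s maximum.
```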

  15. 4 Gig Flows on GÉANT: 1-way Delay • Kernel 2.6.20-web100_pktd-plus • Myricom 10G-PCIE-8A-R fibre • Coalescence OFF • 1-way delay stable at 23.435 ms • Peak separation 86 µs; ~40 µs extra delay • Lab tests: the lightpath adds no unwanted effects

  16. 4 Gig Flows on GÉANT: Jitter Histograms • Kernel 2.6.20-web100_pktd-plus • Myricom 10G-PCIE-8A-R fibre • Coalescence OFF • Histograms for packet separations of 300 µs and 100 µs • Peak separation ~36 µs; a factor 100 smaller • Lab tests: the lightpath adds no effects

  17. 4 Gig Flows on GÉANT: UDP Flow Stability • Kernel 2.6.20-web100_pktd-plus • Myricom 10G-PCIE-8A-R fibre • Coalescence OFF • MTU 9000 bytes • Packet spacing 18 µs • Trials send 10 M packets • Ran for 26 hours • Throughput very stable: 3.9795 Gbit/s • Occasional trials have packet loss, ~40 in 10M – due to the loopback in Prague • Our thanks go to all our collaborators • DANTE really provided “Bandwidth on Demand”: a record 6 hours! including driving to the PoP, installing the PCs, and provisioning the lightpath
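
The stable rate follows directly from the packet spacing. A one-line check (plain arithmetic from the figures above):

```python
# User-data rate for 8972-byte payloads sent every 18 us.
payload_bits = 8972 * 8
spacing_s = 18e-6
print(f"expected rate: {payload_bits / spacing_s / 1e9:.3f} Gbit/s")
# -> ~3.99 Gbit/s, matching the measured 3.9795 Gbit/s.
```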

  18. Buffer Size on the Alcatel 1678 MCC

  19. Alcatel Buffer Size: Method • Classic bottleneck: 10 Gbit/s input, 4 Gbit/s output • Use udpmon to send a stream of spaced UDP packets • Measure the packet number of the first lost frame as a function of the packet spacing w
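
The idea behind reading the buffer size off the plot can be written down directly. A sketch of the queue-overflow model (my own formulation; the numbers below are illustrative, not the talk's measurements):

```python
# Queue-overflow model of the bottleneck buffer measurement.
# Packets of P bytes arrive with start-to-start spacing w and drain at
# R_out, so the queue grows by (P - w*R_out/8) bytes per packet. The
# first loss occurs at packet N ~= B / (P - w*R_out/8), so the buffer
# size B follows from N as a function of w.
P = 8972                  # packet size, bytes (illustrative)
R_out = 4e9               # bottleneck output rate, bits/s
w = 12e-6                 # packet spacing, seconds (illustrative)
N_first_loss = 19         # packet number of first loss (illustrative)

growth = P - w * R_out / 8          # queue growth per packet, bytes
B = N_first_loss * growth
print(f"estimated buffer: ~{B/1e3:.0f} kBytes")
# With these example numbers: ~56 kBytes, of the order of the ~57 kBytes
# quoted on the next slide.
```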

  20. Alcatel Buffer Size: Plots • The slope gives the buffer size: ~57 kBytes

  21. TCP Performance

  22. 4 Gig Flows on GÉANT: TCP iperf • TCP throughput from iperf • Path: Lon-Ams_FF-Prague-Paris-Lon • RTT 55.5 ms • Window for 1 Gbit/s: 6.94 Mbytes • Rate: 449 Mbit/s
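
The quoted window is just the bandwidth-delay product for the target rate. The arithmetic:

```python
# TCP window needed to sustain a given rate over a given RTT.
rate = 1e9          # target rate, bits/s
rtt = 55.5e-3       # round-trip time, seconds
window = rate * rtt / 8
print(f"window for 1 Gbit/s: {window/1e6:.2f} Mbytes")  # 6.94
```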

  23. TCP iperf with Web100 • TCP throughput from iperf • Path: Lon-Ams_FF-Prague-Paris-Lon • RTT 55.5 ms • Window for 1 Gbit/s: 6.94 Mbytes • 1st second taken by slow start • By the 3rd round trip: TCP had sent 25 packets, lost some transmitted packets, DupACK 5 • By the 4th round trip: TCP had sent 49 packets, lost more transmitted packets, DupACK 31

  24. Use UDP to Emulate TCP Slow Start • udpmon sends bursts of spaced packets: 32 packets, jumbo 8000 bytes, back-to-back, 4 ms between bursts • Path: Lon-Ams_FF-Prague-Paris-Lon • RTT 55.5 ms • See 13 packets get through, then lose 1 in 3 • Confirms the TCP problem!
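
As a sketch of what such a burst generator does (udpmon is the tool actually used; the endpoint below is a placeholder documentation address):

```python
# Minimal UDP burst sender emulating a slow-start-like traffic pattern:
# bursts of 32 back-to-back jumbo datagrams, one burst every 4 ms.
import socket
import time

DEST = ("192.0.2.1", 5001)      # placeholder receiver address and port
PAYLOAD = b"\x00" * 8000        # jumbo datagram (unfragmented needs 9000 MTU)
BURST = 32                      # packets per burst
GAP = 0.004                     # 4 ms idle gap between bursts

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(1000):           # ~4 s of traffic
    for _ in range(BURST):
        sock.sendto(PAYLOAD, DEST)   # back-to-back within the burst
    time.sleep(GAP)
```

The receiver then records which packet numbers arrive, exposing where the bottleneck buffer overflows within each burst.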

  25. 10 Gigabit Ethernet from the iBOB • Slides from Jonathan’s talk

  26. iBOB Under Test

  27. iBOB as a Network Testing Device • iBOB configured as a network testing device, connected over CX4 (10 Gbps, up to 15 m) to a network PC or switch; optional second CX4 • Local PC: downloads FPGA firmware over JTAG; local ‘TinySH’ control over RS232 (removed when the firmware is stable); 10/100/1000 Ethernet • Remote PC: remote login to the network PC to run tests from JBO, Manchester or elsewhere

  28. Network Testing Device: Simulink Design

  29. UDP Throughput vs Packet Spacing • PC: kernel 2.6.20-web100_pktd-plus; Myricom 10G-PCIE-8A-R CX4; rx-usecs=25, coalescence ON; MTU 9000 bytes; UDP packets; max throughput 9.4 Gbit/s • iBOB: packet 8234 bytes (data 8192 + header 42); 100 MHz clock; max rate 6.6 Gbit/s; see 6.44 Gbit/s

  30. Any Questions?

  31. Introduction: What is EXPReS? • EXPReS = Express Production Real-time e-VLBI Service • Three-year project, started March 2006, funded by the European Commission (DG-INFSO), Sixth Framework Programme, Contract #026642 • Objective: to create a distributed, large-scale astronomical instrument of continental and inter-continental dimensions • Means: high-speed communication networks operating in real time and connecting some of the largest and most sensitive radio telescopes on the planet • Additional information: http://expres-eu.org/ [note: only one “s”] and http://www.jive.nl

  32. Introduction: EXPReS Partners • Radio Astronomy Institutes: • Joint Institute for VLBI in Europe (Coordinator), The Netherlands • Arecibo Observatory, National Astronomy and Ionosphere Center, Cornell University, USA • Australia Telescope National Facility, a Division of CSIRO, Australia • Institute of Radioastronomy, National Institute for Astrophysics (INAF), Italy • Jodrell Bank Observatory, University of Manchester, United Kingdom • Max Planck Institute for Radio Astronomy (MPIfR), Germany • Metsähovi Radio Observatory, Helsinki University of Technology (TKK), Finland • National Center of Geographical Information, National Geographic Institute (CNIG-IGN), Spain • Hartebeesthoek Radio Astronomy Observatory, National Research Foundation, South Africa • Netherlands Foundation for Research in Astronomy (ASTRON), NWO, The Netherlands • Onsala Space Observatory, Chalmers University of Technology, Sweden • Shanghai Astronomical Observatory, Chinese Academy of Sciences, China • Torun Centre for Astronomy, Nicolaus Copernicus University, Poland • Transportable Integrated Geodetic Observatory (TIGO), University of Concepción, Chile • Ventspils International Radio Astronomy Center, Ventspils University College, Latvia • National Research Networks: • AARNet, Australia • DANTE, United Kingdom • Poznan Supercomputing and Networking Center, Poland • SURFnet, The Netherlands

  33. Introduction: Participating EXPReS Telescopes
