Recent eVLBI developments at JIVE
Arpad Szomoru, Joint Institute for VLBI in Europe
Outline
• Introduction
• Protocols, tuning issues
• Tests and results
  • ftp-based
  • Dual buffered
  • Single buffered
  • Real-time
• Jumbo frames
• Network monitoring
• Centralised control
Introduction: disk-based recording
• Move to disk-based operations nearly complete
• More reliable, better data quality
• Cheaper to maintain
• Ease of use
  • Direct access (compared to tape)
  • Unattended operations
eVLBI: the future?
• No consumables
• High bandwidth
• Fast turn-around
[Image: eVLBI using fiber]
POC targets
• For the EVN and JIVE:
  • Feasibility of eVLBI: costs, timescales, logistics
  • Standards: protocols, parameter tuning, procedures at telescope and correlator
  • New capabilities: higher data rates, improved reliability, quicker response
• For GÉANT and the NRENs:
  • To see significant network usage, with multiple Gbit streams converging on JIVE
  • Minimum is three telescopes (not including Westerbork)
  • Must be seen to enable new science, not just solve an existing data-transport problem
• Real-time operation is seen as the ultimate aim; buffered operation is accepted as a development stage
Current status
• JIVE: 6 lambdas via NetherLight, each capable of 1 Gbps
• 1 Gbps connections to Westerbork, Torun, Onsala, (Metsahovi?); 2.5 Gbps to Jodrell
• 155 Mbps to Arecibo
• Connection of Medicina at 1 Gbps planned later this year
[Diagram: correlator interface — fibre pair from Amsterdam, Cisco ONS 15252 optical splitters, LX Optics converters, GE lines to correlator]
Protocols
• Use of existing protocols, available hardware
• TCP: maximal reliability, very sensitive to congestion
• UDP: connectionless, fast(er) but not reliable (enough?)
• Different protocols will be tested in the near future (tsunami?)
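To make the trade-off concrete, here is a minimal Python sketch of sending one data block over TCP versus UDP; the host and port are hypothetical placeholders, not the actual JIVE transfer code:

```python
import socket

DATA = b"\x00" * 1400                    # one payload block (fits a standard 1500-byte MTU)
CORRELATOR = ("jive.example.org", 5000)  # hypothetical endpoint

# TCP: connection-oriented; lost segments are retransmitted and the sender
# backs off on congestion, which gives reliability at the cost of rate dips.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(CORRELATOR)
tcp.sendall(DATA)                        # blocks until the kernel accepts all bytes
tcp.close()

# UDP: connectionless; no retransmission or congestion control, so the
# sender keeps its rate regardless of loss and missing blocks must be tolerated.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(DATA, CORRELATOR)             # fire-and-forget datagram
udp.close()
```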
Tuning issues
• Dramatic improvement of TCP performance possible by adjusting default settings:
  • Congestion window: product of round-trip time and bandwidth
  • Size of queues between kernel and NIC
  • SACK (Selective Acknowledgment) implementation
  • MTU size (jumbo frames)
• Interrupt moderation: CPU may be the bottleneck
http://sravot.home.cern.ch/sravot
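To illustrate the first point: the window needed to keep a path full is the bandwidth-delay product. A quick calculation, using illustrative figures rather than measured values:

```python
# Bandwidth-delay product: the TCP window needed to keep the pipe full.
bandwidth_bps = 1e9    # 1 Gbps link, e.g. a lightpath to JIVE
rtt_s         = 0.020  # 20 ms round-trip time (illustrative figure)

bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"required window: {bdp_bytes / 1e6:.1f} MB")   # -> 2.5 MB

# With a typical default maximum of ~64 KB, the achievable rate collapses to
# window / RTT, i.e. roughly 26 Mbps on this path -- hence the need for tuning.
print(f"rate with 64 KB window: {64e3 / rtt_s * 8 / 1e6:.0f} Mbps")
```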
Mark 5 motherboard upgrade
• Intel SE7501BR2, Intel 7501 chipset
• Dual Intel Xeon (single 2.8 GHz CPU with 512 KB L2 cache installed)
• Dual-channel DDR266 registered SDRAM (512 MB)
• Dual peer PCI-X 64/100, 2+2 slots; 32/33 PCI, 2 slots
• On-board TX1000 and T10/100 NICs
• On-board Ultra320 SCSI
• On-board PATA
Huygens descent tracking
• Data recorded at Mopra and Parkes, flown to Sydney, transported via a dedicated 1 Gbps lightpath to JIVE
• Peak transfer rate ~400 Mbps
• Use of a modified tsunami protocol
  • "slow start" algorithm
  • Transfer freezes occasionally
  • Quite buggy (but not everybody thinks so…)
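Tsunami sends sequence-numbered blocks over UDP at a controlled rate, with a TCP control channel for retransmission requests. The toy sketch below shows only a slow-start-style ramp towards a target rate; the growth factor, interval, and starting rate are assumptions for illustration, not parameters of the modified protocol used for the Huygens transfer:

```python
import time

def ramp_rate(target_bps: float, start_bps: float = 10e6,
              growth: float = 1.5, interval_s: float = 1.0):
    """Toy slow-start: grow the sending rate geometrically each interval
    until the target is reached (a real sender would also back off on loss)."""
    rate = start_bps
    while rate < target_bps:
        yield rate
        time.sleep(interval_s)
        rate = min(rate * growth, target_bps)
    yield target_bps

# Ramp towards ~400 Mbps, the peak rate seen during the Huygens transfer.
for rate in ramp_rate(target_bps=400e6):
    print(f"sending at {rate / 1e6:.0f} Mbps")
```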
eVLBI transfer modes
• ftp-based: bandwidth not critical; has greatly improved response to technical problems at telescopes
• Dual buffered: recorded on a Mk5 disk pack, transferred through disk2net and net2disk, played back at the correlator
• Single buffered: streamed directly from the formatters to disk packs at JIVE
• Real-time: directly from the formatters to the correlator, without disk buffering
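At its core, the buffered path moves recorded blocks from a disk pack into a socket. A minimal disk2net-style loop in Python, purely illustrative — the file name, host, port, and block size are assumptions, not the actual Mark5 implementation:

```python
import socket

BLOCK = 1 << 20                       # 1 MiB read size (assumed)

def disk2net(path: str, host: str, port: int) -> None:
    """Stream a recorded file from a disk pack to a remote net2disk receiver."""
    with socket.create_connection((host, port)) as sock, open(path, "rb") as f:
        while block := f.read(BLOCK):
            sock.sendall(block)       # TCP handles retransmission and pacing

# Hypothetical scan file and endpoint, for illustration only.
disk2net("/data/exp001/scan042.m5a", "jive.example.org", 2630)
```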
First real-time eVLBI image
[Images: IRC+10420 — eVLBI, September 2004; MERLIN, March 2002]
Jumbo frames
• Tested with:
  • Onsala @ MTU 4470
  • Torun @ MTU 8192
  • Westerbork @ MTU 9000
• Results inconclusive, but…
• Transfer between Mk5s (bench test):
  • UDP: <500 Mbps
  • TCP: >500 Mbps, ONLY with jumbo frames
• Other hardware platform needed?
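One way to compare standard and jumbo frames on the bench is to blast fixed-size UDP datagrams and measure the achieved sending rate. A rough benchmark sketch — the host and port are placeholders, and the payload sizes simply avoid IP fragmentation at each MTU (1500 − 28 bytes of IP/UDP headers, 9000 − 28):

```python
import socket
import time

def udp_blast(host: str, port: int, payload_size: int, seconds: float = 5.0) -> float:
    """Send UDP datagrams of payload_size bytes for a few seconds and
    return the achieved sending rate in Mbps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    data = b"\x00" * payload_size
    sent, start = 0, time.monotonic()
    while time.monotonic() - start < seconds:
        sent += sock.sendto(data, (host, port))
    return sent * 8 / (time.monotonic() - start) / 1e6

for size in (1472, 8972):             # standard vs jumbo-frame payloads
    print(f"{size} B datagrams: {udp_blast('mk5.example.org', 9000, size):.0f} Mbps")
```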
Network monitoring
• BWCTL installed at most stations
• More measuring points along the way needed
• Script-based tools for inspection and visualisation of the state of the network
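Such script-based tools amount to periodic throughput probes logged per station. A simplified sketch of the monitoring loop — the station list is a placeholder, and the probe function is deliberately left unimplemented (in practice it would wrap a BWCTL measurement):

```python
import time
from datetime import datetime, timezone

STATIONS = ["onsala", "torun", "westerbork"]   # placeholder station hosts

def probe_throughput(host: str) -> float:
    """Placeholder: return measured throughput to `host` in Mbps.
    In practice this would invoke a BWCTL test against the station."""
    raise NotImplementedError

def monitor(interval_s: float = 3600.0) -> None:
    """Log a timestamped throughput figure for each station once per interval."""
    while True:
        stamp = datetime.now(timezone.utc).isoformat()
        for host in STATIONS:
            try:
                print(f"{stamp} {host} {probe_throughput(host):.0f} Mbps")
            except NotImplementedError:
                print(f"{stamp} {host} probe not implemented")
        time.sleep(interval_s)
```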
Centralised control
• New operating model for the EVN; correlator as an integral part of the network
• Monitoring tools at stations
• Reliability of correlator, stability of Mark5 units
• Choice of platform?
Conclusions
• Mark5 not very well suited for eVLBI
  • Hardware limitations (interrupt conflicts, outdated motherboards, proprietary hardware)
  • Software limitations (stuck on kernel 2.4 because of the Jungo drivers, proprietary software)
• Next platform?
  • Support for 4 Gbps, PCI Express?
  • As much off-the-shelf as possible
  • As simple as possible (no disks!)
• Dedicated lightpaths…