e-EVN developments in 2006 Arpad Szomoru
Outline • The past • Current status • Expansion of e-EVN • EXPReS: first results • Connectivity improvements • The future
e-VLBI Milestones • September 2002: • 2 × 1 Gbit Ethernet links to JIVE • Demonstrations at iGrid2002 and ER2002 • UDP data rates over 600 Mbit/s
e-VLBI Milestones: 2003 • May 2003: first use of FTP for VLBI session fringe checks • July 2003: 10 Gbit access GEANT–SURFnet; 6 × 1 Gbit links to JIVE • September 2003: e-VLBI data transfer between Bologna and JIVE at 300 Mb/s • October 2003: first light on the Westerbork – JIVE 1 Gb/s connection • November 2003: Cambridge – Westerbork fringes detected only 15 minutes after the observations were made, at 64 Mb/s, with disk buffering at JIVE only • November 2003: Onsala Space Observatory (Chalmers University of Technology, Gothenburg, Sweden) connected at 1 Gb/s
e-VLBI Milestones: 2004 • January 2004: disk-buffered e-VLBI; On, Wb, Cm at 128 Mb/s for the first e-VLBI image; On – Wb fringes at 256 Mb/s • March 2004: first real-time fringes Westford–GGAO to Haystack; intercontinental real-time fringes, Wf – On, 32 Mb/s • April 2004: three-telescope real-time fringes at 64 Mb/s (On, Jb, Wb); first real-time EVN image at 32 Mb/s • June 2004: Torun connected at 1 Gb/s; network stress test (iperf) involving Bologna, Torun, Onsala and JIVE • September 2004: first e-EVN science session (Ar, Cm, Tr, On, Wb), spectral line observations at 32 Mb/s; four-telescope real-time e-VLBI (Ar, Cm, Tr, Wb), first fringes to Ar at 32 Mb/s • December 20, 2004: connection of JBO to Manchester at 2 × 1 Gb/s; e-VLBI test with Tr, On and Jb; Jb – Tr fringes at 256 Mb/s
e-VLBI Milestones: 2005 • January 2005: Huygens descent tracking, salvage of the Doppler experiment; use of a dedicated lightpath Australia – JIVE, data transferred at ~450 Mb/s • February 2005: network transfer test (BWCTL) employing various network monitoring tools, involving Jb, Cm, On, Tr, Bologna and JIVE • March 2005: e-VLBI science session; first continuum science observations at 128 and 64 Mb/s, involving 6 radio telescopes (Wb, Ar, Jb, Cm, On, Tr) • Summer 2005: trench for the “last mile” connection to Medicina dug • Spring 2006: Metsähovi connected at 10 Gb/s
[Map: e-EVN connectivity, with links at 155 Mbps, 1 Gbps, 2.5 Gbps and 10 Gbps]
Why bother? (change is bad…) • Target of opportunity – unscheduled observations triggered by sudden astronomical events. This capability will become much more important when LOFAR comes online • Adaptive observing – use e-VLBI as a finder experiment; or run e-VLBI sessions a few days apart and adapt the schedules of later observations based on results (rapid results on a large sample, then focus in detail on the best candidates) • Automatic observing – a small number of telescopes observing for extended periods, doing spectral line observations of large galactic samples • Interface with other real-time arrays – e-MERLIN, LOFAR, SKA… also functions as an SKA pathfinder • Bandwidth no longer limited by magnetic media: 10 Gbps technology is already becoming mainstream • Because we can…
Recent developments • Regular science/test sessions throughout the year • First open calls for e-VLBI science proposals • First science run completely lost, but first-ever real-time fringes to Mc (128 Mbps) • Second and third science runs: many hours of smooth sailing at 128 Mbps. No excitement, no drama. JIVE becoming an observatory? • Fourth run: 16 hours at 256 Mbps. However, nearly 25% of the time was lost to technical problems…
Current status • Technical tests: • 6-station fringes at 256 Mbps • first European 512 Mbps fringes (Jb and Wb, May 18) • 3-station 512 Mbps fringes (Cm, Wb, On, August 21) • first fringes using the new 5 GHz receiver at Mc • Current connectivity: • Ar: 64 Mbps in the past, but <32 Mbps this year • European telescopes: 128 Mbps always, 256 Mbps often, 512 Mbps to Wb, Jb and On
Tr connectivity bottleneck – (partially) solved • Black Diamond 6808 switches: • New (10GE) interfaces in an old (1GE) architecture • Originally 8 × 1GE interfaces per card, so a 10GE NIC is served by 8 × 1GE queues • Queuing regimes: round-robin (packet-based) and flow-based • Packet-based RR stripes a single flow across the queues but reorders its packets; there is no known workaround for this reordering • Flow-based: maximum capacity per flow is 1 Gbit/s minus background traffic
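To make the reordering concrete, here is a toy Python simulation: a single high-rate stream striped packet-by-packet over 8 × 1GE queues comes out reordered as soon as the queues drain at slightly different speeds. The per-queue service times are invented for illustration, not measured Black Diamond values.

```python
# Toy model: packet-based round robin over 8 parallel 1GE queues.
# Unequal per-queue service times (e.g. background traffic on some
# links) make packets overtake each other at the merge point.

N_QUEUES = 8
service_us = [12.0, 12.0, 13.5, 12.0, 15.0, 12.0, 12.0, 14.0]  # us/packet

def rr_exit_order(n_packets):
    free_at = [0.0] * N_QUEUES            # time each queue next frees up
    exits = []                            # (exit_time, sequence_number)
    for seq in range(n_packets):
        q = seq % N_QUEUES                # packet-based round robin
        free_at[q] += service_us[q]
        exits.append((free_at[q], seq))
    exits.sort()                          # merge the queues by exit time
    return [seq for _, seq in exits]

order = rr_exit_order(64)
inversions = sum(1 for a, b in zip(order, order[1:]) if b < a)
print("first packets out:", order[:12])
print("out-of-order steps:", inversions)
```

Flow-based queuing avoids this by pinning each flow to a single queue, which is exactly why its per-flow ceiling is 1 Gbit/s minus background traffic.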
e-VLBI to South America? SMART-1
SMART-1 factsheet: testing solar-electric propulsion and other deep-space technologies
• Name: SMART stands for Small Missions for Advanced Research in Technology
• Description: SMART-1 is the first of ESA’s Small Missions for Advanced Research in Technology. It travelled to the Moon using solar-electric propulsion, carrying a battery of miniaturised instruments. As well as testing new technology, SMART-1 is making the first comprehensive inventory of key chemical elements in the lunar surface. It is also investigating the theory that the Moon was formed following the violent collision of a smaller planet with Earth, four and a half thousand million years ago.
• Launched: 27 September 2003
• Status: arrived in lunar orbit on 15 November 2004; conducting lunar orbit science operations
• Notes: SMART-1 is the first European spacecraft to travel to and orbit the Moon. This is only the second time that ion propulsion has been used as a mission’s primary propulsion system (the first was NASA’s Deep Space 1 probe, launched in October 1998). SMART-1 is looking for water (in the form of ice) on the Moon. To save precious xenon fuel, SMART-1 uses ‘celestial mechanics’, that is, techniques such as making use of ‘lunar resonances’ and fly-bys.
And other continents… Australia: • Telescopes connected • PCEVN–Mk5 interface needed China: • Shanghai Observatory connected at 2.5 Gbps • Connection via TEIN (622 Mbps), ORIENT? Issues with CERNET, CSTNet • Direct lightpath Hong Kong – Netherlight?
The switch from Cisco to Nortel/Avici equipment has been completed: for now 7 × 1 Gbps, ultimately 16 × 1 Gbps lightpaths plus a 10 Gbps IP connection
EXPReS: getting underway SA1: new hires at JIVE – two software engineers, one network engineer (finally!), one e-VLBI postdoc Inclusion of e-MERLIN telescopes in the e-EVN Operational improvements (deliverable-driven): • Robustness • Reliability • Speed • Ease of operation • Station feedback And still pushing data rates and protocols: UDP, Circuit TCP? Get rid of fairness… better usage of the available bandwidth.
Ongoing • New control computers (Solaris AMD servers): • Cut down dramatically on (re-)start time • Powerful code development platform • Tightening up of existing code • Other hardware upgrades: • SX optics (fibres + NICs), managed switch at JIVE • Mark5A→B: motherboards, memory, power supplies, serial links, CIBs
And coming… FABRIC: (Huib Jan van Langevelde) • Distributed software correlation • High-bandwidth data transport (On part of e-MERLIN @ 4 Gbps) • Two new hires at JIVE SCARIe: • Collaboration with SARA and UvA • Distributed software correlation using the Dutch grid • Lambda switching, dynamic allocation of lightpaths, collaboration with the DRAGON project • JIVE postdoc hired, still looking for the UvA postdoc
[Diagram: FABRIC components – FABRIC = The GRID. The user supplies correlator parameters and an observing schedule in VEX format; earth orientation parameters and field system controls drive the antenna and acquisition hardware (DBBC, PC-EVN, VSI/VSIe output data); GRID resources provide resource allocation and routing, and correlator control including model calculation.]
Connectivity improvements Martin Swany
2 heavy-duty gamer PCs • Tyan Thunder K8WE motherboards • Dual AMD Opteron 2.4 GHz processors • 4 GB RAM • 2 × 1 Gb PCI-Express NICs • First one at Torun, back-to-back with a Mark5 • Second one located at Poznan Supercomputing Centre
Protocol work in Manchester (Richard Hughes-Jones, Ralph Spencer & collaborators) • Protocol investigation for e-VLBI data transfer • Protocols considered for investigation include: • TCP/IP • UDP/IP • DCCP/IP • VSI-E RTP/UDP/IP • Remote Direct Memory Access • TCP Offload Engines • Work in progress – links to ESLEA (UK e-Science) • vlbi-udp – UDP/IP stability & the effect of packet loss on correlations • tcpdelay – TCP/IP and CBR data
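The idea behind a vlbi-udp-style test, sketched here as a hypothetical stand-in (this is not the Manchester code; host, port, packet size and rate are made-up parameters): send a constant-bit-rate UDP stream stamped with sequence numbers, so the far end can count gaps (loss) and inversions (reordering).

```python
# Sketch of a vlbi-udp-style measurement: CBR UDP with sequence
# numbers. Illustrative only; replace DEST with the real receiver.
import socket
import struct
import time

DEST = ("127.0.0.1", 50000)              # hypothetical receiver endpoint
PKT_SIZE = 1472                          # fits in a 1500-byte MTU
RATE_BPS = 256_000_000                   # 256 Mb/s, a typical e-VLBI rate
INTERVAL = PKT_SIZE * 8 / RATE_BPS       # seconds between packets

def send_cbr(n_packets):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(PKT_SIZE - 8)        # filler after the 8-byte header
    t_next = time.perf_counter()
    for seq in range(n_packets):
        # A 64-bit sequence number lets the receiver spot loss and
        # reordering and correlate them with correlator artefacts.
        sock.sendto(struct.pack("!Q", seq) + payload, DEST)
        t_next += INTERVAL
        # Spin until the next slot so the stream stays CBR, not bursty.
        while time.perf_counter() < t_next:
            pass

send_cbr(100_000)
```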
Protocols (1) Mix of High Speed and Westwood TCP (Sansa)
Protocols (2) Circuit TCP (Mudambi, Zheng and Veeraraghavan) Meant for dedicated end-to-end circuits: fixed congestion window, no slow start, no backoff. Finally, a TCP rude enough for e-VLBI?
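The fixed window works because on a dedicated circuit the right window size is known in advance: the bandwidth-delay product. A back-of-envelope calculation, with an assumed 20 ms round-trip time standing in for a typical European path to JIVE:

```python
# On a dedicated circuit CTCP pins the congestion window to the
# bandwidth-delay product instead of probing for it. Rate and RTT
# below are illustrative assumptions.
rate_bps = 512e6          # 512 Mb/s circuit
rtt_s = 0.020             # 20 ms round-trip time (assumed)
bdp_bytes = rate_bps * rtt_s / 8
mss = 1460                # typical Ethernet MSS
print(f"window = {bdp_bytes / 1e6:.2f} MB = {bdp_bytes / mss:.0f} segments")
# -> window = 1.28 MB = 877 segments
```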
Protocols (3) • Home-grown version of CTCP using the pluggable TCP congestion-avoidance algorithms in newer Linux kernels (Mark Kettenis) • Rock-steady 780 Mbps transfer using iperf from Mc to JIVE • Serious problem with the new version of the Mk5A software under newer kernels
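For context, the kernel mechanism involved looks roughly like the sketch below: since Linux 2.6.13 congestion avoidance is a pluggable module that can be selected per socket, so a home-grown algorithm can be used without patching applications. The module name "ctcp" is hypothetical here; the module must already be loaded (see /proc/sys/net/ipv4/tcp_available_congestion_control). Linux only.

```python
# Sketch: selecting a pluggable congestion-control module per socket.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # "ctcp" stands in for the home-grown CTCP module's name.
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"ctcp")
except OSError:
    print("module not loaded; kernel default remains in effect")
# Read back whichever algorithm the socket is actually using.
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
```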
e-EVN: the future
• Aim: 16 × 1 Gbps production e-EVN network
• IP: not possible/affordable; a 10 Gbps lightpath across Europe currently costs ~20 k€/year
• Lightpaths across GÉANT terminating at JIVE: if possible, all the way from the telescopes; if not, overprovisioned IP connections from the telescopes to GÉANT, lightpaths from there on
• Guaranteed bandwidth, the possibility to use Ethernet frames, no more need to worry about congestion…
• Towards a true connected-element interferometer
[Diagram: proposed SURFnet–JIVE connection – external lightpaths and N × GE links enter the OME network; a switch fans 16 × GE out to the Mark5s, alongside 10 G IP; dynamic capabilities through DRAC.]