eVLBI Developments at Jodrell Bank Observatory
Ralph Spencer, Richard Hughes-Jones, Simon Casey, Paul Burgess, The University of Manchester
eVLBI Development at JBO and Manchester:
• eVLBI correlation tests using actual astronomy data, both pre-recorded and real-time (see talk by Arpad)
• Network research:
• Why?
• How?
• Results?
Why should a radio astronomer be interested in network research?
• Optical fibres have huge bandwidth capability: eMERLIN, eVLA, ALMA and the SKA will use >>GHz bandwidths, so we need increased bandwidth for VLBI
• Fibre networks are (or were) under-utilised: can VLBI use the spare capacity?
So why study networks?
• What are the bandwidth limits?
• How reliable are the links?
• What is the best protocol?
• How do the networks interact with the end hosts?
• What is happening as technology changes? Can we get more throughput using switched light paths?
How? Network Tests: Manchester/JBO to Elsewhere
• High-energy physics (LHC data) and VLBI have the same aims for internet data usage, so we collaborate!
• iGRID 2002, Manchester-Amsterdam-JIVE: showed that >500 Mbps flows are possible
• UDP tests on the production network in 2003/4
• ESLEA project, 2005 onwards: use of UKLight
• GÉANT2 launch, 2005

Results
[Map: EVN-NREN Gbit links serving Onsala (Chalmers University of Technology, Gothenburg, Sweden), Torun (Poland), Jodrell Bank (UK), Westerbork (Netherlands), Cambridge (UK) and Medicina (Italy); a dedicated Gbit link over MERLIN; a DWDM link to Dwingeloo]
UDP Throughput Manchester-Dwingeloo (Nov 2003)
• Throughput vs packet spacing (a sender sketch follows below)
• Manchester: 2.0 GHz Xeon
• Dwingeloo: 1.2 GHz PIII
• Near wire rate: 950 Mbps
• Tests done at different times
• Plots: packet loss, CPU kernel load (sender), CPU kernel load (receiver)
• 4th-year project: Adam Mathews, Steve O'Toole
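For readers who want to reproduce this style of measurement, below is a minimal sketch of a udpmon-style UDP sender: fixed-size packets transmitted with a controlled inter-packet gap, reporting the offered rate. The destination address, packet size, spacing and packet count are illustrative assumptions, not the parameters used in the tests above.

```python
# Minimal sketch of a udpmon-style throughput test (assumed parameters):
# send fixed-size UDP packets with a controlled inter-packet gap and
# report the offered rate. Not the actual test code used at JBO.
import socket
import struct
import time

DEST = ("192.0.2.1", 5001)   # hypothetical receiver address
PKT_SIZE = 1472              # UDP payload that fills a 1500-byte MTU frame
SPACING_US = 20              # inter-packet gap in microseconds
N_PACKETS = 100_000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = bytearray(PKT_SIZE)
start = time.perf_counter()
next_send = start
for seq in range(N_PACKETS):
    struct.pack_into("!I", payload, 0, seq)   # sequence number for loss checks
    sock.sendto(payload, DEST)
    next_send += SPACING_US / 1e6
    while time.perf_counter() < next_send:    # busy-wait for precise spacing
        pass
elapsed = time.perf_counter() - start
rate_mbps = N_PACKETS * PKT_SIZE * 8 / elapsed / 1e6
print(f"sent {N_PACKETS} packets in {elapsed:.1f} s: {rate_mbps:.0f} Mbit/s")
```

Sweeping SPACING_US and plotting the achieved rate against it reproduces the throughput-vs-packet-spacing curve described on this slide.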
Packet loss distribution: cumulative distribution of packet loss, each bin 12 msec wide, compared with a Poisson model. Long-range effects in the data?
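One way to test for such long-range effects is to compare the empirical cumulative distribution of per-bin loss counts with a Poisson model of the same mean; correlated loss bursts show up as a heavy tail. A minimal sketch, assuming the loss log has already been binned into the 12 msec bins and written to a hypothetical loss_bins.txt:

```python
# Sketch: compare per-bin packet-loss counts with a Poisson model of the
# same mean, as a check for long-range (bursty) loss behaviour.
# The input file name and format are assumptions.
import numpy as np
from scipy.stats import poisson

losses_per_bin = np.loadtxt("loss_bins.txt", dtype=int)  # counts per 12 ms bin
mean = losses_per_bin.mean()

ks = np.arange(losses_per_bin.max() + 1)
empirical_cdf = np.array([(losses_per_bin <= k).mean() for k in ks])
poisson_cdf = poisson.cdf(ks, mean)

for k, e, p in zip(ks, empirical_cdf, poisson_cdf):
    print(f"k={k:3d}  empirical={e:.3f}  poisson={p:.3f}")
# If the empirical CDF approaches 1 much more slowly than the Poisson CDF,
# losses are clustered rather than independent.
```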
Exploitation of Switched Lightpaths for e-Science Applications (ESLEA):
• Multi-disciplinary project involving collaboration between many research groups (network scientists, computer scientists, medical scientists, high-energy physicists and radio astronomers), using the UKLight network
• Protocol and control-plane development
• High-performance computing
• eHealth (e.g. radiology)
• High-energy physics data transfer (LHC)
• eVLBI: funds a post-doc (ad out, apply now!)
UDP Tests, 26th January 2005: Simon Casey (PhD project)
Between JBO and JIVE in Dwingeloo, using the production network
A period of high packet loss (3%) was observed.
e-VLBI at the GÉANT2 Launch, Jun 2005
[Map: flows from Jodrell Bank (UK), Medicina (Italy) and Torun (Poland) to Dwingeloo over the DWDM link]
UDP Performance: 3 Flows on GÉANT
• Throughput: 5-hour run, 1500-byte MTU
• Jodrell → JIVE: 2.0 GHz dual Xeon – 2.4 GHz dual Xeon, 670-840 Mbit/s
• Medicina (Bologna) → JIVE: 800 MHz PIII – Mk5 (623) 1.2 GHz PIII, 330 Mbit/s, limited by the sending PC
• Torun → JIVE: 2.4 GHz dual Xeon – Mk5 (575) 1.2 GHz PIII, 245-325 Mbit/s, limited by security policing (>400 Mbit/s → 20 Mbit/s)?
• Throughput over a 50 min run shows an oscillation with period ~17 min (see the sketch below)
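A rough way to confirm an oscillation like the ~17 min period is to autocorrelate the throughput time series and locate the first peak. A sketch, assuming one throughput sample every 10 s stored in a hypothetical throughput.txt (neither detail is from the actual measurements):

```python
# Sketch: estimate the dominant period in a throughput time series via
# autocorrelation. Sample interval and file name are assumptions.
import numpy as np

SAMPLE_S = 10                          # assumed: one sample every 10 seconds
rates = np.loadtxt("throughput.txt")   # hypothetical Mbit/s samples
x = rates - rates.mean()

acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. n-1
acf /= acf[0]                                       # normalise to acf[0] = 1

# First local maximum after lag 0; noisy data may need smoothing first.
lag = next((k for k in range(1, len(acf) - 1)
            if acf[k - 1] < acf[k] > acf[k + 1]), None)
if lag is not None:
    print(f"estimated period: {lag * SAMPLE_S / 60:.1f} min")
```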
UDP Performance: 3 Flows on GÉANT
• Packet loss & re-ordering: each point 10 secs, 660k packets
• Jodrell: 2.0 GHz Xeon; loss 0-12%; reordering significant
• Medicina: 800 MHz PIII; loss ~6%; reordering insignificant
• Torun: 2.4 GHz Xeon; loss 6-12%; reordering insignificant
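Loss and re-ordering figures like these can be derived from receive-side sequence numbers alone. A minimal sketch of the classification (not the analysis code actually used for these tests):

```python
# Sketch: classify received UDP sequence numbers into lost vs re-ordered
# packets, the two per-interval quantities reported on this slide.
def loss_and_reordering(seqs):
    """seqs: sequence numbers in arrival order, numbered from 0."""
    seen = set()
    highest = -1
    reordered = 0
    for s in seqs:
        if s < highest:
            reordered += 1            # arrived after a later-numbered packet
        highest = max(highest, s)
        seen.add(s)
    lost = (highest + 1) - len(seen)  # gaps that never arrived
    return lost, reordered

print(loss_and_reordering([0, 1, 3, 2, 5]))  # -> (1, 1): seq 4 lost, seq 2 late
```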
18 Hour Flows on UKLight, Jodrell – JIVE, 26 June 2005
• Throughput: 2.4 GHz dual Xeon – 2.4 GHz dual Xeon, 960-980 Mbit/s
• Traffic passes through SURFnet
• Packet loss: only 3 groups with 10-150 lost packets each; no packets lost the rest of the time
• Packet re-ordering: none
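The pattern above, a few bursts of consecutive losses rather than a uniform trickle, can be extracted by grouping the sorted lost sequence numbers; a minimal sketch, with the input list purely illustrative:

```python
# Sketch: group lost sequence numbers into bursts of consecutive losses,
# the kind of grouping reported above (3 groups of 10-150 losses in 18 h).
def loss_bursts(lost_seqs):
    """lost_seqs: sorted sequence numbers of packets that never arrived."""
    bursts = []
    for s in lost_seqs:
        if bursts and s == bursts[-1][-1] + 1:
            bursts[-1].append(s)      # extends the current burst
        else:
            bursts.append([s])        # starts a new burst
    return [(b[0], len(b)) for b in bursts]  # (first seq, burst length)

print(loss_bursts([7, 8, 9, 42, 100, 101]))  # -> [(7, 3), (42, 1), (100, 2)]
```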
Conclusion
• Maximum data rates depend on the path:
• Limited by end hosts? Lack of CPU power in the end host; jumbo packets will help here
• Local limits, e.g. security policing: work with the network providers to achieve the bandwidth we need
• Networks have the capacity for >500 Mbps flows
• Evidence for network bottlenecks somewhere: more evidence being collected
• Packet loss will limit TCP flows, which explains the limits to data rates seen in EVN eVLBI tests; new protocols will help here (see the bound sketched below)
• More needs to be done before we can reliably get 512 Mbps eVLBI in the EVN, especially study of the end hosts.
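The point that packet loss caps TCP throughput can be made quantitative with the well-known Mathis et al. bound on steady-state TCP throughput; the MSS, RTT and loss rate below are illustrative assumptions, not measured values from these tests:

```latex
% Mathis et al. steady-state TCP throughput bound, C ~ 1.2 for standard TCP.
% The example numbers (MSS 1460 B, RTT 20 ms, loss 0.1%) are illustrative.
\[
  \text{rate} \approx \frac{\mathrm{MSS}}{\mathrm{RTT}} \cdot \frac{C}{\sqrt{p}}
  \qquad\text{e.g.}\quad
  \frac{1460 \times 8\ \text{bit}}{20\ \text{ms}} \cdot \frac{1.2}{\sqrt{0.001}}
  \approx 22\ \text{Mbit/s}
\]
```

Even a fraction of a percent of loss therefore holds a single standard TCP stream far below the 512 Mbps eVLBI target, which is why loss on the path matters so much and why new protocols are attractive.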