eLBA – Current Developments
Chris Phillips, eVLBI Project Scientist
16 June 2008
The LBA
Map of the LBA sites: ASKAP, ATCA, Mopra, Ceduna, Parkes, Sydney, Tidbinbilla and Hobart.
LBADR – LBA Data Recorder
• Cousin of MRO/EVN-PC
• Commodity PC with VSIB input card
• Primarily records onto Apple Xserve RAID
• Control software heavily modified from the original MRO version
• Mark5b emulation mode
• eVLBI with TCP or UDP
• Very flexible
• Data written to a normal Linux filesystem
• Realtime sampler statistics (illustrated below)
• Flexible realtime fringe checking
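The realtime sampler statistics check that the digitizer thresholds are set sensibly: for 2-bit sampling of Gaussian noise, the four quantization states should be occupied in roughly a 17/33/33/17 per cent split. A minimal sketch of such a check in Python (my illustration, not LBADR code; the four-samples-per-byte packing is an assumption, not the documented VSIB layout):

```python
import numpy as np

# Ideal state occupancy for 2-bit quantized Gaussian noise with the
# threshold near 1 sigma: roughly 17% / 33% / 33% / 17%.
IDEAL = np.array([0.17, 0.33, 0.33, 0.17])

def sampler_stats(raw: bytes) -> np.ndarray:
    """Fraction of samples in each 2-bit state, assuming four samples
    packed per byte, least-significant pair first (an assumed packing)."""
    data = np.frombuffer(raw, dtype=np.uint8)
    counts = np.zeros(4, dtype=np.int64)
    for shift in (0, 2, 4, 6):            # unpack the four 2-bit fields
        counts += np.bincount((data >> shift) & 0x3, minlength=4)
    return counts / counts.sum()

# Random bytes give a flat 25% split; healthy sampler data should sit
# near IDEAL, and a skew means the threshold levels need adjusting.
raw = np.random.randint(0, 256, 1 << 20, dtype=np.uint8).tobytes()
print("measured:", sampler_stats(raw), "ideal:", IDEAL)
```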
LBA status
• Using the DiFX software correlator for all LBA operations
• All disk correlation has been done at Swinburne University of Technology (Melbourne)
• eVLBI correlated at Parkes
• Disk correlation will transfer to Curtin University (Perth) by August 2008
DiFX
• All LBA correlation now runs on DiFX
• Distributed FX correlator (see the sketch below)
• MPI parallelization on a Beowulf-style cluster
• Written by Adam Deller at Swinburne University of Technology
• Active development also from Walter Brisken
• Available free of charge for scientific research
• Supports LBADR, Mark5a, Mark5b and Mark5c formats
• Crucial part of the VLBA sensitivity upgrade
• Supports mixed-mode correlation
• Easy to add support for new formats
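The name describes the algorithm: an FX correlator first channelizes each station's stream with an FFT (F), then conjugate-multiplies and accumulates the spectra per baseline (X). A toy single-baseline version of that inner loop, in Python rather than DiFX's C++ (all names and sizes are illustrative):

```python
import numpy as np

def fx_correlate(x, y, nchan=128):
    """Cross-power spectrum of one baseline: FFT each station's real
    stream in blocks (F), then conjugate-multiply and accumulate (X)."""
    nblocks = min(len(x), len(y)) // (2 * nchan)
    acc = np.zeros(nchan, dtype=complex)
    for i in range(nblocks):
        seg = slice(2 * nchan * i, 2 * nchan * (i + 1))
        X = np.fft.rfft(x[seg])[:nchan]   # keep nchan spectral points
        Y = np.fft.rfft(y[seg])[:nchan]
        acc += X * np.conj(Y)             # cross-multiply, accumulate
    return acc / nblocks

# Two noisy copies of one signal, one delayed a few samples: the
# visibility phase slope across frequency reveals the delay.
sig = np.random.randn(1 << 16)
x = sig + 0.5 * np.random.randn(len(sig))
y = np.roll(sig, 3) + 0.5 * np.random.randn(len(sig))
vis = fx_correlate(x, y)
print("phase slope (rad/channel):", np.angle(vis[1] / vis[0]))
```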
DiFX: Architecture
• Based on MPI
• Written in C++
• Uses the IPP libraries extensively
• 2 types of processes:
  • I/O processes (DataStream)
  • Compute processes (Core)
• Incoming data stream is time-divided and copied round-robin from DataStream to the Cores (sketched below)
• Asynchronous sends mitigate the swarming effect
• Supports disk and eVLBI operation
  • TCP & UDP, filesystem and native Mark5
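A skeletal sketch of that DataStream/Core pattern using mpi4py (invented payloads and tags; DiFX itself is C++ and far more involved). Rank 0 plays the DataStream, every other rank a Core, and time slices go out round-robin with non-blocking sends:

```python
from mpi4py import MPI

# Run under mpirun with at least 2 processes, e.g. mpirun -np 5
comm = MPI.COMM_WORLD
rank, ncores = comm.Get_rank(), comm.Get_size() - 1
NSLICES = 16      # arbitrary; assumed divisible by the number of Cores

if rank == 0:
    # DataStream: time-divide the incoming stream and send slices
    # round-robin to the Cores. Non-blocking isend() keeps several
    # transfers in flight, which is what mitigates the swarming effect.
    requests = []
    for t in range(NSLICES):
        payload = {"slice": t, "samples": [0] * 1024}  # stand-in data
        requests.append(comm.isend(payload, dest=1 + t % ncores, tag=t))
    for req in requests:
        req.wait()
else:
    # Core: receive and process our share of the time slices.
    for _ in range(NSLICES // ncores):
        work = comm.recv(source=0, tag=MPI.ANY_TAG)
        print(f"core {rank}: correlating slice {work['slice']}")
```

With mpirun -np 5 this gives one DataStream feeding four Cores, four slices each.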
APSR Cluster
• ATNF Parkes Swinburne Recorder
• Backend for pulsar processing at Parkes
• 18-node Dell cluster
  • Dual-processor, quad-core
• VLBI front-end I/O cluster PAMHELA
  • 5 nodes, dual-processor, dual-core
  • 4 (Intel) NICs for network flexibility
• Job control from PAMHELA; APSR used as a slave
Image Credit: Shaun Amy
Diagram: APSR / PAMHELA telescope connections.
Case Study: How not to implement a cluster
• PAMHELA running 32-bit Debian Etch
• APSR running 64-bit CentOS
• No shared filesystem
• MPI/DiFX does not like mixed 32- and 64-bit binaries
• DataStream sending 4 times more data than receiving
• Many issues getting MPI working
  • MPICH would not scale to use the full cluster
  • Open MPI seemed to get confused by multiple interfaces
ATNF Observatory Network
• Provided by AARNet
• Pair of dark fibers carrying multiple 10 Gbps SDH circuits
• 2 x 1 Gbps connections to each observatory
• Used for commodity traffic (web, email, ftp, ssh, observing, etc.) and eVLBI
• Each link effectively limited to 512 Mbps for eVLBI traffic
• Sydney to observatory can be run at line rate using bog-standard PCs
  • 989 Mbps user data (eVLBI software)
  • 20 Mbps typical for scp
Diagram: AARNet3 network – ATCA, Mopra, Parkes and Sydney, with 10 Gbps links to Seattle, Perth and Swinburne.
Current eVLBI status
• eVLBI being offered via the normal "call for proposals" and actively encouraged
• 3 x 512 Mbps ATCA-Parkes-Mopra
• 2 x 1 Gbps ATCA-Parkes (or Parkes-Mopra)
  • Implemented as 2 x (2 x 512 Mbps) (see the data-rate arithmetic below)
• Exclusively layer 2
• Hand-crafted traffic engineering for "routing", using multiple VLANs
• Connection to Hobart problematic
  • Effectively a single 155 Mbps shared IP network
  • Cannot sustain 64 Mbps TCP reliably
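For reference, the 512 Mbps and 1 Gbps figures fall straight out of the usual VLBI data-rate arithmetic: Nyquist sampling at twice the bandwidth, times bits per sample, times the number of baseband channels. A one-line check (the 2 x 64 MHz, 2-bit channelization is my assumption about the mode, not stated on the slide):

```python
def vlbi_rate_mbps(bandwidth_mhz, nbits, nchan):
    """Aggregate data rate: Nyquist sampling at 2*BW, nbits per sample,
    nchan baseband channels (e.g. 2 polarizations)."""
    return 2 * bandwidth_mhz * nbits * nchan   # Msamples/s * bits = Mbps

print(vlbi_rate_mbps(64, 2, 2))   # -> 512 (Mbps)
print(vlbi_rate_mbps(64, 2, 4))   # -> 1024, i.e. a 1 Gbps per-station mode
```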
http://noc.atnf.csiro.au/
Network Upgrades
• 1 Gbps link to Swinburne imminent…
• Multi-gigabit access to the 10 Gbps AARNet3 backbone in progress
  • Implementing VPLS ("Layer 2 over Layer 3")
• eVLBI to Curtin
  • New cluster for the ATCA
  • 3 x 1 Gbps using a distributed approach
• Still waiting on upgraded bandwidth to Hobart
EXPReS-Oz
• AARNet and the ATNF are partners in the EXPReS project
• Demonstrate realtime eVLBI from Australia to JIVE
• AARNet provided 1 Gbps lightpaths from Parkes, Mopra and the ATCA to JIVE
  • Via CENIC, Pacific Wave, CANARIE and SURFnet
• Science observation of SN 1987A at 1.6 GHz
  • 11 hr observation at a sustained 512 Mbps
  • Using Mark5b UDP
  • 340 ms RTT (see the window arithmetic below)
Image created by Paul Boven. Satellite image: Blue Marble Next Generation, courtesy of NASA Visible Earth.
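The 340 ms round-trip time is a large part of why UDP is attractive over this path: a single TCP stream can keep at most one congestion window of data in flight per RTT, so the window needed to sustain 512 Mbps is substantial. A quick back-of-the-envelope check (generic networking arithmetic, nothing specific to Mark5b UDP):

```python
rate_bps = 512e6    # target sustained rate
rtt_s = 0.340       # Parkes-to-JIVE round-trip time

# TCP can have at most one window of unacknowledged data in flight,
# so sustaining rate R over round-trip T needs a window of R*T bits.
window_bytes = rate_bps * rtt_s / 8
print(f"required TCP window: {window_bytes / 2**20:.1f} MiB")  # ~20.8 MiB
```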
e-APT Demo
• Connecting Shanghai and Kashima to Parkes at 512 Mbps
• AARNet has provisioned 3 x 622 Mbps circuits
  • Mixture of SDH and layer 2/3
  • AARNet, CENIC, JGN2, CSTNet, Pacific Wave, HKOEP
• Dedicated Mopra-Parkes link
  • Presented as gigabit Ethernet
e-APT Demo: Implementation
• ATNF telescopes using the LBADR data format and TCP
• Kashima and Shanghai using the JIVE UDP format
  • Raw Mark5 data stream with a 64-bit sequence number (sketched below)
• Shanghai using jive5a control on a Mark5a
• Kashima has written a realtime Mark5b converter that also generates JIVE UDP packets
• 1500-byte MTU used exclusively
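A minimal sketch of what a sequence-numbered UDP stream in this style might look like on the wire: each datagram is a 64-bit sequence number followed by raw data, sized so the whole packet fits the 1500-byte MTU. The 1464-byte data split, little-endian byte order, address and port are all my assumptions for illustration, not the actual JIVE format definition:

```python
import socket
import struct

DEST = ("127.0.0.1", 2630)   # placeholder for the correlator endpoint
PAYLOAD = 1472 - 8   # 1500 MTU - 20 IP - 8 UDP = 1472; 8 bytes for seq#

def send_stream(data: bytes) -> None:
    """Chop a raw data stream into UDP datagrams, each prefixed with a
    64-bit sequence number so the receiver can detect loss/reordering."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    for off in range(0, len(data), PAYLOAD):
        # '<Q' = little-endian unsigned 64-bit; byte order is assumed
        packet = struct.pack("<Q", seq) + data[off:off + PAYLOAD]
        sock.sendto(packet, DEST)
        seq += 1
    sock.close()

send_stream(bytes(10 * 1464))   # ten full packets of zeroed sample data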
Image created by Paul Boven. Satellite image: Blue Marble Next Generation, courtesy of NASA Visible Earth.
Internet2: Wave of the Future
• Winners of the inaugural award!
• 10 Gbps connection on Internet2 (continental USA) for 1 year
• Possible usage:
  • eVLBI to Haystack
  • eVLBI to JIVE
• Details still being worked through
Contact Us
Phone: 1300 363 400 or +61 3 9545 2176
Email: enquiries@csiro.au
Web: www.csiro.au

Thank you

ATNF
Chris Phillips, eVLBI Project Scientist
Phone: +61 2 9372 4608
Email: Chris.Phillips@csiro.au
Web: www.atnf.csiro.au/vlbi