TransPAC
High-performance connectivity between the US and the Asia-Pacific region

James Williams <williams@iu.edu>
TransPAC Executive Investigator
Indiana University
February 7, 2002

The TransPAC Project is funded by the US National Science Foundation and the Japan Science and Technology Corporation
Topics to be discussed
• TransPAC background
• Network-enabled science
• TransPAC technical overview
Background

The TransPAC Project provides high-performance network connectivity between the Asia-Pacific region and the United States to encourage educational and scientific collaboration among scientists and researchers in the two regions. Specifically, TransPAC connects the Asia-Pacific Advanced Network (APAN) to the US high-performance infrastructure (Abilene, the vBNS and the "Fednets") and to other international high-performance networks (CANARIE and EU networks).
Background 2

The TransPAC Project is jointly funded by the US National Science Foundation and the Japan Science and Technology Corporation. Indiana University provides technical and administrative support for TransPAC in the US; KDDI provides similar support for TransPAC in Japan.
Network-enabled science and research in the 21st century
• Science and research are becoming progressively more global, with network-enabled worldwide collaborative communities rapidly forming in a broad range of areas
• Many are based around a few expensive – sometimes unique – instruments or distributed complexes of sensors that produce vast amounts of data
• These global communities will carry out research based on this data
Network-enabled science and research in the 21st century
• This data will be analyzed by supercomputers and large computer clusters, visualized with advanced 3-D display technology, and stored in massive data storage systems – all of this distributed globally
• Note the tight interaction between computation, storage and networking
Some examples of global science
• NSF-funded Grid Physics Network's (GriPhyN) need for petascale virtual data grids (i.e., capable of analyzing petabyte datasets) (http://www.griphyn.org/)
• The Large Hadron Collider (LHC) located at CERN (http://lhc.web.cern.ch/lhc/)
• Earthscope Geological and Seismic Collaboratory (http://www.earthscope.org)
• Sloan Digital Sky Survey (SDSS) (http://www.sdss.org/)
Earthscope Geological and Seismic Collaboratory
• Earthscope applies the latest observational, analytic and telecommunications technologies to investigate the structure and evolution of the North American continent and the physical processes controlling earthquakes and volcanic eruptions
• Four components of a network-based instrument collaboratory:
  • USArray – continental-scale seismic array to provide a coherent 3-D image of the lithosphere and deeper Earth
  • SAFOD – San Andreas Fault Observatory at Depth
  • PBO – Plate Boundary Observatory
  • InSAR – synthetic aperture radar images of tectonically active regions
Earthscope - International Connections
• "The U.S. scientific community is poised to implement the Earthscope initiative that would provide urgently needed observations on a global scale."¹
• Some project funding from the International Continental Scientific Drilling Program (ICDP); members include Canada, China, Germany, Japan, Mexico, Poland, and the US
• Array extensions in Canada and Mexico
• Large ground-motion sensor array in Japan
• Taiwan sensor array modeled after US efforts

¹ Testimony before Congress, 3/21/2001, by M. Miller, U. Central Washington
Data distribution from the Large Hadron Collider (LHC) at CERN

[Diagram (source: Harvey Newman): the LHC's tiered data distribution model. The online system produces ~1 PByte/sec; the offline farm at the CERN computer center (Tier 0+1, ~25 TIPS, HPSS storage) ingests ~100 MBytes/sec. Tier 1 centers (IN2P3, INFN, RAL, FNAL, each with HPSS) connect at ~2.5 Gbits/sec; Tier 2 centers connect at ~0.6-2.5 Gbps; Tier 3 institutes (~0.25 TIPS) connect at ~0.6-2.5 Gbps and serve Tier 4 workstations at 100-1000 Mbits/sec.]
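To make the scale concrete, here is a back-of-envelope Python sketch (not part of the original slides) of ideal transfer times for a petabyte dataset over the link rates in the diagram; real transfers would be slower due to protocol overhead and contention.

```python
# Back-of-envelope sketch: ideal transfer time for a 1 PB dataset over the
# tier link rates shown in the diagram above. Ignores protocol overhead,
# contention, and storage limits.

def transfer_time_days(dataset_bytes, link_bits_per_sec):
    """Ideal transfer time in days over a single dedicated link."""
    return (dataset_bytes * 8) / link_bits_per_sec / 86400

PETABYTE = 1e15  # bytes

links = {  # rates taken from the tier diagram above
    "Tier 0 -> Tier 1 (~2.5 Gbps)": 2.5e9,
    "Tier 1 -> Tier 2 (0.6 Gbps low end)": 0.6e9,
    "Tier 3 -> Tier 4 (100 Mbps low end)": 100e6,
}

for name, bps in links.items():
    print(f"{name}: ~{transfer_time_days(PETABYTE, bps):.0f} days per PB")
```

Even on a dedicated ~2.5 Gbps tier-one link, a single petabyte takes on the order of a month to move, which is why the tight coupling of computation, storage and networking noted earlier matters.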
[Figure not recoverable; slide footnotes: http://www.ivdgl.org and http://igoc.iu.edu – source: H. Newman]
Data rates for some selected projects

[Table of project data rates not recoverable from the source.]

1 Data rates for these two instruments only; a minimum of three are required for spatial resolution.
Our challenge is to design, build and manage the reliable, stable networks needed for scientists to collect and analyze their data globally.
TransPAC Technical Overview
• What did the network look like before October 2001?
• What does the network look like after October 2001?
• New OC-12 POS circuit from Tokyo to Seattle
• New OC-12 ATM circuit from Tokyo to Chicago
• PVC configuration of the southern ATM route
• BGP relationship and traffic engineering
What did the network look like before October 2001?

Prior to 15 October 2001, the TransPAC network consisted of 155 Mbps ATM service from Tokyo to the STAR TAP in Chicago.
What does the network look like after October 2001?

On 15 October 2001, TransPAC was upgraded to 1.244 Gbps. The new TransPAC network has dual 622 Mbps connections from Tokyo to Seattle (Pacific Wave connection point) [POS] and from Tokyo to Chicago (StarLight connection point) [ATM]. The Tokyo-Seattle link is supplied by Teleglobe; the Tokyo-Chicago link is supplied by KDDI.
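As a quick illustrative check (not from the original slides), the quoted 1.244 Gbps is simply the sum of the two OC-12 links, roughly an eightfold increase over the earlier 155 Mbps service:

```python
# Illustrative arithmetic (not from the slides): aggregate capacity of the
# upgrade and its increase over the original 155 Mbps ATM service.
old_mbps = 155
new_mbps = 622 + 622                   # dual OC-12s: Tokyo-Seattle + Tokyo-Chicago
print(f"{new_mbps} Mbps")              # 1244 Mbps, i.e. the 1.244 Gbps quoted above
print(f"{new_mbps / old_mbps:.1f}x")   # ~8.0x capacity increase
```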
New OC-12 POS circuit from Tokyo to Seattle
• Trans-Pacific and west-coast circuit provided by Teleglobe
• Terminates into a Juniper M10 at the Pacific Northwest Gigapop

[Weekly traffic graph, 1/7/02 – 1/14/02, not reproduced]
New OC-12 ATM circuit from Tokyo to Chicago
• Trans-Pacific link provided by KDDI
• Contains multiple PVCs to provide direct peering with US HPRENs

[Weekly traffic graph, 1/7/02 – 1/14/02, not reproduced]
Abilene BGP relationship and traffic engineering
• Abilene receives APAN routes from three sources (in order of preference):
  • From the colocated TransPAC router at the PNWG to the Seattle core node
  • From the direct ATM PVC to the Indianapolis core node
  • From the Chicago node's peering with STARTAP
• APAN advertises community 11537:40 on the direct PVC and through STARTAP
• Abilene's route-maps, in turn, localpref these routes at 40
• The default localpref for ITN peers remains at 100 in Seattle
• APAN localprefs Abilene routes from Seattle higher than the other two connections (see the sketch below)
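To illustrate the preference ordering, here is a minimal Python sketch (an assumed model for illustration, not Abilene's actual router configuration) of how the community-driven localpref values above steer best-path selection:

```python
# Minimal sketch (assumed model, not Abilene's actual configuration) of
# localpref-driven best-path selection for APAN routes. Routes tagged with
# community 11537:40 are localpref'd to 40; the Seattle peering keeps the
# default localpref of 100 and is therefore preferred.

from dataclasses import dataclass

@dataclass
class Route:
    source: str
    communities: tuple = ()

DEFAULT_LOCALPREF = 100       # default for ITN peers in Seattle
DEPREF_COMMUNITY = "11537:40"

def local_pref(route):
    # Abilene's route-maps set localpref 40 on community-tagged routes.
    return 40 if DEPREF_COMMUNITY in route.communities else DEFAULT_LOCALPREF

def best_path(routes):
    # BGP's first tie-breaker: prefer the highest local preference.
    return max(routes, key=local_pref)

# The three ways Abilene learns APAN routes, per the list above.
apan_routes = [
    Route("TransPAC router at PNWG -> Seattle core node"),
    Route("direct ATM PVC -> Indianapolis core node", (DEPREF_COMMUNITY,)),
    Route("Chicago node peering with STARTAP", (DEPREF_COMMUNITY,)),
]

print(best_path(apan_routes).source)   # -> the Seattle path is preferred
```

The effect is symmetric traffic engineering: Abilene prefers the Seattle path toward APAN, and APAN likewise localprefs Abilene routes learned in Seattle above the other two connections.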
Questions and Comments

Useful links:
• Main TransPAC Web page: http://www.transpac.org
• TransPAC NOC Web page: http://noc.transpac.org
• TransPAC traffic graphs: http://loadrunner.uits.iu.edu/mrtg-monitors/transpac
• TransPAC router proxy: http://loadrunner.uits.iu.edu/~routerproxy/transpac

Contact information:
• Administrative: Jim Williams <william@indiana.edu>
• Technical: Chris Robb <chrobb@indiana.edu>
• 24x7 NOC: (317) 278-6630 <noc@transpac.org>