ESnet Update
Summer 2007 Joint Techs Workshop
Joe Burrescia, ESnet General Manager
Energy Sciences Network, Lawrence Berkeley National Laboratory
July 16, 2007
Networking for the Future of Science
ESnet 3 with Sites and Peers (Early 2007)
[Map slide. ESnet IP core: packet-over-SONET optical ring and hubs, plus the ESnet Science Data Network (SDN) core. 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), laboratory sponsored (6), joint sponsored (3), other sponsored (NSF LIGO, NOAA). Link classes: 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (≥ 10 Gb/s), lab-supplied links, OC12 ATM (622 Mb/s), OC12/GigEthernet, OC3 (155 Mb/s), 45 Mb/s and less. R&E and international peers include Abilene/Internet2, GÉANT (France, Germany, Italy, UK, etc.), CERN (USLHCnet, DOE+CERN funded), SINet (Japan), Russia (BINP), CA*net4 (Canada), AARNet (Australia), TANet2/ASCC (Taiwan), SingAREN, Kreonet2 (Korea), GLORIAD (Russia, China), MREN, Netherlands, StarTap, AMPATH (S. America), plus commercial peering points (MAE-E, PAIX-PA, Equinix, etc.).]
ESnet 3 Backbone as of January 1, 2007
[Map slide. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future hubs. Hubs: Seattle, Sunnyvale, San Diego, Albuquerque, El Paso, Chicago, Atlanta, New York City, Washington DC.]
ESnet 4 Backbone as of April 15, 2007
[Map slide. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Chicago, Washington DC, San Diego, Albuquerque, Atlanta, El Paso.]
ESnet 4 Backbone as of May 15, 2007
[Map slide. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Chicago, Washington DC, San Diego, Albuquerque, Atlanta, El Paso.]
ESnet 4 Backbone as of June 20, 2007
[Map slide. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Denver, Chicago, Kansas City, Washington DC, San Diego, Albuquerque, Atlanta, El Paso, Houston.]
ESnet 4 Backbone Target August 1, 2007
[Map slide. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Denver, Chicago, Kansas City, Washington DC, Los Angeles, San Diego, Albuquerque, Atlanta, El Paso, Houston.]
ESnet 4 Backbone Target August 30, 2007
[Map slide. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future hubs. Hubs: Seattle, Boise, Boston, New York City, Cleveland, Sunnyvale, Denver, Chicago, Kansas City, Washington DC, Los Angeles, San Diego, Albuquerque, Atlanta, El Paso, Houston.]
ESnet 4 Backbone Target September 30, 2007
[Map slide. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future hubs. Hubs: Seattle, Boise, Boston, New York City, Cleveland, Denver, Sunnyvale, Chicago, Kansas City, Washington DC, Los Angeles, Nashville, San Diego, Albuquerque, Atlanta, El Paso, Houston.]
ESnet 4 Backbone Target September 15, 2008
[Map slide. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); lab-supplied links; ESnet hubs and future hubs. Hubs: Seattle, Boston, New York City, Cleveland, Sunnyvale, Denver, Chicago, Kansas City, Washington DC, Los Angeles, Nashville, San Diego, Albuquerque, Atlanta, El Paso, Houston.]
ESnet4 (planned topology)
[Map slide. Legend: production IP core (10 Gbps); SDN core (20-30-40-50 Gbps); MANs (20-60 Gbps) or backbone loops for site access; international connections; primary DOE labs; high-speed cross-connects with Internet2/Abilene; possible hubs; IP core hubs; SDN hubs. Core networks grow to 50-60 Gbps by 2009-2010 (10 Gb/s circuits) and 500-600 Gbps by 2011-2012 (100 Gb/s circuits). Core network fiber path is ~14,000 miles / 24,000 km; spans annotated on the map include 1625 miles / 2545 km and 2700 miles / 4300 km. International peers: Canada (CANARIE), CERN (30+ Gbps, USLHCNet), Europe (GÉANT), Asia-Pacific, Australia, GLORIAD (Russia and China), South America (AMPATH). Hubs: Seattle, Boise, Boston, New York, Chicago, Cleveland, Denver, Kansas City, Sunnyvale, Washington DC, Atlanta, Tulsa, LA, Albuquerque, San Diego, Houston, Jacksonville.]
Typical ESnet 4 Hub
[Rack diagram: M320 router; 7609 switch; peering router; 10G performance tester; OWAMP measurement device; secure terminal server; power controllers.]
ESnet 4 Factoids as of July 16, 2007 • Installation to date: • 10 new 10 Gb/s circuits • ~10,000 route miles • 6 new hubs • 5 new routers • 4 new switches • Total of 70 individual pieces of equipment shipped • Over two and a half tons of electronics • 15 round-trip airline tickets for our install team • About 60,000 miles traveled so far… • 6 cities • 5 Brazilian Bar-B-Qs/grills sampled
ESnet Traffic Now Exceeding 2 PetaBytes/Month • 2.4 PBytes in May 2007 • 1 PByte in April 2006 • ESnet traffic historically has increased 10x every 47 months. [Chart: monthly accepted traffic in TeraBytes.]
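The 10x-every-47-months trend can be restated as a monthly growth rate and a doubling time with a little arithmetic. A quick sketch (the 2.4 PB/month figure is from the slide; everything else is derived from the stated trend, not from ESnet data):

```python
import math

# Historical trend from the slide: ESnet traffic grows 10x every 47 months.
MONTHS_PER_10X = 47
monthly_factor = 10 ** (1 / MONTHS_PER_10X)             # compound growth per month
doubling_months = math.log(2) / math.log(monthly_factor)

def projected_pbytes(months_ahead, base_pb=2.4):
    """Traffic (PB/month) months_ahead after May 2007, assuming the trend holds."""
    return base_pb * monthly_factor ** months_ahead

print(f"monthly growth factor: {monthly_factor:.3f}")
print(f"doubling time: {doubling_months:.1f} months")
print(f"projected traffic 12 months out: {projected_pbytes(12):.1f} PB/month")
```

This works out to roughly 5% compound growth per month, i.e. traffic doubling a little more than once a year.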
ESnet Availability • "5 nines" (>99.995%) • "4 nines" (>99.95%) • "3 nines" (>99.5%) • Dually connected sites. Note: these availability measures cover only the ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide the circuits from the site to an ESnet hub, and therefore the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
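The "nines" thresholds translate directly into downtime budgets. A small sketch of the arithmetic, using the percentage thresholds from the slide:

```python
def downtime_per_year(availability_pct):
    """Maximum downtime (hours/year) permitted at a given availability percentage."""
    return (1 - availability_pct / 100) * 365 * 24

# Thresholds as defined on the slide.
for label, pct in [('"3 nines"', 99.5), ('"4 nines"', 99.95), ('"5 nines"', 99.995)]:
    print(f'{label} (>{pct}%): at most {downtime_per_year(pct):.2f} hours/year of outage')
```

So "3 nines" allows roughly 44 hours of outage per year, "4 nines" about 4.4 hours, and "5 nines" about 26 minutes.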
OSCARS Overview: On-demand Secure Circuits and Advance Reservation System (OSCARS) Guaranteed Bandwidth Virtual Circuit Services • Path Computation • Topology • Reachability • Constraints • Scheduling • AAA • Availability • Provisioning • Signaling • Security • Resiliency/Redundancy
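The scheduling element above must decide whether a requested circuit fits alongside reservations that were already accepted for overlapping time windows. The following is a hypothetical sketch of such an admission check, not OSCARS's actual implementation; the `Reservation` type and `can_reserve` function are illustrative names:

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    start: int   # start time (e.g. epoch seconds)
    end: int     # end time (exclusive)
    mbps: int    # reserved bandwidth

def can_reserve(existing, request, link_capacity_mbps):
    """Hypothetical admission check: does `request` fit on the link at every
    instant of its window, given already-accepted reservations?"""
    # Committed bandwidth only changes at reservation boundaries, so it is
    # enough to check the request start plus each boundary inside its window.
    check_times = {request.start}
    for r in existing:
        if r.end > request.start and r.start < request.end:   # overlaps request
            check_times.update(t for t in (r.start, r.end)
                               if request.start <= t < request.end)
    for t in sorted(check_times):
        in_use = sum(r.mbps for r in existing if r.start <= t < r.end)
        if in_use + request.mbps > link_capacity_mbps:
            return False
    return True
```

For example, with a 10 Gb/s link carrying an existing 6 Gb/s reservation, a second overlapping request for 5 Gb/s would be refused while a 3 Gb/s request would be admitted.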
OSCARS Status Update • ESnet-Centric Deployment • Prototype layer 3 (IP) guaranteed bandwidth virtual circuit service deployed in ESnet (1Q05) • Layer 2 (Ethernet VLAN) virtual circuit service under development • Inter-Domain Collaborative Efforts • TeraPaths • Inter-domain interoperability for layer 3 virtual circuits demonstrated (3Q06) • Inter-domain interoperability for layer 2 virtual circuits under development • HOPI/DRAGON • Inter-domain exchange of control messages demonstrated (1Q07) • Initial integration of OSCARS and DRAGON has been successful (1Q07) • DICE • First draft of topology exchange schema has been formalized (in collaboration with NMWG) (2Q07); interoperability test scheduled for 3Q07 • Drafts on reservation and signaling messages under discussion • UVA • Integration of token-based authorization in OSCARS under discussion
ESnet Measurement Update • perfSONAR Services Deployed • Measurement archives serving utilization information (demo later today) • E2EMP serving end-to-end circuit status information • Visualization tools • Bandwidth & Latency Measurement Points • Deployed at ESnet4 hubs as they are configured • Working with the LHC community to define a set of perfSONAR services that can meet the network measurement & monitoring needs of large distributed science application communities.
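Utilization figures like those served by the measurement archives are commonly derived from periodic samples of interface octet counters. A generic sketch of that computation (an assumption about the usual technique, not perfSONAR's actual code), including handling of a single counter wrap:

```python
def utilization_pct(octets_then, octets_now, interval_s, link_bps,
                    counter_max=2**64):
    """Link utilization (%) from two samples of an interface octet counter.
    Modular subtraction handles one counter wrap between samples."""
    delta_octets = (octets_now - octets_then) % counter_max
    bits_per_s = delta_octets * 8 / interval_s
    return 100 * bits_per_s / link_bps

# e.g. 37.5 GB transferred in a 300 s polling interval on a 10 Gb/s link
print(f"{utilization_pct(0, 37_500_000_000, 300, 10_000_000_000):.1f}% utilized")
```

The modular subtraction is what makes the computation robust when a 32- or 64-bit counter rolls over between polls, which is why high-speed links need the wide (64-bit) counters.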