ESnet4: Networking for the Future of DOE Science
December 5, 2006
William E. Johnston, ESnet Department Head and Senior Scientist, Lawrence Berkeley National Laboratory
wej@es.net, www.es.net
DOE Office of Science and ESnet – the ESnet Mission • “The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, … providing more than 40 percent of total funding … for the Nation’s research programs in high-energy physics, nuclear physics, and fusion energy sciences.” (http://www.science.doe.gov) • This funding supports some 15,000 graduate students and postdocs. • ESnet’s primary mission is to enable the large-scale science that is the mission of the Office of Science (SC): • Sharing of massive amounts of data • Supporting thousands of collaborators world-wide • Distributed data processing • Distributed data management • Distributed simulation, visualization, and computational steering • ESnet provides network and collaboration services to Office of Science laboratories and many other DOE programs
What ESnet Is • A large-scale IP network built on a national circuit infrastructure with high-speed connections to all major US and international research and education (R&E) networks • An organization of 30 professionals structured for the service • An operating entity with an FY06 budget of $26.6M • A tier 1 ISP (direct peerings with all major networks) • The primary DOE network provider • Provides production Internet service to all of the major DOE Labs* and most other DOE sites • Based on DOE Lab populations, it is estimated that between 50,000 and 100,000 users depend on ESnet for global Internet access • Additionally, each year more than 18,000 non-DOE researchers from universities, other government agencies, and private industry use Office of Science facilities • * PNNL supplements its ESnet service with commercial service
Office of Science US Community Drives ESnet Design for Domestic Connectivity • [US map: institutions supported by SC, major user facilities, and DOE specific-mission, program-dedicated, and multiprogram laboratories, including Pacific Northwest National Laboratory, Idaho National Laboratory, Ames Laboratory, Argonne National Laboratory, Brookhaven National Laboratory, Fermi National Accelerator Laboratory, Lawrence Berkeley National Laboratory, Stanford Linear Accelerator Center, Princeton Plasma Physics Laboratory, Lawrence Livermore National Laboratory, Thomas Jefferson National Accelerator Facility, General Atomics, Oak Ridge National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, and the National Renewable Energy Laboratory]
Footprint of Largest SC Data Sharing Collaborators Drives the International Footprint that ESnet Must Support • Top 100 data flows generate 50% of all ESnet traffic (ESnet handles about 3×10⁹ flows/mo.) • 91 of the top 100 flows are from the Labs to other institutions (shown on map) (CY2005 data)
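The concentration described above, a handful of flows carrying half of all traffic, is straightforward to verify from flow records. A minimal sketch of the computation (the flow table below is hypothetical; a real analysis would aggregate NetFlow/sFlow exports):

```python
from collections import Counter

def top_n_share(flow_bytes, n=100):
    """Fraction of total traffic carried by the n largest flows."""
    total = sum(flow_bytes.values())
    top = sum(b for _, b in Counter(flow_bytes).most_common(n))
    return top / total if total else 0.0

# Hypothetical flow table: (src, dst) -> bytes transferred this month.
flows = {("lab-a", "cern"): 9_000, ("lab-b", "in2p3"): 6_000,
         ("desktop-1", "web"): 40, ("desktop-2", "web"): 60}
print(top_n_share(flows, n=2))  # the two big science flows dominate
```

With real monthly flow data, `n=100` reproduces the "top 100 flows = 50% of traffic" observation.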
What Does ESnet Provide? - 1 • An architecture tailored to accommodate DOE’s large-scale science • Move huge amounts of data between a small number of sites that are scattered all over the world • Comprehensive connectivity • High bandwidth access to DOE sites and DOE’s primary science collaborators: Research and Education institutions in the US, Europe, Asia Pacific, and elsewhere • Full access to the global Internet for DOE Labs • ESnet is a tier 1 ISP managing a full complement of Internet routes for global access • Highly reliable transit networking • Fundamental goal is to deliver every packet that is received to the “target” site
What Does ESnet Provide? - 2 • A full suite of network services • IPv4 and IPv6 routing and address space management • IPv4 multicast (and soon IPv6 multicast) • Primary DNS services • Circuit services (layer 2 e.g. Ethernet VLANs), MPLS overlay networks (e.g. SecureNet when it was ATM based) • Scavenger service so that certain types of bulk traffic can use all available bandwidth, but will give priority to any other traffic when it shows up • Prototype guaranteed bandwidth and virtual circuit services
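Scavenger service of the kind listed above is commonly implemented by marking bulk traffic with a low-priority DSCP (CS1 is the conventional "less-than-best-effort" code point; that ESnet used exactly this value is an assumption here). A sketch of how a bulk-transfer application could mark its own traffic:

```python
import socket

# DSCP CS1 ("scavenger" / less-than-best-effort) = 8. The TOS byte
# carries the DSCP in its upper six bits, so the value to set is 8 << 2.
DSCP_CS1 = 8
TOS_SCAVENGER = DSCP_CS1 << 2  # 0x20

def scavenger_socket():
    """TCP socket whose packets are marked for scavenger-class handling.
    Routers configured for scavenger service let this traffic use all
    available bandwidth but deprioritize it when other traffic appears."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_SCAVENGER)
    return s

s = scavenger_socket()
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 32
s.close()
```

The marking only has effect if routers along the path are configured with a matching scavenger queue; otherwise the traffic is treated as best-effort.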
What Does ESnet Provide? - 3 • New network services • Guaranteed bandwidth services • Via a combination of QoS, MPLS overlay, and layer 2 VLANS • Collaboration services and Grid middleware supporting collaborative science • Federated trust services / PKI Certification Authorities with science oriented policy • Audio-video-data teleconferencing • Highly reliable and secure operation • Extensive disaster recovery infrastructure • Comprehensive internal security • Cyberdefense for the WAN
What Does ESnet Provide? - 4 • Comprehensive user support, including “owning” all trouble tickets involving ESnet users (including problems at the far end of an ESnet connection) until they are resolved – 24x7x365 coverage • ESnet’s mission is to enable the network-based aspects of OSC science, and that includes troubleshooting network problems wherever they occur • A highly collaborative and interactive relationship with the DOE Labs and scientists for planning, configuration, and operation of the network • ESnet and its services evolve continuously in direct response to OSC science needs • Engineering services for special requirements
ESnet History • [timeline chart; transition to ESnet4 in progress]
ESnet3 Today Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (Fall, 2006) • [network map: 42 end user sites (22 Office of Science sponsored, 12 NNSA sponsored, 3 joint sponsored, 6 laboratory sponsored, plus other sponsored sites such as NSF LIGO and NOAA); 10 Gb/s Science Data Network (SDN) core and 10 Gb/s / 2.5 Gb/s IP core (packet over SONET optical ring and hubs); MAN rings at ≥ 10 Gb/s; site links ranging from 45 Mb/s and less through OC3 (155 Mb/s), OC12 ATM (622 Mb/s), OC12 / GigEthernet, and Lab-supplied links; high-speed peerings with Internet2/Abilene, international R&E networks (GÉANT, SINet Japan, CA*net4 Canada, AARNet Australia, TANet2 Taiwan, GLORIAD for Russia/China, Kreonet2 Korea, AMPATH South America, USLHCnet to CERN, etc.), specific R&E network peers, other R&E peering points, and commercial peering points]
ESnet’s Place in U. S. and International Science • ESnet, Internet2/Abilene, and National Lambda Rail (NLR) provide most of the nation’s transit networking for basic science • Abilene provides national transit networking for most of the US universities by interconnecting the regional networks (mostly via the GigaPoPs) • ESnet provides national transit networking and ISP service for the DOE Labs • NLR provides various science-specific and network R&D circuits • GÉANT plays a role in Europe similar to Abilene and ESnet in the US – it interconnects the European National Research and Education Networks (NRENs), to which the European R&E sites connect • A GÉANT-operated, NSF-funded link currently carries all non-LHC ESnet traffic to Europe, and this is a significant fraction of all ESnet traffic
ESnet is a Highly Reliable Infrastructure • [site availability chart: “5 nines” (>99.995%), “4 nines” (>99.95%), “3 nines”; dually connected sites indicated]
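The "nines" tiers above translate directly into permitted outage time. A quick check using standard availability arithmetic (the thresholds come from the chart; the minutes figures are simple derivations, not ESnet data):

```python
def max_downtime_minutes_per_year(availability):
    """Minutes of outage per year permitted at a given availability level."""
    return (1.0 - availability) * 365 * 24 * 60

# The chart's thresholds: ">99.995%" is about 26 minutes of downtime a
# year, while ">99.95%" allows roughly ten times that.
for label, a in [("5 nines", 0.99995), ("4 nines", 0.9995), ("3 nines", 0.999)]:
    print(f"{label}: {max_downtime_minutes_per_year(a):.0f} min/yr")
```

This is why dual connectivity matters: a singly attached site inherits the availability of its one access link, while dual attachment lets independent failures overlap without an outage.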
ESnet is an Organization Structured for the Service • Network operations and user support (24x7x365, end-to-end problem resolution) • Deployment and WAN maintenance • Network engineering, routing and network services, WAN security • Applied R&D for new network services (circuit services and end-to-end monitoring) • Science collaboration services (Public Key Infrastructure certification authorities, AV conferencing, email lists, Web services) • Internal infrastructure, disaster recovery, security • Management, accounting, compliance • 30.7 FTE (full-time staff) total
ESnet FY06 Budget is Approximately $26.6M • Approximate budget categories • Total funds: $26.6M • SC operating: $20.1M • SC special projects (Chicago and LI MANs): $1.2M • SC R&D: $0.5M • Other DOE: $3.8M • Carryover: $1M • Total expenses: $26.6M • Circuits & hubs: $12.7M • Internal infrastructure, security, disaster recovery: $3.4M • Engineering & research: $2.9M • WAN equipment: $2.0M • Collaboration services: $1.6M • Special projects (Chicago and LI MANs): $1.2M • Operations: $1.1M • Management and compliance: $0.7M • Target carryover: $1.0M
Strategy for Conduct of Business for the Last Few Years • Increasing openness • Making network data available to the community and the Internet • Outage calendar and outage reports • Web-based GUI for traffic measurements (netinfo.es.net) • Flow stats • Increasing instrumentation • Performance testers (various levels of access to test circuits) • OWAMP servers (one-way delay testers; exquisitely sensitive. Note: the OWAMP system is largely down; it is migrating to perfSONAR, and new, more relevant R&E sites will be selected for continuous monitoring) • Establish the goal that “network performance between ESnet sites and Internet2 sites served by Abilene is equivalent to network performance across one of the networks or the other”
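The OWAMP servers mentioned above measure one-way (not round-trip) delay, which is what makes them so sensitive: they require the clocks at both ends to be synchronized. A minimal sketch of the underlying computation (not the OWAMP protocol itself; the function name is illustrative):

```python
def one_way_delay_ms(send_ts, recv_ts, clock_offset=0.0):
    """One-way delay given sender and receiver timestamps (in seconds).
    Assumes the two clocks agree to within `clock_offset` seconds; OWAMP
    deployments rely on NTP or GPS for this. Any residual clock offset
    shows up directly as delay error, unlike round-trip measurements,
    which cancel it out -- but one-way numbers expose path asymmetries
    that RTT measurements hide."""
    return (recv_ts - send_ts - clock_offset) * 1000.0

print(one_way_delay_ms(100.000, 100.024))  # ~24 ms path, synced clocks
```

A real deployment streams many such timestamped test packets and looks at the delay distribution, where queueing and routing changes appear as shifts or added jitter.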
Strategy for Conduct of Business for the Last Few Years • Increasing involvement with national and international collaborations and research activities • perfSONAR - standards-based monitoring platform • OSCARS - community developed (ESnet leadership) virtual circuit management • Increasing partnership with the R&E community • Joint ESnet/Internet2/Abilene meetings at Joint Techs • LHC network operations working group participation • DICE meetings • Joint Techs meetings participation • attendance, talks, & program committee • All leading up to partnership with Internet2 for building ESnet4
A Changing Science Environment is the Key Driver of the Next Generation ESnet • Large-scale collaborative science – big facilities, massive data, thousands of collaborators – is now a significant aspect of the Office of Science (“SC”) program • SC science community is almost equally split between Labs and universities • SC facilities have users worldwide • Very large international (non-US) facilities (e.g. LHC and ITER) and international collaborators are now a key element of SC science • Distributed systems for data analysis, simulations, instrument operation, etc., are essential and are now common (in fact, distributed data analysis now generates 50% of all ESnet traffic)
Planning the Future Network - ESnet4 • There are many stakeholders for ESnet • SC programs • Advanced Scientific Computing Research • Basic Energy Sciences • Biological and Environmental Research • Fusion Energy Sciences • High Energy Physics • Nuclear Physics • Office of Nuclear Energy • Major scientific facilities • At DOE sites: large experiments, supercomputer centers, etc. • Not at DOE sites: LHC, ITER • SC supported scientists not at the Labs (mostly at US R&E institutions) • Other collaborating institutions (mostly US, European, and AP R&E) • Other R&E networking organizations that support major collaborators • Mostly US, European, and Asia Pacific networks • Lab operations and general population • Lab networking organizations • (These account for 85% of all ESnet traffic)
Planning the Future Network - ESnet4 • Requirements of the ESnet stakeholders are primarily determined by 1) Data characteristics of instruments and facilities that will be connected to ESnet • What data will be generated by instruments coming on-line over the next 5-10 years? • How and where will it be analyzed and used? 2) Examining the future process of science • How will the process of doing science change over 5-10 years? • How do these changes drive demand for new network services? 3) Studying the evolution of ESnet traffic patterns • What are the trends based on the use of the network in the past 2-5 years? • How must the network change to accommodate the future traffic patterns implied by the trends?
(1) Requirements from Instruments and Facilities • DOE SC facilities that are, or will be, the top network users (* = 14 of 22 are characterized by current case studies) • Advanced Scientific Computing Research: National Energy Research Scientific Computing Center (NERSC) (LBNL)*, National Leadership Computing Facility (NLCF) (ORNL)*, Argonne Leadership Class Facility (ALCF) (ANL)* • Basic Energy Sciences: National Synchrotron Light Source (NSLS) (BNL), Stanford Synchrotron Radiation Laboratory (SSRL) (SLAC), Advanced Light Source (ALS) (LBNL)*, Advanced Photon Source (APS) (ANL), Spallation Neutron Source (ORNL)*, National Center for Electron Microscopy (NCEM) (LBNL)*, Combustion Research Facility (CRF) (SNLL)* • Biological and Environmental Research: William R. Wiley Environmental Molecular Sciences Laboratory (EMSL) (PNNL)*, Joint Genome Institute (JGI), Structural Biology Center (SBC) (ANL) • Fusion Energy Sciences: DIII-D Tokamak Facility (GA)*, Alcator C-Mod (MIT)*, National Spherical Torus Experiment (NSTX) (PPPL)*, ITER • High Energy Physics: Tevatron Collider (FNAL), B-Factory (SLAC), Large Hadron Collider (LHC, ATLAS, CMS) (BNL, FNAL)* • Nuclear Physics: Relativistic Heavy Ion Collider (RHIC) (BNL)*, Continuous Electron Beam Accelerator Facility (CEBAF) (JLab)*
The Largest Facility: Large Hadron Collider at CERN • [photo: LHC CMS detector, 15m × 15m × 22m, 12,500 tons, $700M; human shown for scale]
(2) Requirements from Examining the Future Process of Science • In a major workshop [1], and in subsequent updates [2], requirements were generated by asking the science community how their process of doing science will / must change over the next 5 and next 10 years in order to accomplish their scientific goals • Computer science and networking experts then assisted the science community in • analyzing the future environments • deriving middleware and networking requirements needed to enable these environments • These were compiled as case studies that provide specific 5 & 10 year network requirements for bandwidth, footprint, and new services
Science Network Requirements Aggregation Summary • [table: immediate requirements and drivers]
3) These Trends are Seen in the Observed Evolution of Historical ESnet Traffic Patterns • [chart: ESnet monthly accepted traffic in terabytes/month, January 2000 – June 2006, with top 100 site-to-site workflows highlighted] • ESnet is currently transporting more than 1 petabyte (1000 terabytes) per month • More than 50% of the traffic is now generated by the top 100 sites: large-scale science dominates all ESnet traffic
ESnet Traffic has Increased by 10X Every 47 Months, on Average, Since 1990 • [log plot of ESnet monthly accepted traffic, January 1990 – June 2006, terabytes/month] • Aug. 1990: 100 MBy/mo. • Oct. 1993 (38 months later): 1 TBy/mo. • Jul. 1998 (57 months later): 10 TBy/mo. • Nov. 2001 (40 months later): 100 TBy/mo. • Apr. 2006 (53 months later): 1 PBy/mo.
Requirements from Network Utilization Observation • In 4 years, we can expect a 10x increase in traffic over current levels without the addition of production LHC traffic • Nominal average load on busiest backbone links is ~1.5 Gbps today • In 4 years that figure will be ~15 Gbps based on current trends • Measurements of this type are science-agnostic • It doesn’t matter who the users are, the traffic load is increasing exponentially • Predictions based on this sort of forward projection tend to be conservative estimates of future requirements because they cannot predict new uses • Bandwidth trends drive requirement for a new network architecture • New architecture/approach must be scalable in a cost-effective way
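The "~15 Gbps in 4 years" figure above follows from simple compounding of the observed 10x-per-47-months trend. A sketch of the extrapolation (the trend and the 1.5 Gbps starting point come from the slides; the projection itself is arithmetic):

```python
def projected_load(current_gbps, months_ahead, tenfold_period_months=47):
    """Extrapolate average link load assuming traffic grows 10x every
    `tenfold_period_months`, the average trend observed on ESnet since 1990."""
    return current_gbps * 10 ** (months_ahead / tenfold_period_months)

# ~1.5 Gbps on the busiest backbone links today -> roughly 10x in 4 years
print(round(projected_load(1.5, 48), 1))  # ~15.8 Gbps
```

As the slide notes, this is a conservative floor: the fit only captures growth in existing uses and cannot anticipate step changes such as LHC production traffic.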
Large-Scale Flow Trends, June 2006 (subtitle: “Onslaught of the LHC”) • [chart: traffic volume of the top 30 AS-AS flows, June 2006, in terabytes (AS-AS = mostly Lab to R&E site, a few Lab to R&E network, a few “other”)] • FNAL → CERN traffic is comparable to BNL → CERN, but rides on layer 2 flows that are not yet monitored for traffic (this will change soon)
Traffic Patterns are Changing Dramatically • [four charts of total traffic (TBy) for 1/05, 7/05, 1/06, and 6/06, each with a 2 TB/month reference line] • While the total traffic is increasing exponentially: • Peak flow (that is, system-to-system) bandwidth is decreasing • The number of large flows is increasing
The Onslaught of Grids • Question: Why is peak flow bandwidth decreasing while total traffic is increasing? • Answer: Most large data transfers are now done by parallel / Grid data movers • In June 2006, 72% of the hosts generating the top 1000 flows were involved in parallel data movers (Grid applications) • This is the most significant traffic pattern change in the history of ESnet • This has implications for the network architecture that favor path multiplicity and route diversity • (Plateaus in the traffic plots indicate the emergence of parallel transfer systems: many systems transferring the same amount of data at the same time)
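The plateaus are the signature of many hosts each moving one slice of the same dataset at a similar per-stream rate. A toy sketch of the parallel-mover pattern (names and structure are illustrative; production tools of the era were GridFTP-style movers with real TCP streams):

```python
from concurrent.futures import ThreadPoolExecutor

def move_chunk(chunk_id, data):
    """Stand-in for one parallel stream copying one slice of a dataset.
    A real mover would open its own TCP connection and transfer bytes."""
    return chunk_id, len(data)

def parallel_move(dataset: bytes, streams: int = 8):
    """Split a dataset into equal slices and move them concurrently.
    Aggregate throughput scales with the stream count even though each
    individual flow stays modest, matching the observed shift toward
    more, smaller flows. Returns total bytes moved."""
    size = -(-len(dataset) // streams)  # ceiling division
    chunks = [dataset[i * size:(i + 1) * size] for i in range(streams)]
    with ThreadPoolExecutor(max_workers=streams) as pool:
        results = list(pool.map(lambda c: move_chunk(*c), enumerate(chunks)))
    return sum(n for _, n in results)

print(parallel_move(b"x" * 1000, streams=8))  # all 1000 bytes accounted for
```

This pattern is exactly why the architecture bullets favor path multiplicity: with many independent streams, the network can spread load across diverse routes instead of saturating one.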
Network Observation – Circuit-like Behavior • Look at the top 20 traffic generators’ historical flow patterns • [plot: LIGO – CalTech host-to-host flow, gigabytes/day over 1 year, with gaps where no data was taken] • Over 1 year, the work flow / “circuit” duration is about 3 months
Network Observation – Circuit-like Behavior (2) • Look at the top 20 traffic generators’ historical flow patterns • [plot: SLAC – IN2P3 (France) host-to-host flow, gigabytes/day over 1 year, with gaps where no data was taken] • Over 1 year, the work flow / “circuit” duration is about 1 day to 1 week
What is the High-Level View of ESnet Traffic Patterns? • [diagram: ESnet inter-sector traffic summary, Mar. 2006, showing percentages of total ingress (green) and egress (blue) traffic exchanged between DOE sites and commercial networks, R&E networks (mostly universities), and international peers (almost entirely R&E sites), plus ESnet inter-Lab traffic (~10%)] • Traffic notes: • more than 90% of all traffic is Office of Science • less than 10% is inter-Lab
Requirements from Traffic Flow Observations • Most of ESnet science traffic has a source or sink outside of ESnet • Drives requirement for high-bandwidth peering • Reliability and bandwidth requirements demand that peering be redundant • Multiple 10 Gbps peerings today, must be able to add more bandwidth flexibly and cost-effectively • Bandwidth and service guarantees must traverse R&E peerings • Collaboration with other R&E networks on a common framework is critical • Seamless fabric • Large-scale science is now the dominant user of the network • Satisfying the demands of large-scale science traffic into the future will require a purpose-built, scalable architecture • Traffic patterns are different from those of the commodity Internet
A Changing Science Environment Makes New Demands on the Network – Requirements Summary • Increased capacity • Needed to accommodate a large and steadily increasing amount of data that must traverse the network • High network reliability • Essential when interconnecting components of distributed large-scale science • High-speed, highly reliable connectivity between Labs and US and international R&E institutions • To support the inherently collaborative, global nature of large-scale science • New network services to provide bandwidth guarantees • Provide for data transfer deadlines for remote data analysis, real-time interaction with instruments, coupled computational simulations, etc.
ESnet4 - The Response to the Requirements I) A new network architecture and implementation strategy • Rich and diverse network topology for flexible management and high reliability • Dual connectivity at every level for all large-scale science sources and sinks • A partnership with the US research and education community to build a shared, large-scale, R&E managed optical infrastructure • a scalable approach to adding bandwidth to the network • dynamic allocation and management of optical circuits II) Development and deployment of a virtual circuit service • Develop the service cooperatively with the networks that are intermediate between DOE Labs and major collaborators to ensure end-to-end interoperability
Next Generation ESnet: I) Architecture and Configuration • Main architectural elements and the rationale for each element 1) A high-reliability IP core (e.g. the current ESnet core) to address • General science requirements • Lab operational requirements • Backup for the SDN core • Vehicle for science services • Full service IP routers 2) Metropolitan Area Network (MAN) rings to provide • Dual site connectivity for reliability • Much higher site-to-core bandwidth • Support for both production IP and circuit-based traffic • Multiple connections between the SDN and IP cores 2a) Loops off of the backbone rings to provide dual site connections where MANs are not practical 3) A Science Data Network (SDN) core for • Provisioned, guaranteed bandwidth circuits to support large, high-speed science data flows • Very high total bandwidth • Multiply connecting MAN rings for protection against hub failure • Alternate path for production IP traffic • Less expensive router/switches • Initial configuration targeted at LHC, which is also the first step to the general configuration that will address all SC requirements • Can meet other unknown bandwidth requirements by adding lambdas
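A provisioned, guaranteed-bandwidth circuit service like the SDN's implies admission control: a new reservation is accepted only if every link on its path still has headroom. A deliberately simplified sketch of that check (hypothetical function and link names; OSCARS performs a much richer version, including scheduling in time):

```python
def admit(path, request_gbps, reserved, capacity_gbps=10.0):
    """Accept a circuit reservation only if every link on the path can
    carry it on top of existing reservations. On success, records the
    reservation in `reserved` (a dict mapping link name -> Gb/s) and
    returns True; otherwise leaves state unchanged and returns False."""
    if any(reserved.get(link, 0.0) + request_gbps > capacity_gbps
           for link in path):
        return False
    for link in path:
        reserved[link] = reserved.get(link, 0.0) + request_gbps
    return True

reserved = {}
print(admit(["chi-ny", "ny-bnl"], 6.0, reserved))  # True: links are empty
print(admit(["chi-ny"], 5.0, reserved))            # False: only 4 Gb/s left
```

Adding a lambda to a segment simply raises that link's `capacity_gbps`, which is the "scalable approach to adding bandwidth" the slide describes.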
ESnet Target Architecture: IP Core + Science Data Network Core + Metro Area Rings • [map: production IP core and Science Data Network core spanning Seattle, Sunnyvale, LA, San Diego, Albuquerque, Denver, Chicago, Cleveland, New York, Washington DC, and Atlanta (roughly 1625 miles / 2545 km north-south, 2700 miles / 4300 km east-west), with metropolitan area rings or backbone loops for Lab access, 10-50 Gbps circuits, loops off the backbone, IP core hubs, SDN hubs, possible hubs, primary DOE Labs, and international connections at multiple points]
ESnet4 • Internet2 has partnered with Level 3 Communications Co. and Infinera Corp. for a dedicated optical fiber infrastructure with a national footprint and a rich topology - the “Internet2 Network” • The fiber will be provisioned with Infinera Dense Wave Division Multiplexing equipment that uses an advanced, integrated optical-electrical design • Level 3 will maintain the fiber and the DWDM equipment • The DWDM equipment will initially be provisioned to provide 10 optical circuits (lambdas - λs) across the entire fiber footprint (80 λs is the maximum) • ESnet has partnered with Internet2 to: • Share the optical infrastructure • Develop new circuit-oriented network services • Explore mechanisms that could be used for the ESnet Network Operations Center (NOC) and the Internet2/Indiana University NOC to back each other up for disaster recovery purposes
ESnet4 • ESnet will build its next generation IP network and its new circuit-oriented Science Data Network primarily on the Internet2 circuits (λs) that are dedicated to ESnet, together with a few National Lambda Rail and other circuits • ESnet will provision and operate its own routing and switching hardware that is installed in various commercial telecom hubs around the country, as it has done for the past 20 years • ESnet’s peering relationships with the commercial Internet, various US research and education networks, and numerous international networks will continue and evolve as they have for the past 20 years
ESnet4 • ESnet4 will also involve an expansion of the multi-10Gb/s Metropolitan Area Rings in the San Francisco Bay Area, Chicago, Long Island, Newport News (VA/Washington, DC area), and Atlanta • provide multiple, independent connections for ESnet sites to the ESnet core network • expandable • Several 10Gb/s links provided by the Labs that will be used to establish multiple, independent connections to the ESnet core • currently PNNL and ORNL
ESnet Metropolitan Area Network Ring Architecture for High Reliability Sites • [diagram: MAN fiber ring with 2-4 × 10 Gbps channels provisioned initially and expansion capacity to 16-64; the ring connects MAN site switches to the ESnet production IP core hub and to ESnet SDN core hubs on independent east and west paths, with independent port cards supporting multiple 10 Gb/s line interfaces; a large science site receives both ESnet production IP service and ESnet-managed λ / circuit services, with ESnet-managed virtual circuit services tunneled through the IP backbone, virtual circuits delivered to the site, and SDN circuits delivered to site systems via the site gateway router, site edge router, and site LAN]
Internet2 / Level3 / Infinera Optical Infrastructure • [map: Infinera DWDM equipment sites, regen sites, Internet2 core nodes, DWDM nodes (common and ESnet-only), other Level 3 nodes, and Level 3 fiber extensions from the Internet2 network to RON/core connector nodes, at several dozen Level 3 facilities nationwide, including Seattle, Portland, Boise, Sacramento, Oakland, Sunnyvale, San Luis Obispo, Santa Barbara, Los Angeles, San Diego, Phoenix, Tucson, Salt Lake, Reno, Denver, Albuquerque, El Paso, San Antonio, Austin, Houston, Dallas, Tulsa, Kansas City, Omaha, St. Louis, Chicago, Detroit, Indianapolis, Cincinnati, Louisville, Nashville, Birmingham, Atlanta, New Orleans, Baton Rouge, Mobile, Tallahassee, Jacksonville, Orlando, Tampa, Miami, Raleigh, Charlotte, Washington (McLean, VA), Pittsburgh, Philadelphia, New York, Albany, Syracuse, Rochester, Buffalo, Boston (Cambridge), and Cleveland]
ESnet4 2009 Configuration (some of the circuits may be allocated dynamically from a shared pool) • [map: ESnet IP core and Science Data Network core with 1-3 lambdas per numbered Internet2 circuit segment, plus existing NLR links, Lab-supplied links, LHC-related links, MAN links, and international IP connections; legend distinguishes ESnet IP switch/router hubs, ESnet IP switch-only hubs, ESnet SDN switch hubs, Lab sites (20), and layer 1 optical nodes at eventual ESnet points of presence vs. those not currently in ESnet plans; hubs include Seattle, Portland, Boise, Sunnyvale, LA, San Diego, El Paso, Albuquerque, Denver, Salt Lake City, KC, Tulsa, Houston, Baton Rouge, Chicago, Indianapolis, Nashville, Atlanta, Jacksonville, Raleigh, Cleveland, Pittsburgh, Philadelphia, NYC, Boston, and Washington DC]
ESnet4 2009 Configuration with MAN detail (some of the circuits may be allocated dynamically from a shared pool) • [map: same core configuration as the previous slide, with the metropolitan area networks shown: West Chicago MAN (600 W. Chicago, Starlight, FNAL, ANL, USLHCNet), Long Island MAN (32 AoA NYC, 111-8th, BNL, USLHCNet), San Francisco Bay Area MAN (JGI, LBNL, SLAC, NERSC, LLNL, SNLL), Newport News - Elite (Wash., DC, MATP, JLab, ELITE, ODU), and Atlanta (180 Peachtree, 56 Marietta, Nashville, Jacksonville)]
Internet2 and ESnet Optical Node • [diagram: a shared optical node housing the Internet2/Level3 national optical infrastructure (Infinera DTN with fiber east, west, and north/south), a Ciena CoreDirector grooming device, the Internet2 IP core (T640) and ESnet IP core (M320) routers, the ESnet SDN core switch, optical interfaces to R&E regional nets and ESnet metro-area networks, dynamically allocated and routed waves (future), support devices for measurement, out-of-band access, monitoring, and security, various equipment and experimental control plane management systems, and future control-plane access for network testbeds]
The Evolution of ESnet Architecture • ESnet to 2005: • A routed IP network with sites singly attached to a national core ring • ESnet from 2006-07: • A routed IP network with sites dually connected on metro area rings or dually connected directly to the core ring • A switched Science Data Network (SDN) providing virtual circuit services for data-intensive science • Rich topology offsets the lack of dual, independent national cores • [diagrams show ESnet sites, ESnet hubs / core network connection points, metro area rings (MANs), connections to other IP networks, and circuit connections to other science networks (e.g. USLHCNet)]
ESnet4 Planned Configuration • Core networks: 40-50 Gbps in 2009-2010, 160-400 Gbps in 2011-2012 • Core network fiber path is ~14,000 miles / 24,000 km • [map: production IP core (10 Gbps), SDN core (20-30-40 Gbps), and MANs (20-60 Gbps) or backbone loops for site access; high-speed cross-connects with Internet2/Abilene; IP core hubs, SDN (switch) hubs, primary DOE Labs, and possible hubs at Seattle, Sunnyvale, LA, San Diego, Boise, Denver, Albuquerque, Tulsa, Kansas City, Houston, Chicago, Indianapolis, Nashville, Atlanta, Jacksonville, Cleveland, Boston, New York, and Washington DC; international connections to Canada (CANARIE), Europe (GÉANT), CERN (30 Gbps), Asia-Pacific, Australia, GLORIAD (Russia and China), and South America (AMPATH); map spans roughly 1625 miles / 2545 km north-south and 2700 miles / 4300 km east-west]