Abilene Update Session

Presentation Transcript


  1. Abilene Update Session • Steve Corbató, Director, Backbone Network Infrastructure • HENP Working Group, Washington DC

  2. Agenda • Network status & events of interest • 10-Gbps upgrade plans • Optical networking on the regional & national scale

  3. Abilene – May, 2002 • IP-over-SONET backbone (OC-48c, 2.5 Gbps) • 53 direct connections • 4 OC-48c connections • 1 Gigabit Ethernet trial • 23 will connect via at least OC-12c (622 Mbps) by 1Q02 • Number of ATM connections decreasing • 215 participants – research universities & labs • All 50 states, District of Columbia, & Puerto Rico • 15 regional GigaPoPs support ~70% of participants • Expanded access • 50 sponsored participants • New: Smithsonian Institution, Arecibo Radio Telescope • 23 state education networks (SEGPs)

  4. Abilene international connectivity • Transoceanic R&E bandwidths growing! • GÉANT – 5 Gbps between Europe and New York City now • Key international exchange points facilitated by Internet2 membership and the U.S. scientific community • STARTAP & STAR LIGHT – Chicago (GigE) • AMPATH – Miami (OC-3c → OC-12c) • Pacific Wave – Seattle (GigE) • MAN LAN – New York City (GigE/10GigE EP soon) • CA*NET3/4: Seattle, Chicago, and New York • CUDI: CENIC and Univ. of Texas at El Paso • International transit service • Collaboration with CA*NET3 and STARTAP

  5. Abilene International Peering (09 March 2002) • STAR TAP/Star Light (Chicago): APAN/TransPAC, CA*net3, CERN, CERnet, FASTnet, GEMnet, IUCC, KOREN/KREONET2, NORDUnet, RNP2, SURFnet, SingAREN, TAnet2 • Pacific Wave (Seattle): AARNET, APAN/TransPAC, CA*net3, TANET2 • NYCM (New York City): BELNET, CA*net3, GEANT*, HEANET, JANET, NORDUnet • SNVA (Sunnyvale): GEMNET, SINET, SingAREN, WIDE • LOSA (Los Angeles): UNINET (OC3-OC12) • San Diego (CALREN2): CUDI • AMPATH (Miami): REUNA, RNP2, RETINA, ANSP, (CRNet) • El Paso (UACJ-UT El Paso): CUDI • * via GEANT: ARNES, CARNET, CESnet, DFN, GRNET, RENATER, RESTENA, SWITCH, HUNGARNET, GARR-B, POL-34, RCST, RedIRIS

  6. Packetized raw High Definition Television (HDTV) • Raw HDTV/IP – single UDP flow of 1.5 Gbps • Project of USC/ISI East, Tektronix, & U. of Wash (DARPA) • 6 Jan 2002: Seattle to Washington DC via Abilene • Single flow utilized 60% of backbone bandwidth • 18 hours: no packets lost, 15 resequencing episodes • End-to-end network performance (includes P/NW & MAX GigaPoPs) • Loss: <0.8 ppb (90% c.l.) • Reordering: 5 ppb • Transcontinental 1-Gbps TCP requires loss of • <30 ppb (1.5 KB frames) • <1 ppm (9 KB jumbo)
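The loss budgets on this slide follow from the standard TCP throughput approximation (Mathis et al.): rate ≈ (MSS/RTT) · C/√p. A minimal sketch, assuming a transcontinental RTT of roughly 70 ms and C ≈ 1.22 (neither figure is from the slide), reproduces the order of magnitude of the <30 ppb and <1 ppm thresholds:

```python
# Rough loss-budget check using the Mathis et al. TCP throughput model:
#   rate <= (MSS / RTT) * C / sqrt(p)   =>   p <= (MSS * C / (RTT * rate))^2
# Assumptions (not from the slide): C ~= 1.22, transcontinental RTT ~70 ms.
C = 1.22            # Mathis constant for periodic loss
RTT = 0.070         # seconds, assumed round-trip time
RATE = 1e9          # target sustained rate: 1 Gbps

def max_loss(mss_bytes: int) -> float:
    """Largest packet-loss probability that still allows RATE at the given MSS."""
    mss_bits = mss_bytes * 8
    return (mss_bits * C / (RTT * RATE)) ** 2

for label, mss in [("1.5 KB frames", 1460), ("9 KB jumbo", 8960)]:
    print(f"{label}: loss must stay below ~{max_loss(mss) * 1e9:.0f} ppb")
# Yields values in the tens-of-ppb and ~1.5 ppm range, the same order as the
# <30 ppb and <1 ppm figures on the slide (exact numbers depend on the RTT).
```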

  7. End-to-End Performance: ‘High bandwidth is not enough’ • Bulk TCP flows (transfers > 10 Mbytes) • Current median flow rate over Abilene: 1.9 Mbps • 95th percentile: 7.0 Mbps

  8. Netflow information sources • Weekly summaries • http://netflow.internet2.edu/weekly/ • Raw data manipulation • http://www.itec.oar.net/abilene-netflow/

  9. Jumbo frames are supported here • Default Abilene MTU: 4.5 kB • Now we also support 9 kB MTUs on a per-connector basis • Motivation: support for high-performance computing (HPC) • Interested connectors? • Contact the NOC
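Not from the slide, but a back-of-the-envelope illustration of why jumbo frames matter for high-rate flows: at a fixed bit rate, the packets-per-second load on hosts and routers falls in proportion to the frame size (and, per the model above, the tolerable loss rate rises with the square of the MSS).

```python
# Packet-rate comparison at 1 Gbps for standard, Abilene-default, and jumbo MTUs.
RATE_BPS = 1e9  # 1 Gbps

for mtu in (1500, 4500, 9000):          # bytes
    pps = RATE_BPS / (mtu * 8)
    print(f"MTU {mtu:5d} B -> ~{pps:,.0f} packets/s at 1 Gbps")
# MTU  1500 B -> ~83,333 packets/s
# MTU  4500 B -> ~27,778 packets/s
# MTU  9000 B -> ~13,889 packets/s
```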

  10. Future of Abilene • Original UCAID/Qwest agreement amended on October 1, 2001 • Extension of MoU for another 5 years – until October, 2006 • Originally expired March, 2003 • Upgrade of Abilene backbone to optical transport capability – λ's (unprotected) • x4 increase in the core backbone bandwidth • OC-48c SONET (2.5 Gbps) to 10-Gbps DWDM

  11. Key aspects of next generation Abilene backbone - I • Native IPv6 • Motivations • Resolving IPv4 address exhaustion issues • Preservation of the original End-to-End Architecture model • p2p collaboration tools, reverse trend to CO-centrism • International collaboration • Router and host OS capabilities • Run natively - concurrent with IPv4 • Replicate multicast deployment strategy • Close collaboration with Internet2 IPv6 Working Group on regional and campus v6 rollout • Addressing architecture
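As a host-side illustration of “run natively, concurrent with IPv4” (a sketch under assumptions, not taken from the slides): a single dual-stack listening socket that accepts both address families. The port number is hypothetical and the IPV6_V6ONLY default varies by operating system.

```python
import socket

# Dual-stack TCP listener: one AF_INET6 socket serving IPv6 natively and
# IPv4 clients as IPv4-mapped addresses (::ffff:a.b.c.d).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # 0 = accept IPv4 too
srv.bind(("::", 8080))   # "::" listens on all IPv6 (and mapped IPv4) addresses
srv.listen(5)
print("Listening dual-stack on [::]:8080")

conn, addr = srv.accept()
print("Connection from", addr)   # IPv4 peers appear as ::ffff:x.x.x.x
conn.close()
srv.close()
```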

  12. Key aspects of next generation Abilene backbone - II • Network resiliency • Abilene λ's will not be ring-protected like SONET • Increasing use of videoconferencing/VoIP imposes tighter restoration requirements (<100 ms) • Options: • MPLS/TE fast reroute (initially) • IP-based IGP fast convergence (preferable)

  13. Key aspects of next generation Abilene backbone - III • New & differentiated measurement capabilities • Significant factor in NGA rack design • 4 dedicated servers at each node • Additional provisions for future servers • Local data collection to capture data at times of network instability • Enhance active probing • Now: latency & jitter, loss, reachability (Surveyor) • Regular TCP/UDP throughput tests – ~1 Gbps • Separate server for E2E performance beacon • Enhance passive measurement • Now: SNMP (NOC) & traffic matrix/type (Netflow) • Routing (BGP & IGP) • Optical splitter taps on backbone links at select location(s)
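To make the “enhance active probing” item concrete, here is a minimal sketch of a round-trip latency/loss probe in the spirit of, but far simpler than, the Surveyor-style measurements; the reflector hostname and port are hypothetical.

```python
import socket, struct, time

# Send timestamped UDP probes to an echo reflector and report RTT and loss.
TARGET = ("probe-reflector.example.net", 9000)   # hypothetical reflector
COUNT, TIMEOUT = 100, 1.0

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(TIMEOUT)

rtts, lost = [], 0
for seq in range(COUNT):
    sent = time.time()
    sock.sendto(struct.pack("!Id", seq, sent), TARGET)   # sequence + timestamp
    try:
        data, _ = sock.recvfrom(64)
        rtts.append((time.time() - sent) * 1000.0)       # RTT in ms
    except socket.timeout:
        lost += 1
    time.sleep(0.01)   # pace probes

if rtts:
    print(f"min/avg/max RTT: {min(rtts):.2f}/{sum(rtts)/len(rtts):.2f}/{max(rtts):.2f} ms")
print(f"loss: {lost}/{COUNT}")
```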

  14. Abilene Observatories • Currently a program outline for better support of computer science research • Influenced by discussions with NRLC members • 1) Improved & accessible data archive • Need coherent database design • Unify & correlate 4 separate data types • SNMP, active measurement data, routing, Netflow • 2) Provision for direct network measurement and experimentation • Resources reserved for two additional servers • Power (DC), rack space (2RU), router uplink ports (GigE) • Need process for identifying meritorious projects • Need ‘rules of engagement’ (technical & policy)
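One hypothetical way to picture “unify & correlate 4 separate data types” in a coherent archive is a single record keyed by router and measurement interval that carries SNMP, active-measurement, routing, and Netflow fields side by side. The field names and sample values below are illustrative only, not the Observatory database design.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ObservatoryRecord:
    router: str                 # backbone node, e.g. "IPLS"
    start: datetime             # measurement interval start (UTC)
    snmp_in_bps: float          # interface utilization from SNMP polling
    active_rtt_ms: float        # latency from active probes
    bgp_updates: int            # routing churn observed in the interval
    netflow_top_ports: dict     # traffic-mix summary from Netflow

# Illustrative sample values only.
rec = ObservatoryRecord("IPLS", datetime(2002, 5, 1, 12, 0),
                        snmp_in_bps=6.2e8, active_rtt_ms=31.4,
                        bgp_updates=12, netflow_top_ports={80: 0.31})
print(rec.router, rec.start, f"{rec.snmp_in_bps / 1e6:.0f} Mbps in")
```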

  15. Next generation router selection • Extensive router specification and test plan developed • Team effort: UCAID staff, NOC, NC and Ohio ITECs • Chris Heerman, Matt Davy, Lee Graham, John Moore, Paul Schopis, Matt Zekauskas • Discussions with four router vendors • Tests focused on next gen advanced services • High performance TCP/IP throughput • High performance multicast • IPv6 functionality & throughput • Classification for QoS and measurement • 3 routers tested & commercial ISPs referenced • → New Juniper T640 platform selected

  16. Two leading national initiatives in the U.S. • Next Generation Abilene • Advanced Internet backbone • connects entire campus networks of the research universities • 10 Gbps nationally • TeraGrid • Virtual machine room for distributed computing (Grid) • Connecting 4 HPC centers initially • Illinois: NCSA, Argonne • California: SDSC, Caltech • 4x10 Gbps: Chicago ↔ Los Angeles • Ongoing collaboration between both projects

  17. Deployment timing • Ongoing – Backbone router procurement • Detailed deployment planning • July – Rack assembly (Indiana Univ.) • Aug/Sep – New rack deployment at all 11 nodes • Fall – First wave of λ's commissioned • Fall meeting demonstration events • iGRID 2002 (Amsterdam) – late Sep. • Internet2 Fall Member Meeting (Los Angeles) – late Oct. • SC2002 (Baltimore) – mid Nov. • Remaining λ's commissioned in 2003 • Please let us know now of 2002 upgrade plans

  18. Abilene cost recovery model

  19. Abilene program changes • 10-Gbps (OC-192c POS) connections • λ backhaul available wherever needed & possible • Only required now for 1 of 4 OC-48c connections • 3-year connectivity commitment required • Gigabit and 10-Gigabit Ethernet • Available when connector has dark fiber access into Abilene router node • Backhaul not available • ATM connection & peer support • TAC recommended ending ATM support by fall 2003 • Two major ATM-based GigaPoPs have migrated • 2 of 3 NGIXes still are ATM-based • NGIX-Chicago @ STAR LIGHT is now GigE • Urging phased migration for connectors & peers

  20. Conclusions – Abilene future • Backbone upgrade project underway • Partnership with Qwest extended thru 2006 • Juniper T640 routers selected for backbone • 10-Gbps backbone λ deployment starts this fall • Advanced service foci • Native, high-performance IPv6 • Enhanced, differentiated measurement • Network resiliency • Incremental, non-disruptive transition • Complementary to and collaborative with NSF’s TeraGrid

  21. For more information • Web: www.internet2.edu/abilene • E-mail: abilene@internet2.edu • Again please let us know now of 2002 connection upgrade plans

  22. Optical network project differentiation

  23. Regional optical networking • Regional (state-based) optical networking projects are critical for the next generation architecture: • Three-level hierarchy: • National backbones, GigaPoPs, Campuses • Leading examples of state-based initiatives • CENIC ONI (California), I-WIRE (Illinois), I-LIGHT (Indiana), NC • Close collaboration with the Quilt Project • Regional Optical Networking effort • U.S. carrier DWDM access is not yet nearly as widespread as SONET access • 30-60 cities for DWDM vs. ~120 cities for SONET (ca. 1998)

  24. Pacific Light Rail (Source: Greg Scott, CENIC/UCSC)

  25. National optical networking options • 1 – Provision incremental wavelengths • Obtain 10-Gbps λ's as with SONET • Exploit the smaller incremental cost of additional λ's • 1st λ cost is ~10x that of subsequent λ's • 2 – Build dim fiber facility • Partner with a facilities-based provider • Acquire 2 fiber pairs on a national scale • Outsource operation of transmission equipment • Needs lower-cost optical transmission equipment • Find ELH/ULH optical kit partner • The classic ‘buy vs. build’ decision in Information Technology • Option 1 selected for TeraGrid and Next Gen Abilene
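The ‘buy vs. build’ trade-off can be made concrete with a toy cost model. The only figure taken from the slide is the ~10x ratio between the first and subsequent λ's; every other unit cost below is hypothetical.

```python
# Option 1 (lease waves): the first lambda carries most of the cost, extras are cheap.
# Option 2 (build dim fiber): a large fixed outlay, then low per-lambda equipment cost.
FIRST_LAMBDA = 10.0      # relative cost units; subsequent lambdas ~1/10 of this
EXTRA_LAMBDA = 1.0
FIBER_BUILD  = 25.0      # hypothetical fixed cost of acquiring/lighting fiber
EQUIP_PER_LAMBDA = 0.5   # hypothetical per-wave transmission equipment cost

def lease_cost(n: int) -> float:     # Option 1
    return FIRST_LAMBDA + EXTRA_LAMBDA * max(n - 1, 0)

def build_cost(n: int) -> float:     # Option 2
    return FIBER_BUILD + EQUIP_PER_LAMBDA * n

for n in (1, 4, 16, 40):
    print(f"{n:2d} lambdas: lease {lease_cost(n):5.1f}  vs  build {build_cost(n):5.1f}")
# With these made-up numbers the build option only wins at large lambda counts,
# which is the essence of the buy-vs-build decision on the slide.
```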

  26. National Light Rail • Project objectives • form lightweight, but highly coordinated, collaboration to provision, acquire, and/or operate optical networking assets and services • leverage collective buying power and experience of the consortium (ANL, CENIC, P/NW, UCAID) from the metropolitan to the national scales • serve as optical infrastructure substrate for e-science projects proposing to a diverse array of funding agencies • facilitate advanced network measurement and academic research • Initial collaboration • TeraGrid (Argonne), UCAID, CENIC and P/NW GigaPoPs • UCSD, UIC

  27. National Light Rail – an evolving view • Key Functions • λ brokerage service using established relationships with multiple facilities-based carriers • Ongoing evaluation of potential acquisition and operation of a national fiber optical network facility in partnership with the corporate sector

  28. www.internet2.edu
