NGIX/East Update Dan Magorian, MAX Director of Engineering and Operations magorian@maxgigapop.net Quilt Peering Workshop, St. Louis, MO, Oct 12, 2006
Somehow, NGIX/E seems to be an R&E exchange point that Quilt folks know less about • Maybe that's because we're smaller, or maybe we don't do a good job getting the word out. • Anyway, in 1998 the federal Joint Engineering Team networks (JETnets) solicited exchange points for the east, central, and west regions, and selected NGIX/E (MAX/UMD), StarTap (MREN), and NGIX/W (NASA Ames). NGIX/E was in some sense the successor to FIX/East. • Topology was a full L2 mesh of point-to-point ATM PVCs (hold the boos, please). No transit peerings, no common route pool. It didn't scale to larger peering points, but it kept life simple.
In 2002, conversion from ATM to Ethernet • ATM began to go out of style, and customer networks began pushing for frame-based (Ethernet) peering fabrics. • Converting a national exchange point from ATM to Ethernet isn't like changing over a campus network at midnight. Replacement lines have to be arranged, peer partners have to coordinate interfaces, and downtime has to be minimal because peers are paying for uptime. • The biggest issue was how to do the transition without a massive flag-day changeover nightmare. • We used RFC 1483 bridged routing on Cisco 6500 OSM blades, which let us single-endedly change over each peer at midnight without the full mesh of peers needing to be there.
NGIX/E R&E peers today • Abilene (10G), NewNet in progress (10G) • GEANT2 (OC-192 from Frankfurt via WAN PHY) • Energy Sciences Network (ESnet) (10G) • Defense Research & Engineering Network (DREN) (1G) • NASA's NISN and NREN (1G each) • USGS/DOI network (1G) • Still have vBNS (barely) (1G) • AtlanticWave distributed peering fabric (10G)
[Figure: Mid-Atlantic Crossroads overview. NGIX-East national/international peering networks (DREN, vBNS, GEANT, ESnet, Abilene, DRAGON, NREN, Cogent, Qwest, NISN) connect through the MAX regional infrastructure to regional network participants (GIG-EF, Univ System of Md, Network Virginia, and institutional connectors).]
In a taxonomy of exchange point architectures • NGIX/E is layer 2 with a full mesh of point-to-point VLANs, rather than shared VLANs or layer 3. • This has worked out well over the years. Responsibility remains with the peers to run their routers and control their own policies. • The caveat is that a full mesh doesn't scale to hundreds of peers, either in assignment work (we preassign all VLANs for each peer) or in VLAN numbers. On the upside, one peer's badness doesn't melt everyone, there's no need to match everyone's MTUs, multicast can ride separate VLANs, etc. • Recently we've been doing a lot of layer 1 optical backhaul provisioning for NGIX/E peers, as a convenience offering, because we have DWDM in PoPs that peers often need to reach. We provide optical backhaul to Abilene/NewNet, ESnet, and NASA.
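The full-mesh scaling caveat above can be made concrete with a small sketch. This is a hypothetical illustration, not NGIX/E's actual assignment scheme: it preassigns one point-to-point VLAN per peer pair and shows how the count grows quadratically against the 4,094 usable 802.1Q VLAN IDs.

```python
from itertools import combinations

def preassign_vlans(peers, base_vlan=100):
    """Assign one point-to-point VLAN per peer pair (full L2 mesh).

    Hypothetical sketch: peer names and the base VLAN number are
    illustrative only. A mesh of n peers needs n*(n-1)/2 VLANs.
    """
    pairs = combinations(sorted(peers), 2)
    return {pair: base_vlan + i for i, pair in enumerate(pairs)}

peers = ["Abilene", "ESnet", "DREN", "NISN", "NREN", "USGS", "vBNS", "GEANT2"]
mesh = preassign_vlans(peers)
print(len(mesh))  # 8 peers -> 28 point-to-point VLANs

# At 100 peers the same mesh needs 100*99//2 = 4,950 VLANs, already past
# the 4,094 usable 802.1Q IDs -- hence "doesn't scale to hundreds of peers".
```

Shared-VLAN fabrics avoid this quadratic growth, at the cost of the isolation benefits the slide lists (fault containment, per-pair MTUs, separate multicast VLANs).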
[Figure: MAX production DWDM rings. Ring 1: original Zhone DWDM over Qwest fiber; Ring 2: Movaz DWDM over State of Maryland fiber; Ring 3: GigE on HHMI DWDM over AboveNet fiber; Ring 4: USM DWDM over State of Maryland fiber. PoPs include State of MD (Baltimore, M40E), Level3 (McLean VA, T640), UMD (College Park MD, T640), Equinix (Ashburn VA, with Equi6IX and Cogent), GWU and Qwest DC PoPs, and ISI/E (Arlington VA), interconnecting NGIX, NewNet, Qwest, and the R&E nets including Abilene.]
IPv6 transit from Equinix Ashburn • Based on Internet2's request for east-coast transit to a v6 IX to reduce Abilene v6 latency, we decided to be a good net citizen, since we were already at Equinix Ashburn. • Equinix provides the "experimental" public Equi6IX switch for v6 and EquiMIX for v4 multicast for free, so we didn't charge for it either ("With enough net goodness, maybe there is a free lunch"). The interface is limited to 100M, but hey, it's v6. • 15 commercial peers took us up on v6, transiting to Abilene and soon to NewNet. • Available to other NGIX/E peers as well, several of which already have bilateral v6 and v4 multicast peerings.
NGIX/E becomes a GOLE: MAX partnership with Ciena for CoreDirectors • Although AtlanticWave Phase 1 is Ethernet, we haven't given up on next-gen SONET's potential to use VCAT and LCAS for dynamic lightpaths, in addition to just static peerings. • We held SONET switch vendor presentations by Ciena, Nortel, Sycamore, and Alcatel (used by GEANT2), and got quotes. • Ciena out of the blue decided to >>donate<< a CoreDirector switch and SONET interfaces to MAX, and to do a GMPLS co-development MOU with DRAGON. We talked Ciena into a second free switch and more interfaces; we need to pay only for new 10G Ethernet interfaces, maintenance, training, etc. It's a very significant donation to MAX: 12 OC-192 interfaces, 16 OC-48 interfaces, and 20 GigE interfaces in one full-height chassis for NGIX/E and one half-height chassis for DRAGON. The Ciena press release is out this week. • GEANT's Alcatel MCC connecting to MAX's Ciena CD will be the first interop we know of between the new LAN PHY Ethernet cards.
How static and dynamic activities could coexist via "A-Wave Layered Services" • [Figure: layered-services stack. IP peering services are statically provisioned; GLIF lightpath services are dynamically allocated. Top to bottom: IP over VLANs (or user-defined SONET payload framing) over Ethernet (or POS), carried in STS-1-nV lightpaths 1..n built with VCAT/LCAS over an OC-192 SONET wave.] • Whether this happens over the original SURA-provided A-Wave 10G lambdas or others used for GLIF services, there are many advantages to a hybrid layer 1/layer 2 approach.
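The STS-1-nV right-sizing that VCAT enables can be shown with a little arithmetic. This sketch is illustrative only (the payload constant is the standard STS-1 SPE payload rate, not an NGIX/E-specific figure): it picks the smallest virtual-concatenation group that fits a given Ethernet service.

```python
import math

STS1_PAYLOAD_MBPS = 49.536  # usable payload of one STS-1 SPE, Mb/s

def vcat_members_needed(service_mbps):
    """Smallest n such that an STS-1-nV group carries the service.

    Illustrative helper: with VCAT, a lightpath is sized to the service
    instead of rounding up to the next full concatenated container.
    """
    return math.ceil(service_mbps / STS1_PAYLOAD_MBPS)

# Right-size a Gigabit Ethernet lightpath:
n = vcat_members_needed(1000)        # STS-1-21v
capacity = n * STS1_PAYLOAD_MBPS     # ~1040 Mb/s of payload
print(n, round(capacity, 1))

# An OC-192 wave holds 192 STS-1s, so it could carry 192 // n = 9 such
# GigE lightpaths side by side; LCAS then lets each group's n grow or
# shrink without tearing the lightpath down.
```

Without VCAT, the same GigE service would have to ride a contiguously concatenated STS-48c, stranding most of that capacity, which is why the slide pairs VCAT/LCAS with dynamic lightpath services.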
Futures • "This is a peering workshop. What do dynamic GLIF lightpaths have to do with static peering?" "Where does GOLE meet peer?" • Well, maybe we've drunk too much dynamic-provisioning Kool-Aid and gone to too many optical control plane testbed workshops. • But there is a class of R&E applications, like ESnet's large data flows, that is better served by on-demand bandwidth than by expensive, statically provisioned 40G interfaces that sit mostly empty until the large flows they must handle come along. • Now walk down the road a few years. Once BGP is modified to talk to underlying GMPLS provisioning protocols, who can say that peering will remain static and be brokered the way it is today? • "Quilt workshop 2010 topic: syncing our robotic peering coordinators to handle 100k BGP session setups per second."