Fooling around with LSPs: Some Multivendor GMPLS/MPLS Testing
Dan Magorian, Director, Engineering and Operations
magorian@maxgigapop.net
Joint Techs Workshop, Columbus OH
Cast of Characters
- Chris Heermann, Internet2 HOPI project
- Jon-Paul Herron, IU Abilene engineer
- John Moore, NC ITEC
- Chris Tracy, MAX DRAGON project engineer
- Aihua Guo, MAX DRAGON project programmer
- Juniper engineers, including: Brian Kerkhoff (Juniper Abilene SE), Dan Nguyen (Juniper JTAC), Marion Jackson (Juniper MAX SE)
- Movaz engineers, especially Wes Doonan
- Dan Magorian, MAX gigapop/RON geek
First, Some Acronyms
Everyone's always telling me I use too many, so here are some definitions for folks who don't work with this every day:
- MPLS: Multiprotocol Label Switching ("classical" MPLS), over packet-switched networks (routers)
- GMPLS: Generalized MPLS, expanded MPLS that controls fiber, lambda, SONET, and packet switching between routers and optical cross-connects
- OXC: optical cross-connect, such as a lambda or fiber switch
- LSP: label-switched path
- RON: regional optical network
- OSPF: Open Shortest Path First (the most common IGP)
- IGP: interior gateway protocol, for interface reachability
- RSVP: Resource Reservation Protocol
- TE: traffic-engineering extensions to OSPF and RSVP
What Was the Objective?
- As background, HOPI is involved with architectures both for Internet2's NLR lambdas and for MPLS over Abilene.
- So Chris Heermann and I discussed testing potential applications of MPLS tunnels between Abilene and MAX.
- The idea was to test ideas for HOPI on-ramps from gigapops/RONs, using Abilene as the testbed.
- We had a meeting at Juniper and weekly conference calls to work out what we were doing and the testing details.
- Since MAX has Movaz lambda switches (which do GMPLS) as part of the DRAGON project, we decided to stitch a MAX GMPLS tunnel to an Abilene MPLS tunnel to NC ITEC, and then across Abilene.
Then the Trouble Began
- Anyone who knows GMPLS will see that this was naive: although MAX, Abilene, and NC ITEC all run MPLS, most non-vendor folks except Aihua weren't GMPLS experts.
- Education was the objective: what we would learn in the process about LSP provisioning and interoperability.
- The Abilene and ITEC end was straightforward: they already have test Abilene/IU/ITEC MPLS LSPs stitched.
- So the process began: every week, "Dan, how far have you gotten with making your end work?"
- It turned out to be much more time-consuming than I had expected: non-trivial and rather nasty. So it was highly educational.
The Test Setup, MAX Point of View
[Topology diagram:]
- MAX lab Juniper M160 and M20
- MAX/DRAGON Movaz RayExpress boxes at GSFC and CLPK, plus a Movaz WSS lambda switch
- MAX production Juniper M160s (CLPK and DCNE)
- Abilene Juniper T640 (WASHng), then Abilene and NC ITEC Juniper and Cisco routers (many links)
- A single jumper connection across campus; OC-48 POS links between routers
First Step
- Initially brought up a GMPLS LSP between the MAX lab M20 and lab M160 using Juniper's documentation example. This took a while because of needing to learn important "gotchas":
- GMPLS NEEDS separate paths for control and data. In-band signaling like we're all used to doesn't work. This makes sense: the data channel itself is being switched.
- If you're using broadcast media like Ethernet for control, instead of point-to-point media like SONET, you NEED to use GRE or other point-to-point tunnels over it, because of how OSPF detects the media type. It cares.
- GMPLS uses TE extensions to OSPF and RSVP, as well as a new Link Management Protocol (LMP) with its own IDs. LMP establishes a virtual path over the GRE virtual path.
- Connections end up with 4-6 sets of addresses, which is confusing.
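As a rough illustration of the point-to-point control-channel gotcha, a GRE tunnel on a Juniper router can be configured along these lines. This is a hedged sketch based on standard Junos GRE and OSPF configuration, not our actual config; the interface names and addresses are invented:

```
interfaces {
    gr-0/3/0 {
        unit 0 {
            tunnel {
                source 192.168.1.1;       /* this router's control address */
                destination 192.168.1.2;  /* peer's control address */
            }
            family inet {
                address 10.1.1.1/30;      /* p-t-p control-channel subnet */
            }
        }
    }
}
protocols {
    ospf {
        traffic-engineering;
        area 0.0.0.0 {
            interface gr-0/3/0.0;  /* OSPF now sees a p-t-p link, not broadcast */
        }
    }
}
```

With OSPF running over the GRE unit rather than the raw Ethernet, the adjacency forms as point-to-point and the control channel behaves the way GMPLS expects.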
Next Steps
- I then turned on GMPLS and established an LSP between the MAX production M160 routers CLPK and DCNE, in preparation for stitching across MAX and to Abilene over the production peering, which uses frame-relay CCC on a separate DLCI.
- This was the "opportunity" to see whether turning on link management, OSPF-TE, and RSVP-TE caused problems (like interface unreachability). They didn't, and it worked.
- This connection used GRE tunnels over a single SONET link, rather than two physical connections like the lab routers.
- Then Chris Tracy and I started trying to establish GMPLS LSPs between two pairs of Movaz RayExpresses and two Juniper M160s. These did not work.
- One of the biggest complicating headaches was the single jumper across campus from the lab routers to the Movazes in the DRAGON cluster. We really needed two physical connections.
Juniper/Movaz GMPLS Interoperability
- This is where the real fun started. Movaz had completed and documented an interop test using Juniper - Movaz - Movaz - Juniper. But that's the ONLY setup known to work.
- OXCs aren't routers, and some assumptions you make working with routers aren't valid. Need to consider:
- Separate control interfaces. Once we had the campus jumper working, there was no GigE link. Movaz has a separate out-of-band 100M Ethernet control channel (or a proprietary in-band 1310nm channel); we had to connect to that. All other interfaces are switched data channels.
- Partial implementations of OSPF, RSVP, and LMP today. It's still early days for GMPLS, and a lot of effort and inside knowledge is needed to make things interoperate.
- With a lot of help from Wes Doonan and Movaz, we got it working.
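For flavor, the GMPLS-specific pieces on the Juniper side look roughly like the following. This is a sketch in the style of Juniper's GMPLS documentation of the era; the peer name, te-link name, interfaces, and addresses are all hypothetical, and exact syntax varied by Junos release:

```
protocols {
    link-management {
        te-link te-to-oxc {
            local-address 10.10.10.1;
            remote-address 10.10.10.2;
            interface so-0/1/0;          /* the switched data channel */
        }
        peer movaz-oxc {
            address 10.1.1.2;            /* OXC control-plane address */
            control-channel gr-0/3/0.0;  /* out-of-band control channel */
            te-link te-to-oxc;
        }
    }
    mpls {
        label-switched-path gmpls-lsp {
            to 10.1.1.2;
            lsp-attributes {
                switching-type lambda;   /* data plane is lambda-switched */
                encoding-type sonet-sdh;
            }
        }
    }
}
```

Note how the data interface lives under `link-management` rather than under RSVP/OSPF directly, and the `lsp-attributes` stanza tells RSVP-TE what kind of switching to signal: this is where the separation of control and data shows up in the config.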
OK, So the GMPLS LSPs Are Up. So What?
- We tried to figure out ways of stitching our GMPLS LSPs to MPLS LSPs across the peering with Abilene. We thought it was mainly a problem of reachability of the link-management address; with no common IGP, we always intended a manual stitch.
- But the answer came back from multiple sources that right now GMPLS and MPLS headers are incompatible, and no method will make them talk. The original plan wouldn't work.
- This is an area under active research, and LSP stitching is discussed in current IETF drafts (references at end).
- So for now, we have to use an overlay model: set up MPLS tunnels over GMPLS tunnels. The GMPLS tunnel becomes just another path for MPLS to use. Still tricky to get working over the Movaz part of the tunnels, but classic MPLS stitching works.
- Then the MPLS tunnels can be stitched to Abilene and ITEC.
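The overlay model in sketch form: once the outer GMPLS LSP is up, its data channel behaves like any point-to-point link, so an ordinary MPLS LSP can be signaled in-band across it the familiar way. Again a hedged sketch with invented addresses, not our literal config:

```
protocols {
    rsvp {
        interface so-0/1/0.0;   /* in-band signaling, over the data channel
                                   that the gmpls lsp already established */
    }
    mpls {
        label-switched-path inner-mpls-over-gmpls {
            to 10.20.0.2;       /* far-end loopback, reachable only via
                                   the gmpls path */
        }
        interface so-0/1/0.0;
    }
}
```

The inner LSP is plain "classical" MPLS, which is why it can then be stitched to the Abilene and ITEC MPLS LSPs even though the GMPLS LSP itself cannot.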
So What Were the Lessons Learned?
- Testbeds not in the same lab cause problems and limitations unrelated to what you're trying to accomplish, like campus jumpers.
- We're still actually in the middle of testing; we ran out of time before getting the overlay MPLS LSPs stitched to Abilene. But those are known to work.
- Gigapops and Abilene could use MPLS LSPs for overlay networks, using several service models.
- GMPLS is nontrivial and has lots of moving parts. It takes not only patient debugging of each layer (physical interface, GRE tunnel, OSPF, RSVP, TE link management, LSP path) to get working, but also the secret sauce of what's known to work, even Juniper-Juniper and especially multivendor.
- So why use GMPLS? If you are a RON, optical switching is in your future, and it will use GMPLS. Right now optical paths are patched and resources are static, but a large body of research is proceeding on dynamically switched resources.
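That layer-by-layer debugging maps onto Junos operational commands roughly as follows, checking each layer in order before moving up the stack (the annotations are ours; what you chase in the output is of course setup-specific):

```
show interfaces gr-0/3/0 terse    /* is the GRE control tunnel up? */
show ospf neighbor                /* control-plane adjacency over the tunnel */
show ted database                 /* did OSPF-TE flood the TE links? */
show link-management              /* LMP peer and te-link state */
show rsvp session                 /* RSVP-TE signaling state */
show mpls lsp extensive           /* LSP path, plus errors if signaling failed */
```

Working bottom-up like this keeps you from burning hours on RSVP when the real problem is two layers down in the GRE tunnel or the OSPF adjacency.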
Dynamic Resource Switching
- This will be a big change for the industry from static resource allocation as it gets deployed over the next few years.
- Right now there's not a lot of economic benefit from optical switching: if you've paid to provision a path, leave it up.
- Protocols like BGP expect stability, and get unhappy when resources are shifted under their feet. Lots of protocol development is needed, as well as "bandwidth brokers".
- Lots of researchers are working on it, and production folks will soon start to need the benefits of switching, especially oversubscription, given traffic growth rates. Look at what switching did for Ethernet vs. shared hubs.
- Chances are very good that we'll be using optical switching in production soon. So why not experiment with it today, both large-scale (NLR) and in local testing, so we'll know what we're doing?
References: IETF MPLS/GMPLS Drafts
It's very useful even for non-implementers to keep up with what's being proposed and worked on. The links change as the drafts expire, so it's best to do a search on the name.
- Generalized MPLS Architecture for Multi-Region Networks, draft-vigoureux-shiomoto-ccamp-gmpls-mrn-04.txt, Feb 2004
- A Framework for Inter-Domain MPLS Traffic Engineering, draft-farrel-ccamp-inter-domain-framework-00.txt, Apr 2004
- MPLS Inter-AS Traffic Engineering Requirements, draft-ietf-tewg-interas-mpls-te-req-06.txt, Jan 2004
- Requirements for Support of Inter-Area MPLS Traffic Engineering, draft-boyle-tewg-interarea-reqts-01.txt, Dec 2003
Thanks! Questions?
Dan Magorian
magorian@maxgigapop.net