Retrofitting the CalREN Optical Network for Hybrid Services. Ken Lindahl, Chair, CENIC High Performance Research Network Technical Advisory Council, lindahl@berkeley.edu. Joint Techs Workshop, 12 Feb 2007, Minneapolis.
The Corporation for Education Network Initiatives in California • (714) 220-3400 • info@cenic.org • www.cenic.org
goals
• provide “lightpath” (or “lightpath-like”) connections to researchers on campuses connected to CalREN:
  • between any two or more HPR-connected campuses;
  • between any campus and NewNet;
  • between any campus and NLR {PacketNet, WaveNet}.
• dynamic, user-controlled set up and tear down
  • well sure, some day…
  • initially, manual setup by the CENIC NOC on the order of 2-3 days;
  • later, extend control to researchers.
• and of course, continue providing reliable routed IP service to all CalREN-connected campuses.
CalREN Optical Network
• two fiber paths, the Coastal path and the Central Valley path, running the length of the state from Corning in the north to San Diego in the south.
• Cisco 15808 DWDM gear at most nodes, some newer 15454 gear.
• 6500s w/ CWDM optics on the Riverside/Palm Desert/El Centro/San Diego loop.
• Cisco 15540 DWDM gear on both ends of campus access fiber.
CalREN Optical Network
• Abilene:
  • 10GE connection in Los Angeles;
  • 10GE backup (routed IP) in San Jose, via PacificWave to PNWGP, Seattle.
• National LambdaRail:
  • 10GE PacketNet connection in Los Angeles;
  • 10GE FrameNet connection in San Jose;
  • easy access to additional NLR services in Los Angeles and San Jose.
• PacificWave:
  • international peering exchange facility, running on CalREN and NLR waves; a 10 Gbps switched fabric with 10GE connections to CalREN at Los Angeles and San Jose.
CalREN service tiers, “technology refresh” opportunities
• CalREN-DC: currently being refreshed
• CalREN-HPR: 2007-2008
• CalREN Optical Network: 2008-2009
CalREN-HPR refresh
• planned enhancements to the routed IP network are not particularly interesting:
  • capacity for more 10GE campus connections (largely a matter of router real estate);
  • capability for >10 Gbps on backbone connections (multiple 10GEs, or 40GE/OC-768, or 100GE).
• planning some layer 1 and layer 2 services as well:
  • motivated by requests from researchers over the past 3 years;
  • “XD Services” white paper written by a Board subcommittee (comprised mostly of researchers rather than network engineers);
  • deploying shared infrastructure on the optical backbone will (hopefully) reduce costs to individual researchers;
  • a reasonably good fit with NewNet and NLR wave and frame services.
caveats
• the services and designs in this presentation have not been approved by CENIC engineering staff;
• nor have they been approved (and funded) by the CENIC Board of Directors.
requests from researchers
• researchers have requested dedicated, layer 2 private networks between campuses, e.g.:
  • DETER requested a 1 Gbps layer 2 network connecting labs at Berkeley and USC-ISI that could be disconnected from any production network;
  • CITRIS requested a dedicated GE VLAN between labs at Berkeley and UC Davis, for testing/demonstrating video applications they are developing.
• the CENIC gear in place at the time was not well suited to delivering 1 Gbps connections; it could have provided 10 Gbps connections, but at more cost than the researchers wanted…
  • and, in the CITRIS case, we didn’t have nearly enough lead time.
“XD Services” white paper
• requirements:
  • standing lambdas available to researchers;
  • rapid set up/tear down: 1-2 hours;
  • convenient set up/tear down: email to the NOC;
  • “bypass networks.”
• services:
  • 1 Gbps L2-switched VLANs;
  • 1 Gbps optically switched lambdas;
  • 10 Gbps optically switched lambdas.
• 32 standing lambdas requested:
  • need to replace all 15808s and 15540s with 15454s;
  • hopefully we can sneak by with slightly fewer lambdas.
HPRng-L2 service
• one 10GE lambda at every campus, broken out into ten GE VLANs.
• VLANs trunked over 10GE between switches on the optical backbone.
[diagram: campus A GE VLANs — 10GE — optical backbone — 10GE — campus B GE VLANs]
• satisfies requests for dedicated, layer 2 private networks; satisfies the “1 Gbps L2-switched VLANs” XD service requirement.
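The allocation model on this slide can be sketched as simple bookkeeping: each campus access lambda is a 10GE trunk with room for ten 1 Gbps VLANs, and a point-to-point VLAN consumes one GE slot at each endpoint. This is a hypothetical illustration, not a CENIC tool; the class and VLAN-numbering scheme are invented for the sketch.

```python
# Hypothetical bookkeeping sketch for the HPRng-L2 service: each campus
# access lambda is one 10GE trunk carrying up to ten 1GE VLANs.
VLAN_SLOTS_PER_CAMPUS = 10  # ten GE VLANs per 10GE campus lambda

class L2Service:
    def __init__(self, campuses):
        self.free_slots = {c: VLAN_SLOTS_PER_CAMPUS for c in campuses}
        self.vlans = {}       # vlan_id -> (campus_a, campus_b)
        self.next_vlan = 100  # arbitrary starting VLAN id for the sketch

    def provision(self, a, b):
        """Allocate a dedicated GE VLAN between two campuses."""
        if self.free_slots[a] == 0 or self.free_slots[b] == 0:
            raise RuntimeError("no free GE slots on campus access lambda")
        vid = self.next_vlan
        self.next_vlan += 1
        self.free_slots[a] -= 1
        self.free_slots[b] -= 1
        self.vlans[vid] = (a, b)
        return vid

svc = L2Service(["ucb", "ucd", "ucla"])
vid = svc.provision("ucb", "ucd")    # e.g. the CITRIS-style request
print(vid, svc.free_slots["ucb"])    # 100 9
```

Per the slides, this provisioning step would initially be performed manually by the CENIC NOC rather than by software.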
HPRng-L1 topology
[topology diagram: PoPs at OAK, SAC, SVL, LAX, and RIV, with campus connections for ucb, ucd, ucsf, stanford, ucm, ucsb, ucr, ucsd, uci, usc, ucla, and caltech, plus NLR and NewNet connections]
HPRng-L2 VLANs (not all campuses are shown)
[diagram: in the north, UC Davis and Berkeley hand off 1GE VLANs over 10GE campus access links to 15540s, which carry a 10G lambda to the SAC and OAK PoPs; in the south, UC Riverside and UCLA hand off 1GE VLANs over 10GE to 15808s carrying a 10G lambda to the RIV and LAX PoPs; PoPs are joined by backbone inter-PoP links]
HPRng-L2 management issues
• need to avoid over-subscribing backbone segments where multiple VLANs appear:
  • should be easy, since each VLAN is limited to 1 Gbps at the campus interface and no more than 10 VLANs ride any backbone segment;
  • in the happy event that the service is popular enough that over-subscription becomes an issue, we can add additional lambdas on over-subscribed segments.
• initially, the CENIC NOC will manually configure interfaces and VLANs;
  • later, install a HOPI-style control-plane system.
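The over-subscription rule above reduces to counting: each backbone segment carries one 10GE lambda, so at most ten 1 Gbps VLANs may cross any segment. A minimal sketch of that check (segment names and the function are hypothetical, assuming each VLAN's backbone path is known):

```python
# Hypothetical over-subscription check for the HPRng-L2 service: one 10GE
# lambda per backbone segment means at most ten 1 Gbps VLANs per segment.
MAX_VLANS_PER_SEGMENT = 10

def segments_over_limit(vlan_paths):
    """vlan_paths: vlan_id -> list of backbone segments the VLAN crosses.
    Returns the segments carrying more VLANs than one 10GE lambda allows."""
    load = {}
    for path in vlan_paths.values():
        for seg in path:
            load[seg] = load.get(seg, 0) + 1
    return {seg: n for seg, n in load.items() if n > MAX_VLANS_PER_SEGMENT}

# Example paths (segment names invented for illustration):
paths = {101: ["OAK-SAC"], 102: ["OAK-SVL", "SVL-LAX"], 103: ["SVL-LAX"]}
print(segments_over_limit(paths))  # {} -- no segment above ten VLANs
```

A non-empty result would correspond to the slide's remedy: add another lambda on the over-subscribed segment.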
HPRng-L1 service
• 10 Gbps and 1 Gbps optically switched lambdas:
  • most will be 10GE, some GE;
  • OC-192 or OC-48 may be required in some cases;
  • very little demand for DWDM wavelength handoff to researchers.
• NewNet and NLR access:
  • lambdas can be switched from any HPRng-connected campus to NewNet or to NLR.
• “optical switches” are actually optical cross-connects (OXCs).
• probably requires upgrading 15808s and 15540s to 15454s.
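Conceptually, the OXC-based service amounts to port-to-port cross-connects: a lightpath ties one port to exactly one other port, and a port can belong to at most one lightpath at a time. A rough sketch of that state model (class and port names are invented for illustration, not vendor software):

```python
# Hypothetical optical cross-connect (OXC) state model: a lightpath is a
# bidirectional port-to-port cross-connect, one lightpath per port.
class OXC:
    def __init__(self):
        self.xconnects = {}  # port -> peer port (stored in both directions)

    def connect(self, a, b):
        """Set up a lightpath between ports a and b."""
        if a in self.xconnects or b in self.xconnects:
            raise RuntimeError("port already in use by another lightpath")
        self.xconnects[a] = b
        self.xconnects[b] = a

    def disconnect(self, a):
        """Tear down the lightpath attached to port a."""
        b = self.xconnects.pop(a)
        del self.xconnects[b]

oxc = OXC()
# e.g. switch a campus 10GE lambda onto an NLR wave (port names invented):
oxc.connect("ucb-10GE", "NLR-wave-3")
```

The slide's partitioning question maps onto this model as restricting which ports a given researcher's `connect`/`disconnect` requests may touch.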
HPRng-L1 topology
[topology diagram: OXCs at the OAK, SVL, LAX, and RIV PoPs; each line represents multiple lambdas (1..32); campuses ucb, ucd, ucsf, stanford, ucm, ucsb, caltech, ucr, usc, ucla, uci, and ucsd attach to the PoPs, with NLR and NewNet connections]
HPRng-L1 management issues
• initially, the CENIC NOC will set up/tear down lightpaths by manually configuring the optical switches;
  • later, install a HOPI-style control-plane system.
• need to invent (or get someone else to invent, and “borrow” from them) a lambda scheduling/reservation/automated-setup system.
• partitionable optical switches are desirable, to allow researchers to modify wave connections between DWDM gear and to set up/tear down lightpaths between campuses;
  • or, provide partitioning via access and authorization restrictions in the control-plane system.
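The core of the scheduling/reservation system the slide says must be invented (or borrowed) is interval bookkeeping: a lambda can serve one reservation at a time, so a request is rejected if its time window overlaps an existing booking on the same lambda. A minimal, hypothetical sketch (class and lambda names invented):

```python
# Hypothetical lambda reservation sketch: one reservation per lambda at a
# time; a request is rejected if its window overlaps an existing booking.
from datetime import datetime, timedelta

class LambdaScheduler:
    def __init__(self):
        self.bookings = {}  # lambda_id -> list of (start, end, owner)

    def reserve(self, lam, start, end, owner):
        for s, e, _ in self.bookings.get(lam, []):
            if start < e and s < end:  # half-open intervals overlap
                return False
        self.bookings.setdefault(lam, []).append((start, end, owner))
        return True

sched = LambdaScheduler()
t0 = datetime(2007, 3, 1, 9, 0)
ok1 = sched.reserve("LAX-SVL-w7", t0, t0 + timedelta(hours=4), "DETER")
ok2 = sched.reserve("LAX-SVL-w7", t0 + timedelta(hours=2),
                    t0 + timedelta(hours=6), "CITRIS")
print(ok1, ok2)  # True False
```

A real system would also drive the automated OXC setup and enforce the per-researcher partitioning mentioned above; this sketch covers only the reservation step.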
HPRng campus handoff
[diagram: DWDM gear on the CalREN optical backbone hands off, at the CENIC demarc, a 10GE HPRng-L3 connection, a 10GE HPRng-L2 trunk of GE VLANs, and a 10GE HPRng-L1 lambda; the campus border router terminates the L3 handoff — but how will the campus connect to the L1 and L2 handoffs?]
HPRng campus handoff (2)
[diagram: the same DWDM handoffs at the CENIC demarc, but the campus deploys its own optical cross-connect, switching the 10GE handoffs and the lambda either to the campus border router or over campus fiber to labs]
HPRng design committee
• Mark Boolootian, UC Santa Cruz
• Brian Court, CENIC
• John Haskins, UC Santa Barbara
• Rodger Hess, UC Davis
• Tom Hutton, SDSC
• Michael Van Norman, UCLA
• Ken Lindahl, UC Berkeley