1. Dynamic Circuit Network Hands-On Workshop University of Nebraska-Lincoln
Nebraska Student Union
Lincoln, NE
July 19th and 20th, 2008
2. Welcome! Wireless
You cannot access the workshop systems from the Joint Techs wireless network
Wired connections are also available
3. Welcome! This is the 7th DCN Workshop
Nysernet
MAX
NASA Ames
University of Houston
University of Hawaii (double header)
University of Nebraska - Lincoln
Introductions
4. Welcome! Key objectives of this workshop are:
Disseminate information to the R&E community regarding the emerging class of hybrid networks and the associated techniques for dynamic provisioning and configuration
Review in detail, and provide instruction on how to use, the control plane software currently in service on the Internet2 Dynamic Circuit Network (DCN), ESnet Science Data Network (SDN), and several regional networks
Obtain feedback directly from the community on how to improve the technologies, to help guide future development and deployment priorities and speed adoption
Review the state of implementation and deployment of these types of dynamic networks throughout the R&E community
5. Instructors Tom Lehman (USC/ISI)
Chris Tracy (MAX)
Andy Lake (Internet2)
These people are involved in numerous projects related to deploying dynamic control planes:
Internet2 Dynamic Circuit Network
ESnet OSCARS Project
NSF DRAGON
Internet2 HOPI Testbed
DICE (DANTE, Internet2, CANARIE, ESnet) – international development activities
6. Why do a workshop? Dynamic Hybrid Networks are new…
The service concepts are still unfamiliar to many networking experts and users… What does one gain with DCN?
The software and hardware implementations are still evolving…
Even the standards are still evolving…
The networks that support these capabilities are few but growing.
The user base is small for now, but it will grow as the capabilities mature and become more ubiquitous, persistent, and robust, and as the utility of both connection-oriented services and dynamic provisioning becomes more widely recognized and accepted
Providing hands-on experience to design and deploy these architectures is one way to broaden and promote adoption.
7. Agenda Day 1
9:00 am Overview of GMPLS and DRAGON
10:00 am Exercise #1: Designing a GMPLS Control Plane for Ethernet Data Planes
10:15–10:45 am Break
12:00 noon Lunch
1:00 pm Continue working on Exercise #1
2:00 pm Overview of Web Services and OSCARS
2:30–3:00 pm Break
3:00 pm Exercise #2: Intra-Domain Provisioning with OSCARS
5:00 pm Adjourn
Day 2
9:00 am Overview of Inter-Domain implementation in OSCARS
10:00 am Exercise #3: Inter-Domain Provisioning with OSCARS
10:15–10:30 am Break
12:00 noon Lunch
1:00 pm Continue working on Exercise #3
2:30–3:00 pm Break
3:00 pm Use of Internet2 DCN and peering dynamic networks
4:00 pm Adjourn
8. Workshop Perspective In this workshop we focus on implementation
We will design and build a multi-domain GMPLS-controlled Ethernet network
We have a mobile GMPLS test and evaluation lab consisting of 24 PCs and 12 switches
We will be focused on the GMPLS intra-domain control plane issues
Specifically, OSPF and RSVP protocols and Path Computation
We will do a very brief and cursory review of RSVP and OSPF.
For detailed information on the protocols themselves see the IETF RFCs.
We will not deal with ISIS or CR/LDP or LMP
We will focus on the “DICE” Inter-domain architecture
Web Services based topology distribution and provisioning
We use open source software developed by the NSF DRAGON Project, the DOE OSCARS Project
Intra-domain: Adapted versions of KOM-RSVP and Zebra OSPF plus the NARB for path computing
This software is currently the only GMPLS software available that supports dynamic Ethernet services
Uses OSCARS (Dept of Energy) for book-ahead scheduling and AAA
Additional software and interfaces have been developed under auspices of the DICE effort (DANTE, Internet2, Canarie, ESnet)
The code has been adapted to support a wide variety of vendor equipment (e.g. Force10, Extreme, Dell, Ciena, Cisco, Raptor)
9. DCN Workshop Architecture
10. Pod Network Elements Control and Data Planes
11. Dynamic Networks – Overview and Status
Objectives of Dynamic Hybrid Networks
Hybrid Networking and the Global R&E Community
Standardization Efforts
Internet2 Dynamic Circuit Network (DCN)
Control Plane Software
Network Architecture
12. Hybrid Networking There has been interest from many communities for the development of network architectures and mechanisms that utilize lower layers of the protocol stack along with IP at layer 3
This has become known as “hybrid networking”
It is motivated by applications from the research and education community that require greater capabilities
High bandwidth flows (for example, flows that come close to saturating links in the shared IP backbone)
Flows with special requirements related to quality of service, for example jitter requirements
Network and Application Virtualization
13. Hybrid Networks - Motivating Factors Hybrid networks are intended to provide a flexible mix of IP routed service and “lower layer services”
“flexible” means the network can respond quickly to user/application/connector requirements and requests to access both the IP Routed and/or lower layer services
“lower layer services” means access to layer 2 and below paths which can be utilized in a multitude of ways by creative users.
Typical user requirements for these lower layer services are based on:
critical, large-bandwidth flows which may require one or more of the following: deterministic network performance, dedicated network resources, guaranteed network capacity, freedom to use protocols other than (congestion-control-friendly) TCP, privacy/security requirements, scheduled services
User/application communities which desire to build entire topologies which integrate domain specific resources along with dedicated network resources (which have one or more of the above mentioned characteristics)
14. Hybrid Networks – Heterogeneous By Nature Hybrid networks are extremely heterogeneous at several levels
Data planes can be constructed from
router based Multiprotocol Label Switching (MPLS) tunnels
Ethernet VLAN based Circuits
Synchronous Optical Network / Synchronous Digital Hierarchy (SONET/SDH) circuits
Wavelength Division Multiplexing (WDM) connections
Combinations of the above
15. Hybrid Networks – Heterogeneous By Nature Control planes can be based on
Multiprotocol Label Switching (MPLS)
Generalized Multiprotocol Label Switching (GMPLS)
Web Services
Management Systems
Combinations of the above
Client (user) services or attachment points could be
Ethernet
SONET
IP Router
InfiniBand
16. Multi-Domain, Multi-Layer Control Planes Key Requirements The “Multi-Layer” is meant to identify several items regarding how hybrid networks may be built. In this context it includes the following:
Multi-Technology - MPLS, Ethernet, Ethernet PBB-TE, SONET, NG-SONET, T-MPLS, WDM
Multi-Level - domains or network regions may operate in different routing areas/regions, and may be presented in an abstracted manner across area/region boundaries
Multi-Domain indicates that we want to allow hybrid network service instantiation across multiple domains
And of course all this implies that this will be a Multi-Vendor environment.
Multi-Control – MPLS, GMPLS, management systems, vendor-proprietary
17. Dynamic Network Services – IntraDomain
18. Dynamic Network Services InterDomain No difference from a client (user) perspective for InterDomain vs IntraDomain
19. DCN Control Plane IDC usually just a web server
Domain controller can take many forms
20. DCN Control Plane Software OSCARS (Web Service)
Started by ESnet, merged with Internet2’s BRUW project in 2006
Web service architecture, interfaces to lower level network specific provisioning systems
Vendor based MPLS L2VPN (Martini Draft)
Internet2 DCS/HOPI
DRAGON (NSF funded project in development by USC/ISI EAST and MAX)
Uses GMPLS protocols to build layer 2 circuits
21. I2 DCN Software Suite OSCARS (IDC)
Web service layer, InterDomain messaging, AAA, Scheduling
DRAGON (DC)
Control of domain network elements (Core Directors and/or Ethernet Switches)
Intra and Inter Domain Path Computation
RSVP based signaling
Version 0.3.1 of DCNSS released April 2008
https://wiki.internet2.edu/confluence/display/DCNSS
OSCARS: On-demand Secure Circuits and Advanced Reservation System
DRAGON: Dynamic Resource Allocation via GMPLS Optical Networks
23. DRAGON Virtual Label Switching Router (VLSR)
PC based control plane software
Manages and provisions various network equipment, such as Ethernet and SDH/SONET switches
Signaling with RSVP packets
Network Aware Resource Broker (NARB)
Stores topology in OSPF-TE database
Performs inter/intradomain path calculation
Exchanges interdomain topology
24. IDC - Web Service Based Definition
-IDC/DC combination
-DC handles internal domain concerns
-Two-level hierarchical network view
-Four distinct phases are identified:
-Topology Exchange: currently based on abstracted link states, with little to no dynamic information
-Resource Scheduling: a multi-domain, multi-stage path computation process in which the specific resources are identified and reserved for a specific signaling event
-Signaling Phase: specific network elements are provisioned; this phase may be initiated by the user or by the domains, and its actions are based on the resources identified in the Resource Scheduling phase
-User Request Phase: provides a message set for users to request multi-domain circuits
-Current security and authentication models are based on signed SOAP messages and X.509 certificates (user to local IDC; IDC to neighbor IDC)
25. Other AAA Models Possible
26. InterDomain Controller (IDC) Protocol (IDCP) Developed via collaboration with multiple organizations
Internet2, ESnet, GEANT2, Nortel, University of Amsterdam, others
The following organizations have implemented/deployed systems which are compatible with this IDCP
Internet2 Dynamic Circuit Network (DCN)
ESnet Science Data Network (SDN)
GÉANT2 AutoBahn System
Nortel (via a wrapper on top of their commercial DRAC System)
Surfnet (via use of above Nortel solution)
LHCNet (use of I2 DCN Software Suite)
Nysernet (use of I2 DCN Software Suite)
University of Amsterdam (use of I2 DCN Software Suite)
DRAGON Network
The following "higher level service applications" have adapted their existing systems to communicate via the user request side of the IDCP:
LambdaStation (FermiLab)
TeraPaths (Brookhaven)
Phoebus
27. DCN – Global NetworkInteroperation via IDCP
28. InterDomain Controller Protocol Standardization Activities The standardization process and community involvement continue to grow
Open Grid Forum (OGF)
Network Markup Language (NML) Working Group
Standardizing topology schemas (perfSONAR and control plane)
Network Service Interface (NSI-WG)
Grid High Performance Networking (GHPN) Research Group
Network Measurement (NM-WG)
Network Measurement Control (NMC-WG)
Information Services (IS-WG)
GLIF
Control Plane Subgroup working on normalizing between various interdomain protocols (IDCP, G-Lambda GNS-WSI, Phosphorus API)
Also other GLIF subgroups in this and related space (global id format, PerfSonar)
29. Internet2 DCN Working Group DCN WG has been formed under NTAC
Chair: Linda Winkler (Argonne National Laboratory)
DCN WG will drive directions and set agenda in this area
Mailing list and Wiki available
dcn-wg@internet2.edu
https://spaces.internet2.edu/display/DCN/Home
DCN WG BOF on Monday, July 21, 12:30–1:50 PM
30. Internet2 DCN Infrastructure
31. Internet2 DCN Services
32. DCN Services - circuits Physical Connection:
1 or 10 Gigabit Ethernet
SONET (Future)
Circuit Service:
Point-to-Point Ethernet (VLAN) Framed SONET Circuit
Point-to-Point SONET Circuit (future)
Bandwidth provisioning in 100 Mbps increments
How do Clients Request?
Client must specify [VLAN ID | ANY ID | Untagged | Tunnel], SRC Address, DST Address, Bandwidth
Request mechanism options are Web Service API, Web Page, phone call, email
What is the definition of a Client?
Anyone who connects to an Ethernet or SONET port on a Ciena CoreDirector; could be a RON, other wide-area networks, or domain-specific applications
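As a sketch of what such a request carries, here is a minimal, hypothetical Python model of a DCN circuit request that enforces the 100 Mbps bandwidth increments and the tag-mode choices listed above. The class and field names are illustrative only, not part of any DCN API:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative tag modes, mirroring [VLAN ID | ANY ID | Untagged | Tunnel]
VALID_TAG_MODES = {"vlan-id", "any-id", "untagged", "tunnel"}

@dataclass
class CircuitRequest:
    src: str                 # source endpoint address
    dst: str                 # destination endpoint address
    bandwidth_mbps: int      # provisioned in 100 Mbps increments
    tag_mode: str            # one of VALID_TAG_MODES
    vlan_id: Optional[int] = None

    def validate(self) -> None:
        # DCN bandwidth is provisioned in 100 Mbps increments
        if self.bandwidth_mbps <= 0 or self.bandwidth_mbps % 100 != 0:
            raise ValueError("bandwidth must be a positive multiple of 100 Mbps")
        if self.tag_mode not in VALID_TAG_MODES:
            raise ValueError("unknown tag mode: %s" % self.tag_mode)
        if self.tag_mode == "vlan-id" and not (1 <= (self.vlan_id or 0) <= 4094):
            raise ValueError("vlan-id mode requires a VLAN ID in 1..4094")
```

A request such as `CircuitRequest("es1.red.pod.lan", "es2.blue.pod.lan", 500, "vlan-id", 3001)` passes validation, while a 150 Mbps request is rejected.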
33. DCN Services - topologies Individual circuits are the “atomic” service provided by the DCN and control plane
These circuits could be intra or inter domain
It is envisioned that higher level “services” may be developed which coordinate the instantiation of multiple individual circuits to develop entire “topologies”
co-scheduling/allocation of other resources (compute, data storage) may also be desired
Probably a task for individual science/application domains or someone developing middleware on their behalf
34. Workshop Details
35. DCN Workshop Architecture
36. Pod Network Elements
37. Basic Pod Data Plane
39. Pod Network Elements Control and Data Planes
40. Pod Management Addressing
41. Rack Layout
42. Workshop Pods
43. Red Pod
44. Green Pod
45. Yellow Pod
46. Blue Pod
47. Exercise #1 Intra-Domain Detail(Answer Sheet)
48. Exercise #1 Data and Control links
49. Login information Wireless Network:
SSID: DCNworkshop
WPA Personal Key: Workshop!
Login to all VLSR, ES and NARB
ssh port 22
username: user[1-16]; password: Workshop!
username: root; password: rootme
Login to all switches
telnet port 23
username: admin; password: admin
OSCARS configuration; login to the NARB/IDC machine
ssh port 22
username: tomcat55; password: dragon
OSCARS axis2 login
https://idc.<color>.pod.lan:8443/axis2/axis2-admin/
username: admin; password: axis2
OSCARS web user interface;
https://idc.<color>.pod.lan:8443/OSCARS/
username: oscars-admin; password: oscars
50. Command Line Interface ports
dragond 2611
ospfd 2604 (intra-domain)
narb 2626
rce 2688
> telnet localhost 2611
> password: dragon
51. Workshop Laboratory Four “Pods”: Red, Blue, Yellow, Green
Each Pod represents an independent network domain
Each Pod has two End Systems: ES1 and ES2
Each Pod has three Virtual LSRs (VLSRs)
Each VLSR has a PC (for the control plane) and an Ethernet switch (for the data plane)
Each Pod has one PC for interdomain routing support of the NARB and OSCARS
The PCs are running Debian Linux
We have installed it and all the software required to download, build, and run the control plane software, and to perform the workshop labs
We installed the DRAGON software and OSCARS software
/usr/local/dragon/{bin,etc}
/usr/local/tomcat, /home/tomcat55
52. Workshop Exercises Exercise 1: Designing a GMPLS Control Plane for Ethernet Data Planes
Exercise 2: Intra-Domain Provisioning with OSCARS
Exercise 3: Inter-Domain Provisioning with OSCARS
53. Exercise #1 Designing a GMPLS Control Plane For Ethernet Data Planes Diagram a control plane for each pod
Construct an addressing scheme for the control plane
Configure the network elements’ data plane
Configure the control plane software
Set up an LSP
…and if that fails…read the instructions.
54. GMPLS Snapshot Generalized Multi-Protocol Label Switching – GMPLS
Evolved from MPLS concepts, and experiences gained from deployments within the IP packet world
GMPLS extends Traffic Engineering (TE) concepts to the multiple layers:
Packet Switching Capable (PSC) – standard MPLS LSPs
Layer2 switch capable (L2SC) – Ethernet and VLANs
TDM switch capable (TDM) – SONET/SDH
Lambda switching (LSC) – Wavelength
Fiber Switch capable (FSC) - Automated Patch Panel
In GMPLS, any network element that supports one of the above switching capabilities and participates in the GMPLS control plane protocols is referred to as a “Label Switching Router” (LSR).
GMPLS Protocols:
Routing: GMPLS-OSPF-TE
Signaling: GMPLS-RSVP-TE
Link layer: LMP (not widely implemented)
ISIS and CR/LDP are also considered part of the GMPLS protocols
In this workshop we will focus only on OSPF and RSVP
55. What is the Control Plane? The Control Plane is the network facilities and associated protocols that select, allocate/deallocate, and provision network resources to fulfill a user service request.
Typically this includes routing protocols that distribute topology and reachability information among interconnected networks and network elements
It also includes other functions that allocate appropriate resources and put those resources into service (Path computing and signaling)
With GMPLS, routing and signaling messages between LSRs do not travel along the same [physical] path as the circuit being established.
The set of facilities between LSRs that carry the data circuits themselves is called the “Data Plane”
The set of facilities between LSRs that carry the routing and signaling protocols is called the “Control Plane”
It is good practice to design the control plane so as to be highly robust and impervious to effects of other network traffic or malicious activity
In this workshop, our control plane and data plane will be separate as is typically the case for GMPLS networks.
56. Control Plane and Data Plane
57. A [Typical] Label Switching Router – “LSR” What is an “LSR”
In the MPLS world, it is any router capable of recognizing and processing the MPLS shim header in the IP packet
In the GMPLS world, an LSR is any network element that is able to establish “label switched paths” (LSPs) under control of the GMPLS protocol suite:
This now includes fiber switches, wavelength division multiplexers, SONET (TDM) switches, Ethernet switches, and traditional packet switches (MPLS routers)
58. Key Control Plane Features Routing
the distribution of "data" between networks; the data that needs to be distributed includes reachability information, resource usage, etc.
Path computation
the processing of information received via routing data to determine how to provision an end-to-end path. This is typically a Constrained Shortest Path First (CSPF) type algorithm for GMPLS control planes. Web-services-based exchanges might employ a modified version of this technique or something entirely different.
Signaling
the exchange of messages to instantiate specific provisioning requests based upon the above routing and path computation functions. This is typically an RSVP-TE exchange for GMPLS control planes. Web-services-based exchanges might employ a modified version of this technique or something entirely different.
59. OSPF – “Open Shortest Path First” OSPF is a “Link State” Routing Protocol
OSPF routers discover each other through a HELLO protocol exchanged over OSPF interfaces
Routers identify themselves with a “router id” (typically the loopback IP address or another unique IP address is used)
OSPF routers flood Link State Announcements (LSAs) to each other that describe their connections to each other and that specify the current link state of these connections
In the GMPLS and TE extensions to OSPF, the LSA contains information about the available bandwidth, routing metrics, switching capabilities, encoding types, etc.
LSAs are not flooded in the direction from which they are heard
Link State flooding does not scale well
OSPF routing is often divided into “areas” to reduce or limit LSA flooding in large networks
Other routing protocols are used between routing “domains” that distribute reachability information but not link state info
Each OSPF router in an area has a full topological view of its area
SPF identifies the next-hop for each known destination prefix
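The flooding rule above ("LSAs are not flooded in the direction from which they are heard") can be sketched in a few lines of Python. This toy model is illustrative only; it ignores sequence numbers, aging, and the rest of real OSPF machinery:

```python
def flood(lsa, node, heard_from, adjacency, seen):
    """Toy link-state flooding: forward an LSA to every neighbor
    except the one it arrived from, and never re-flood an LSA
    this node has already seen."""
    if (lsa, node) in seen:
        return
    seen.add((lsa, node))
    for nbr in adjacency[node]:
        if nbr != heard_from:
            flood(lsa, nbr, node, adjacency, seen)
```

On a triangle topology A–B–C, flooding an LSA from A reaches every node exactly once, even though the links form a loop; the "already seen" check is what keeps flooding from circulating forever.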
60. CSPF Constrained Shortest Path First
In OSPF TE, reachability is no longer the only criteria for deciding next-hop
E.g. bandwidth available on each intermediate link could be a constraint used to identify or select a path
In GMPLS, with multiple switching capabilities, there are many constraints to be considered
Path Computation is used differently for selecting circuit layout than for selecting the next-hop for shortest path packet forwarding
Two identical path requests may generate two completely separate paths (unlike traditional routed IP which would select only the single “best” path for forwarding packets)
Paths are not computed until or unless a path is needed.
Some GMPLS service models do propose precomputing paths (or at least next hops) based on certain a priori assumptions about the LSP – the tradeoff is generally one of scheduled “book-ahead” reservations vs. fast “on-demand” provisioning.
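A minimal sketch of the CSPF idea in Python: prune every link that fails the constraint (here, available bandwidth), then run an ordinary shortest-path search on what remains. This illustrates the technique only; it is not the path computation code used by NARB or OSCARS:

```python
import heapq

def cspf(links, src, dst, min_bw):
    """Constrained SPF: drop links below the bandwidth constraint,
    then run ordinary Dijkstra on the pruned topology.
    links: iterable of (a, b, metric, available_bw), bidirectional."""
    adj = {}
    for a, b, metric, bw in links:
        if bw >= min_bw:                      # the constraint-pruning step
            adj.setdefault(a, []).append((b, metric))
            adj.setdefault(b, []).append((a, metric))
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, metric in adj.get(node, ()):
            nd = d + metric
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None                           # no path satisfies the constraint
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

Note how the same request with a different bandwidth constraint yields a different path: the pruning step removes the direct but thin link, forcing the circuit around it.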
61. RSVP – Resource ReSerVation Protocol GMPLS-RSVP-TE is the signaling (provisioning) protocol used to instantiate a Label Switched Path (LSP) through the network
Five basic RSVP messages we will reference:
PATH = First message issued by the source towards the destination requesting a connection be established
RESV = Response from the destination towards the source accepting the connection
PATH_TEAR = Message sent to tear down an LSP
PATH_ERR = Error message sent when a PATH request is denied or encounters a problem
REFRESH = Message sent between LSRs indicating a connection is still active (prevent timeout and deletion)
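The PATH/RESV exchange can be sketched as a toy message sequence. This is purely illustrative: it ignores refresh timers, error handling, and label allocation:

```python
def signal_lsp(hops):
    """Toy RSVP-TE exchange along an ordered list of LSRs:
    a PATH message travels hop by hop toward the destination,
    then a RESV travels back upstream; each node commits its
    cross-connect when it receives the RESV."""
    log = []
    for a, b in zip(hops, hops[1:]):          # PATH: downstream
        log.append(("PATH", a, b))
    rev = list(reversed(hops))
    installed = []
    for a, b in zip(rev, rev[1:]):            # RESV: upstream
        log.append(("RESV", a, b))
        installed.append(b)                   # b commits on receipt
    return log, installed
```

For a four-node path ES1 → VLSR1 → VLSR2 → ES2, the log shows three PATH messages followed by three RESV messages, and the cross-connects are installed from the destination back toward the source, which is the order in which the real RESV propagates.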
62. Path Computation Element In GMPLS, the Path Computation Element (PCE) is separated from the routing protocol.
The routing protocol distributes topology information and builds the topology database that contains all the [visible] resources and their state – the Traffic Engineering Data Base (TEDB)
PCE is responsible for processing the TEDB to select a path through the network that meets the constraints specified in the service request (e.g. BW, encoding, Src/Dst, Policy, etc.)
In GMPLS, the path computed is expressed as an “Explicit Route Object” (ERO).
An ERO is simply a data structure that contains a sequentially ordered list of routers (LSRs) that the path will traverse from source to destination
A “Loose Hop” ERO specifies a partial set of transit nodes – the path may contain other nodes as long as it passes through the specified nodes in the order specified.
A “Strict Hop” ERO specifies a complete list of transit nodes – no other intervening nodes are allowed.
RSVP includes the ERO in the PATH message to pin the path through specific nodes
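The strict-hop vs. loose-hop distinction can be captured in a small predicate; this is a sketch of the semantics, not any particular implementation's ERO processing:

```python
def satisfies_ero(path, ero, strict=False):
    """Check whether an actual hop list honors an Explicit Route Object.
    Strict-hop ERO: the path must match the ERO exactly (no extra nodes).
    Loose-hop ERO: the ERO hops must appear in the path in order,
    but other transit nodes may appear in between."""
    if strict:
        return list(path) == list(ero)
    it = iter(path)
    # each ERO hop must be found in the not-yet-consumed tail of the path
    return all(hop in it for hop in ero)
```

So the path A → X → B → Y → C satisfies the loose ERO [A, B, C] (X and Y are permitted intermediate nodes) but fails it as a strict ERO, and fails the loose ERO [A, C, B] because the order is violated.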
63. DRAGON Control Plane - Key Elements Virtual Label Switching Router – VLSR
Open source protocols running on PC act as GMPLS network element (OSPF-TE, RSVP-TE)
Control PCs participate in protocol exchanges and provision the covered switch according to protocol events (PATH setup, PATH tear-down, state query, etc.)
Network Aware Resource Broker – NARB
Intradomain listener, Path Computation, Interdomain Routing and Path Computation
More information:
dragon.east.isi.edu
dragon.maxgigapop.net
64. The Virtual Label Switching Router “VLSR” The DRAGON Project developed a control plane "proxy" element to cover non-GMPLS capable devices like standard ethernet switches.
65. VLSR (Virtual Label Switching Router) RSVP Signaling module
Originated from Martin Karsten’s C++ KOM-RSVP
Extended to support RSVP-TE (RFC 3209)
Extended to support GMPLS (RFC 3473)
Extended to support Q-Bridge MIB (RFC 2674)
For manipulation of VLANs via SNMP (cross-connect)
Extended to support VLAN control through CLI
OSPF Routing module
Originated from GNU Zebra
Extended to support OSPF-TE (RFC 3630)
Extended to support GMPLS (RFC 4203)
Ethernet switches tested to date
Dell PowerConnect, Extreme, Intel, Raptor, Force10
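For a sense of what the SNMP cross-connect writes look like, here is a sketch of the Q-BRIDGE-MIB PortList encoding from RFC 2674 (one bit per port, with port 1 in the most significant bit of the first octet); a VLSR writes such a value to objects like dot1qVlanStaticEgressPorts. The helper below is an illustration, not code taken from the VLSR:

```python
def portlist(ports, width_octets=6):
    """Encode a set of switch ports as a Q-BRIDGE-MIB PortList
    (RFC 2674): one bit per port, port 1 in the most significant
    bit of the first octet."""
    octets = bytearray(width_octets)
    for p in ports:
        if not 1 <= p <= width_octets * 8:
            raise ValueError("port %d out of range" % p)
        # port 1 -> bit 0x80 of octet 0, port 8 -> bit 0x01 of octet 0, ...
        octets[(p - 1) // 8] |= 0x80 >> ((p - 1) % 8)
    return bytes(octets)
```

For example, ports {1, 8} encode to the single octet 0x81, and port 9 sets the top bit of the second octet; setting and clearing these bits is what adds or removes ports from a VLAN's egress set.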
66. NARB(Network Aware Resource Broker) NARB is an agent that represents a domain
Intra-domain Listener
Listens to OSPF-TE to acquire intra-domain topology
Builds an abstracted view of internal domain topology
Inter-domain routing
Peers with NARBs in adjacent domains
Exchanges (abstracted) topology information
Maintains an inter-domain link state database
Path Computation
Performs intra-domain (strict hop) TE path computation
Performs inter-domain (loose hop) TE path computation
Expands loose hop specified paths as requested by domain boundary (V)LSRs.
Hooks for incorporation of AAA and scheduling into path computation via a “3 Dimensional Resource Computation Engine (3D RCE)”
The Traffic Engineering DataBase (TEDB) and Constrained Shortest Path Computation (CSPF) are extended to include dimensions of GMPLS TE parameters, AAA constraints, and Scheduling constraints.
3D RCE is the combination of 3D TEDB and 3D CSPF
67. Heterogeneous Network Environment – multi-technology, multi-level, multi-domain, multi-vendor, multi-provisioning-system network environments
68. Exercise #2: Intra-domain Provisioning with OSCARS In this exercise we will bring up the OSCARS software, configure the network topology and candidate paths, and provision LSPs across a single administrative network domain
OSCARS:
“On-demand Secure Circuits and Advanced Reservation System”
Provides Authentication and Authorization for LSP requests
Provides book-ahead scheduling for network path resources
Interim: implements the static topology distribution function and provides precomputed static EROs for provisioning
OSCARS is a Java-based application; it runs on top of Tomcat and uses MySQL and Axis2.
69. Exercise #3: Inter-domain Provisioning with OSCARS In this exercise we will configure and use OSCARS to accomplish InterDomain provisioning.
Design (and implement) the inter-domain Data plane
Lay out the inter-domain control plane
Configure OSCARS for inter-domain
Test
70. IDC - Web Service Based Definition
-IDC/DC combination
-DC handles internal domain concerns
-Two-level hierarchical network view
-Four distinct phases are identified:
-Topology Exchange: currently based on abstracted link states, with little to no dynamic information
-Resource Scheduling: a multi-domain, multi-stage path computation process in which the specific resources are identified and reserved for a specific signaling event
-Signaling Phase: specific network elements are provisioned; this phase may be initiated by the user or by the domains, and its actions are based on the resources identified in the Resource Scheduling phase
-User Request Phase: provides a message set for users to request multi-domain circuits
-Current security and authentication models are based on signed SOAP messages and X.509 certificates (user to local IDC; IDC to neighbor IDC)
71. DCN Web Services Web Service Definitions
wsdl - web service definition of message types and formats
xsd – definition of schemas used for network topology descriptions and path definitions
Ongoing work with OGF Working Group(s), PerfSonar, and GLIF with the goal to achieve interoperability amongst all groups.
72. InterDomain SpecificationWeb Services https://wiki.internet2.edu/confluence/display/CPD/OSCARS+Web+Service+Definition
The specification is defined by a Web Service Description Language (WSDL) document and XML Schema files containing associated data types.
OSCARS.wsdl - web service definition of OSCARS messages
OSCARS.xsd - data types used by OSCARS.wsdl
nmtopo-ctrlp.xsd - NMWG control plane topology schema used by OSCARS.xsd for topology-related data types
73. AAA and Security OSCARS AAA
SSL Encryption
Authentication
X.509 Certificates
User to Domain
Domain to Domain
Web Service Security by OASIS
SAML assertions about end-user (future)
Authorization
OSCARS attribute based system
74. DCN Control Plane uses OGF Topology Schema
75. Information Services Topology Service and LookUp Service
The control plane uses the Information Services Topology Service and LookUp Service
LookUp Service
Provides a mapping from circuit end points to user friendly names
Topology Service
Provides an infrastructure from which to retrieve topologies from other domains
Will be utilized for global path computation
76. Information Services Topology Service and LookUp Service Administrative control of perfSONAR-PS services (example here shows a quick view of network topology)
77. DCN Information Service - Lookup Service
78. DCN Provisioning – Web Page or API
79. DCN – Circuit Status Description
80. DCN – Circuit Status Description
81. Requesting a circuit - Interfaces Web User Interface (WBUI)
Java servlet interface used by OSCARS web page
Not intended for use by other applications
Web Service API
XML-based API intended for use by applications
e.g. Phoebus, LambdaStation, TeraPaths
WBUI was originally built as a proof of concept; the web service API was added later, but WBUI has continued to exist in essentially its original form.
The WBUI interface is really just a specific implementation; it demonstrates that different UNIs can be used to support the same E-NNI.
The web service API is how Phoebus, LambdaStation, and TeraPaths all request circuits
82. Requesting a circuit – WS API Used by applications to contact IDC
Authenticate using an X.509 certificate
Generate with command-line tools
Have CA sign (Internet2 has test CA)
Message format defined in DICE Control Plane group
Custom applications should use this interface
CA = Certificate Authority
Internet2 has a CA to sign certificates for test requests.
****DEMO command-line client
Point out the keystore where the X.509 certificate is kept
83. Additional Information DCN Software Suite
https://wiki.internet2.edu/confluence/display/DCNSS/Home
Java Client API
https://wiki.internet2.edu/confluence/display/CPD/OSCARS+Client+Java+API
84. Workshop Details - end
85. DCN Control Plane Possible Future Features and Work Areas Improved user documentation and software installation procedures
Improved reliability and redundancy of dynamic provisioning operations. (better automated logging and failure reporting, redundant control plane elements, automated interaction between control plane and monitoring systems and NOC operations)
Support for VLAN translation across multi-domain circuits
Support for SONET Client Access ports and Interdomain Links
Design for automated multi-domain topology exchange
Enhanced user request options (additional parameters and ability to ask questions without actually making a reservation)
Enabling other signaling methods, e.g., RSVP (as opposed to only the Web Service method)
Continue work with international groups and standards bodies to formalize the IDC InterDomain Protocol, further growing the interconnected global community for these services
86. Use of Internet2 DCN and peering dynamic networks
87. How do I connect? – Physical Connection Internet2 Connectors
Connect to Internet2 DCN
Universities and campuses
Contact your Internet2 Connector
88. How do I connect? – Software Configuration Option 1: No local IDC
Option 2: Install local IDC
Nearest IDC = I2 IDC or connector IDC
The no-software option works to start, but it is better to have software; then it is truly dynamic.
LambdaStation routes traffic onto the DCN.
TeraPaths does not change routing, but provides IP QoS on the local network up to the DCN connection.
Phoebus transparently calls the DCN when a transfer starts. It does not configure the local network and may be used in conjunction with other options.
Nortel and AutoBAHN each have IDC packages as well.
89. How do I connect? – Software Configuration Option 1: No local IDC
Statically configure your local network
Applications/users can dynamically request circuits from the nearest IDC (the Internet2 IDC or a connector IDC)
90. How do I connect? – Software Configuration
91. How do I connect? – Software Configuration IDC usually just a web server
Domain controller can take many forms
92. How do I request a circuit? - Clients User-initiated
OSCARS Web Page
Simple command-line tools
Program-initiated
Phoebus
Transparently request circuit upon data transfer initiation
Custom applications you build!
Assuming you have a DCN connection and have made a decision on the local control plane, what do requests to the IDC look like?
93. How do I request a circuit? - Interfaces Web User Interface (WBUI)
Java servlet interface used by OSCARS web page
Not intended for use by other applications
Web Service API
XML-based API intended for use by applications
E.g., Phoebus, LambdaStation, TeraPaths
The WBUI was originally built as a proof of concept. The web service API was added later, but the WBUI has continued to exist in essentially its original form.
The WBUI interface is really just one specific implementation; it demonstrates that different UNIs can be used to support the same E-NNI.
The web service API is how Phoebus, LambdaStation, and TeraPaths all request circuits.
94. How do I write my own DCN application? Java library for making DCN calls
Can call simple command-line client directly from application
Google Summer of Code students will be developing Perl, C, and Python libraries
95. Backup
96. VLSR (Virtual Label Switching Router) GMPLS Proxy
(OSPF-TE, RSVP-TE)
Local control channel
CLI, TL1, SNMP, others
Used primarily for Ethernet switches
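As an illustration of the kind of switch control a VLSR proxy performs when it uses SNMP, the command below creates a static VLAN entry via the standard Q-BRIDGE-MIB. The community string, target address, VLAN ID, and name are all placeholders, and real switches often require vendor-specific MIBs or variations rather than this generic form.

```shell
# Hypothetical example: create static VLAN 3001 on an Ethernet switch
# via Q-BRIDGE-MIB. The community string "private", the address
# 192.0.2.10, and the VLAN ID/name are placeholders for illustration.
# dot1qVlanStaticRowStatus.<vlan> = 4 (createAndGo) adds the VLAN entry;
# dot1qVlanStaticName.<vlan> gives it a name.
snmpset -v2c -c private 192.0.2.10 \
    1.3.6.1.2.1.17.7.1.4.3.1.5.3001 i 4 \
    1.3.6.1.2.1.17.7.1.4.3.1.1.3001 s "dcn-circuit-3001"
```

In the VLSR, equivalent operations are driven automatically by RSVP-TE signaling rather than typed by hand.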
97. DRAGON Virtual Label Switching Router (VLSR) Control channels could also be provisioned out-of-band via GRE tunnels over an IP network
98. DCN – Circuit Status Description
99. Laying Out the Control Plane Lay out the data plane between NEs first.
For now, we are going to ignore intervening static NEs.
Make sure all NEs and links are uniquely labeled
Then, control links connect the dynamic network elements
If you are including end systems in the dynamic network, you should add them where appropriate
100. Control Plane Often, the dynamic network elements are not directly adjacent to one another – but the control structure expects them to be (at least logically adjacent)
We employ Generic Routing Encapsulation (GRE) tunnels for the control links in order to create logical adjacencies
GRE tunnels are set up between two IP hosts over the conventional Internet (these hosts are the “tunnel endpoints”).
Each tunnel presents a pseudo-interface to the host that appears to be directly linked to the remote endpoint, allowing a single common IP subnet to be allocated on this GRE (pseudo) interface.
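As a sketch of this setup on Linux with the iproute2 `ip` utility: the public addresses 198.51.100.1 and 203.0.113.1, the interface name, and the 10.10.0.0/30 control subnet are placeholders, and the mirror-image commands would run on the other tunnel endpoint.

```shell
# On host A (public address 198.51.100.1; peer is 203.0.113.1).
# All addresses here are documentation placeholders.
ip tunnel add dcn-ctrl mode gre local 198.51.100.1 remote 203.0.113.1 ttl 64
ip link set dcn-ctrl up

# Assign one side of a /30 control subnet to the pseudo-interface;
# host B would use 10.10.0.2/30 with local/remote swapped.
ip addr add 10.10.0.1/30 dev dcn-ctrl

# The control-plane protocols (OSPF-TE, RSVP-TE) now see 10.10.0.0/30
# as a directly connected link between the two dynamic network elements.
```

This is what makes two non-adjacent network elements logically adjacent from the control plane's point of view.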
101. Generic Network Element
102. Case Study: Control Channels – DRAGON Virtual Label Switching Router (VLSR) Linux PC implements GMPLS control plane protocols
Control channels may be provisioned in-band or out-of-band
103. Case Study: Control Channels – DRAGON Virtual Label Switching Router (VLSR) Assuming the underlying network uses Ethernet VLANs, control channels may be provisioned in-band with static control VLANs
104. Case Study: Control Channels – DRAGON Virtual Label Switching Router (VLSR) Control channels could also be provisioned out-of-band via GRE tunnels over an IP network
105. Case Study: Control Channels
106. Hybrid Networks – Web Service Control Plane Interfaces
107. Hybrid Networks – Control Plane Architecture
108. Web Service based E-NNI – Three Main Components Routing
Topology Exchange
Domain Abstraction
Varying levels of dynamic information
Resource Scheduling
Multi-Domain path computation techniques
Resource identification, reservation, confirmation
Signaling
path setup, service instantiation
109. Key Control Plane Capabilities Domain Summarization
Ability to generate abstract representations of your domain for making available to others
The type and amount of information (constraints) needed to be included in this abstraction requires discussion.
Ability to quickly update this representation based on provisioning actions and other changes
Multi-layer “Techniques”
Stitching: some network elements will need to map one layer into others, i.e., multi-layer adaptation
In this context the layers are: PSC, L2SC, TDM, LSC, FSC
Hierarchical techniques. Provision a circuit at one layer, then treat it as a resource at another layer. (i.e., Forward Adjacency concept)
Multi-Layer, Multi-Domain Path Computation Algorithms
Algorithms which allow processing on network graphs with multiple constraints
Coordination between per domain Path Computation Elements
110. OSCARS Architecture
111. Integration of the Core Director Domain into End-to-End Signaling
112. DRAGON enables integration of the Core Director Domain into Multi-Domain, Multi-Layer, Multi-Service, Multi-Vendor Provisioning Environment