Toward the Next Generation Network - The HOPI Testbed
Rick Summerhill, Director, Network Research, Architecture, and Technologies, Internet2
Internet2 Fall Member Meeting, Philadelphia, PA, 20 September 2005
Time-Line
• October 2007 - end of the recent 1-year Abilene transport MoU extension
• Sets the next-generation network planning timeline:
  • Architecture definition: 1/1/2006
  • Transport selection: 4/1/2006
  • Equipment selection: 7/1/2006
  • Backbone deployed: 1/1/2007
  • Connector transition: 2007
• Concurrently, review the overall business plan and management model
• Network design time frame: 2007-2012
• The HOPI testbed is expected to be in place for 2-3 years to experiment with future protocols
• Refine and evolve the next-generation architecture
Basic Requirements
• Recent reports:
  • Abilene TAC report
  • Group A report
• Requirements are multi-dimensional in scope, for example:
  • Provide capabilities at all network layers (layer)
  • Provide capabilities for both short-term and long-term applications or projects (duration)
  • Provide capabilities at a variety of levels of robustness, from production to experimental (robustness)
• An infrastructure consisting of dark fiber, a significant number of waves, and a production-quality IP network
• Create a new architecture for the R&E community
Architecture Requirements
• Uncongested data transport, including:
  • IP packet switching
  • Dedicated capacity (duration, reliability, capacity)
• Simplified connectivity to:
  • The research and education community in the US
  • Other national networks
  • International research and education networks
  • Potential for commodity network peering
• Architectural constraints:
  • Standard hierarchy: national, regional, campus
  • Expansion to additional layers if necessary
• Must be capable of evolving to support new features:
  • Dynamic provisioning to some degree
  • Hybrid models for data networking
Applications
• The focus must be on applications
  • It is not just about creating dynamic circuits, but about supporting applications that need rich topologies
• Application-specific topologies
  • It is important to understand how existing applications will use a richer architecture, how existing applications might be redesigned, and how new applications might be created
• The application layer plays a fundamental role in the architecture
  • Setup and teardown of application-specific topologies happens above the control plane layer (see the sketch below)
  • Again, all of this should be transparent to the user
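To make the idea of an application-specific topology concrete, here is a minimal Python sketch of how an application might describe the set of dedicated links it needs and hand that description to a provisioning service sitting above the control plane. The `TopologyRequest` and `TopologyService` names, fields, and endpoint labels are illustrative assumptions, not an existing HOPI interface.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """A dedicated link the application wants between two endpoints."""
    a: str          # endpoint name, e.g. a campus cluster or instrument
    b: str
    gbps: float     # requested dedicated capacity
    hours: float    # how long the link should exist

@dataclass
class TopologyRequest:
    """An application-specific topology: a named set of dedicated links."""
    name: str
    links: list = field(default_factory=list)

    def add_link(self, a, b, gbps, hours):
        self.links.append(Link(a, b, gbps, hours))

class TopologyService:
    """Hypothetical service layer above the control plane.

    It would translate a TopologyRequest into per-domain circuit setups
    (campus, RON, backbone) and tear them down when the run is over.
    """
    def submit(self, request: TopologyRequest) -> str:
        # A real system would trigger control-plane signalling here;
        # this sketch just acknowledges the request and returns a handle.
        print(f"setting up '{request.name}' with {len(request.links)} link(s)")
        return f"topo-{request.name}"

    def teardown(self, handle: str):
        print(f"tearing down {handle}")

# Usage: an e-VLBI-style application asks for two 2 Gbps links for 6 hours.
req = TopologyRequest("evlbi-run-42")
req.add_link("telescope-onsala", "correlator-haystack", gbps=2.0, hours=6)
req.add_link("telescope-westerbork", "correlator-haystack", gbps=2.0, hours=6)
svc = TopologyService()
handle = svc.submit(req)
svc.teardown(handle)
```

The point of the sketch is the division of labor: the application only names endpoints, capacities, and durations; everything below that request stays transparent to the user.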
Backbone Footprint
• The basic component will be ITU-grid waves that interconnect nodes on a national fiber footprint
• The number of waves is expected to be from 10 to 40
• The bandwidth of each wave is expected to be 10 Gbps (and possibly 40 Gbps); see the capacity sketch below
• Switching nodes between segments, optical or electrical
• Schematic (diagram not reproduced here)
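As a rough sanity check on aggregate backbone capacity, a back-of-the-envelope calculation using the planning ranges above (the figures are the slide's planning ranges, not measurements):

```python
# Aggregate capacity per backbone segment for the planning ranges above.
wave_counts = [10, 40]        # expected range of waves per segment
wave_rates_gbps = [10, 40]    # per-wave bandwidth (40 Gbps is speculative)

for waves in wave_counts:
    for rate in wave_rates_gbps:
        print(f"{waves} waves x {rate} Gbps = {waves * rate} Gbps per segment")

# e.g. 10 x 10 Gbps = 100 Gbps, the "total bandwidth" figure questioned
# later under "Further Investigation".
```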
Switching Capabilities
• Switching is provided through an optical interconnecting device that serves three purposes:
  • Provides a client interface to the connecting network
  • Provides access to waves on the network
  • Provides support for sub-channels on a wave
    • e.g., Ethernet VLANs, SONET paths, or other suitably framed capacity
    • Potentially using GFP, VCAT, and LCAS
• A sketch of the sub-channel bookkeeping follows below
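A minimal sketch of the bookkeeping an interconnect might do for sub-channels on a single 10 Gbps wave, using Ethernet VLANs as the framing (SONET/VCAT paths would be analogous). The class, field names, and identifiers are illustrative, not a vendor or HOPI interface.

```python
class Wave:
    """One ITU-grid wave carrying Ethernet-framed sub-channels (VLANs)."""
    def __init__(self, wave_id: str, capacity_gbps: float = 10.0):
        self.wave_id = wave_id
        self.capacity_gbps = capacity_gbps
        self.subchannels = {}          # vlan_id -> allocated Gbps

    def allocated(self) -> float:
        return sum(self.subchannels.values())

    def add_subchannel(self, vlan_id: int, gbps: float) -> bool:
        """Allocate a VLAN-based sub-channel if capacity remains."""
        if vlan_id in self.subchannels:
            return False
        if self.allocated() + gbps > self.capacity_gbps:
            return False               # would oversubscribe the wave
        self.subchannels[vlan_id] = gbps
        return True

    def drop_subchannel(self, vlan_id: int):
        self.subchannels.pop(vlan_id, None)

# Usage: carve two sub-channels for different clients out of one wave.
w = Wave("LA-CHI-lambda-3")
assert w.add_subchannel(vlan_id=1101, gbps=4.0)
assert w.add_subchannel(vlan_id=1102, gbps=5.0)
assert not w.add_subchannel(vlan_id=1103, gbps=2.0)   # exceeds 10 Gbps
```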
RON Interface
• The interface to the backbone:
  • Two or more client interfaces between optical interconnects (analogous to router-to-router connections today)
• Requirements:
  • Support connectivity to the IP network
  • Support multiple sub-channels through the backbone to other RONs, up to the capacity of the interface
  • Potential for alien waves in the future
Campus Connectivity
• The hierarchy is likely to remain national to regional to campus
  • RON-to-campus connections are similar to backbone-to-RON connections
  • Bandwidths may be lower
  • There may be state networks or campus departments in the hierarchy
• The ability to create dedicated capacity across administrative domains is a key factor
• End-to-end high-performance networking is a fundamental goal
  • High-performance IP service
  • High-performance dedicated capacity from deep within one campus to deep within another campus
Dynamic Provisioning and Switching
• Dynamic provisioning across administrative domains (a toy lifecycle sketch follows below)
  • Setup on the order of seconds to minutes
  • Durations on the order of hours
  • Eventually, understand the need for more dynamic capabilities
• Control plane development will be key
• Switching may require unique partnerships and development of capabilities on hardware platforms
  • For example, being able to isolate user capabilities at switching nodes
  • Carriers are also interested, from the point of view of providing additional services
• All of this should be transparent to the user
  • The user should view it as a single network
  • Hybrid aspects must be built into the architecture
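The setup and duration figures above suggest a simple request lifecycle. The sketch below models it as a toy state machine for a circuit crossing several administrative domains; the states, domain names, and timings are assumptions for illustration, not a defined HOPI protocol.

```python
import time
from enum import Enum

class CircuitState(Enum):
    REQUESTED = "requested"
    PROVISIONING = "provisioning"   # setup: order of seconds to minutes
    ACTIVE = "active"               # duration: order of hours
    RELEASED = "released"

class DynamicCircuit:
    """Toy model of a dedicated circuit crossing several domains."""
    def __init__(self, domains, gbps, duration_hours):
        self.domains = domains        # e.g. campus -> RON -> backbone -> RON -> campus
        self.gbps = gbps
        self.duration_hours = duration_hours
        self.state = CircuitState.REQUESTED

    def provision(self):
        self.state = CircuitState.PROVISIONING
        for d in self.domains:
            # Each domain's control plane would configure its own segment here.
            print(f"  provisioning {self.gbps} Gbps segment in {d}")
        self.state = CircuitState.ACTIVE
        self.expires_at = time.time() + self.duration_hours * 3600

    def release(self):
        self.state = CircuitState.RELEASED

# Usage: an end-to-end 5 Gbps circuit held for 3 hours.
c = DynamicCircuit(["campus-A", "RON-A", "backbone", "RON-B", "campus-B"],
                   gbps=5, duration_hours=3)
c.provision()
print(c.state)      # CircuitState.ACTIVE - transparent to the user
c.release()
```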
Further Investigation
• Requirements
  • Group A report?
  • Abilene TAC report?
• Backbone
  • What is the national footprint?
  • Is 100 Gbps the right total bandwidth?
  • Where are the switching nodes located?
  • What provides the switching capabilities?
  • What is the backhaul availability?
  • What is the framing on the waves?
  • Is it possible to provide support for alien waves?
Further Investigation
• Interconnects
  • Where are the optical interconnects located?
  • What are the optical interconnects?
  • What are the interfaces?
  • What is the framing on the client interfaces?
  • What is the service offering?
• Dynamics
  • What degree of dynamic provisioning is required?
  • What control plane properties are needed?
  • What availability is required on day one?
  • When are carrier-class waves needed?
• IP
  • Should carrier-class waves be provided for the IP network?
  • What is the topology of the IP backbone?
  • What role does dynamic provisioning play in the IP network, for example in redundancy?
HOPI Project - Overview
• As outlined in the requirements documents, we expect to see a richer set of capabilities available to network designers and end users:
  • Core IP packet-switched networks
  • A set of optically switched waves available for dynamic provisioning
• Fundamental question: how will the core Internet architecture evolve?
  • Many options are being examined
  • Examine a hybrid of shared IP packet switching and dynamically provisioned optical lambdas
• HOPI Project - Hybrid Optical and Packet Infrastructure: how does one put it all together?
  • Dynamic provisioning - setup and teardown of optical paths
  • Hybrid question - how do end hosts use the combined packet- and circuit-switched infrastructures? (a toy policy sketch follows below)
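One way to think about the hybrid question is a host- or middleware-level policy that keeps ordinary traffic on the shared IP network and requests a dedicated lambda only for flows large and long-lived enough to amortize circuit setup. The thresholds and function below are purely illustrative assumptions, not HOPI policy.

```python
def choose_path(flow_gbytes: float, expected_minutes: float,
                lambda_setup_minutes: float = 2.0) -> str:
    """Toy hybrid policy: shared IP by default, a dynamically provisioned
    lambda only when the flow is big and long-lived enough to justify
    circuit setup. All thresholds are illustrative assumptions."""
    big_enough = flow_gbytes >= 100                           # ~100 GB or more
    long_enough = expected_minutes >= 10 * lambda_setup_minutes
    return "dedicated-lambda" if (big_enough and long_enough) else "shared-ip"

print(choose_path(flow_gbytes=5, expected_minutes=1))         # shared-ip
print(choose_path(flow_gbytes=2000, expected_minutes=120))    # dedicated-lambda
```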
HOPI Support
• Advisory groups:
  • HOPI Design Team
  • HOPI Corporate Advisory Team
  • HOPI Research Advisory Panel
• HOPI supporters:
  • Force10
  • HP
  • Glimmerglass
• The HOPI Testbed Support Center
HOPI Deployment
• Installed node locations:
  • Los Angeles - Equinix facility, same location as the NLR node
  • Washington, DC - MAX/DRAGON facility, same location as the NLR node
  • Chicago - StarLight
  • Seattle - the Pacific Northwest GigaPoP in the Westin Building, the new location of the NLR node
• Future node locations:
  • New York City - NYSERNet area in 32 AoA, same location as the NLR node (many thanks to NYSERNet for donating rack space and power to support the HOPI project)
  • Looking at additional possibilities as the southern route of NLR is installed - potentially Houston
• Circuit from NYC to London
  • Early November 2005
  • Connection to GEANT2 testbeds
Recent Activities
• Connections to HOPI:
  • UltraLight - a physics project exploring similar ideas
  • UltraScienceNet - a project of the Department of Energy
  • GEANT2 testbed
  • DRAGON
  • CHEETAH
  • RON connections under discussion
  • Connections to the ITECs at North Carolina, Ohio, Texas A&M, and UCSD
• Note: connecting to HOPI is not about providing connectivity; rather, it is about understanding new ideas and paradigms. It is a testbed that will require real participation from all participants
Contact Information
• http://hopi.internet2.edu
• hopi@internet2.edu
• HOPI Call Center: (877) 472-2419
HOPI Testbed Support Center
• A call for proposals was issued several months ago
• The TSC award went to a collaboration between MAX, NCREN, and the GRNOC at IU
• Advanced engineering and design focus
• Implement control plane activities
  • The MAX GigaPoP and the NSF-supported DRAGON project will focus on these issues
• Coordinate application activities
  • NCREN will focus on these issues
• Manage and engineer the facility
  • The GRNOC at IU will focus on these issues
• The focus is on dynamics and the hybrid aspects, but also on applications, security, measurement, operations, engineering, and AAA
HOPI Testbed Support Center (organization): TSC Coordination, with MAX responsible for Systems & Control Plane, NCREN for Applications Integration, and the GRNOC for Operations & Engineering
Testbed Support Center Team
• Core:
  • Rick Summerhill (Internet2) - HOPI Program Director
  • Jerry Sobieski (MAX) - TSC Project Manager
  • Mark Johnson (NCREN at MCNC) - co-PM
  • Dave Jent (GRNOC at IU) - co-PM
  • Chris Robb (IU)
  • Chris Tracy (MAX)
  • Chris Heerman (Internet2)
  • Steve Thorpe (NCREN)
  • Bonnie Hurst (NCREN)
Systems and Control Plane
• Develop and/or port novel control plane technologies to the HOPI testbed
  • First phase: deploy GMPLS protocol stacks to manage the HOPI Ethernet switches (a conceptual sketch follows below)
  • Open-source DRAGON software suite
  • Looking at expanding to include FSC (fiber switching, a la Glimmerglass), PSC paths (MPLS LSPs), and TDM switching (e.g., next-generation SONET)
• Integration of management plane functions into the testbed
• Implement experimental AAA mechanisms to explore security and resource management schemes for hybrid networks
• Integrate novel user interfaces or service models (e.g., BRUW)
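To illustrate what "a GMPLS stack managing an Ethernet switch" amounts to in practice, here is a toy sketch of the translation step: a computed path is turned into VLAN cross-connects on each switch along the way. This is a conceptual illustration only, not the DRAGON software or any vendor interface; the switch names, port names, and functions are made up.

```python
class EthernetSwitch:
    """Toy model of an Ethernet switch under control-plane management."""
    def __init__(self, name):
        self.name = name
        self.cross_connects = {}     # vlan -> (in_port, out_port)

    def add_vlan_crossconnect(self, vlan, in_port, out_port):
        # A real control-plane element would push this configuration
        # through the switch's management interface.
        self.cross_connects[vlan] = (in_port, out_port)

def signal_path(switch_hops, vlan):
    """Walk the hops of a computed path and install the VLAN on each switch."""
    for switch, in_port, out_port in switch_hops:
        switch.add_vlan_crossconnect(vlan, in_port, out_port)
        print(f"{switch.name}: vlan {vlan} {in_port} -> {out_port}")

# Usage: a two-hop path through the testbed on VLAN 1200 (hypothetical names).
s1, s2 = EthernetSwitch("hopi-losa"), EthernetSwitch("hopi-chic")
signal_path([(s1, "gi1/1", "gi1/12"), (s2, "gi2/3", "gi2/24")], vlan=1200)
```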
Systems and Control Plane
• Promote and assist adoption of these control plane models by collaborating institutions, organizations, and research teams
  • This will enable inter-domain dynamic provisioning - a key issue for emerging hybrid networks
  • Encourage experimentation in regional and campus environments
• Evaluation and reporting of results
  • Dissemination of results and findings
  • Development of a base of Best Common Practices for designing and deploying interoperable hybrid networks
Applications Integration
• Incorporation of brave new applications
  • Find and adapt applications that can leverage HOPI and that will drive HOPI in relevant [network] research directions
  • Assist end users and campus/regional network personnel in engineering the connection(s) to HOPI for those applications
• Application analysis and design
  • Many applications may need to rethink their relationship with "the network" given the availability of deterministic and dedicated network resources
    • E.g., could an application take advantage of direct InfiniBand links between clusters on separate continents?
    • Do large data sets need pre-staging, or can they be effectively accessed at a distance? (a back-of-the-envelope sketch follows below)
    • Are there common practices, techniques, or modes of thought about how applications are architected that will enable globally distributed applications to take advantage of hybrid networks?
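The pre-staging question above is partly arithmetic: how long a bulk transfer takes at a given dedicated rate, and how much data a remote-access scheme must keep "in flight" on a long path. A back-of-the-envelope sketch with assumed example numbers:

```python
def transfer_hours(dataset_terabytes: float, rate_gbps: float) -> float:
    """Ideal transfer time for a dataset at a sustained dedicated rate."""
    bits = dataset_terabytes * 1e12 * 8
    return bits / (rate_gbps * 1e9) / 3600

def bandwidth_delay_product_mbytes(rate_gbps: float, rtt_ms: float) -> float:
    """Data in flight on a long path; what remote access must keep filled."""
    return rate_gbps * 1e9 * (rtt_ms / 1000) / 8 / 1e6

# Example: a 10 TB dataset over a 10 Gbps lambda with a 150 ms
# transatlantic round-trip time (illustrative numbers, not measurements).
print(f"{transfer_hours(10, 10):.1f} hours to pre-stage 10 TB at 10 Gbps")
print(f"{bandwidth_delay_product_mbytes(10, 150):.0f} MB in flight at 10 Gbps / 150 ms RTT")
```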
Applications Integration
• Middleware issues
  • Assist application adaptation to an "application-specific topology" service layer
    • i.e., the application must request these services of the network... how?
  • Other middleware issues to be explored:
    • How do you implement a book-ahead scheduling and reservation system for scarce network resources? (a minimal sketch follows below)
    • How do we integrate non-network resources (computational clusters, instruments, etc.) and the network resources into a grand unified theory of resource objects for end users?
  • Exploring grid technologies both as a relevant application and as a candidate solution to some of these issues
• Performance analysis and measurement
  • Ensure the applications are getting the full measure of performance from the HOPI testbed - end-to-end through campus, RON, and internationally as well
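For the book-ahead question above, the core mechanism is checking that a new reservation's time window never pushes a resource past its capacity. A minimal sketch, assuming a single schedulable link; the class and method names are assumptions for illustration, and the overlap check is deliberately conservative (it sums every reservation touching the window).

```python
from datetime import datetime, timedelta

class BookAheadLink:
    """A schedulable network resource with a fixed capacity in Gbps."""
    def __init__(self, capacity_gbps: float):
        self.capacity_gbps = capacity_gbps
        self.reservations = []     # list of (start, end, gbps)

    def _load_at(self, start, end) -> float:
        """Worst-case committed bandwidth overlapping [start, end)."""
        return sum(g for s, e, g in self.reservations if s < end and start < e)

    def reserve(self, start, end, gbps) -> bool:
        """Book ahead if the window never exceeds capacity, else refuse."""
        if self._load_at(start, end) + gbps > self.capacity_gbps:
            return False
        self.reservations.append((start, end, gbps))
        return True

# Usage: two applications book ahead on a 10 Gbps link; the third is refused.
link = BookAheadLink(capacity_gbps=10)
t0 = datetime(2005, 11, 1, 9, 0)
print(link.reserve(t0, t0 + timedelta(hours=4), 6))   # True
print(link.reserve(t0, t0 + timedelta(hours=2), 4))   # True
print(link.reserve(t0, t0 + timedelta(hours=1), 2))   # False, would exceed 10 Gbps
```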
Operations and Engineering
• Design and implementation of the testbed
  • Install the HOPI network elements around the country
  • The initial five sites are up and running
  • Planning for the southern route
• Monitor the health of the network elements
  • Ensure availability and notification of failure of equipment in the HOPI testbed
Operations and Engineering
• Systems administration and management of the installed HOPI nodes
  • Access and authorization
  • Keeping software levels secure and up to date (a major effort given the number of boxes and the esoteric demands of the experimental systems and applications software)
  • Inventory control and allocation of ports, modules, etc.
Testbed Support Center Shakedown: iGrid 2005
• Key points of the HOPI demonstrations for iGrid 2005:
  • e-VLBI application - real-time access to radio telescopes, linking them to correlators at the MIT Haystack Observatory
  • GMPLS control plane - porting the DRAGON protocol stacks to manage the Force10 E600 switches
  • Inter-domain provisioning - three exemplar administrative domains will be part of the demo: international, national, and regional
  • International scope:
    • Telescopes in Onsala SE, Westerbork NL, Jodrell Bank UK, Kashima JP, Greenbelt MD, and Westford MA
    • Networks: UKLight, NetherLight, NorthernLight, SUNET, StarLight, HOPI, DRAGON, BOSnet
  • Persistent infrastructure - a new type of demo: the infrastructure will remain in service for use by the end-user community after the demos are over (!)
• Expanded support for Supercomputing 2005:
  • Support for UltraLight
  • Support for the SC05 Bandwidth Challenge
The 30,000 Foot View - HOPI iGrid Demo (schematic, diagram not reproduced): telescope sites (Jodrell Bank UK, Kashima JP, Westerbork NL, Onsala SE, Westford MA, Greenbelt MD), the correlator at the MIT Haystack Observatory, international and national networks (UKLight, NetherLight, NorthernLight, JGN2, StarLight, DRAGON), and HOPI nodes in Seattle, Chicago, Los Angeles, and Washington