
ORNL’s FutureNet Infrastructure

Learn about ORNL's FutureNet Infrastructure, which addresses poor connectivity issues and builds capacity for research networks. Discover the technologies and expansions implemented to improve connectivity.


ORNL’s FutureNet Infrastructure

An Image/Link below is provided (as is) to download presentation Download Policy: Content on the Website is provided to you AS IS for your information and personal use and may not be sold / licensed / shared on other websites without getting consent from its author. Content is provided to you AS IS for your information and personal use only. Download presentation by click this link. While downloading, if for some reason you are not able to download a presentation, the publisher may have deleted the file from their server. During download, if you can't get a presentation, the file might be deleted by the publisher.

E N D

Presentation Transcript


  1. ORNL’s FutureNet Infrastructure W. R. Wing, Network and Cluster Computing, Computer Science and Mathematics, Computing & Computational Sciences Directorate

  2. History and context
  • In 2003 ORNL had a problem: poor connectivity
    • The ESnet connection was only OC-12
    • A Qwest commercial OC-192 was costing $400,000/year
  • Three simultaneous network research awards were received:
    • UltraScience Net (DOE circuit-switched research network)
    • CHEETAH [National Science Foundation (NSF) circuit-switched research network]
    • ETF (NSF extension of TeraGrid)
  • ORNL pooled funds to build capacity rather than lease it

  3. Bought 20-year indefeasible right of use leases with Qwest (Red) and TVA (Yellow)

  4. Arranged an asset trade with National LambdaRail for membership rights

  5. Infrastructure, not a network
  • On it, we’ve built and added to three research networks and DOE’s operational network, ESnet:
    • DOE’s UltraScience Net (circuit switching with secure control and robust reservations)
    • NSF’s CHEETAH and Internet2’s HOPI (which will help study inter-domain issues)
    • ORNL’s connection to TeraGrid
  • Options to connect to EVERYTHING worth connecting to

  6. UltraScience Net: Fully-meshed, circuit-switched, dedicated-wave, research

  7. CHEETAH: MPLS/GMPLS-based research

  8. Of course, the real goal has been connectivity

  9. So, how are we really doing it?
  • Dense wavelength division multiplexing (DWDM) transport gear from Ciena
    • Latest (3rd-generation) gear with 25-GHz lambda spacing
    • C-band tunable transponders
  • CoreDirector switches for lambda and sub-lambda switching
    • Supports Generic Framing Procedure (GFP) mapping, link capacity adjustment scheme (LCAS), and virtual concatenation (VCAT)
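  The numbers above can be sanity-checked with some back-of-the-envelope arithmetic (a sketch, not from the slides: the C-band width and STS-3c payload rate are standard figures assumed here, not quoted by the presenter):

```python
# Rough capacity arithmetic for the DWDM/VCAT setup described on the slide.

C_BAND_GHZ = 4_400   # approximate usable C-band width (~1530-1565 nm)
SPACING_GHZ = 25     # channel spacing quoted on the slide

channels = C_BAND_GHZ // SPACING_GHZ
print(f"~{channels} lambdas at {SPACING_GHZ} GHz spacing")  # ~176 lambdas

# VCAT lets a switch build a right-sized sub-lambda pipe out of N SONET
# members, e.g. an STS-3c-Nv group; one STS-3c payload is ~149.76 Mb/s.
STS3C_MBPS = 149.76

def vcat_capacity(members: int) -> float:
    """Approximate payload of an STS-3c-Nv virtual concatenation group."""
    return members * STS3C_MBPS

print(f"STS-3c-7v carries ~{vcat_capacity(7):.0f} Mb/s")  # ~1048 Mb/s
```

  The second calculation shows why GFP + VCAT matter for connectivity: seven STS-3c members give just over 1 Gb/s, a common mapping for carrying Gigabit Ethernet over SONET, and LCAS lets members be added or removed without tearing the group down.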

  10. The project isn’t finished
  • Adding extensions to Memphis, Oxford, and Starkville
    • These pick up the University of Memphis, Ole Miss, and Mississippi State
  • Adding an extension through Cincinnati to Columbus
  • Providing support for a NASA lambda through Nashville to Atlanta

  11. It will look like this when complete

  12. Contacts
  W. R. Wing
  Computing & Computational Sciences Directorate
  (865) 574-8839
  wingwr@ornl.gov
  Wing_11/2006
