
MAX Research Activities


Presentation Transcript


  1. MAX Research Activities – Jerry Sobieski, Director, Research Initiatives – November 29, 2007

  2. Current Projects
  • DRAGON
    • Jerry Sobieski (PI)
    • Fiona Leung (Systems/Software Development Engineer)
    • NSF granted a 1-year no-cost extension; finishing up the project.
    • DRAGON software is seeing substantial interest: I2 DCS, ESnet
  • ATDNet-V2
    • Bill Babson (Project Lead)
    • Contract being reduced due to IRU arrangements directly with Qwest for fiber costs
    • 2nd one-year extension coming up for renewal
  • HOPI – Testbed Support Center
    • Chris Tracy (Project Lead, Optical/Network Engineering)
    • Jarda Flidr (System/Software Development Engineer)
    • Project is being reduced due to I2 cost reduction efforts
    • Adapting DRAGON software to interoperate with Ciena & Cisco equipment, the DICE WS interface, and the OSCARS front end
  • LTS Application Specific Topologies
    • Ladan Gharai (Research Scientist – contract)
    • Developing architectural approaches to building a real-time data (HD video) distribution architecture using DRAGON, the control plane, and dedicated network light paths (ASTs et al.)

  3. ATDNet V2
  • Sponsor: Naval Research Lab
  • Participants: NRL, LTS, DISA, MIT-LL
  • Major Activities:
    • NRL entered a direct contract with Qwest for fiber
      • Migrating to "rings" from "segment" pricing
      • Moving to a 20-year IRU with only annual maintenance
    • Reengineering BOSSnet
      • Replaced old MONET gear with Ciena CoreStream
      • Including a 10 Gbps wave for DRAGON/E-VLBI (MIT Haystack)
      • 40 Gbps
    • Optical peering with DRAGON
      • Tunable transponders interconnecting DRAGON and GIG-EF (ATDnet) are in place and under test – we believe these are the first such deployment of tunables in the R&E community
  • Kudos: Bill Babson (MAX)

  4. DRAGON
  • Sponsor: National Science Foundation – Experimental Infrastructure Networks (EIN)
  • Major Activities:
    • DRAGON is now in its last year. As of Aug 08, there is no further NSF support for the DRAGON project.
    • Between now and then, the DRAGON testbed must transition to a cost-recovered project in order to maintain operations.
    • DRAGON will morph into a service offering:
      • Dynamic circuit capabilities – still experimental (i.e. flexible and maturing), but usable now
      • Cost recovered from the participants
    • DRAGON will provide the initial access to Internet2 DCS and act as the regional gateway to reach DCS (see the sketch below for the flavor of such a circuit request)
      • Layer 2 (Ethernet-framed) dynamic circuits up to 2.5 Gbps
      • SONET TDM capability will likely be available in the CY08 time frame
    • DRAGON will continue to work with R&E organizations to assist in deployments and to implement needed features as time and funds allow.
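
To give a concrete feel for the Layer 2 dynamic circuit service described above, here is a minimal, purely illustrative sketch of a client-side reservation request. The CircuitRequest fields, the endpoint names, and the reserve() call are hypothetical and do not represent the actual DRAGON API or naming conventions.

```python
# Hypothetical sketch of requesting a Layer 2 dynamic circuit.
# Not the DRAGON API; names and fields are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class CircuitRequest:
    src_endpoint: str        # edge port at the source site (hypothetical naming)
    dst_endpoint: str        # edge port at the destination site
    bandwidth_mbps: int      # Ethernet-framed circuits up to 2500 Mbps
    start: datetime
    duration: timedelta


def reserve(request: CircuitRequest) -> str:
    """Placeholder for a control-plane reservation call.

    In a real deployment this would hand the request to the GMPLS
    control plane, which signals the path and returns a circuit ID.
    """
    return "circuit-0001"


req = CircuitRequest(
    src_endpoint="max:clpk:ge-1/0/0",
    dst_endpoint="max:mcln:ge-2/0/0",
    bandwidth_mbps=1000,
    start=datetime.now(),
    duration=timedelta(hours=4),
)
print(reserve(req))
```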

  5. The DRAGON Optical Layer [diagram: Qwest and Level3 fiber rings interconnecting MAX, UMBC, LTS, GSFC, NGC, NRL, and ACCESS sites (CLPK, DCGW, DCNE, MCLN, ARLG and their -RE counterparts), with a new tunable lambda to MCLN-RE via BOSnet; built on Movaz RayROADM MEMS wavelength switches and Movaz RayExpress wavelength add/drop mux/demux]

  6. The DRAGON L2SC Layer [diagram: Layer 2 switching-capable topology connecting UMBC, MAX, LTS, NIH/NLM, GSFC, MIT/Haystack, AMES, UIC/TeraFlow, ISIE, GMU, NGC, and ACCESS across the CLPK, DCGW, DCNE, ARLG, and MCLN nodes, with a link to HOPI/DCS]

  7. Creating an Experimental Networking Service
  • MAX has a state-of-the-art, world-class, fiber-based, advanced networking testbed in the DRAGON facilities. We should make every effort to retain this experimental facility beyond the sunset of the DRAGON project itself.
  • As the NSF sponsorship for DRAGON winds down, we want to migrate the DRAGON testbed and related activities to a cost-recovered "Experimental Network Service".
  • This "service" will provide a flexible multi-layer network environment focused on deploying and/or supporting new and experimental networking concepts, broadly construed.
  • We expect the service will require ~$200K-$400K/yr to maintain the fiber infrastructure, colo facilities, maintenance, and operational/engineering support (beginning in FY09).
  • The facility will provide access to fiber, waves, unconventional framing, colo space, and control plane capabilities (sounds somewhat like a GENI microcosm…).
  • The facility is able to support real applications – it is reasonably stable, supports circuit capabilities to 10 Gbps, and can support 40G (and potentially 100G).
  • Be imaginative!

  8. Creating an Experimental Networking Service
  • We will be assembling a Planning Working Group consisting of key personnel from existing DRAGON-related network research activities and any additional research or development activities (current or proposed) that may be able to leverage this facility going forward.
    • The WG will review the core infrastructure and make recommendations on how to structure the cost recovery model.
    • The WG will recommend a steering process to coordinate activities and priorities of the facility:
      • i.e. how do we address a broad set of needs and share the resources?
      • How do we coordinate activities among diverse projects?
  • The initial WG will be broadly inclusive – anyone (organization or project) that is considering participation is invited.
    • We ask everyone to contact any parties on their campus(es) that may be able to use the facility and invite them to join the Planning WG.
  • Target is to have the plan ready in the April 08 timeframe.
  • Service to officially begin around CY08 Q3 (details TBD).

  9. HOPI Testbed Support Center
  • Sponsor: Internet2
  • Major Activities:
    • HOPI is being decommissioned.
      • It was successful in that it provided a testbed for a number of circuit service concepts (CHEETAH, PHOEBUS, DRAGON, DoE experiments, international reach, etc.)
    • TSC personnel are porting DRAGON + DICE control plane concepts to the new Internet2 Dynamic Circuit Network:
      • Porting the DRAGON GMPLS control plane to interoperate with Ciena CDs
      • Developing the AST XML interface and GUI tool
      • Developing the DICE Web Services interface for interdomain topology distribution and provisioning
    • Hands-On Dynamic Circuit Services Workshops (more later)
    • HOPI TSC contract being reduced as part of I2 cost containment efforts

  10. Dynamic Circuit Services
  • As part of the "DICE" project, the MAX team working on HOPI has been collaborating with Internet2, GEANT, and ESnet to create a common approach to dynamic provisioning across major network domains.
    • The DICE Web Services interface has been developed to distribute topology information between ESnet, GEANT, I2-DCN, and any other participating network (see the sketch below for the flavor of this exchange).
    • The OSCARS package (DoE/ESnet) has been incorporated to provide authentication and authorization capabilities within I2-DCN.
    • DRAGON code has been adapted to cover the Ciena CoreDirectors to provide Ethernet-over-SONET provisioning within the DCN network.
    • Internet2 has deployed this software as the control plane for the Dynamic Circuit Network.
  • DRAGON will connect to DCN beginning Dec 2007 under the bundled IP+DCN arrangement announced in September for the Internet2 Network connectors:
    • DRAGON will provide the circuit services between MAX regional organizations, I2-DCN, and any other networks that want to interoperate using this control plane architecture.
    • MAX's production IP connection to I2 is 2.4 Gbps, so the DCN connection will support circuits up to 2.4 Gbps.
  • MAX core staff will host another DCN workshop in early spring for MAX organizations wishing to deploy these services and begin working with them.
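
The following is a minimal sketch of the kind of interdomain topology exchange the DICE Web Services interface is meant to enable: each domain publishes an abstracted summary of its edge points and peers, and a participant merges those summaries into an interdomain adjacency view. The data shapes, field names, and the merge_topologies() function are invented for illustration and are not the DICE schema or API.

```python
# Illustrative only: per-domain summaries and a merge into an interdomain
# adjacency map. Domain identifiers below are examples, not published schemas.

local_domain = {
    "id": "dcn.internet2.edu",
    "edge_points": ["i2:newy", "i2:wash"],
    "peers": {"i2:wash": "max.gigapop.net"},   # edge point -> neighboring domain
}

remote_domains = [
    {"id": "max.gigapop.net", "edge_points": ["max:mcln"],
     "peers": {"max:mcln": "dcn.internet2.edu"}},
    {"id": "es.net", "edge_points": ["esnet:wash"],
     "peers": {"esnet:wash": "dcn.internet2.edu"}},
]


def merge_topologies(domains):
    """Build a simple interdomain adjacency map from per-domain summaries."""
    adjacency = {}
    for domain in domains:
        for _edge, peer in domain["peers"].items():
            adjacency.setdefault(domain["id"], set()).add(peer)
            adjacency.setdefault(peer, set()).add(domain["id"])
    return adjacency


print(merge_topologies([local_domain] + remote_domains))
```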

  11. Dynamic Circuit Services
  • MAX staff, in conjunction with personnel from ISI-East, have just completed an important demonstration of this capability earlier this month at Supercomputing 2007 in Reno, NV.
    • Dynamically allocated, bandwidth-guaranteed connections across multiple international domains.
    • Included: Internet2, SCinet, ESnet (Fermi & Brookhaven), UvA, GEANT2, GRnet, HEAnet, PIONIER, Nortel, NYSERnet, GPN, Merit, NoX
  • Kudos for the tremendous effort from:
    • Tom Lehman (USC ISI-East)
    • Xi Yang (USC ISI-East)
    • Chris Tracy (MAX)
    • Jarda Flidr (MAX)
    • Fiona Leung (MAX)
    • Bill Babson (MAX)

  12. Application Specific Topologies
  • Sponsor: LTS (via UMIACS contract)
  • Synopsis:
    • Application Specific Topologies (ASTs) consist of formal XML descriptions of distributed applications (an illustrative sketch follows this slide).
    • Develop the ability to dynamically establish customized network topologies that support survivable network architectures, content distribution networks, virtual organizations, etc.
    • This project will build on basic functionality developed in DRAGON, extending the protocols and middleware to support real-time topology reconfiguration, hierarchical specifications, and "grid" integration.
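
As a purely illustrative sketch of what a "formal XML description of a distributed application" might look like, the snippet below builds a tiny AST-style document for an HD video distribution scenario. The element and attribute names are invented for this example and are not the project's actual AST schema.

```python
# Illustrative AST-style XML; element/attribute names are hypothetical.
import xml.etree.ElementTree as ET

ast = ET.Element("applicationTopology", name="hd-video-distribution")

# Nodes of the application: a source, a relay, and a viewer site.
for site, role in [("max:clpk", "source"), ("max:mcln", "relay"), ("umbc", "viewer")]:
    ET.SubElement(ast, "node", id=site, role=role)

# One dedicated circuit between the source and the relay.
link = ET.SubElement(ast, "link", src="max:clpk", dst="max:mcln")
ET.SubElement(link, "bandwidth", unit="Mbps").text = "1500"

print(ET.tostring(ast, encoding="unicode"))
```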

  13. Current Focus Areas for Future Work
  • Hybrid networks and experimental network facilities
    • DRAGON, HOPI, DCS, etc.
    • Early adopter facility for regional fanout of experimental dynamic services
  • Resilient architectures
    • Understanding how to map theory to practice in the R&E environment to construct provably survivable networks supporting business & science processes
  • HD video/visualization and distributed data storage services
    • HD (video and visualization) source, capture, distribution, transcoding
    • How can MAX support a regional distributed video services capability?
  • GENI – Global Environment for Network Innovation
    • Major NSF initiative to develop a network research facility
  • Experimental networks
    • Moving DRAGON to self support

  14. Growing data universe
  • Emerging e-science applications are creating extremely large sensor data sets, computationally intensive analysis workflows covering these sets, and large intermediate data storage (capacity and performance) requirements.
    • "It takes 16 hours to store the results of an 8 hour computation on the new Cray at ORNL" (Nagi Rao) – a rough back-of-envelope illustration of this mismatch follows this slide.
  • Observation: "e-science" requirements are diverging dramatically from the requirements of "normal" network users/applications.
    • Mostly in terms of the relationships between computational, sensor, and storage facilities at the high end. (Few users move petabyte data sets, and fewer still move them to/from their desktop machine.)
    • "Normal" services may accelerate if/when video content grows and as HD content becomes both expected and more common.
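
A rough back-of-envelope calculation makes the compute-versus-storage mismatch concrete. Only the 8-hour/16-hour ratio comes from the slide; the data volume and effective storage bandwidth below are assumed purely for illustration.

```python
# Back-of-envelope sketch; the 8h/16h ratio is from the slide,
# the data volume and bandwidth are assumed for illustration only.
compute_hours = 8
output_tb = 100                 # assumed size of the result set
storage_bandwidth_gbps = 14     # assumed effective write/transfer rate

# 1 TB = 8000 Gb; divide by (Gb/s * 3600 s/h) to get hours.
store_hours = (output_tb * 8000) / (storage_bandwidth_gbps * 3600)
print(f"Storing {output_tb} TB at {storage_bandwidth_gbps} Gb/s takes "
      f"~{store_hours:.0f} h vs {compute_hours} h of computation")
```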

  15. Conventional routed IP services vs. project-specific networks [diagram: campuses attached to RONs (East, Central, South) and a national/international core with DCS, alongside a private network linking user clusters A and B to a user data repository]

  16. Application Portals [diagram: a web portal driving a workflow (steps 1-3) across an input storage repository, a compute cluster, a federated cluster, and an output storage repository]

  17. Globally Supported ASTs [diagram: example ASTs for E-VLBI, HEP, and BioInfo applications]

  18. Dynamic Circuit Services Workshop
  • Purpose:
    • Disseminate technical expertise in the design and direct deployment of GMPLS-based dynamic circuit networks.
    • Cover current state-of-the-art control plane architecture concepts, issues, and ongoing efforts.
    • Provide practical and hands-on experience building a functional DCS network.
  • Intended audience:
    • Network engineering personnel and early adopters
    • Those responsible for defining regional and/or campus network services and architecture
    • E-science applications teams needing/wanting flexible high-capacity circuits
  • Two-day workshop:
    • Brief overview of the GMPLS technologies and the DICE architecture
    • Two intense days of configuring, testing, and utilizing increasingly sophisticated DCN-based network environments

  19. Build this in two days [diagram: a small multi-domain testbed showing the intra-domain control plane, inter-domain control plane, and data plane] (a toy sketch of intra-/inter-domain path stitching follows)
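
As a toy illustration of what the intra-domain and inter-domain control planes in the diagram above have to accomplish, the sketch below computes a path inside each (invented) domain and then stitches the segments together at hand-off points. It is not workshop material and not the DRAGON/DICE path computation code.

```python
# Toy sketch: per-domain path computation plus inter-domain stitching.
# Domain names, nodes, and hand-off points are invented for illustration.
from collections import deque


def shortest_path(graph, src, dst):
    """Plain BFS path computation over an unweighted intra-domain topology."""
    prev, queue, seen = {}, deque([src]), {src}
    while queue:
        node = queue.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                queue.append(nbr)
    return None


# Hypothetical intra-domain topologies and the hand-off points between them.
domains = {
    "regional": {"campus-A": ["core-1"], "core-1": ["campus-A", "handoff-X"], "handoff-X": ["core-1"]},
    "backbone": {"handoff-X": ["bb-1"], "bb-1": ["handoff-X", "handoff-Y"], "handoff-Y": ["bb-1"]},
    "remote":   {"handoff-Y": ["edge-1"], "edge-1": ["handoff-Y", "campus-B"], "campus-B": ["edge-1"]},
}

segments = [
    shortest_path(domains["regional"], "campus-A", "handoff-X"),
    shortest_path(domains["backbone"], "handoff-X", "handoff-Y"),
    shortest_path(domains["remote"], "handoff-Y", "campus-B"),
]

# Chain the per-domain segments, dropping the duplicated hand-off node at each seam.
end_to_end = segments[0] + [hop for seg in segments[1:] for hop in seg[1:]]
print(end_to_end)
```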

  20. Schedule
  • NYSERnet, New York City – Mar 14-15 – complete
  • DRAGON internal mini-workshop – April 11-12 – complete
  • MAX, College Park, MD – May 2-3 – complete
  • NASA Ames, Mountain View – May 30-31 – complete
  • LEARN, Houston, TX – Sep 14-15 – complete
  • Joint Techs 1, Honolulu, HI – Jan 18-19, 2008
  • Joint Techs 2, Honolulu, HI – Jan 25-26, 2008
  • Others in 2008 TBD
  • Since the instructors and equipment are based at MAX, we can hold additional workshops at MAX fairly easily if necessary or desired
  • Contact: Jerry Sobieski at MAX
  • FFI: http://events.internet2.edu/2007/DCS/

  21. Dynamic light path services are viable
  • We are still in the early adopter stage of these technologies (!)
    • GMPLS protocols continue to evolve
    • Hardware capabilities are evolving very rapidly to support them
    • The R&E community's understanding of and experience with global DCS will continue to grow
  • But it is usable – we want to see these capabilities used for real work as much as possible.
    • We want to create a community of users that will push the core capabilities, operational management, reliability, robustness, usability, and applicability of these technologies.
  • Contact me if your staff or faculty would like to participate in these activities.

  22. GENI
  • Long-term initiative – 20 years
    • "Continental scale" instrument for network research
    • $300M-$400M in the 2010/11 timeframe
    • Details TBD
  • Current efforts through the GPO are to develop viable service concepts through proof-of-concept work, pilot projects, papers, etc.
    • Reduce risk: build the case that this instrument will indeed provide the functionality and be effective as we build and operate the real GENI facilities
    • $7.5M in FY08
    • Call expected in Dec 07, proposals due Feb 08
  • MAX can play an important role developing the "Optical Substrate" and the "Narrow Waist" resource management (via ASTs)
    • The optical substrate can leverage the DRAGON testbed
    • ASTs are being extended conceptually to support a broader set of resource management and allocation functions
  • From a MAX Research perspective, GENI will be a strategically very important program that MAX and its member institutions should be actively part of.
    • Long term
    • Revolutionary service concepts – optical, network layer (and above), RF, international

  23. GENI Physical Layer Testbed(s) (Example) [diagram: base waves for network testbeds, RF for wireless testbeds, and a physical-layer optical testbed]

  24. A GENI Physical Layer Facility integrating the DRAGON Testbed [diagram: the Washington, DC (DRAGON) metro – MAX, Goddard Space Flight Center (GSFC), University of Maryland College Park (UMCP), George Washington Univ., USC Information Sciences Institute East (ISIE), and the CLPK/DCNE/DCGW/ARLG/MCLN nodes with connections to I2, NLR, and Level3 – plus long-haul dark fiber (~300 mi) to a New York City metro, a possible Pittsburgh metro (Phase 2?), and the National Computational Science Alliance (NCSA)]

  25. GENI Limited Longhaul Physical Layer Testbed (Example) [diagram: dark fiber and GENI base waves along long-haul huts linking WDC (UMD, MAX), PIT (CMU, PSC), PRN, COL, and NYC, with reach toward BOS, CHI, and RDU; colo/fiber/conduit confluence points host upper-layer GENI nodes and an optical research laboratory]

  26. Ongoing Related Activities
  • Internet2 Newnet Technical Advisory Committee
    • Sobieski – "Transport Services" WG
    • Magorian – Commodity Peering WG
  • Optical Network Testbeds (ONT4) conference program committee
    • Organized by the Large Scale Networking WG of the federal NCO for Networking and Information Technology Research and Development (reports to OSTP)
    • ONT4 will be held at Fermi National Lab, Mar 31 – Apr 2, 2008
    • The MAX consortium (by merit of the DRAGON and ATDNet programs) has been well represented at these meetings
    • Resulting reports feed into the OSTP recommendations for federal budget priorities
  • DoE Review Panel
    • Charge: report on the effectiveness of ESnet; assess the effectiveness of DoE's long-term network R&D programs; develop recommendations (10-year timeframe) for network R&D objectives and priorities.
    • Report to be presented to Raymond Orbach, Under Secretary of the Department of Energy, for Advanced Scientific Computing Research (Jan 2008)

  27. Recent Presentations and Workshops
  • GLIF/DICE conference – Lehman (ISI-East) – Copenhagen, May 07
  • UvA Amsterdam meetings – DRAGON team – May 07
  • IPOP 2007 – Bijan Jabbari (GMU) – Tokyo, June 07
  • Questnet 2007 – Sobieski – Cairns, Australia, July 07
  • E-VLBI Workshop – Sydney, Australia, July 07
  • CANS 2007 – Xi'an, China, Aug 07
  • Broadnets 2007 – Sobieski – RTP, NC, Sept 07
  • I2 Fall Member Meeting – Lehman/Tracy/Flidr – Oct 07
  • Supercomputing 2007 – Lehman/Yang/Flidr et al. – Reno, Nov 07

  28. Thanks!
  • Comments, input, and thoughts gladly encouraged and accepted
  • Jerry Sobieski
    • 301-346-1849 mobile
    • 301-314-6662 office
    • Jerrys(at)maxgigapop.net
