100 Gbps Services, Technologies, and Facilities: Basic Issues, Early Implementations, Demonstrations
Wendy Huntoon, Director, Advanced Networking (huntoon@psc.edu), Pittsburgh Supercomputing Center/3Rivers Optical Exchange
Abdella Battou, Interim Executive Director (abattou@maxgigapop.net), Mid-Atlantic Crossroads (MAX)
Joe Mambretti, Director (j-mambretti@northwestern.edu), International Center for Advanced Internet Research (www.icair.org), Northwestern University; Director, Metropolitan Research and Education Network (www.mren.org); Co-Director, StarLight (www.startap.net/starlight)
Quilt Annual Meeting, Portland, Oregon, September 2011
Drivers for 100 Gbps Services • Need for Massive Additional Capacity • Support for Capacity Well Beyond the Aggregation of Millions of Small Flows • Support for Extremely Large Individual Streams (Including End-To-End) • Communications for Data-Intensive Science (e.g., Petascale Science) • Communications for Specialized, Highly Distributed Environments • Environments Directly Controlled By Edge Processes (Application-Specific Network Services) • Highly Controllable Science Workflows • Beginning the Migration From Centralized NOCs • Science Clouds (vs Consumer and Enterprise Clouds) • Many New Applications and Services That Cannot Be Supported Today
NSF Academic Research Infrastructure Program • National Science Foundation Academic Research Infrastructure Program (ARI) • Traditionally, This Program Has Funded Structures, e.g., Creating and Rehabbing Specialized Buildings and Laboratories • Currently, For the First Time, The Program Allows Communications Infrastructure To Be Included As "Structure" • ARI Awards Are Limited To One Per Campus, Requiring Competition Among Local Organizations • The Funding Is Directed At Supporting Science Research, Not General R&E Services
Four Worlds of 100 Gbps • 100 Gbps Routing • 100 Gbps Ethernet • 100 Gbps Optical • Other…e.g., Fiber Bundles, Interconnections, Control Planes, etc.
100 Gbps Services: Routing • 100 Gbps Routing • Available Today Based on Proprietary Technology • Optimal Network Designs Place Such Devices At the Network Edge vs Network Core
100 Gbps Services • 100 GigE L2 Switching • Standard: IEEE 802.3ba • Technical Changes Finalized In July 2009 • Formal Final Approval Took Place In July 2010 • Beta Products Available Q4 2010 • 1st Commercial Products Expected End of Q4 2011 • Provides for a Line Rate of 103.125 Gbps
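The 103.125 Gbps figure is not arbitrary: it is the 100 Gbps MAC rate plus 64b/66b encoding overhead. A minimal sketch of the arithmetic (the 4-lane split assumes a PHY such as 100GBASE-LR4):

```python
# Sketch: how 802.3ba's 103.125 Gbps line rate follows from 64b/66b encoding.
MAC_RATE_GBPS = 100.0          # payload rate seen by the MAC layer
ENCODING_OVERHEAD = 66 / 64    # 64b/66b adds 2 sync bits per 64 data bits

line_rate = MAC_RATE_GBPS * ENCODING_OVERHEAD
print(f"Serial line rate: {line_rate} Gbps")       # 103.125 Gbps

# Split across 4 optical lanes (e.g., 100GBASE-LR4):
print(f"Per-lane rate: {line_rate / 4} Gbps")      # 25.78125 Gbps
```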
100 Gbps Services: WAN Side/Line Side • 100 Gbps Optical Switching • Standard: ITU-T G.709 v3 (ODU4 100G) • ODU4/OTU4 Format, Designed to Transport 100GbE (OTU4 = ODU4 With FEC Included) • Formal Final Approval Took Place In Dec 2009 • Beta Products Available Today • Ref: Demonstrations at SC10 • 1st Commercial Products Available End of Q3 2011
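For concreteness, the nominal ODU4 and OTU4 rates follow from multipliers defined in ITU-T G.709; a quick sketch (values per the standard, shown only to make the ODU4-vs-OTU4 distinction concrete):

```python
# Sketch: nominal G.709 v3 rates for the 100G container.
BASE_KBPS = 99_532_800                 # common base rate used in the ODU4 definition

odu4 = 239 / 227 * BASE_KBPS / 1e6     # ODU4: payload plus ODU overhead
otu4 = 255 / 227 * BASE_KBPS / 1e6     # OTU4: ODU4 plus FEC columns

print(f"ODU4 ~ {odu4:.3f} Gbps")       # ~ 104.794 Gbps
print(f"OTU4 ~ {otu4:.3f} Gbps")       # ~ 111.810 Gbps (what actually goes on the line)
```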
100 Gbps Services: WAN Side/Line Side • 100 Gbps Switch Design Issues • Optical/Electrical Interfaces • Data Plane Processing • Control Plane Processing • Switch Fabrics • Modulation Techniques: e.g., Proprietary Dual-Polarization Quadrature Phase Shift Keying (DP-QPSK), a Format Used With 50 GHz Channel Spacing (4 Signals In the Same Channel Spacing) • Power • Etc.
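A short back-of-the-envelope sketch of why DP-QPSK fits a 100G wave into 50 GHz of spectrum; the OTU4 line rate is taken from the previous slide, and the 4 bits/symbol figure assumes 2 bits per QPSK symbol on each of two polarizations:

```python
# Sketch: fitting a ~112 Gbps OTU4 signal into a 50 GHz DWDM channel with DP-QPSK.
otu4_line_rate_gbps = 111.81    # OTU4 rate incl. FEC (see previous slide)
bits_per_symbol = 2 * 2         # QPSK: 2 bits/symbol, times 2 polarizations

symbol_rate_gbaud = otu4_line_rate_gbps / bits_per_symbol
print(f"Symbol rate ~ {symbol_rate_gbaud:.1f} Gbaud")   # ~ 28 Gbaud, inside 50 GHz

channel_spacing_ghz = 50
print(f"Client spectral efficiency ~ {100 / channel_spacing_ghz} bit/s/Hz")
```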
100 Gbps Services: Client Side • 100 GigE Physical Layer Standard (PHY) Objectives • Preserve 802.3/Ethernet Frame Format Based On the 802.3 MAC • Preserve Min/Max Frame Size of the 802.3 Standard • Provide PHY Specifications For Single-Mode Optical Fiber, Multi-Mode Optical Fiber (MMF), Copper Cables, and Backplanes • Support a Bit Error Ratio (BER) Better Than or Equal to 10⁻¹² at the MAC/PLS Service Interface • Provide Appropriate Support for the Optical Transport Network (OTN) Standard
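To make the BER objective concrete, here is a quick back-of-the-envelope calculation (not part of the standard's text) of what 10⁻¹² means at full line rate:

```python
# Sketch: what a 1e-12 BER target means in practice at 100 Gbps.
bit_rate = 100e9        # bits per second at the MAC/PLS service interface
ber = 1e-12             # worst-case bit error ratio allowed by the objective

errors_per_second = bit_rate * ber
print(f"Expected errors/s: {errors_per_second}")                  # 0.1
print(f"Mean time between bit errors: {1 / errors_per_second} s") # one error every ~10 s
```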
Optical Internetworking Forum (OIF) 100 Gbps Standards • Physical and Link Layer • 100G Long-Distance DWDM Transmission Framework • The "Framework" Project Documents High-Level System Objectives for Initial Implementations of 100G Long-Haul DWDM Transmission. • It Identifies the Transceiver Module Functional Architecture and Decomposes That Architecture Into a Number of Technology Building Blocks. • This Project Is Directed At Developing Consensus Among a Critical Mass of Module and System Vendors On Requirements For Specific 100G Technology Elements, To Create the Largest Possible Market for These Components. • It Complements and Builds On Work Defining 100G Ethernet in the IEEE and the Optical Transport Hierarchy (OTH) in the ITU-T.
One Example: Optical Internetworking Forum 100 Gbps Standards • 100G Long-Distance DWDM Integrated Photonics Receiver • This Project Specifies Key Aspects of Integrated Receivers for Coherent DWDM Applications. • Initially Targeting 100G PM-QPSK Applications, the Project Intends to Remain Modulation-Format and Data-Rate Agnostic Whenever Practical, to Maximize Applicability to Future Market Requirements. • Key Aspects of the Project Include Definition of: • (1) Required Functionality • (2) High-Speed Electrical Interfaces • (3) Low-Speed Electrical Interfaces • (4) Optical Interfaces • (5) Mechanical Requirements • (6) Environmental Requirements
100 Gbps Initiative Partners • Pittsburgh Supercomputing Center • 3Rivers Optical Exchange • Mid-Atlantic Crossroads • Metropolitan Research and Education Network (MREN) • StarLight International/National Communications Exchange • International Center for Advanced Internet Research (iCAIR), Northwestern University • University of Illinois at Chicago • Argonne National Laboratory • Fermi National Accelerator Laboratory • National Center for Supercomputing Applications • CHI MAN, the Chicago Metro Area Optical Transport Facility for National Research Labs and StarLight (Multiple 100 Gbps) • Illinois Wired/Wireless Infrastructure for Research and Education (I-WIRE) • University of Chicago • Energy Sciences Network (ESnet) 100 G Advanced Network Initiative • NASA Goddard Space Flight Center • CANARIE, SURFnet • National Oceanic and Atmospheric Administration • etc.
3ROX ARI Project • Regional Network Renovation: Upgrading the 3ROX Virtual Research Environment • Start date: 9/2010 • Duration: 36 months • Objectives • Upgrade the 3ROX DWDM infrastructure • Upgrade the Layer2/Layer3 infrastructure to be 100 GE capable • Provide a resource pool to researchers
3ROX ARI Status • DWDM Upgrade • Completed in March 2011 • Features included: • Increased lambda capacity, from 4 lambdas per segment to up to 40 lambdas per segment • Added ACM as a PoP on the ring • Reduced the number of transponders required to traverse the metro area • Support for 40 GE and 100 GE lambdas
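A rough capacity comparison makes the scale of the upgrade concrete. Note that the per-lambda rates below (10G before, 100G after) are assumptions for illustration; the slide states only the lambda counts and that 40/100 GE lambdas are now supported:

```python
# Rough capacity comparison for the 3ROX DWDM upgrade (per-lambda rates assumed).
before = 4 * 10      # assumed: 4 lambdas/segment at 10 Gbps
after = 40 * 100     # assumed: up to 40 lambdas/segment at 100 Gbps

print(f"Before: {before} Gbps/segment; after: up to {after / 1000} Tbps/segment")
print(f"Potential increase: {after // before}x")
```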
3ROX ARI Status • 100 GE • RFP out for 100 GE optical equipment • Responses due September 23; significant interest • Layer2/Layer3 • Minimum capability is to support a 100 GE Internet2 connection • Ideally, support switched 100 GE • Support for OpenFlow optional • RFP to be released by the end of September • Responses due 3 weeks after release
3ROX ARI Status • Create Research Resource Pool • Establishes a pool of resources available to 3ROX-associated researchers • Current Resources • 10 GE transponders • 10 GE switch ports and optics • 3ROX PoP co-location space and power, as available • Pending • 100 GE transponders • 100 GE switch ports and optics
3ROX ARI Status • Process • Submit a brief proposal requesting resources • Reviewed by the 3ROX RRP Review Committee • If approved, resources are allocated for a 6- to 12-month period • Researchers can request a time extension for longer projects • Program Goal • Reduce the barrier for researchers to utilize network infrastructure, including 100 GE • Provide a mechanism for researchers to experiment with network resources before committing funds
MAX ARI Project • NSF 09-562, submitted 08/24/2009 • OIA Academic Research Infrastructure • Starting Date: 05/03/10 • Duration: 24 months • Objectives • Enhance the existing Research and Education metro network with 100G equipment to support data-intensive science exploration, modeling, and discovery • Upgrade the 10G connection to Internet2 to 100G • Upgrade the NGIX-East to 100G
MAX 100G Status • Packet Optical Switching RFI issued on 4/6/2010 • Selected Fujitsu on 5/27/2010 • Acquired 6 Fujitsu 9500 Packet Optical platforms on 9/20/2010 • Deployed them on 2/28/2011 • 100G transponders expected end of September 2011 • 100G L2 Ethernet Switching RFI issued on 6/6/2011 • Internally selected Brocade MLX-E on 08/22/2011 • Order for 4 100G ports awaiting approval from NSF
E-science Applications • It is the nature of science problems that they deal with a continuum of time and space parameters, which must be quantized to fit the available network throughput. For e-science experiments, the question is never "how much throughput is needed to support an experiment?" but "what accuracy can an experiment achieve given the network throughput?" For example, a climate modeling experiment can work with temperature and pressure gradients over a 3-D mesh that is ten times finer in each spatial dimension if network throughput improves by three orders of magnitude. Thus, credible science experiments tend to, and in fact are designed to, consume as much network throughput as possible. We had better start working on the next higher rate: a 1 Tbps interface!
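The climate example works out as follows; a minimal sketch of the scaling argument:

```python
# Refining a 3-D mesh 10x in each spatial dimension multiplies the data
# exchanged per time step by 10^3, i.e., three orders of magnitude more
# throughput is needed to move it in the same wall-clock time.
refinement = 10
dimensions = 3

data_growth = refinement ** dimensions
print(f"Data volume grows {data_growth}x")   # 1000x: 1 Gbps of traffic becomes 1 Tbps
```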
IP Router Port Cost Improvement • [Diagram: a four-router topology (Routers A-D), shown before and after optical bypass is introduced] • Expensive 100 Gbps router ports are reduced by 50% • Traffic is efficiently routed to the appropriate destination (e.g., A-C directly, vs. A-B-C through an intermediate router)
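A toy model of the before/after comparison; the port-counting convention below is illustrative, not taken from the slide:

```python
# Toy model: traffic from A to C that transits router B consumes 100G ports on
# a router that is neither source nor destination. Expressing A-C over an
# optical lambda bypasses B entirely.

def ports_used(path):
    """Each hop on the path consumes one egress and one ingress router port."""
    return 2 * (len(path) - 1)

routed = ports_used(["A", "B", "C"])     # A-B-C through the router layer
bypassed = ports_used(["A", "C"])        # A-C over an optical express lambda

print(f"Router ports, A-B-C: {routed}")     # 4
print(f"Router ports, A-C:   {bypassed}")   # 2 -> 50% fewer 100G router ports
```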
10G Consolidation • Any 10G waves along the same path can be aggregated into a 100G wave using OTN
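A minimal sketch of the consolidation logic, assuming hypothetical circuit endpoints; the capacity figure reflects the standard ODU4 multiplexing of up to ten 10G (ODU2) tributaries:

```python
# Sketch: group 10G circuits that share the same A-Z path, then pack up to
# ten of them into each 100G OTN (ODU4) wave. Endpoint names are hypothetical.
from collections import defaultdict
from math import ceil

circuits = [("chi", "sea"), ("chi", "sea"), ("chi", "mcl"), ("chi", "sea")]

by_path = defaultdict(int)
for a, z in circuits:
    by_path[(a, z)] += 1            # count 10G waves per shared path

for path, n in by_path.items():
    waves_100g = ceil(n / 10)       # each ODU4 carries up to 10 ODU2 tributaries
    print(f"{path}: {n} x 10G -> {waves_100g} x 100G wave(s)")
```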
Metropolitan Research and Education Network (MREN) Multi-100 Gbps Facility • MREN Submitted an ARI Proposal Early in 2010 • Award Announced In October 2010 • Award Granted To the Metropolitan Research and Education Network/StarLight International/National Communications Exchange Facility • StarWave, a New Advanced Multi-100 Gbps Facility and Service, Is Currently Being Designed • It Will Be Implemented Within the StarLight International/National Communications Exchange Facility • The Optical Components Will Be Implemented September 22-24, 2011 • Demonstration Showcases Planned for Oct 2011 and Nov 2011 (SC11, Seattle)
StarWave: A Multi-100 Gbps Facility • StarWave Is Being Funded To Provide Services To Support Large-Scale Data-Intensive Science Research Initiatives • Facility Components Will Include: • An ITU-T G.709 v3 Standards-Based Optical Switch for WAN Services, Supporting Multiple 100 G Connections • An IEEE 802.3ba Standards-Based Client-Side Switch, Supporting Multiple 100 G Connections and Multiple 10 G Connections • Multiple Other Components (e.g., Optical Fiber Interfaces, Measurement Servers, Test Servers)
CA*net/Ciena/StarLight/iCAIR 100 Gbps Testbed, Sept-Oct 2010 • 100 Gbps, ~850 Miles (1368 Km) • Source: CANARIE
CA*net/Ciena/StarLight/iCAIR 100 Gbps Testbed, Sept-Oct 2011 • 100 Gbps, ~850 Miles (1368 Km) • Source: CANARIE
DOE ESnet Advanced Networking Initiative: 100 Gbps • Source: ESnet
Future International 100 Gbps Services Via the Global Lambda Integrated Facility (GLIF): Available Advanced Network Resources • iCAIR is a co-founder of the GLIF, a consortium of institutions, organizations, consortia, and national Research & Education Networks that voluntarily share optical networking resources and expertise to develop the Global LambdaGrid for the advancement of scientific collaboration and discovery. • Visualization courtesy of Bob Patterson, NCSA; data compilation by Maxine Brown, UIC. • www.glif.is
Component of Next-Gen Control Framework: Fenius • GLIF Demonstrations: Global Lambda Grid Workshop, SC10, HK GLIF Technical Workshop
SC11 Demonstration Plans • Connect MAX and MREN to SCinet to support 100 GE demonstrations • Backbone connectivity • Utilize existing backbones (ESnet, Internet2) to connect MAX and MREN to SCinet • Investigating a dark fiber path between MAX and MREN for an additional path • Would be lit with Fujitsu optical equipment • SCinet • SCinet NOC, NASA, Ciena, and LAC/iCAIR
[Diagram: 100 GE WAN connectivity linking McLean, Seattle, LA, and Chicago, with NASA, the SCinet NOC, and Ciena/iCAIR sites interconnected over optical equipment]