This presentation discusses DOE's need for extreme networking solutions to achieve 160-200 Gb/s of throughput by 2008, highlighting the challenges, the candidate technologies, and the importance of layer-1 bonding for jitter control and security.
DOE UltraScience Net (& Network Infrastructure) Update • Joint Techs Meeting • February 15, 2005 • W. R. Wing
Motivation - DOE Needs Extreme Networking • 160-200 Gb/s throughput by 2008 • Probably only achievable by circuit bonding • 0.1% packet re-ordering and jitter control • Probably only achievable at the SONET layer • Line-rate, provably-secure connections • Only available at the SONET or optical layer
The Well-Known Bandwidth Problem • DOE needs 160-200 Gb/s by 2008 • Typical multi-stream throughput is limited to ~25 Gb/s, and programming multi-stream transfers is hard - hero efforts yield ~30 Gb/s • Typical single-stream throughput to a single application is ~1 Gb/s, even on a tuned network - hero efforts yield 5-6+ Gb/s • Networks will soon offer 40 Gb/s channels • Firewalls throw it away anyway
Even Without Firewalls - We Have a Technology Problem • DOE needs 160-200 Gb/s by 2008 • This is on the “blue line” • This isn’t a problem routers are going to fix - bonded IP channels ONLY work with VERY large numbers of streams (as TeraGrid has discovered) • Bonding at layer-2 (Ethernet) is just as bad • The solution is circuit switching and layer-1 bonding
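To make the circuit-bonding argument concrete, here is a minimal sizing sketch (Python) using the throughput figures quoted on the two preceding slides; the per-stream numbers are illustrative assumptions taken from those slides, not measurements of this testbed.

```python
import math

# Rough sizing sketch for the 160-200 Gb/s target.
# Per-stream figures are the illustrative numbers quoted on the slides,
# not measurements from UltraScience Net.
CIRCUIT_GBPS = 10.0        # one OC-192 / 10GbE lambda
TYPICAL_STREAM_GBPS = 1.0  # typical tuned single TCP stream
HERO_STREAM_GBPS = 6.0     # best-case "hero effort" stream

for target in (160, 200):
    print(f"{target} Gb/s target:")
    print(f"  bonded 10 Gb/s layer-1 circuits: {math.ceil(target / CIRCUIT_GBPS)}")
    print(f"  typical 1 Gb/s TCP streams:      {math.ceil(target / TYPICAL_STREAM_GBPS)}")
    print(f"  'hero' 6 Gb/s TCP streams:       {math.ceil(target / HERO_STREAM_GBPS)}")
```

The arithmetic is the point of the slide: 16-20 transparently bonded layer-1 circuits look tractable, while reaching the same aggregate with independent TCP streams means coordinating anywhere from a few dozen to a couple of hundred flows.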
Jitter Control • Needed to deconstruct data sets, for remote visualization, and for remote instrument control or steering • Routers and switch-routers are path-deterministic, but not time-deterministic • Only at layer-1 (or SONET) do you get time-determinism
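As a concrete illustration of what time-determinism means for applications, the sketch below computes the standard RFC 3550 interarrival-jitter estimate from send/receive timestamps; it is a generic measurement sketch, not part of the UltraScience Net software.

```python
# Generic RFC 3550-style interarrival-jitter estimator (illustrative sketch).
# A layer-1 (SONET) circuit has essentially constant transit time, so the
# estimate stays near zero; a routed path adds variable queueing delay.

def interarrival_jitter(send_times, recv_times):
    """Running estimate J += (|D| - J) / 16 over successive packet pairs."""
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s                      # one-way transit (clock offset cancels in D)
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # |D(i-1, i)| from RFC 3550
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return jitter

# Constant transit (circuit-like) vs. variable transit (routed-path-like):
print(interarrival_jitter([0.000, 0.001, 0.002], [0.010, 0.011, 0.012]))    # 0.0
print(interarrival_jitter([0.000, 0.001, 0.002], [0.010, 0.0125, 0.0118]))  # > 0
```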
Security • Security requirements will only get more onerous • Solutions involving IPsec, VPNs, and encryption have increasing trouble running at line speed • The alternative is optical/SONET circuits with provable, known end-points and inherent immunity to injected traffic • Again - only available at layer-1
In Summary • Only at layer-1 do you get: • Zero packet re-ordering • Zero jitter • Zero drops due to congestion • Known (by definition) paths and end-points • SONET (or below) can do this • Ethernet switches and switch/routers can’t • MPLS can’t • Layer-1 circuits can: • Bypass firewalls • Carry non-IP frames (e.g., Fibre Channel over SONET) • Easily (transparently) support parallel, bonded circuits
How do we get there from here? • It requires a research network we control at least down to the SONET layer • It requires a research network with significant span • It requires a research network with at least two lambdas
UltraScience Net Research Testbed • Building an extended-regional lambda-switching testbed • Connect to NLR in Atlanta and Chicago • Use asset-trading to extend reach (Sunnyvale and East Coast) • Provide an evolving matrix of switching capabilities • Separately fund research projects (e.g., high-performance protocols, control, visualization) that will exercise the network and directly support applications at the host institutions
In More Detail - [Topology diagram: the UltraScience Net / UT and ORNL TeraGrid fiber route through Chicago, Indianapolis, Louisville, Nashville, Chattanooga, Atlanta, Knoxville, and Memphis, showing 2xOC192 TeraGrid circuits, GbE/10GbE edge connections, T640/T320 routers, F10 switching, and optical line amplifiers (OLAs)]
Connections to National Nets - [Diagram: ORNL connections to ESnet, Internet2, NLR, and the ORNL TeraGrid over ORNL-TVA fiber]
Research Projects • A progression of switching approaches • Study/compare SONET, MPLS (GMPLS), and all-optical switching • New all-optical technologies coming (e.g., laser tuning) • Fast Local Storage • Storage Depots • Transport-optimized storage • A progression of experimental point-to-point transport technologies • Fibre Channel • InfiniBand • LAN-PHY Ethernet as the initial transport • Circuit-switched backbone, frame-switched edge
Digression: LAN-PHY vs. WAN-PHY • Conventional wisdom: LAN-PHY will win • Cheaper, faster, etc. (based on POS experience) • But that is a false cost model (extra processing) • And it ignores SONET’s advantages • New transponders are transparent • Software-selectable OC192 (WAN-PHY) or LAN-PHY • Costs are now equal • Advantage now goes to WAN-PHY • DCC (data communications channel) and OAM (operations, administration, and maintenance) are non-trivial considerations
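For readers weighing the two PHYs, a back-of-the-envelope rate comparison helps; the figures below are the standard IEEE 802.3ae / SONET numbers, not numbers taken from this talk.

```python
# Standard IEEE 802.3ae rate comparison: 10GbE LAN-PHY vs. WAN-PHY.
LAN_LINE_GBPS = 10.3125                        # 64B/66B-encoded serial rate
LAN_MAC_GBPS = LAN_LINE_GBPS * 64 / 66         # 10.0 Gb/s delivered to the MAC

OC192_LINE_GBPS = 9.95328                      # SONET OC-192 line rate
STS192C_PAYLOAD_GBPS = 9.58464                 # after SONET section/line/path overhead
WAN_MAC_GBPS = STS192C_PAYLOAD_GBPS * 64 / 66  # ~9.29 Gb/s delivered to the MAC

print(f"LAN-PHY: {LAN_LINE_GBPS} Gb/s on the wire, {LAN_MAC_GBPS:.2f} Gb/s to the MAC")
print(f"WAN-PHY: {OC192_LINE_GBPS} Gb/s on the wire, {WAN_MAC_GBPS:.2f} Gb/s to the MAC")
```

The roughly 7% of capacity given back by WAN-PHY buys SONET framing, which is where the DCC and OAM functions mentioned above live.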
PoS vs. Ethernet - [Diagram: frame-encapsulation stacks comparing IP data carried as Packet-over-SONET (IP / PoS padding / SONET) with IP data carried in Ethernet frames]
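The diagram's comparison can also be made numerically: the sketch below estimates per-packet framing efficiency for Packet-over-SONET versus Ethernet, using commonly quoted overhead byte counts that should be treated as approximations.

```python
# Approximate per-packet framing overhead (bytes); treat these as assumptions.
POS_OVERHEAD = 1 + 1 + 1 + 2 + 4   # HDLC flag + address + control + PPP protocol + 32-bit FCS
ETH_OVERHEAD = 8 + 14 + 4 + 12     # preamble/SFD + MAC header + FCS + interframe gap

for payload in (64, 576, 1500, 9000):   # representative IP packet sizes
    pos_eff = payload / (payload + POS_OVERHEAD)
    eth_eff = payload / (payload + ETH_OVERHEAD)
    print(f"{payload:5d}-byte packet: PoS {pos_eff:.1%} efficient, Ethernet {eth_eff:.1%}")
```

SONET's own section/line/path overhead (a few percent) applies on top of the PoS figures; the sketch covers only the per-packet framing shown in the diagram.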
Status and Schedule • All contracts signed • Fiber, co-lo space and power, equipment and installation, smart-hands • Equipment is shipping, build-out has started • Chicago-Sunnyvale is paced by NLR cross-connects in Chicago (Level(3) to StarLight) • NLR is supplying 10 Gig-E (LAN-PHY) initially • PNNL’s fiber schedule will pace their connection • First Chicago-Sunnyvale traffic this month • Details on the following viewgraphs -
Phase-2 (February-March) - [Diagram: N x 1 Gig-E connections between the ORNL M160 and the SOX M320]
Thank You http://www.csm.ornl.gov/ultranet