A Proposed Architecture for the GENI Backbone Platform



1. A Proposed Architecture for the GENI Backbone Platform
Jon Turner, jon.turner@wustl.edu, http://www.arl.wustl.edu/~jst/

2. GENI Backbone Platform
• Flexible infrastructure for experimental networks
• Implements two primary abstractions
  • metalinks – abstraction of physical links
  • metarouters – abstraction of physical network devices
• Metalinks
  • point-to-point or multipoint
  • point-to-point links may have provisioned bandwidth
  • built on top of substrate links
• Metarouters
  • substrate platform provides generic resources
  • variety of resource types with minimal limitations on use
  • functionality defined by researchers
  • may forward packets, switch TDM circuits or implement multimedia processing functions
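The two abstractions can be pictured as simple data types. A minimal, purely illustrative sketch in C (these type and field names are hypothetical; the slides define no concrete API):

/* Hypothetical types for the two GENI abstractions (illustrative only). */
#include <stdint.h>

enum metalink_kind { ML_POINT_TO_POINT, ML_MULTIPOINT };

struct metalink {
    enum metalink_kind kind;
    uint32_t substrate_link;  /* id of the substrate link it is built on */
    uint32_t bw_kbps;         /* provisioned bandwidth, 0 = none
                                 (meaningful for point-to-point links) */
};

struct metarouter {
    uint32_t id;
    struct metalink **links;  /* attached metalinks */
    unsigned num_links;
    void *meta_code;          /* researcher-defined functionality: packet
                                 forwarding, TDM switching, multimedia
                                 processing, ... */
};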

3. GENI Backbone Overview
[Figure: metanet protocol stacks on metarouters connected by metalinks; metarouters run on substrate platforms and metalinks on substrate links, which may run over Ethernet, IP, MPLS, . . .]

4. High Level Objectives
• Enable experimental nets and minimize obstacles
  • focus on providing resources – architectural neutrality
  • enable use by real end users
• Stability and reliability
  • reliable core platform
  • effective isolation of experimental networks
• Ease of use
  • enable researchers to be productive without heroic efforts
  • toolkits that facilitate use of high performance elements
• Scalable performance
  • enable >100K users, wide range of metarouter capacities
  • high ratio of processing to IO
• Technology diversity and adaptability
  • variety of processing resources – add more types later

5. Advanced Telecom Computing Architecture (ATCA)
[Figure: ATCA carrier card with optional mezzanine cards, optional Rear Transition Module, fabric connector and power connector]
• New industry standard
  • defines standard packaging
  • enables assembly of multi-supplier systems
• Standard 14 slot chassis
  • high bandwidth serial links
  • variety of processing blades
  • redundant switch blades
  • integrated management
• Relevance to GENI
  • flexible, open subsystems
  • compelling research platform
  • faster transition of research ideas into practice

6. Virtualized Line Card Architecture
[Figure: input line cards ILC1..ILCn and output line cards OLC1..OLCn connected by a switch fabric, with substrate functionality on the line cards and shared processing resources]
• Similar to conventional router architecture
  • line cards connected by a switch fabric
  • traffic makes a single pass through the switch fabric
• Requires fine-grained virtualization
  • line cards must support multiple meta line cards
  • requires intra-component resource sharing and traffic isolation
• Mismatch for current device technologies
  • multi-core NPs lack memory protection mechanisms
  • lack of tools and protection mechanisms for independent, partial FPGA designs
• Hard to vary ratio of processing to IO

7. Processing Pool Architecture
[Figure: Line Cards and a pool of PEs attached to a switch]
• Processing Engines (PEs) implement metarouters
  • variety of types
• Line Cards terminate external links, mux/demux metalinks
• Shared PEs include substrate component
• Dedicated PEs need not include substrate
  • use switch and Line Cards for protection and isolation
• PEs in larger metarouters linked by metaswitch
• Larger metarouters may own Line Cards
  • allows metanet to define transmission format/framing
  • configured by lower-level transport network
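The division of labor above can be read as a simple hosting rule: a PE shared among metarouters must carry the substrate component for intra-PE isolation, while a dedicated PE relies on the switch and Line Cards instead. A hedged sketch of that rule (names hypothetical):

#include <stdbool.h>

struct pe {
    bool shared;         /* may host several metarouters */
    bool has_substrate;  /* on-PE substrate component present */
    unsigned num_mrs;    /* metarouters currently hosted */
};

/* Can this PE accept one more metarouter under the rule above? */
bool pe_can_host(const struct pe *p)
{
    if (p->shared)
        return p->has_substrate;  /* shared PEs need on-PE isolation */
    return p->num_mrs == 0;       /* dedicated PEs host exactly one */
}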

8. Ensuring Metarouter Isolation
[Figure: metarouter 1 and metarouter 2, each a cluster of PEs and LCs on a nonblocking switch fabric; constrained routing ensures no interference between clusters]
• Constrain routing on switch port basis
  • use switch with VLAN support for constrained routing
  • substrate controls VLAN configuration
• Nonblocking switch fabric ensures traffic isolation
  • congestion at one port does not affect traffic to another
  • traffic within clusters cannot interfere
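Concretely, constrained routing amounts to giving each metarouter its own VLAN and adding only that metarouter's switch ports to it, so traffic cannot cross clusters. A minimal sketch, assuming a hypothetical switch-management interface (the real substrate would drive the Ethernet switch's own configuration API):

#include <stdint.h>
#include <stdio.h>

#define NUM_PORTS 64

struct mr_vlan {
    uint16_t vlan_id;            /* one VLAN per metarouter */
    uint8_t  member[NUM_PORTS];  /* 1 if switch port p belongs to this
                                    metarouter's PEs or Line Cards */
};

/* Pin the metarouter's VLAN to exactly its own ports. */
void apply_vlan(const struct mr_vlan *v)
{
    for (int p = 0; p < NUM_PORTS; p++)
        if (v->member[p])
            printf("switch: add port %d to VLAN %u\n",
                   p, (unsigned)v->vlan_id);
}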

9. Current Development System
• Network Processor blades
  • dual IXP 2850 NPs
  • 3x RDRAM, 3x SRAM, TCAM
  • dual 10GE interfaces
  • 10x 1GE IO interfaces
• General purpose blades
  • dual Xeons, 4x GigE, disk
• 10 Gb/s Ethernet switch
  • VLANs for traffic isolation

10. Prototype Operation
[Figure: GPEs, switch, NPE and Line Card (NP blade with RTM); the Line Card runs separate ingress and egress microengine (ME) pipelines and the NPE runs its own Rx-to-Tx pipeline, each with TCAM, SRAM and DRAM]
• One NP blade (with RTM) implements Line Card
  • separate ingress/egress pipelines
• Second NP hosts multiple metarouter fast-paths
  • multiple static code options for diverse metarouters
  • configurable filters and queues
• GPEs host conventional OS with virtual machines
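Each metarouter fast-path on the shared NPE can be viewed as a static code option plus its own configuration state. A hypothetical sketch of such an instance record (the field layout is assumed, not taken from the actual prototype software):

#include <stdint.h>

/* One metarouter fast-path on a shared NPE (hypothetical layout). */
struct fastpath_instance {
    int      code_option;   /* which static code option it runs */
    int      first_filter;  /* this metarouter's slice of the filters */
    int      num_filters;
    int      first_queue;   /* this metarouter's slice of the queues */
    int      num_queues;
    uint32_t mem_base;      /* base of its private memory block */
    uint32_t mem_len;
};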

11. Line Card
[Figure: ingress pipeline ExtRx (2 ME) → Key Extract (2 ME) → Lookup (2 ME) → Hdr Format (1 ME) → Queue Manager (2 ME) → IntTx (2 ME); egress pipeline IntRx (2 ME) → Key Extract (1 ME) → Rate Monitor (1 ME) → Lookup (2 ME) → Hdr Format (1 ME) → Queue Manager (2 ME) → ExtTx (2 ME); with TCAM, SRAM and DRAM]
• Ingress side demuxes with TCAM filters (port #s)
• Egress side provides traffic isolation per interface
• Target 10 Gb/s line rate for 80 byte packets
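Each ingress TCAM filter can be thought of as a ternary match on the packet's port numbers that selects a metalink. A simplified software stand-in (linear scan in place of real TCAM hardware; names hypothetical):

#include <stdint.h>

/* A TCAM entry matches when (key & mask) == value. The key here could
 * pack source and destination port numbers: (src << 16) | dst. */
struct tcam_entry {
    uint32_t value, mask;
    int metalink;                      /* demux target */
};

int demux(const struct tcam_entry *tab, int n, uint32_t port_key)
{
    for (int i = 0; i < n; i++)        /* first match wins, mimicking */
        if ((port_key & tab[i].mask) == tab[i].value)  /* TCAM priority */
            return tab[i].metalink;
    return -1;                         /* no matching metalink: drop */
}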

12. NPE Hosting Multiple Metarouters
[Figure: NPE pipeline Rx (2 ME) → Substr. Decap (1 ME) → Parse (1 ME) → Lookup (1 ME) → Hdr Format (1 ME) → Queue Manager (2 ME) → Tx (2 ME), with TCAM, SRAM and DRAM]
• Parse and Header Format include MR-specific code
  • Parse extracts header fields to form lookup key
  • Hdr Format makes required changes to header fields
• Lookup block uses opaque key for TCAM lookup and returns opaque result for use by Hdr Format
• Multiple static code options can be supported
  • multiple metarouters per code option
  • each has own filters, queues and block of private memory
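The important contract is that Lookup stays metarouter-neutral: only the MR-specific Parse and Hdr Format blocks interpret the key and result. A hedged sketch of that interface (function names and sizes are hypothetical; the real code runs on IXP microengines):

#include <stdint.h>
#include <string.h>

#define KEY_BYTES 16

struct lookup_key    { uint8_t  b[KEY_BYTES]; };  /* opaque to Lookup */
struct lookup_result { uint32_t word[4];      };  /* opaque to Lookup */

/* MR-specific: pick header fields out of the packet to form the key. */
void mr_parse(const uint8_t *pkt, struct lookup_key *k)
{
    memcpy(k->b, pkt, KEY_BYTES);  /* stub: real code selects fields */
}

/* MR-neutral: match the opaque key in the TCAM; result is opaque too. */
int tcam_lookup(const struct lookup_key *k, struct lookup_result *r)
{
    (void)k;
    memset(r, 0, sizeof *r);       /* stub standing in for the TCAM */
    return 0;                      /* 0 = hit in this sketch */
}

/* MR-specific: rewrite header fields as directed by the result. */
void mr_hdr_format(uint8_t *pkt, const struct lookup_result *r)
{
    (void)pkt; (void)r;            /* stub: MR-specific rewriting */
}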

13. Possible Additional PE Types
[Figure: carrier card with 10 GE switch, GLU, Flash, power and SPI-4 interface; FPGA mezzanine card with three FPEs plus SRAM/SDRAM; Cavium card with three Cavium NPs plus DRAM/SDRAM]
• ATCA carrier card
  • 10 GE switch with connections to each switch blade
  • 4 mezzanine card slots with 10 GE
  • external IO interface to RTM connector
• FPGA mezzanine card
  • Xilinx Virtex-5 LX330
  • over 200K flip-flops and LUT6s
  • over 1 MB of on-chip SRAM
  • on-board SDRAM and SRAM chips
• Cavium NP card
  • up to 16 MIPS processor cores
  • 600 MHz, dual issue
  • per-core L1 cache, shared L2 (2 MB)
  • more conventional SMP programming style

14. Scaling Up
[Figure: chassis pairs, each an ATCA chassis plus a blade server, connected directly or through 24 port 10GE switches in the indirect configuration]
• Baseline config (1+1)
  • 14 slot ATCA chassis
  • separate blade server for GPEs
  • 14 GPEs + 9 NPEs + 3 LCs
  • 10 GE inter-chassis connection
• Multi-chassis direct
  • up to seven chassis pairs
  • 98 GPEs + 63 NPEs + 21 LCs
  • 2-hop forwarding as needed
• Multi-chassis indirect
  • up to 24 chassis pairs
  • 336 GPEs + 216 NPEs + 72 LCs
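The counts scale linearly: every chassis pair contributes the baseline 14 GPEs, 9 NPEs and 3 LCs, so 7 pairs give 98/63/21 and 24 pairs give 336/216/72. A trivial check:

#include <stdio.h>

int main(void)
{
    const int gpe = 14, npe = 9, lc = 3;  /* per chassis pair */
    const int pairs[] = { 1, 7, 24 };     /* the three configurations */
    for (int i = 0; i < 3; i++)
        printf("%2d pair(s): %3d GPEs, %3d NPEs, %2d LCs\n",
               pairs[i], pairs[i] * gpe, pairs[i] * npe, pairs[i] * lc);
    return 0;
}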

15. Summary
• GENI requires a capable backbone platform
  • to enable a wide range of experimental research
  • to support production use of experimental networks by large numbers of non-research users
• Required hardware building blocks are at hand
  • ATCA provides a useful framework
  • powerful server blades and NP blades
  • high performance switching components
• What’s still to do?
  • software to manage/configure resources for users
  • sample metarouter code for NPs
  • FPGA-based processing engines
  • tools to speed up metarouter development
  • demonstration of multi-PE metarouters
