
Multi-Terabit IP Lookup Using Parallel Bidirectional Pipelines

This paper presents a high-performance IP lookup system built on parallel bidirectional pipelines. The presentation covers the front-end and back-end processing, memory balancing (trie partitioning, subtrie-to-pipeline mapping, and node-to-stage mapping), and overall system performance.


Presentation Transcript


  1. Multi-Terabit IP Lookup Using Parallel Bidirectional Pipelines Authors: Weirong Jiang, Viktor K. Prasanna Published in: CF '08: Proceedings of the 2008 Conference on Computing Frontiers, ACM, May 2008 Presenter: Yu-Ping Chiang Date: 2008/09/16

  2. Outline • Overview • Front End • Back End • Memory Balancing • Trie Partitioning • Subtrie-to-Pipeline Mapping • Node-to-Stage Mapping • Performance

  3. Front End • Receive packets • Dispatch packets to pipelines: • based on cache hit / miss • set a per-packet delay

  4. Back End • Process packets • Output the retrieved next-hop information: the delay set by the front end determines when each result is output
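The delay mechanism on the front-end and back-end slides can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the `BackEnd` class, its queue discipline, and the time units are all assumptions. The idea shown is that the front end tags each packet with a release time, and the back end holds each result until that time, so outputs leave in arrival order even when a cache hit finishes early.

```python
import collections

class BackEnd:
    """Holds next-hop results until their front-end-assigned delay expires."""

    def __init__(self):
        # FIFO of (release_time, next_hop); arrival order is preserved.
        self.queue = collections.deque()

    def accept(self, now, delay, next_hop):
        """Store a result tagged with the time at which it may be output."""
        self.queue.append((now + delay, next_hop))

    def release(self, now):
        """Output, in arrival order, every result whose delay has expired."""
        out = []
        while self.queue and self.queue[0][0] <= now:
            out.append(self.queue.popleft()[1])
        return out

be = BackEnd()
be.accept(now=0, delay=2, next_hop="A")  # slow path: full pipeline traversal
be.accept(now=1, delay=0, next_hop="B")  # fast path: cache hit, ready at once
```

Here `B` is ready before `A`, but `release(2)` emits `["A", "B"]`: the delay keeps the output stream in packet-arrival order.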

  5. Outline • Overview • Front End • Back End • Memory Balancing • Trie Partitioning • Subtrie-to-Pipeline Mapping • Node-to-Stage Mapping • Performance

  6. Trie Partitioning • Initial stride (I); the figure shows I = 2 • In the following sections: I = 12
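A rough sketch of initial-stride partitioning: prefixes are grouped into subtries by their first I bits. The function name and the (bitstring, next-hop) prefix representation are invented for illustration, and a prefix shorter than I bits is simply expanded into all matching stride values; the paper's actual trie construction details are omitted.

```python
def partition_by_initial_stride(prefixes, initial_stride):
    """Group (bitstring, next_hop) prefixes into subtries keyed by
    the first `initial_stride` bits of each prefix."""
    subtries = {}
    for bits, next_hop in prefixes:
        if len(bits) < initial_stride:
            # Expand a short prefix into every initial-stride value it covers.
            missing = initial_stride - len(bits)
            for pad in range(2 ** missing):
                key = bits + format(pad, "0{}b".format(missing))
                subtries.setdefault(key, []).append(("", next_hop))
        else:
            key = bits[:initial_stride]
            subtries.setdefault(key, []).append((bits[initial_stride:], next_hop))
    return subtries

# Example with I = 2, matching the stride shown in the slide's figure.
prefixes = [("0", "A"), ("01", "B"), ("0110", "C"), ("11", "D")]
subtries = partition_by_initial_stride(prefixes, 2)
# subtries["01"] holds the remainders of "01"-rooted prefixes,
# e.g. ("10", "C") for the original prefix "0110".
```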

  7. Subtrie-to-Pipeline Mapping • Problem formulation • Algorithm: O(KP), where K = # of subtries, P = # of pipelines
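A minimal sketch of the mapping idea, under the assumption that it works like a greedy load balancer (this is not claimed to be the paper's exact algorithm): assign each subtrie, largest first, to the currently least-loaded pipeline. Scanning all P pipelines for each of the K subtries gives the O(KP) complexity quoted on the slide.

```python
def map_subtries_to_pipelines(subtrie_sizes, num_pipelines):
    """Greedily assign subtries (by node count) to pipelines so that
    per-pipeline memory loads stay balanced."""
    loads = [0] * num_pipelines
    assignment = []
    # Place large subtries first so later, smaller ones can even out loads.
    for idx, size in sorted(enumerate(subtrie_sizes),
                            key=lambda t: t[1], reverse=True):
        target = min(range(num_pipelines), key=lambda p: loads[p])  # O(P) scan
        loads[target] += size
        assignment.append((idx, target))
    return assignment, loads

assignment, loads = map_subtries_to_pipelines([40, 10, 30, 20, 25], 2)
# loads == [60, 65]: close to an even split of the 125 total nodes
```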

  8. Performance

  9. Node-to-Stage Mapping • Problem formulation • Constraint: an ancestor node must be mapped to a stage preceding its child's stage. • Main idea: • two subtries are mapped onto the pipeline in opposite directions. • nodes at the same trie level are mapped onto different stages.

  10. Inversion • Methods: • largest leaf • least height • largest leaf per height • least average depth per leaf (used in the following sections) • Inversion Factor (IFR) • (in the following sections: 4~8)
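The "least average depth per leaf" criterion from this slide can be sketched as a simple metric. Everything here is an illustrative assumption: subtries are represented only by their lists of leaf depths, the function names are invented, and the exact way the paper uses the metric to decide which subtries to invert (and how the IFR bounds that count) is not reproduced.

```python
def avg_depth_per_leaf(leaf_depths):
    """Average leaf depth of a subtrie, measured from the subtrie root."""
    return sum(leaf_depths) / len(leaf_depths)

def pick_subtrie_to_invert(subtries):
    """Among candidate subtries (name -> list of leaf depths), pick the
    one with the least average depth per leaf."""
    return min(subtries, key=lambda name: avg_depth_per_leaf(subtries[name]))

# "a" averages 3.5; "b" averages 5/3, so "b" is selected by this criterion.
choice = pick_subtrie_to_invert({"a": [3, 4], "b": [1, 2, 2]})
```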

  11. Mapping algorithm: O(HN), where H = # of pipeline stages, N = total # of trie nodes • Node fields: • Distance to child • Memory address of child
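The node fields on this slide suggest a layout like the following hypothetical sketch. Because a parent and its child need not sit in adjacent stages, each node stores the number of stages to skip (distance) plus the child's address in the target stage. The `TrieNode` layout, the sentinel convention (distance 0 means no child), and the assumption that a node's two children sit at consecutive addresses are all mine, not the paper's.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrieNode:
    next_hop: Optional[str]  # next-hop info if this node matches a prefix
    child_distance: int      # stages to skip before reading the child (0 = leaf)
    child_address: int       # base address of the children in the target stage

def lookup(stages, start_addr, ip_bits):
    """Walk the stage memories, skipping child_distance stages per step,
    and return the longest-matching next hop seen along the way."""
    stage, addr, best, i = 0, start_addr, None, 0
    while True:
        node = stages[stage][addr]
        if node.next_hop is not None:
            best = node.next_hop                 # remember longest match so far
        if node.child_distance == 0 or i == len(ip_bits):
            return best                          # leaf reached or bits exhausted
        stage += node.child_distance
        addr = node.child_address + int(ip_bits[i])  # children assumed adjacent
        i += 1

# Tiny two-stage example: root "A" at stage 0; child "B" one stage away.
stages = [
    {0: TrieNode("A", 1, 0)},
    {0: TrieNode(None, 0, 0), 1: TrieNode("B", 0, 0)},
]
# lookup(stages, 0, "1") follows the 1-branch and returns "B";
# lookup(stages, 0, "0") finds no deeper match and falls back to "A".
```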

  12. Outline • Overview • Front End • Back End • Memory Balancing • Trie Partitioning • Subtrie-to-Pipeline Mapping • Node-to-Stage Mapping • Performance

  13. Performance • Memory: 1.8 MB total • (13 + 5) × 2^13 × 25 × 4 = 14.75 Mb ≈ 1.8 MB • 18 KB per stage • Throughput: 18.75 G packets / sec • 7.5 packets per clock × 2.5 GHz = 18.75 G packets / sec • Power: 0.2 W per IP lookup • 0.008 W × 25 stages = 0.2 W
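The slide's arithmetic can be reproduced directly; the input values (13-bit address + 5-bit distance per node, 2^13 nodes per stage, 25 stages, 4 pipelines, 7.5 packets per clock at 2.5 GHz, 0.008 W per stage) are taken from the slide, and my variable names are only labels. Note the slide mixes decimal megabits with kilobytes per stage computed in KiB.

```python
bits_per_node = 13 + 5        # 13-bit child address + 5-bit distance field
nodes_per_stage = 2 ** 13
stages_per_pipeline = 25
pipelines = 4

total_bits = bits_per_node * nodes_per_stage * stages_per_pipeline * pipelines
total_Mb = total_bits / 1e6   # 14.7456, the slide's "14.75 Mb"
total_MB = total_bits / 8 / 1e6  # 1.8432, the slide's "1.8 MB"

bytes_per_stage = bits_per_node * nodes_per_stage // 8
per_stage_KB = bytes_per_stage / 1024  # 18.0, the slide's "18 KB/stage"

packets_per_clock = 7.5
clock_hz = 2.5e9
throughput = packets_per_clock * clock_hz  # 1.875e10 = 18.75 G packets/sec

power_per_stage_W = 0.008
power = power_per_stage_W * stages_per_pipeline  # 0.2 W per IP lookup
```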
