
LayeredTrees: Most Specific Prefix-Based Pipelined Design for On-Chip IP Address Lookups


Presentation Transcript


  1. LayeredTrees: Most Specific Prefix-Based Pipelined Design for On-Chip IP Address Lookups Author: Yeim-Kuan Chang, Fang-Chen Kuo, Han-Jhen Guo and Cheng-Chien Su Publisher: Presenter: Yu-Hao Tseng Date: 2013/03/20

  2. Outline • Introduction • Data Structure • Proposed Pipelined Architecture • Performance • Conclusions

  3. Introduction • In this paper, we propose a high-speed pipelined architecture called LayeredTrees. It uses a layer-based prefix partitioning scheme to construct multiway B-tree data structures. We will show that, unlike trie-based and range-based data structures, LayeredTrees can achieve a throughput of 120 Gbps, which is even higher than the best multibit-trie-based implementations available today.

  4. Data Structure • LayeredTrees is proposed for dynamic routing tables. • In LayeredTrees, the prefixes in a routing table are grouped into layers based on the concept of the most specific prefixes proposed in our previous work [8]. • The prefixes in each layer are organized as a B-tree, because the goal of LayeredTrees is to store the entire routing table in the on-chip memory of currently available FPGA devices.
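
The grouping step itself is not spelled out on this slide. As a rough software illustration only (not the paper's algorithm; all names are illustrative), layers can be formed by repeatedly peeling off the prefixes that do not enclose any other remaining prefix, so that the prefixes within one layer are mutually disjoint:

    /* Sketch of most-specific-prefix layering (illustrative only):
     * layer 1 holds the prefixes that enclose no other prefix, layer 2
     * holds the most specific prefixes of what remains, and so on. */
    #include <stdint.h>
    #include <stdio.h>

    struct prefix { uint32_t value; int len; int layer; };   /* layer 0 = unassigned */

    /* returns 1 if prefix a strictly encloses prefix b */
    static int encloses(const struct prefix *a, const struct prefix *b)
    {
        uint32_t mask = a->len ? 0xFFFFFFFFu << (32 - a->len) : 0;
        return a->len < b->len && (a->value & mask) == (b->value & mask);
    }

    static void assign_layers(struct prefix *p, int n)
    {
        for (int layer = 1; ; layer++) {
            int marked = 0;
            for (int i = 0; i < n; i++) {                /* mark this round's      */
                if (p[i].layer > 0) continue;            /* most specific prefixes */
                int most_specific = 1;
                for (int j = 0; j < n; j++)
                    if (i != j && p[j].layer <= 0 && encloses(&p[i], &p[j]))
                        { most_specific = 0; break; }
                if (most_specific) { p[i].layer = -layer; marked++; }
            }
            for (int i = 0; i < n; i++)                  /* commit this round      */
                if (p[i].layer == -layer) p[i].layer = layer;
            if (!marked) break;                          /* all prefixes layered   */
        }
    }

    int main(void)
    {
        struct prefix tab[] = { { 0xC0A80000u, 16, 0 },   /* 192.168/16   */
                                { 0xC0A80100u, 24, 0 },   /* 192.168.1/24 */
                                { 0x0A000000u,  8, 0 } }; /* 10/8         */
        assign_layers(tab, 3);
        for (int i = 0; i < 3; i++)           /* /24 and /8 land in layer 1, */
            printf("/%d -> layer %d\n",       /* /16 lands in layer 2        */
                   tab[i].len, tab[i].layer);
        return 0;
    }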

  5. Data Structure (Cont.)

  6. Data Structure (Cont.) • Two problems when the B-tree is used to organize the prefixes in each layer • The utilization of the B-tree nodes is only about 50 percent. • The pointers used for the branches of a B-tree node consume a large amount of memory.

  7. Data Structure (Cont.) • Two commonly used formats • Example: 101*** • mask format: 101000/111000 • length format: 101000/3 • Lite Prefix Format • A prefix of length i in a W-bit address space, P = p1 p2 ... pi *, is represented as the (W+1)-bit string Q = q1 q2 ... q(W+1), where qj = pj for j = 1 to i, q(i+1) = 0, and qj = 1 for j = i+2 to W+1 (i.e., the prefix bits are followed by a zero and W - i consecutive ones). • For a prefix in the lite prefix format, its mask M = m1 m2 ... mW can be easily recovered by the logical equations mW = NOT(q(W+1)), mj = m(j+1) OR NOT(q(j+1)) for j = W-1 down to 1. • Example: 1011** • Lite Prefix Format: 1011011 • Mask: 111100
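
The equations on this slide lost their symbols in transcription; the reading above is reconstructed from the 6-bit example (1011** -> 1011011, mask 111100). The following minimal C sketch (function names are illustrative, not from the paper) encodes a prefix into the lite format and recovers its mask under that reading:

    #include <stdint.h>
    #include <stdio.h>

    /* Lite prefix format: a prefix of length len in a w-bit address space is
     * stored in w+1 bits as the len prefix bits, a single 0, then (w-len) ones. */
    static uint64_t to_lite(uint64_t value, int len, int w)   /* value left-aligned in w bits */
    {
        uint64_t prefix_bits   = (len ? value >> (w - len) : 0) << (w + 1 - len);
        uint64_t trailing_ones = (len < w) ? (1ULL << (w - len)) - 1 : 0;
        return prefix_bits | trailing_ones;                   /* the 0 separator is implicit */
    }

    /* Recover the w-bit mask m1..mw from the lite string q1..q(w+1): mask bits
     * stay 0 over the trailing ones and become (and remain) 1 from the 0
     * separator upward, i.e. mj = NOT(q(j+1)) OR m(j+1) with m(w+1) = 0. */
    static uint64_t lite_to_mask(uint64_t lite, int w)
    {
        uint64_t mask = 0, m = 0;
        for (int j = w; j >= 1; j--) {
            uint64_t q_next = (lite >> (w - j)) & 1;          /* bit q(j+1) */
            m = (~q_next & 1) | m;
            mask |= m << (w - j);
        }
        return mask;
    }

    int main(void)
    {
        uint64_t lite = to_lite(0x2C, 4, 6);      /* slide example: 1011** in a 6-bit space */
        printf("lite = 0x%llx, mask = 0x%llx\n",  /* prints lite = 0x5b (1011011),          */
               (unsigned long long)lite,          /*        mask = 0x3c (111100)            */
               (unsigned long long)lite_to_mask(lite, 6));
        return 0;
    }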

  8. Data Structure (Cont.) • For a routing table consisting of 300K prefixes, the memory required for storing the prefixes in a linear array based on the lite prefix format is 9.9 Mbits, while the mask format and the length format need 19.2 and 11.4 Mbits, respectively.
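
(As a back-of-the-envelope check, assuming IPv4: the lite format needs 32 + 1 = 33 bits per prefix, the mask format 32 + 32 = 64 bits, and the length format 32 + 6 = 38 bits, so 300K prefixes take about 300K × 33 = 9.9 Mbits, 300K × 64 = 19.2 Mbits, and 300K × 38 = 11.4 Mbits, matching the figures above.)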

  9. Data Structure (Cont.) • LayeredTrees Search
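
The figure for this slide is not reproduced in the transcript. As a rough sketch of the lookup logic only (layer_lookup is a hypothetical stand-in for the per-layer B-tree search, not an interface from the paper): an address can match at most one prefix per layer, and a match in a lower-numbered layer is more specific than a match in a higher-numbered one, so the longest-prefix match is the first hit when layers are probed in increasing order (the hardware probes the layers in a pipeline and applies the same priority).

    #include <stdint.h>
    #include <stdbool.h>

    struct result { uint32_t next_hop; bool found; };

    /* hypothetical per-layer search over that layer's B-tree */
    struct result layer_lookup(int layer, uint32_t dst_addr);

    /* Probe layers in order; the first hit is the most specific match. */
    struct result layered_trees_search(int num_layers, uint32_t dst_addr)
    {
        for (int layer = 1; layer <= num_layers; layer++) {
            struct result r = layer_lookup(layer, dst_addr);
            if (r.found)
                return r;
        }
        return (struct result){ 0, false };    /* no prefix covers dst_addr */
    }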

  10. Data Structure (Cont.) • LayeredTrees Insert

  11. Data Structure (Cont.) • LayeredTrees Delete

  12. Data Structure (Cont.) • Controlled B-tree Building Algorithm (CBA) • CBA inserts prefixes in increasing order of their prefix values and, when a B-tree node is full, splits it at the position of the last key instead of the middle key. • According to our analysis, CBA can improve the utilization of B-tree nodes to 90 percent.
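
A leaf-level-only sketch of the idea (not the paper's full CBA; parent-node maintenance is omitted and the names are illustrative): because prefixes arrive in increasing order, only the rightmost node ever overflows, and splitting at the last key instead of the median leaves every completed node full.

    #include <stdint.h>
    #include <stdlib.h>

    #define M 8                              /* keys per B-tree node (illustrative) */

    struct node {
        uint64_t key[M];                     /* keys, e.g. prefixes in lite format  */
        int count;
        struct node *next;                   /* chain of leaves, for the sketch     */
    };

    /* Append a key known to be larger than every key inserted so far. */
    static struct node *cba_append(struct node *rightmost, uint64_t key)
    {
        if (rightmost->count == M) {
            /* Split at the last key: the old node keeps all M keys and the new
             * key starts a fresh rightmost node, so completed nodes stay ~100%
             * full; a median split would leave two half-full nodes instead. */
            struct node *n = calloc(1, sizeof *n);
            n->key[n->count++] = key;
            rightmost->next = n;
            return n;                        /* new rightmost node */
        }
        rightmost->key[rightmost->count++] = key;
        return rightmost;
    }

Feeding N sorted keys through cba_append produces ceil(N/M) nodes with all but the last one completely full, which is the effect behind the ~90 percent utilization quoted above.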

  13. Data Structure (Cont.) • Aggregate array and Segmentation table • Only one pointer, called the base address, is needed in a B-tree node, instead of m branch pointers. • Using a segmentation table [20] is always the simplest way to partition a tree into several smaller ones without complicating the search operations.
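
A minimal layout sketch, under the assumptions that all nodes of a layer sit in one aggregate array and that a node's children occupy consecutive slots; the struct, array sizes, and names are illustrative, not the paper's:

    #include <stdint.h>

    #define M 8                                  /* keys (up to M+1 children) per node */

    struct btree_node {
        uint64_t key[M];                         /* keys in lite prefix format         */
        int      count;                          /* number of valid keys               */
        uint32_t base;                           /* aggregate-array index of child 0;  */
    };                                           /* children occupy base..base+count   */

    static struct btree_node aggregate[1 << 15]; /* all nodes of one layer, packed     */
    static uint32_t seg_table[1 << 12];          /* segmentation table: top address    */
                                                 /* bits -> root index of a sub-tree   */

    /* One base address replaces M+1 branch pointers. */
    static struct btree_node *child(const struct btree_node *n, int branch)
    {
        return &aggregate[n->base + branch];
    }

A lookup would then index seg_table with the first few destination-address bits to pick a root, and follow base + branch indices instead of pointers down the tree.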

  14. Data Structure (Cont.) • Varying segmentation table and node sizes • The segmentation table size, denoted by s, and the B-tree node size, denoted by m, are the two main factors affecting the memory consumption of the proposed LayeredTrees.

  15. Proposed Pipelined Architecture

  16. Proposed Pipelined Architecture (Cont.) • Leveled Search Engine (LSE)

  17. Proposed Pipelined Architecture (Cont.)

  18. Proposed Pipelined Architecture (Cont.)

  19. Performance

  20. Conclusions • The proposed Leveled Search Engine (LSE) is a 5-stage pipeline, and the parallel LSE uses 17-stage pipelines. The performance experiments on the XC6VSX315T chip of the Virtex-6 FPGA family show that the throughput achieved by the parallel LSE is superior to that of existing designs.
