
FEACAN: Front-End Acceleration for Content-Aware Network Processing

Explore the FEACAN algorithm and hardware design for efficient compression ratios and performance in network processing. Learn about state grouping, bitmap compression, and achieving high throughput with FEACAN technologies.



Presentation Transcript


  1. FEACAN: Front-End Acceleration for Content-Aware Network Processing Publisher: IEEE INFOCOM, 2011 Authors: Yaxuan Qi, Kai Wang, Jeffrey Fong, Yibo Xue, Jun Li, Weirong Jiang, Viktor Prasanna Presenter: Yu-Hsiang Wang Date: 2011/05/11

  2. Outline • Introduction • Related work • FEACAN Algorithm design • FEACAN Hardware design • Performance evaluation

  3. Introduction

  4. Related work • Although D2FA-based algorithms achieve good compression ratios, they are inherently difficult to implement efficiently in hardware because: i) each state transition requires comparing all the non-default transitions in the current state before taking the default transition; ii) such linear comparisons are done recursively among all states along the default path until a non-default transition is found.

  5. Related work • A DFA with N states representing regular expressions over an alphabet Σ of size M = |Σ| contains N×M next-state transitions. • The DFA table T(N×M) is very sparse and contains many redundancies.

  6. Algorithm design • Bitmap compression
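To make the idea concrete, here is a minimal Python sketch of one common per-state bitmap-compression scheme (function names are illustrative, not from the paper): a bit is set wherever a run of identical transitions starts, so only one next-state per run is stored, and lookup counts the set bits before the input character.

```python
def compress_state(row):
    """Compress one DFA state's 256-entry next-state row.

    Returns (bitmap, uniques): bitmap[i] == 1 iff row[i] differs
    from row[i-1] (bit 0 is always set); uniques holds one
    next-state per run of identical transitions.
    """
    bitmap = [0] * len(row)
    uniques = []
    for i, nxt in enumerate(row):
        if i == 0 or nxt != row[i - 1]:
            bitmap[i] = 1
            uniques.append(nxt)
    return bitmap, uniques

def lookup(bitmap, uniques, ch):
    """Next state for input byte ch: popcount of bits up to ch."""
    idx = sum(bitmap[: ch + 1]) - 1  # prefix popcount, cheap in hardware
    return uniques[idx]

# A sparse row: almost all transitions go to state 0, one exception.
row = [0] * 256
row[ord('a')] = 5
bitmap, uniques = compress_state(row)
assert lookup(bitmap, uniques, ord('a')) == 5
assert lookup(bitmap, uniques, ord('b')) == 0
```

A 256-entry row with a single exception compresses to a 256-bit bitmap plus three unique transitions, which is where the space savings on sparse DFA tables come from.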

  7. Algorithm design • First-stage grouping: - Group states with identical bitmaps together • This reduces the number of stored bitmaps from N to K
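First-stage grouping can be sketched as a simple hash on the bitmap (a hypothetical helper, not the paper's code): states whose bitmaps are bit-for-bit identical share one stored bitmap.

```python
from collections import defaultdict

def group_by_bitmap(bitmaps):
    """Map each distinct bitmap to the list of states sharing it,
    reducing the stored bitmaps from N (one per state) to K groups."""
    groups = defaultdict(list)
    for state, bmp in enumerate(bitmaps):
        groups[tuple(bmp)].append(state)
    return groups

# Three states, two distinct bitmaps: N = 3 shrinks to K = 2.
bitmaps = [(1, 0, 0, 1), (1, 0, 0, 1), (1, 1, 0, 0)]
groups = group_by_bitmap(bitmaps)
assert len(groups) == 2
assert groups[(1, 0, 0, 1)] == [0, 1]
```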

  8. Algorithm design • Second-stage grouping: - Group similar states within each of the K groups into L sub-groups. • Two states are similar if they satisfy: i) they share the same bitmap (and thus have the same number of unique transitions); ii) more than 80~95% of their transitions are identical.
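The similarity test between two same-bitmap states might look like the following sketch (the function name and the default threshold are illustrative; the slide gives a 80~95% range):

```python
def similar(row_a, row_b, threshold=0.8):
    """Two states with the same bitmap are 'similar' if at least
    `threshold` of their transitions are identical."""
    same = sum(a == b for a, b in zip(row_a, row_b))
    return same / len(row_a) >= threshold

# 4 of 5 transitions identical -> similar at an 80% threshold.
assert similar([0, 1, 2, 3, 4], [0, 1, 2, 3, 9])
# Only 2 of 5 identical -> not similar.
assert not similar([0, 1, 2, 3, 4], [0, 1, 9, 9, 9])
```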

  9. Algorithm design • State merging: - Select the first state of each of the L sub-groups as the leader state, and define the other states as member states.
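A minimal sketch of leader/member merging, assuming (as the similarity criterion suggests) that each member state stores only the transitions that differ from its leader's, and falls back to the leader otherwise; names are hypothetical:

```python
def merge_group(group_rows):
    """First row is the leader; each member keeps only the
    transitions that differ from the leader's (an exception dict)."""
    leader = group_rows[0]
    members = []
    for row in group_rows[1:]:
        diffs = {i: t for i, t in enumerate(row) if t != leader[i]}
        members.append(diffs)
    return leader, members

def member_lookup(leader, diffs, ch):
    """Check the member's exceptions first, else use the leader's
    transition (one extra indirection instead of a full row)."""
    return diffs.get(ch, leader[ch])

leader_row = [0, 1, 2, 3]
member_row = [0, 1, 9, 3]  # 75% identical to the leader
leader, members = merge_group([leader_row, member_row])
assert member_lookup(leader, members[0], 2) == 9   # member exception
assert member_lookup(leader, members[0], 1) == 1   # inherited from leader
```

With a 90% similarity threshold, each member stores at most 10% of its transitions, which is where the second-stage compression comes from.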

  10. Algorithm design • Optimization 1: bitmap combination. Assume only the m-th bit of bmp1 and bmp2 differs: the bit is 1 in bmp1 and 0 in bmp2. Setting bmp2 = bmp1 OR bmp2 turns the m-th transitions of the affected states into unique transitions. Although bitmap combination increases the number of unique transitions per state, the overall number of stored transitions tends to shrink, because with fewer bitmaps more states can be grouped together.
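The bitmap-combination trade-off can be sketched as follows (helper names are illustrative): OR-ing the two bitmaps lets both state groups share one bitmap, at the cost of each affected state re-listing a transition as unique.

```python
def combine_bitmaps(bmp1, bmp2):
    """OR two bitmaps so both state groups can share one bitmap;
    a position set in either input becomes a unique-transition slot."""
    return [a | b for a, b in zip(bmp1, bmp2)]

def reexpand(bitmap, row):
    """Re-extract a state's unique transitions under a (possibly
    wider) bitmap: one stored entry per set bit."""
    return [row[i] for i, b in enumerate(bitmap) if b]

bmp1 = [1, 0, 1, 0]  # differs from bmp2 only in bit 2 (the m-th bit)
bmp2 = [1, 0, 0, 0]
combined = combine_bitmaps(bmp1, bmp2)
assert combined == [1, 0, 1, 0]
# The state behind bmp2 now stores 2 unique transitions instead of 1,
# but both states can index through a single shared bitmap.
row2 = [7, 7, 7, 7]
assert reexpand(combined, row2) == [7, 7]
```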

  11. Algorithm design • Optimization 2: hierarchical bitmap • Experimental results show that a 64-bit bitmap achieves nearly the same compression ratio as the 256-bit bitmap.
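One plausible reading of the 256-bit to 64-bit reduction, sketched here as an assumption rather than the paper's exact construction: each coarse bit summarizes a block of 4 consecutive characters, and a block's transitions are stored expanded whenever any fine bit inside it is set.

```python
def coarse_bitmap(bitmap256):
    """Collapse a 256-bit bitmap to 64 bits: each coarse bit covers
    4 consecutive characters and is set if any fine bit in the
    block is set (assumed granularity, for illustration)."""
    return [int(any(bitmap256[4 * i : 4 * i + 4])) for i in range(64)]

fine = [0] * 256
fine[0] = 1          # run start at character 0
fine[ord('a')] = 1   # exception at 'a' (block 97 // 4 == 24)
coarse = coarse_bitmap(fine)
assert len(coarse) == 64
assert sum(coarse) == 2
```

The coarse bitmap is 4x cheaper to store and popcount; since exceptions cluster in few blocks for real rule sets, the compression ratio stays close to the 256-bit case.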

  12. Hardware design

  13. Hardware design • To hide the 4-cycle latency, instead of feeding the engine the byte stream of a single packet payload, we feed 4 bytes from different flows as consecutive input characters, so that each pipeline stage receives a new input every clock cycle.
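The flow-interleaving schedule can be modeled in a few lines (a software sketch of the input ordering only, not the pipeline itself): round-robin one byte per flow, so a lookup's 4-cycle latency overlaps three other flows' lookups.

```python
from itertools import zip_longest

def interleave_flows(flows):
    """Round-robin one byte from each flow per cycle, so the 4-stage
    lookup engine gets a fresh, independent input every clock."""
    return [b for group in zip_longest(*flows) for b in group if b is not None]

flows = [b"abc", b"xyz", b"123", b"###"]
stream = interleave_flows(flows)
assert bytes(stream[:4]) == b"ax1#"  # cycles 0-3: one byte per flow
```

By the time flow 0's second byte enters the pipeline, its first lookup has completed, so no stall or per-flow state forwarding is needed.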

  14. Performance evaluation

  15. Performance evaluation

  16. Performance evaluation • The FEACAN lookup engine can achieve 150 MHz × 8 bits × 2 = 2.4 Gb/s throughput. With modern ASIC (application-specific integrated circuit) technologies, a modest clock rate of 500 MHz can be achieved. Therefore, each FEACAN engine with an ASIC implementation achieves 500 MHz × 8 bits × 2 = 8 Gb/s.
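The throughput arithmetic from the slide, written out (the ×2 factor is taken from the slide's formula; it presumably reflects dual lookups per cycle):

```python
def throughput_gbps(clock_mhz, bits_per_cycle=8, dual_factor=2):
    """Throughput = clock rate x input width x the slide's x2 factor."""
    return clock_mhz * 1e6 * bits_per_cycle * dual_factor / 1e9

assert throughput_gbps(150) == 2.4  # FPGA-class clock
assert throughput_gbps(500) == 8.0  # ASIC-class clock
```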
