SIGCOMM WREN’09 Hash, Don’t Cache: Fast Packet Forwarding for Enterprise Edge Routers Minlan Yu Princeton University minlanyu@cs.princeton.edu Joint work with Jennifer Rexford
Enterprise Edge Router • Enterprise edge routers • Connect upstream providers and internal routers • A few outgoing links • A small data structure for each next hop [Figure: enterprise network connected to Provider 1 and Provider 2]
Challenges of Packet Forwarding • Full-route forwarding table (FIB) • For load balancing, fault tolerance, etc. • More than 250K entries, and growing • Increasing link speed • Over 10 Gbps • Requires large, expensive memory • Expensive, complicated high-end routers • A more cost-efficient, less power-hungry solution? • Perform fast packet forwarding in a small SRAM
Using a Small SRAM • Route caching is not a viable solution • Store the most frequently used entries in cache • Bad performance during cache misses • Low throughput and high packet loss • Bad performance under worst-case workloads • Malicious traffic with a wide range of destinations • Route changes, link failures • Our solution should be workload-independent • Fit the entire FIB in the small SRAM
Bloom Filter • Bloom filters in fast memory (SRAM) • A compact data structure for a set of elements • Calculate s hash functions to store element x • Easy to check membership • Reduce memory at the expense of false positives [Figure: bit array V0 … Vm-1 with positions h1(x), h2(x), …, hs(x) set for element x]
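The structure above can be sketched in Python. This is an illustrative toy, not the paper's implementation; the `BloomFilter` class, the parameter values, and the salted-SHA-256 hashing scheme are all assumptions made for the example:

```python
import hashlib

class BloomFilter:
    """Bit array of m slots; each element sets s hashed positions."""
    def __init__(self, m, s):
        self.m, self.s = m, s
        self.bits = [0] * m

    def _positions(self, x):
        # Derive s array positions from salted SHA-256 digests
        # (an illustrative hashing scheme, not the paper's).
        for i in range(self.s):
            digest = hashlib.sha256(f"{i}:{x}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, x):
        for p in self._positions(x):
            self.bits[p] = 1

    def query(self, x):
        # May return a false positive, never a false negative.
        return all(self.bits[p] for p in self._positions(x))

bf = BloomFilter(m=1024, s=8)
bf.add("10.0.0.1")
assert bf.query("10.0.0.1")  # inserted elements always match
```

The memory/accuracy trade-off is visible here: the filter uses m bits regardless of how long the stored addresses are, and shrinking m only raises the false-positive probability, never breaks membership of inserted elements.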
Bloom Filter Forwarding • One Bloom filter (BF) per next hop • Store all addresses forwarded to that next hop • Consider flat addresses in the talk • See paper for extensions to longest prefix match • T is small for enterprise edge routers [Figure: packet destination queried against the Bloom filters for next hops 1 … T; a hit selects the next hop]
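A minimal sketch of the one-filter-per-next-hop lookup, under assumed filter size and hash count (the helper names and addresses are illustrative, not from the paper):

```python
import hashlib

M, S = 2048, 8  # assumed filter size (bits) and number of hash functions

def bf_positions(x, m=M, s=S):
    for i in range(s):
        yield int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % m

def make_bf(addresses):
    # Build one Bloom filter holding every address forwarded to a next hop.
    bits = [0] * M
    for a in addresses:
        for p in bf_positions(a):
            bits[p] = 1
    return bits

def lookup(dst, filters):
    # Query every next hop's filter; the true next hop always hits,
    # and any extra hits are false positives.
    return [nh for nh, bits in filters.items()
            if all(bits[p] for p in bf_positions(dst))]

filters = {
    1: make_bf(["10.1.0.0", "10.1.0.1"]),
    2: make_bf(["192.168.5.9"]),
}
```

With flat addresses the lookup is a single parallel membership test across T small filters, which is why it fits comfortably in SRAM when T is small.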
Contributions • Make efficient use of limited fast memory • Formulate and solve optimization problem to minimize false-positive rate • Handle false positives • Leverage properties of enterprise edge routers • Adapt Bloom filters for routing changes • Leverage counting Bloom filter in slow memory • Dynamically adjust Bloom filter size
Outline • Optimize memory usage • Handle false positives • Handle routing dynamics
Memory Usage Optimization • Consider a fixed forwarding table • Goal: minimize the overall false-positive rate • Probability that one or more BFs have a false positive • Input: • Fast memory size M • Number of destinations per next hop • The maximum number of hash functions • Output: the size of each Bloom filter • Larger BF for next hops with more destinations
Constraints and Solution • Constraints • Memory constraint • Sum of all BF sizes ≤ fast memory size M • Bound on the number of hash functions • To bound CPU calculation time • Bloom filters share the same hash functions • Proved to be a convex optimization problem • An optimal solution exists • Solved by IPOPT (Interior Point OPTimizer)
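The objective can be written down with the standard Bloom filter false-positive approximation, f ≈ (1 − e^(−kn/m))^k. The toy numbers below are assumptions for illustration; they show why giving more bits to the next hop with more destinations beats an equal split:

```python
import math

def bf_fp_rate(n, m, k):
    # Standard approximation for a Bloom filter with n elements,
    # m bits, and k hash functions.
    return (1 - math.exp(-k * n / m)) ** k

def overall_fp_rate(entries_per_hop, sizes, k):
    # Probability that at least one next hop's filter fires falsely.
    no_fp = 1.0
    for n, m in zip(entries_per_hop, sizes):
        no_fp *= 1 - bf_fp_rate(n, m, k)
    return 1 - no_fp

# Toy instance: two next hops sharing a fixed memory budget M.
M, k = 1_000_000, 8
proportional = overall_fp_rate([150_000, 50_000], [750_000, 250_000], k)
equal_split = overall_fp_rate([150_000, 50_000], [500_000, 500_000], k)
```

In this instance the proportional allocation yields a lower overall false-positive rate than the equal split; the paper's optimization finds the best allocation exactly, subject to the memory and hash-count bounds.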
Evaluation of False Positives • A FIB with 200K entries and 10 next hops • 8 hash functions • Takes at most 50 msec to solve the optimization
Outline • Optimize memory usage • Handle false positives • Handle routing dynamics
False Positive Detection • Multiple matches in the Bloom filters • One of the matches is correct • The others are caused by false positives [Figure: packet destination queried against the Bloom filters for next hops 1 … T, with multiple hits]
Handle False Positives on Fast Path • Leverage multi-homed enterprise edge routers • Send to a random matching next hop • Packets can still reach the destination, occasionally through a less-preferred outgoing link • No extra traffic, but may cause packet loss • Send duplicate packets • Send a copy of the packet to all matching next hops • Guarantees reachability, but introduces extra traffic
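The two fast-path strategies can be sketched as one function (the function name and calling convention are assumptions for the example):

```python
import random

def forward(dst, matches, duplicate=False):
    """matches: the next hops whose Bloom filters hit for dst."""
    if not matches:
        return []             # no hit at all: fall back to a slow-path lookup
    if duplicate:
        return list(matches)  # send a copy on every matching link
    # Pick one matching next hop at random; the packet may occasionally
    # leave on a less-preferred link, but no extra traffic is generated.
    return [random.choice(matches)]
```

Random selection trades a small chance of suboptimal egress for zero overhead; duplication trades extra traffic for guaranteed reachability. Both avoid stalling the fast path on a slow-memory lookup.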
Prevent Future False Positives • For a packet that experiences a false positive • Conventional lookup in the background • Cache the result • For the subsequent packets • No longer experience false positives • Compared to conventional route cache • Much smaller (only for false-positive destinations) • Not easily invalidated by an adversary
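A sketch of the small false-positive cache described above. The paper resolves the conventional lookup in the background; it is shown synchronously here for simplicity, and all names are illustrative:

```python
fp_cache = {}  # destination -> verified next hop (false-positive destinations only)

def forward_with_cache(dst, matches, slow_path_lookup):
    """matches: next hops whose Bloom filters hit for dst."""
    if dst in fp_cache:
        return fp_cache[dst]      # later packets skip the ambiguity entirely
    if len(matches) > 1:
        # Multiple hits mean a false positive occurred: resolve it with a
        # conventional lookup and remember the answer.
        nh = slow_path_lookup(dst)
        fp_cache[dst] = nh
        return nh
    return matches[0]             # single hit: the correct next hop

# Toy slow path: a stub standing in for the full FIB lookup.
fib = {"192.0.2.7": 2}
nh = forward_with_cache("192.0.2.7", [1, 2], fib.get)
```

Because only destinations that actually triggered a false positive are cached, the cache stays tiny, and an adversary scanning many destinations mostly generates single-hit lookups that never enter it.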
Outline • Optimize memory usage • Handle false positives • Handle routing dynamics
Problem of Bloom Filters • Routing changes • Add/delete entries in BFs • Problem of Bloom Filters (BF) • Do not allow deleting an element • Counting Bloom Filters (CBF) • Use a counter instead of a bit in the array • CBFs can handle adding/deleting elements • But, require more memory than BFs
Update on Routing Change • Use CBF in slow memory • Assist BF to handle forwarding-table updates • Easy to add/delete a forwarding-table entry [Figure: a route deletion is applied to the CBF in slow memory, which then updates the BF in fast memory]
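A minimal counting-Bloom-filter sketch showing why the CBF in slow memory can absorb deletions and then regenerate the fast-memory BF (class and helper names are assumptions; the hashing scheme is illustrative):

```python
import hashlib

def positions(x, m, s=4):
    for i in range(s):
        yield int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % m

class CBF:
    """Counting Bloom filter kept in slow memory; mirrors the fast-path BF."""
    def __init__(self, m, s=4):
        self.m, self.s = m, s
        self.counters = [0] * m

    def add(self, x):
        for p in positions(x, self.m, self.s):
            self.counters[p] += 1

    def delete(self, x):
        # Deletion is safe because counters track how many elements
        # touched each slot; a plain BF cannot do this.
        for p in positions(x, self.m, self.s):
            self.counters[p] -= 1

    def to_bf(self):
        # Regenerate the fast-memory bit array: bit set iff counter > 0.
        return [1 if c > 0 else 0 for c in self.counters]

cbf = CBF(m=256)
cbf.add("route-A")
cbf.add("route-B")
cbf.delete("route-A")           # route withdrawn by an update
bf = cbf.to_bf()
```

The counters cost several bits per slot, but since the CBF lives in cheap slow memory off the forwarding path, only the regenerated one-bit-per-slot BF needs to fit in SRAM.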
Occasionally Resize BF • Under significant routing changes • Number of addresses in BFs changes significantly • Re-optimize BF sizes • Use CBF to assist resizing BF • Large CBF and small BF • Easy to expand BF size by contracting CBF [Figure: the small BF is hard to expand to size 4, while the large CBF is easy to contract to size 4]
Prototype and Evaluation • Prototype in kernel-level Click • Experiment environment • 3.0 GHz 64-bit Intel Xeon • 2 MB L2 data cache, used as the fast memory of size M • Forwarding table • 10 next hops, 200K entries • Peak forwarding rate • 365 Kpps for 64-byte packets • 10% faster than conventional lookup
Conclusion • Improve packet forwarding for enterprise edge routers • Use Bloom filters to represent forwarding table • Only require a small SRAM • Optimize usage of a fixed small memory • Multiple ways to handle false positives • Leverage properties of enterprise edge routers • React quickly to FIB updates • Leverage Counting Bloom Filter in slow memory
Ongoing Work: BUFFALO • Bloom filter forwarding in large enterprises • Deploy BF-based switches in the entire network • Forward all packets on the fast path • Gracefully handle false positives • Randomly select a matching next hop • Techniques to avoid loops and bound path stretch • www.cs.princeton.edu/~minlanyu/writeup/conext09.pdf
Thanks • Questions?