BUFFALO: Bloom Filter Forwarding Architecture for Large Organizations
Minlan Yu, Princeton University (minlanyu@cs.princeton.edu)
Joint work with Alex Fabrikant and Jennifer Rexford
Large-scale SPAF Networks
• Large layer-2 network on flat addresses
  • Simple configuration and management
  • Useful for enterprise and data center networks
• SPAF: Shortest Path on Addresses that are Flat
  • Flat addresses (e.g., MAC addresses)
  • Shortest-path routing (link state, distance vector, spanning tree protocol)
[Figure: example SPAF topology of switches (S) and hosts (H)]
Challenges
• Scaling the control plane
  • Topology and host information dissemination
  • Route computation
  • Recent practical solutions (TRILL, SEATTLE)
• Scaling the data plane
  • Forwarding table growth (in the number of hosts and switches)
  • Increasing link speed
  • Remains a challenge
Existing Data Plane Techniques
• TCAM (ternary content-addressable memory)
  • Expensive and power hungry
• Hash table in SRAM
  • Crashes or performs poorly when out of memory
  • Difficult and expensive to upgrade memory
BUFFALO: Bloom Filter Forwarding
• Bloom filters in fast memory (SRAM)
  • A compact data structure for representing a set of elements
  • Compute s hash functions to store element x
  • Easy to check membership
  • Reduces memory at the expense of false positives
[Figure: a bit array V0 … Vm-1; element x is hashed by h1(x), h2(x), …, hs(x) to set bits]
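As a concrete illustration, here is a minimal Bloom filter sketch in Python. The hashing scheme (double hashing over a blake2b digest) and the parameter names are assumptions of this sketch, not the BUFFALO implementation.

```python
import hashlib

class BloomFilter:
    """Minimal m-bit Bloom filter with s hash functions (illustrative sketch)."""

    def __init__(self, m, s):
        self.m = m                      # number of bits
        self.s = s                      # number of hash functions
        self.bits = bytearray((m + 7) // 8)

    def _positions(self, x):
        # Derive s bit positions by double hashing over one digest; this hashing
        # scheme is an assumption of the sketch, not the one used in BUFFALO.
        d = hashlib.blake2b(x.encode()).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.s)]

    def add(self, x):
        for p in self._positions(x):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, x):
        # Never False for a stored element; may be spuriously True (false positive).
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(x))
```

For example, bf = BloomFilter(m=1_000_000, s=8); bf.add("00:1a:2b:3c:4d:5e") stores a MAC address, and the membership check "00:1a:2b:3c:4d:5e" in bf then returns True.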
BUFFALO: Bloom Filter Forwarding
• One Bloom filter (BF) per next hop
• Each BF stores all the addresses that are forwarded to that next hop
[Figure: a packet's destination address is queried against the Bloom filters for next hops 1 … T; a hit selects the next hop]
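A sketch of the per-next-hop lookup, reusing the BloomFilter class above; the class and method names are assumptions for illustration, not the prototype's data structures.

```python
class BuffaloFIB:
    """One Bloom filter per next hop; a lookup returns every matching next hop."""

    def __init__(self, next_hops, m_per_bf, s):
        self.bfs = {hop: BloomFilter(m_per_bf, s) for hop in next_hops}

    def install(self, dst_addr, next_hop):
        # Add the destination address to the BF of its next hop.
        self.bfs[next_hop].add(dst_addr)

    def lookup(self, dst_addr):
        # Normally exactly one BF matches; false positives can add extra matches.
        return [hop for hop, bf in self.bfs.items() if dst_addr in bf]
```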
Optimize Memory Usage
• Goal: minimize the overall false-positive rate
  • The probability that any one BF has a false positive
• Input:
  • Fast memory size M
  • Number of destinations per next hop
  • The maximum number of hash functions
• Output: the size of each Bloom filter
  • A larger Bloom filter for a next hop that has more destinations
Optimize Memory Usage (cont.)
• Constraints
  • Memory constraint: sum of all BF sizes <= fast memory size M
  • Bound on the number of hash functions, to bound CPU calculation time
  • Bloom filters share the same hash functions
• Proved to be a convex optimization problem
  • An optimal solution exists
  • Solved with IPOPT (Interior Point OPTimizer)
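As a rough sketch of the optimization, using the standard Bloom filter false-positive approximation (the exact formulation in the paper may differ): let m_i be the size of the BF for next hop i, n_i the number of destinations behind it, and s the shared number of hash functions.

```latex
\begin{align*}
\text{minimize} \quad & \sum_{i=1}^{T} f_i,
  \qquad f_i \approx \left(1 - e^{-s\, n_i / m_i}\right)^{s} \\
\text{subject to} \quad & \sum_{i=1}^{T} m_i \le M, \qquad s \le s_{\max}
\end{align*}
```

Each f_i shrinks as its m_i grows, which is why next hops with more destinations are given larger Bloom filters.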
Minimize False Positives
• A FIB with 200K entries and 10 next hops
• 8 hash functions
Comparing with Hash Table
• Saves 65% memory at a 0.1% false-positive rate
Handle False Positives
• Design goals
  • Should not modify the packet
  • Never go to slow memory
  • Ensure timely packet delivery
• BUFFALO solution (see the sketch below)
  • Exclude the incoming interface: avoids loops in the one-false-positive case
  • Random selection from the matching next hops: guarantees reachability with multiple false positives
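A minimal sketch of that forwarding decision, building on BuffaloFIB above; the function name and the handling of the no-match corner case are assumptions, not the exact BUFFALO logic.

```python
import random

def forward(fib, dst_addr, in_hop):
    """Choose a next hop: drop the incoming interface from the matches, then
    pick uniformly at random among the rest (the slide's two rules)."""
    matches = [hop for hop in fib.lookup(dst_addr) if hop != in_hop]
    if not matches:
        # Only the incoming interface matched (or nothing did); how BUFFALO
        # resolves this corner case is not covered by the slide.
        return None
    return random.choice(matches)
```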
One False Positive
• The most common case: one false positive
• When there are multiple matching next hops, avoid sending to the incoming interface
• We prove the packet follows at most a 2-hop loop, with a stretch <= l(AB) + l(BA)
[Figure: the shortest path from A to dst, plus a false-positive link from A to B]
Multiple False Positives
• Handling multiple false positives
  • Random selection from the matching next hops
  • A random walk on the shortest-path tree plus a few false-positive links
  • Eventually finds a way to the destination
[Figure: shortest-path tree for dst with an added false-positive link]
Multiple False Positives (cont.)
• Provable stretch bound
  • The expected stretch with k false positives is bounded, proved using random walk theory
• The stretch bound is actually not bad
  • False positives are independent: different switches use different hash functions
  • k false positives for one packet are rare: the probability decreases exponentially with k
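A back-of-the-envelope justification for the last point (my own sketch, not the paper's analysis): with T next-hop Bloom filters, each giving an independent false positive with probability f, the chance of k spurious matches for one packet is

```latex
\Pr[k \text{ false positives}]
  \;=\; \binom{T-1}{k}\, f^{k} (1-f)^{T-1-k}
  \;\le\; (T f)^{k},
```

which decays geometrically in k whenever $Tf < 1$ (e.g. T = 10 next hops and f = 0.1%).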
Stretch in Campus Network
• When fp = 0.001%, 99.9% of packets have no stretch
• When fp = 0.5%:
  • 0.0002% of packets have a stretch of 6 times the shortest-path length
  • 0.0003% of packets have a stretch equal to the shortest-path length
Handle Routing Dynamics
• Counting Bloom filters (CBF)
  • A CBF can handle adding and deleting elements
  • Takes more memory than a Bloom filter (BF)
• BUFFALO solution: use a CBF to assist the BF (see the sketch below)
  • CBF in slow memory handles FIB updates
  • Occasionally re-optimize BF sizes based on the CBF
[Figure: the BF is hard to expand to a new size, but the CBF is easy to contract to size 4]
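A minimal counting Bloom filter sketch, reusing the hashing from the BloomFilter class above. Keeping the CBF in slow memory and deriving the fast-memory BF from it follows the slide; the method names and the divisibility assumption for contraction are assumptions of this sketch.

```python
class CountingBloomFilter:
    """Counters instead of bits, so FIB updates can also remove elements."""

    def __init__(self, m, s):
        self.m, self.s = m, s
        self.counts = [0] * m
        self._pos = BloomFilter(m, s)._positions   # same position scheme as the BF sketch

    def add(self, x):
        for p in self._pos(x):
            self.counts[p] += 1

    def remove(self, x):
        for p in self._pos(x):
            self.counts[p] -= 1

    def contract_to_bf(self, m_small):
        # Fold counters modulo m_small into a plain BF for fast memory; positions
        # stay consistent as long as m_small divides m (an assumption here).
        bf = BloomFilter(m_small, self.s)
        for p, c in enumerate(self.counts):
            if c > 0:
                q = p % m_small
                bf.bits[q // 8] |= 1 << (q % 8)
        return bf
```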
BUFFALO Switch Architecture
• Prototype implemented in kernel-level Click
Prototype Evaluation
• Environment
  • 3.0 GHz 64-bit Intel Xeon
  • 2 MB L2 data cache, used as the fast memory of size M
• Forwarding table
  • 10 next hops
  • 200K entries
Prototype Evaluation
• Peak forwarding rate
  • 365 Kpps, i.e., 1.9 μs per packet
  • 10% faster than the hash-based solution, EtherSwitch
• Performance with FIB updates
  • 10.7 μs to update a route
  • 0.47 s to reconstruct the BFs based on the CBFs, on another core without disrupting packet lookup
  • Swapping in the new BFs is fast
Conclusion
• Improves data plane scalability in SPAF networks
• Complementary to recent Ethernet advances
• Three nice properties of BUFFALO
  • Small memory requirement
  • Stretch increases gracefully as the forwarding table grows
  • Reacts quickly to routing updates
Papers and Future Work
• Paper in submission to CoNEXT
  • www.cs.princeton.edu/~minlanyu/writeup/conext09.pdf
• Preliminary work in the WREN workshop
  • Uses Bloom filters to reduce memory for enterprise edge routers
• Future work
  • Reduce transient loops in OSPF using a random walk on the shortest-path tree
Thanks! • Questions?