Optimizing the ‘One Big Switch’ Abstraction in Software Defined Networks
Nanxi Kang, Princeton University
In collaboration with Zhenming Liu, Jennifer Rexford, David Walker
Software Defined Network
• Decouple data and control plane
• A logically centralized control plane (controller)
• Standard protocol, e.g., OpenFlow
[Figure: the controller compiles network policies into rules installed on the switches]
Existing control platform
• Decouple data and control plane
• A logically centralized control plane (controller)
• Standard protocol, e.g., OpenFlow
✔ Flexible policies
✖ Easy management
‘One Big Switch’ Abstraction
• Endpoint policy E, e.g., ACL, load balancer
• Routing policy R, e.g., shortest-path routing
• Automatic rule placement compiles both onto the physical switches
[Figure: one-big-switch view over hosts H1–H3, with example rules: From H1, dstIP = 0* => go to H2; From H1, dstIP = 1* => go to H3]
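To make the abstraction concrete, here is a minimal Python sketch of the two inputs that rule placement consumes. All names (hosts, switch IDs, the dictionary layout) are hypothetical illustrations, not the authors' actual API:

```python
# A minimal sketch of the two inputs to automatic rule placement.
# All identifiers here are hypothetical illustrations.

# Endpoint policy E: a prioritized list of (match, action) rules,
# written as if the network were one big switch.
endpoint_policy = [
    {"in_host": "H1", "dstIP": "0*", "action": "fwd H2"},
    {"in_host": "H1", "dstIP": "1*", "action": "fwd H3"},
]

# Routing policy R: which path each (ingress, egress) pair uses,
# e.g., produced by shortest-path routing.
routing_policy = {
    ("H1", "H2"): ["S1", "S2"],   # hypothetical switch IDs
    ("H1", "H3"): ["S1", "S3"],
}

# The rule-placement algorithm's job: compile (E, R) into per-switch
# rule tables that respect each switch's TCAM capacity.
```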
Challenges of Rule Placement
• Endpoint policies can require >10k rules
• Switch TCAMs hold only 1k–2k rules
[Figure: endpoint policy E and routing policy R fed to automatic rule placement over a multi-switch topology]
Past work
• Nicira
  • Install endpoint policies on ingress switches
  • Encapsulate packets to the destination
  • Only applies when the ingress switches are software switches
• DIFANE
• Palette
Contributions
• Design a new rule placement algorithm
  • Realize high-level network policies
  • Stay within the rule capacity of switches
  • Handle policy updates incrementally
• Evaluation on real and synthetic policies
Problem Statement
• Inputs: topology, endpoint policy E, routing policy R, per-switch rule capacities
• Goals: stay within capacity; minimize the total number of installed rules
[Figure: automatic rule placement over a topology with per-switch capacities such as 0.5k and 1k rules]
Algorithm Flow
[Figure: the three steps of the algorithm]
Single Path
• Routing policy is trivial
[Figure: a path of switches with rule capacities C1, C2, C3]
Endpoint policy
R1: (srcIP = 0*, dstIP = 00), permit
R2: (srcIP = 01, dstIP = 1*), permit
R3: (srcIP = **, dstIP = 11), deny
R4: (srcIP = 11, dstIP = **), permit
R5: (srcIP = 10, dstIP = 0*), permit
R6: (srcIP = **, dstIP = **), deny
Map rule to rectangle
R1: (0*, 00), P    R2: (01, 1*), P    R3: (**, 11), D
R4: (11, **), P    R5: (10, 0*), P    R6: (**, **), D
[Figure: each rule drawn as an axis-aligned rectangle in the (srcIP, dstIP) grid over 00–11; first-switch capacity C1 = 4]
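The rectangle view can be computed directly from the wildcard patterns. A minimal sketch, assuming the slide's 2-bit toy addresses (the helper names are my own):

```python
# A sketch of mapping a prefix-wildcard rule to an axis-aligned
# rectangle in (srcIP, dstIP) space, using 2-bit toy addresses.

def wildcard_to_interval(pattern):
    """'0*' covers 00 and 01 -> the half-open integer range (0, 2)."""
    lo = int(pattern.replace("*", "0"), 2)
    hi = int(pattern.replace("*", "1"), 2)
    return (lo, hi + 1)

def rule_to_rectangle(src_pattern, dst_pattern):
    return (wildcard_to_interval(src_pattern), wildcard_to_interval(dst_pattern))

# R1: (srcIP = 0*, dstIP = 00)
print(rule_to_rectangle("0*", "00"))   # ((0, 2), (0, 1))
# R3: (srcIP = **, dstIP = 11)
print(rule_to_rectangle("**", "11"))   # ((0, 4), (3, 4))
```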
Pick rectangle for every switch
[Figure: candidate rectangles over the rule grid containing R1–R5]
Select a rectangle
• Overlapped rules: R2, R3, R4, R6
• Internal rules: R2, R3
• Constraint: #overlapped rules ≤ C1 (= 4)
[Figure: candidate rectangle q over the rule grid]
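A sketch of this classification over half-open rectangles ((src_lo, src_hi), (dst_lo, dst_hi)) from the previous snippet; the rectangle q below is chosen to reproduce the slide's example, and the helper names are my own:

```python
# "Overlapped" rules intersect rectangle q; "internal" rules lie
# entirely inside q. The packing constraint is #overlapped <= C1.

def intersects(r, q):
    (a1, a2), (b1, b2) = r
    (x1, x2), (y1, y2) = q
    return a1 < x2 and x1 < a2 and b1 < y2 and y1 < b2

def inside(r, q):
    (a1, a2), (b1, b2) = r
    (x1, x2), (y1, y2) = q
    return x1 <= a1 and a2 <= x2 and y1 <= b1 and b2 <= y2

def classify(rules, q):
    overlapped = [name for name, r in rules.items() if intersects(r, q)]
    internal = [name for name, r in rules.items() if inside(r, q)]
    return overlapped, internal

rules = {"R1": ((0, 2), (0, 1)),   # (0*, 00)
         "R2": ((1, 2), (2, 4)),   # (01, 1*)
         "R3": ((0, 4), (3, 4)),   # (**, 11)
         "R4": ((3, 4), (0, 4)),   # (11, **)
         "R5": ((2, 3), (0, 2)),   # (10, 0*)
         "R6": ((0, 4), (0, 4))}   # (**, **)
q = ((0, 4), (2, 4))               # candidate rectangle: dstIP in {10, 11}
print(classify(rules, q))          # (['R2', 'R3', 'R4', 'R6'], ['R2', 'R3'])
```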
Install rules in first switch
[Figure: the first switch installs the rules overlapping q, clipped to q: R2, R3, and R'4, fitting within C1 = 4]
Rewrite policy
• Add a rule that forwards everything in q: packets in q were fully processed by the first switch, so they skip the original policy
• The remaining rules (here R1, R4, R5, and the default R6) form the residual policy for the rest of the path
[Figure: rewritten rule grid with q collapsed into a single forwarding rule]
Overhead of rules
• #Installed rules ≥ |endpoint policy|: non-internal rules cannot be deleted from the residual policy, so they are duplicated downstream
• Objective in picking rectangles: maximize (#internal rules) / (#overlapped rules)
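Under that objective, a simple selection loop might look like this, reusing classify() from the previous sketch. This is illustrative only; enumerating good candidate rectangles is the hard part of the real algorithm and is left abstract here:

```python
def pick_rectangle(rules, candidates, capacity):
    """Among feasible candidates (#overlapped <= capacity), return the
    rectangle maximizing #internal / #overlapped."""
    best, best_ratio = None, -1.0
    for q in candidates:
        overlapped, internal = classify(rules, q)
        if 0 < len(overlapped) <= capacity:
            ratio = len(internal) / len(overlapped)
            if ratio > best_ratio:
                best, best_ratio = q, ratio
    return best
```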
Algorithm Flow
[Figure: the three steps of the algorithm; next, the general graph case]
Topology = {Paths}
• Routing policy
  • Implemented by installing forwarding rules on switches
  • Induces a set of paths
[Figure: a topology among hosts H1–H3 decomposed into its paths]
Project endpoint policy to paths
• To enforce the endpoint policy, project it onto each path
  • Each projection only handles packets that use its path
• Solve rule placement for each path independently
[Figure: endpoint policy E projected into per-path policies E1–E4]
What is the next step?
✔ Decomposition to paths
✔ Solve rule placement over paths
• Divide rule space across paths
  • Estimate the rules needed by each path
  • Partition rule space by linear programming
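A toy version of the partitioning step, in the spirit of this slide rather than the paper's exact formulation: allocate each switch's rule capacity across the paths that traverse it, meeting each path's estimated demand, via scipy's LP solver. All numbers are made up:

```python
# x[(p, s)] is the rule space on switch s allocated to path p: meet each
# path's estimated demand while keeping every switch within capacity.
from scipy.optimize import linprog

uses = {0: [0, 1], 1: [1, 2]}      # path 0 -> switches 0,1; path 1 -> 1,2
demand = {0: 800, 1: 600}          # estimated #rules needed per path
capacity = [500, 1000, 500]        # per-switch TCAM capacity

var = [(p, s) for p in uses for s in uses[p]]
c = [1.0] * len(var)               # minimize total allocated rule space

A_ub, b_ub = [], []
for p in uses:                     # demand: sum_s x[p,s] >= demand[p]
    A_ub.append([-1.0 if v[0] == p else 0.0 for v in var])
    b_ub.append(-demand[p])
for s in range(len(capacity)):     # capacity: sum_p x[p,s] <= capacity[s]
    A_ub.append([1.0 if v[1] == s else 0.0 for v in var])
    b_ub.append(capacity[s])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(var))
print(dict(zip(var, res.x)))       # rule space per (path, switch)
```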
Algorithm Flow
[Figure: the three-step flow with feedback: if a per-path placement fails, return to step 2 and repartition; on success, the placement is done]
Roadmap
• Design a new rule placement algorithm
  • Stay within the rule capacity of switches
  • Minimize the total number of installed rules
• Handle policy updates incrementally
  • Fast to make changes
  • Compute the new placement in the background
• Evaluation on real and synthetic policies
Insert a rule into a path
• Limited impact: update only a subset of switches (see the sketch below)
• Respect the original rectangle selection: the new rule R is clipped to each affected switch's rectangle (R → R')
[Figure: rule R inserted along the path, updating only the switches whose rectangles it overlaps]
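A minimal sketch of that update rule, reusing intersects() from the rectangle snippet above; the helper names and data layout are my own:

```python
def clip(r, q):
    """Intersection of rule rectangle r with switch rectangle q."""
    (a1, a2), (b1, b2) = r
    (x1, x2), (y1, y2) = q
    return ((max(a1, x1), min(a2, x2)), (max(b1, y1), min(b2, y2)))

def insert_rule(new_rule, switch_rects):
    """Return {switch: clipped rule} for only the switches whose chosen
    rectangle the new rule overlaps; all other switches are untouched."""
    return {s: clip(new_rule, q)
            for s, q in switch_rects.items() if intersects(new_rule, q)}
```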
Roadmap
• Design a new rule placement algorithm
  • Stay within the rule capacity of switches
  • Minimize the total number of installed rules
• Handle policy updates incrementally
• Evaluation on real and synthetic policies
  • ACLs (campus network), ClassBench
  • Shortest-path routing on GT-ITM topologies
Path
• Assume all switches have the same rule capacity
• Find the minimum #rules/switch that gives a feasible rule placement
• Overhead = #rules/switch × #switches - |E|
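Since feasibility is monotone in capacity, the minimum #rules/switch can be found by binary search, and the overhead then follows from the slide's formula. A hedged sketch with made-up numbers, where try_place stands in for a full per-path placement routine:

```python
def min_feasible_capacity(try_place, lo=1, hi=10_000):
    """Smallest per-switch capacity for which try_place(capacity)
    succeeds; assumes feasibility is monotone in capacity."""
    while lo < hi:
        mid = (lo + hi) // 2
        if try_place(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

def overhead(rules_per_switch, num_switches, policy_size):
    """Extra installed rules beyond the endpoint policy size |E|."""
    return rules_per_switch * num_switches - policy_size

# e.g., a 5-switch path with |E| = 1000 rules, feasible at 240 rules/switch:
print(overhead(240, 5, 1000))   # 200 extra rules
```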
#Extra installed rules vs. length
[Plot: normalized #extra rules vs. path length]
Data set matters
• Real ACL policies
[Plot: normalized #extra rules vs. path length; policies with many rule overlaps incur more extra rules than policies with few rule overlaps]
Place rules on a graph
• #Installed rules: use rule space on switches efficiently
• Unwanted traffic: drop unwanted traffic early
• Computation time: compute rule placement quickly
Carry extra traffic along the path
• Rules are installed along the path, so not all packets are handled by the first hop
• Unwanted packets travel further before being dropped
• Quantify the effect of carrying unwanted traffic, assuming traffic matching drop rules is uniformly distributed (see the sketch below)
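A sketch of the quantity plotted on the next slide: under the uniform-traffic assumption, the unwanted traffic dropped by hop k is proportional to the drop-rule space installed at hops 1..k. The per-hop numbers below are made up:

```python
def cumulative_drop_fraction(drop_rule_space_per_hop):
    """Fraction of unwanted traffic dropped up to each hop, assuming
    drops are proportional to installed drop-rule space."""
    total = sum(drop_rule_space_per_hop)
    dropped, curve = 0, []
    for space in drop_rule_space_per_hop:
        dropped += space
        curve.append(dropped / total)
    return curve

# A front-loaded placement drops most unwanted traffic at the edge:
print(cumulative_drop_fraction([600, 200, 100, 100]))  # [0.6, 0.8, 0.9, 1.0]
```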
When unwanted traffic is dropped
• An example single path
[Plot: fraction of unwanted traffic dropped up to each switch vs. fraction of path travelled]
Aggregating all paths
• Minimum #rules/switch for a feasible rule placement
[Plot: results aggregated over all paths]
Give a bit more rule space
• With extra capacity, more rules can be placed at the first several switches along the path, so unwanted traffic is dropped earlier
Take-aways
• Paths: low overhead in installing rules
• Rule capacity is shared efficiently across paths
• Most unwanted traffic is dropped at the edge
• Fast algorithm, easily parallelized
  • < 8 seconds to compute all paths
Summary
• Contributions
  • An efficient rule placement algorithm
  • Support for incremental updates
  • Evaluation on real and synthetic data
• Future work
  • Integrate with SDN controllers, e.g., Pyretic
  • Combine rule placement with rule caching