Design and Implementation of a Consolidated Middlebox Architecture
Vyas Sekar, Sylvia Ratnasamy, Michael Reiter, Norbert Egi, Guangyu Shi
Need for Network Evolution
• New applications
• Evolving threats
• Policy constraints
• New devices
Drivers: performance, security, compliance
Network Evolution Today: Middleboxes!
• Data from a large enterprise: >80K users across tens of sites
• Network security appliances alone are a $10 billion market
Key “pain points”
• Narrow interfaces, specialized boxes, per-box management
• Increases capital expenses & sprawl
• Increases operating expenses
• Limits extensibility and flexibility
Middleboxes today are “point” solutions!
Outline • Motivation • High-level idea: Consolidation • System design • Implementation and Evaluation
Key Idea: Consolidation
Two levels, corresponding to the two sources of inefficiency:
1. Consolidate the platform
2. Consolidate management (network-wide controller)
Consolidation at the Platform Level
• Today: independent, specialized boxes (app filter, proxy, firewall, IDS/IPS)
• Instead: decouple hardware and software
• Run on commodity hardware: e.g., PacketShader, RouteBricks, ServerSwitch, SwitchBlade
Consolidation reduces capital expenses and sprawl
Consolidation reduces CapEx
Multiplexing benefit = (sum of per-application peak utilizations) / (peak of the total utilization)
Today each box must be provisioned for its own peak; a shared platform only needs to be provisioned for the peak of the aggregate.
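As a back-of-the-envelope sketch of this ratio (the utilization numbers are made up for illustration, not data from the talk), the benefit can be computed from per-application utilization time series:

```python
# Hypothetical per-application utilization over time (illustrative values).
utilization = {
    "firewall": [0.2, 0.9, 0.3, 0.1],
    "ids":      [0.1, 0.2, 0.8, 0.3],
    "proxy":    [0.7, 0.1, 0.2, 0.2],
}

# Today: each dedicated box is provisioned for its own peak.
sum_of_peaks = sum(max(series) for series in utilization.values())

# Consolidated: one platform is provisioned for the peak of the aggregate.
aggregate = [sum(vals) for vals in zip(*utilization.values())]
peak_of_sum = max(aggregate)

benefit = sum_of_peaks / peak_of_sum
print(round(benefit, 2))  # → 1.85 with these numbers
```

Because the applications peak at different times, the aggregate peak is smaller than the sum of the individual peaks, so the ratio exceeds 1.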
Consolidation Enables Extensibility
• Applications (VPN, web mail, IDS, proxy, firewall) share reusable modules: protocol parsers, session management
• Contribution of reusable modules: 30–80%
Management consolidation enables flexible resource allocation
• Today: all processing of a traffic class (load P) happens at its logical “ingress” node N1, which overloads
• Network-wide: spread the same work along the path, e.g., 0.4P at N1, 0.3P at N2, 0.3P at N3
Network-wide distribution reduces load imbalance
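The effect of spreading the slide's load P along the path can be sketched in a few lines (the 0.4/0.3/0.3 split is the one shown on the slide):

```python
# One traffic class with processing load P, traversing N1 -> N2 -> N3.
P = 1.0

# Today: everything runs at the logical ingress N1.
ingress_only = {"N1": P, "N2": 0.0, "N3": 0.0}

# Network-wide: the same work is split along the path.
distributed = {"N1": 0.4 * P, "N2": 0.3 * P, "N3": 0.3 * P}

print(max(ingress_only.values()))  # → 1.0 (N1 carries everything)
print(max(distributed.values()))   # → 0.4 (peak load drops 2.5x)
```

The total work is unchanged; only the peak per-node load falls, which is exactly the imbalance the management layer targets.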
Outline • Motivation • High-level idea: Consolidation • CoMb: System design • Implementation and Evaluation
CoMb System Overview
• Network-wide controller: logically centralized, e.g., NOX, 4D
• Platforms: general-purpose hardware, e.g., PacketShader, RouteBricks, ServerSwitch
• Existing work targets a simple, homogeneous, routing-like workload
• Middleboxes are complex and heterogeneous, but offer new opportunities
CoMb Management Layer
Goal: balance load across the network, leveraging multiplexing, reuse, and distribution
• Inputs: resource requirements, policy constraints, routing and traffic
• Output: processing responsibilities for each node
Capturing Reuse with HyperApps
• A hyperApp is the union of the applications that must run on a traffic class; shared modules are counted once
• Example: HTTP = IDS & Proxy, with a footprint of 1+2 units of CPU and 1+3 units of memory (the first term is the common module); NFS = Proxy; UDP = IDS
• Per-packet policy and reuse dependencies become implicit in the hyperApp's footprint
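A minimal sketch of the footprint computation, with hypothetical module names and costs chosen to reproduce the slide's HTTP numbers (the real module decomposition is in the paper, not here):

```python
# Per-module (cpu, mem) costs -- illustrative numbers only.
MODULES = {
    "session":    (1, 1),  # shared session-management module
    "ids_core":   (1, 2),  # IDS-specific analysis
    "proxy_core": (1, 1),  # proxy-specific logic
}

# Each application is a set of modules it depends on.
APPS = {
    "IDS":   {"session", "ids_core"},
    "Proxy": {"session", "proxy_core"},
}

def hyperapp_footprint(app_names):
    """CPU/memory footprint of the union of the apps' modules:
    shared modules are counted exactly once."""
    modules = set().union(*(APPS[a] for a in app_names))
    cpu = sum(MODULES[m][0] for m in modules)
    mem = sum(MODULES[m][1] for m in modules)
    return cpu, mem

# Running IDS and Proxy as separate boxes vs. as one hyperApp:
naive = tuple(sum(x) for x in zip(*(hyperapp_footprint([a]) for a in APPS)))
shared = hyperapp_footprint(["IDS", "Proxy"])
print(naive, shared)  # → (4, 5) (3, 4): the session module is paid once
```

With these costs the HTTP hyperApp needs 1+2 CPU and 1+3 memory, matching the slide's example.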
Modeling Processing Coverage
• Policy for HTTP traffic: run IDS before Proxy (IDS < Proxy)
• Traffic from N1 to N3 traverses N1, N2, and N3, which process fractions 0.4, 0.3, and 0.3 of the class
• Question: what fraction of each class (e.g., HTTP from N1 to N3) should each node process?
Network-wide Optimization
Minimize the maximum load, subject to:
• Processing coverage for each class of traffic: the processed fractions along each path sum to 1
• Load on each node: the sum of its hyperApp responsibilities across paths
• No explicit dependency or policy constraints are needed (they are folded into the hyperApps)
A simple, tractable linear program; within 0.1% of the theoretical optimum
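A hedged sketch of this LP using SciPy (the authors used CPLEX; the topology, volumes, and footprints below are invented for illustration). Variables are the per-node processing fractions for each class, plus an auxiliary variable z for the maximum load:

```python
import numpy as np
from scipy.optimize import linprog

nodes = ["N1", "N2", "N3"]
# (path, traffic volume, per-unit hyperApp footprint) per traffic class
classes = [
    (["N1", "N2", "N3"], 10.0, 1.0),   # e.g., HTTP from N1 to N3
    (["N2", "N3"],        5.0, 2.0),   # e.g., NFS from N2 to N3
]

# One fraction variable per (class, on-path node), plus z = max load (last).
idx = {}
for c, (path, _, _) in enumerate(classes):
    for n in path:
        idx[(c, n)] = len(idx)
nvars = len(idx) + 1
z = len(idx)

c_obj = np.zeros(nvars)
c_obj[z] = 1.0                                    # minimize z

A_eq, b_eq = [], []                               # coverage: fractions sum to 1
for c, (path, _, _) in enumerate(classes):
    row = np.zeros(nvars)
    for n in path:
        row[idx[(c, n)]] = 1.0
    A_eq.append(row); b_eq.append(1.0)

A_ub, b_ub = [], []                               # per-node load <= z
for n in nodes:
    row = np.zeros(nvars)
    for c, (path, vol, foot) in enumerate(classes):
        if n in path:
            row[idx[(c, n)]] = vol * foot
    row[z] = -1.0
    A_ub.append(row); b_ub.append(0.0)

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * nvars, method="highs")
print(round(res.fun, 3))  # → 6.667: total work 20 spread evenly over 3 nodes
```

Note what is absent: no ordering or policy constraints appear in the LP, because each class's work is already bundled into a single hyperApp footprint.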
CoMb Platform
• Applications (IDS, proxy, …) spread across cores — challenges: performance, parallelization, isolation
• Policy enforcement via a policy shim (PShim), e.g., IDS < Proxy — challenges: stay lightweight, parallelize
• NIC-level traffic classification (e.g., HTTP) — challenges: contention-free, fast classification
Parallelizing Application Instances
App-per-core (each module M1, M2, M3 pinned to its own core):
• + No in-core context switches
• − Inter-core communication
• − More work for the PShim
HyperApp-per-core (each core runs a full hyperApp):
• + Keeps data structures core-local
• + Better for reuse
• − Incurs context switches
• − Needs replicas of modules
HyperApp-per-core is better or comparable; contention does not seem to matter!
CoMb Platform Design
• NIC hardware steers traffic into contention-free per-core queues (Q1–Q5)
• Each core runs a PShim plus its assigned hyperApps, keeping all processing core-local
• Module instances (M1–M5) are replicated where needed; workload balancing spreads hyperApps across cores
Parallel, core-local processing; contention-free network I/O
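The workload-balancing step can be illustrated with a toy greedy placement (this is a sketch, not the paper's actual scheduler): pin each hyperApp instance to the currently least-loaded core, largest instances first.

```python
import heapq

def assign_to_cores(hyperapps, num_cores):
    """Greedy longest-processing-time placement of (name, load) pairs
    onto cores; returns one (load, core_id, apps) entry per core."""
    cores = [(0.0, c, []) for c in range(num_cores)]  # min-heap by load
    heapq.heapify(cores)
    for name, load in sorted(hyperapps, key=lambda x: -x[1]):
        total, cid, apps = heapq.heappop(cores)       # least-loaded core
        apps.append(name)
        heapq.heappush(cores, (total + load, cid, apps))
    return sorted(cores, key=lambda c: c[1])

# Hypothetical hyperApp instances with relative loads.
placement = assign_to_cores(
    [("IDS&Proxy", 0.5), ("Firewall", 0.2), ("IDS", 0.4), ("Proxy", 0.3)], 2)
for load, cid, apps in placement:
    print(f"core {cid}: {apps} (load {load:.1f})")  # both cores end at 0.7
```

Each hyperApp stays whole on its core, so the core-local property of the design is preserved while loads stay balanced.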
Outline • Motivation • High-level idea: Consolidation • System design: Making Consolidation Practical • Implementation and Evaluation
Implementation
• Network-wide management: optimization implemented with CPLEX
• Platform: kernel-mode Click, with the policy shim as a Click element
• Extensible apps: protocol and session modules, with logic ported from Bro
• Standalone apps: attached via memory-mapped or virtual interfaces
• Testbed: 8-core Intel Xeon server with an Intel 82599 NIC
Consolidation is Practical • Low overhead for existing applications • Controller takes < 1.6s for 52-node topology • 5x better than VM-based consolidation
Benefits: Reduction in Maximum Load
Metric: MaxLoad_today / MaxLoad_consolidated
Consolidation reduces the maximum load by 2.5–25×
Benefits: Reduction in Provisioning Cost
Metric: Provisioning_today / Provisioning_consolidated
Consolidation reduces provisioning cost by 1.8–2.5×
Discussion
• Changes the traditional middlebox vendor business model
• Already happening (e.g., “virtual appliances”)
• The benefits imply someone will do it!
• Vendors may already have extensible stacks internally!
• Isolation
• Current prototype relies on process-level isolation
• Open question: can we get reuse despite isolation?
Conclusions • Network evolution occurs via middleboxes • Today: Narrow “point” solutions • High CapEx, OpEx, and device sprawl • Inflexible, difficult to extend • Our proposal: Consolidated architecture • Reduces CapEx, OpEx, and device sprawl • Extensible, general-purpose • More opportunities • Isolation • APIs (H/W—Apps, Management—Apps, App Stack)