Modular Layer 2 In OpenStack Neutron
Robert Kukura, Red Hat
Kyle Mestery, Cisco
• I’ve heard the Open vSwitch and Linuxbridge Neutron Plugins are being deprecated.
• I’ve heard ML2 does some cool stuff!
• I don’t know what ML2 is but want to learn about it and what it provides.
What is Modular Layer 2?
• A new Neutron core plugin in Havana
• Modular
  • Drivers for layer 2 network types and mechanisms - interface with agents, hardware, controllers, ...
  • Service plugins and their drivers for layer 3+
• Works with existing L2 agents
  • openvswitch
  • linuxbridge
  • hyperv
• Deprecates existing monolithic plugins
  • openvswitch
  • linuxbridge
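Selecting and configuring ML2 looks roughly like the sketch below. This is a hedged example: the class path and option names follow the Havana-era layout, and the driver lists are illustrative, not a recommendation.

# neutron.conf - select ML2 as the core plugin
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

# ml2_conf.ini - pick the type and mechanism drivers to load
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,linuxbridge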
Motivations For a Modular Layer 2 Plugin
Before Modular Layer 2 ...
[Diagram: a Neutron Server loads exactly one monolithic core plugin at a time - either the Open vSwitch Plugin or the Linuxbridge Plugin.]
Before Modular Layer 2 ...
[Diagram: a vendor adding a “Vendor X Plugin” to the Neutron Server thinks: “I want to write a Neutron Plugin. But I have to duplicate a lot of DB, segmentation, etc. work. What a pain. :(”]
ML2 Use Cases
• Replace existing monolithic plugins
  • Eliminate redundant code
  • Reduce development & maintenance effort
• New features
  • Top-of-Rack switch control
  • Avoid tunnel flooding via L2 population
  • Many more to come...
• Heterogeneous deployments
  • Specialized hypervisor nodes with distinct network mechanisms
  • Integrate *aaS appliances
  • Roll new technologies into existing deployments
The Modular Layer 2 (ML2) Plugin is a framework allowing OpenStack Neutron to simultaneously utilize the variety of layer 2 networking technologies found in complex real-world data centers.
What’s Similar?
ML2 is functionally a superset of the monolithic openvswitch, linuxbridge, and hyperv plugins:
• Based on NeutronDBPluginV2
• Models networks in terms of provider attributes
• RPC interface to L2 agents
• Extension APIs
What’s Different?
ML2 introduces several innovations to achieve its goals:
• Cleanly separates management of network types from the mechanisms for accessing those networks
• Makes types and mechanisms pluggable via drivers
• Allows multiple mechanism drivers to access the same network simultaneously
• Optional features packaged as mechanism drivers
• Supports multi-segment networks
• Flexible port binding
• L3 router extension integrated as a service plugin
ML2 Architecture Diagram
[Diagram: the Neutron Server hosts the ML2 Plugin and its API Extensions. Inside the plugin, a Type Manager drives the GRE, VLAN, and VXLAN TypeDrivers, and a Mechanism Manager drives the Arista, Cisco Nexus, Linuxbridge, Open vSwitch, Hyper-V, L2 Population, and Tail-f NCS MechanismDrivers.]
Multi-Segment Networks
[Diagram: one network spanning three segments - VXLAN 123567, VLAN 37 on physnet1, and VLAN 413 on physnet2 - with VM 1, VM 2, and VM 3 attached.]
• Created via multi-provider API extension (request sketch below)
• Segments bridged administratively (for now)
• Ports associated with network, not specific segment
• Ports bound automatically to segment with connectivity
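As a rough illustration of the multi-provider extension, a network-create request body carrying multiple segments might look like the hypothetical Python structure below. The attribute names follow the extension; the segment values simply mirror the diagram above and are illustrative.

# Hypothetical body for POST /v2.0/networks using the multi-provider
# extension; segmentation values are illustrative only.
multi_segment_network = {
    "network": {
        "name": "multinet",
        "segments": [
            {"provider:network_type": "vxlan",
             "provider:segmentation_id": 123567},
            {"provider:network_type": "vlan",
             "provider:physical_network": "physnet1",
             "provider:segmentation_id": 37},
        ],
    }
}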
Type Driver API

from abc import abstractmethod

class TypeDriver(object):

    @abstractmethod
    def get_type(self):
        pass

    @abstractmethod
    def initialize(self):
        pass

    @abstractmethod
    def validate_provider_segment(self, segment):
        pass

    @abstractmethod
    def reserve_provider_segment(self, session, segment):
        pass

    @abstractmethod
    def allocate_tenant_segment(self, session):
        pass

    @abstractmethod
    def release_segment(self, session, segment):
        pass
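To make the interface concrete, here is a minimal, hypothetical TypeDriver sketch for a flat-style type with no segmentation IDs to manage. The driver name and behavior are illustrative, not the in-tree flat driver.

# Hypothetical TypeDriver sketch, subclassing the interface above.
# "example_flat" is an illustrative type with nothing to allocate or release.
class ExampleFlatTypeDriver(TypeDriver):

    def get_type(self):
        # The network_type string this driver handles.
        return 'example_flat'

    def initialize(self):
        # Load driver-specific configuration here.
        pass

    def validate_provider_segment(self, segment):
        # A flat-style segment needs a physical_network but no segmentation_id.
        if not segment.get('physical_network'):
            # A real driver raises a Neutron exception here.
            raise ValueError("physical_network required for example_flat")

    def reserve_provider_segment(self, session, segment):
        # Nothing to record in the database for this simple type.
        return segment

    def allocate_tenant_segment(self, session):
        # Provider-created only; tenants cannot allocate this type.
        return None

    def release_segment(self, session, segment):
        # Nothing was reserved, so nothing to release.
        pass

The in-tree VLAN and tunnel type drivers follow the same shape but track their segmentation ID allocations in database tables.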
Mechanism Driver API

from abc import abstractmethod, abstractproperty

class MechanismDriver(object):

    @abstractmethod
    def initialize(self):
        pass

    def create_network_precommit(self, context):
        pass

    def create_network_postcommit(self, context):
        pass

    def update_network_precommit(self, context):
        pass

    def update_network_postcommit(self, context):
        pass

    def delete_network_precommit(self, context):
        pass

    def delete_network_postcommit(self, context):
        pass

    def create_subnet_precommit(self, context):
        pass

    def create_subnet_postcommit(self, context):
        pass

    def update_subnet_precommit(self, context):
        pass

    def update_subnet_postcommit(self, context):
        pass

    def delete_subnet_precommit(self, context):
        pass

    def delete_subnet_postcommit(self, context):
        pass

    def create_port_precommit(self, context):
        pass

    def create_port_postcommit(self, context):
        pass

    def update_port_precommit(self, context):
        pass

    def update_port_postcommit(self, context):
        pass

    def delete_port_precommit(self, context):
        pass

    def delete_port_postcommit(self, context):
        pass

    def bind_port(self, context):
        pass

    def validate_port_binding(self, context):
        return False

    def unbind_port(self, context):
        pass


class NetworkContext(object):

    @abstractproperty
    def current(self):
        pass

    @abstractproperty
    def original(self):
        pass

    @abstractproperty
    def network_segments(self):
        pass
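As a minimal, hypothetical example of the pattern (not an in-tree driver), a mechanism driver that only logs network operations could look like the sketch below. Precommit methods run inside the database transaction; postcommit methods run after it commits.

import logging

LOG = logging.getLogger(__name__)


# Hypothetical MechanismDriver sketch, subclassing the interface above.
# It only logs; a real driver would program an agent, device, or controller
# from its postcommit methods.
class ExampleLoggingMechanismDriver(MechanismDriver):

    def initialize(self):
        LOG.info("ExampleLoggingMechanismDriver ready")

    def create_network_precommit(self, context):
        # Runs inside the DB transaction; raising here rolls the change back.
        LOG.debug("about to create network %s", context.current['id'])

    def create_network_postcommit(self, context):
        # Runs after the transaction commits; safe point to touch the backend.
        LOG.info("created network %s with segments %s",
                 context.current['id'], context.network_segments)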
Port Binding
• Determines values for port’s binding:vif_type and binding:capabilities attributes and selects segment
• Occurs when binding:host_id is set on a port that does not have an existing valid binding
• ML2 plugin calls bind_port() on registered MechanismDrivers, in order listed in config, until one succeeds or all have been tried
• Driver determines if it can bind based on:
  • context.network.network_segments
  • context.current['binding:host_id']
  • context.host_agents()
• For L2 agent drivers, binding requires a live L2 agent on the port’s host that:
  • Supports the network_type of a segment of the port’s network
  • Has a mapping for that segment’s physical_network if applicable
• If it can bind the port, the driver calls context.set_binding() with binding details (see the sketch after this slide)
• If no driver succeeds, the port’s binding:vif_type is set to BINDING_FAILED

class PortContext(object):

    @abstractproperty
    def current(self):
        pass

    @abstractproperty
    def original(self):
        pass

    @abstractproperty
    def network(self):
        pass

    @abstractproperty
    def bound_segment(self):
        pass

    @abstractmethod
    def host_agents(self, agent_type):
        pass

    @abstractmethod
    def set_binding(self, segment_id, vif_type, cap_port_filter):
        pass
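To tie the pieces together, here is a hypothetical agent-style driver’s bind_port(). The agent type string, vif_type value, and network_type check are simplified placeholders, not the logic of any in-tree driver.

# Hypothetical agent-style mechanism driver showing the bind_port() flow.
class ExampleAgentMechanismDriver(MechanismDriver):

    def initialize(self):
        pass

    def bind_port(self, context):
        # host_agents() returns the agents running on the port's host.
        for agent in context.host_agents('Example L2 agent'):
            if not agent['alive']:
                continue
            for segment in context.network.network_segments:
                if segment['network_type'] in ('flat', 'vlan', 'vxlan'):
                    # Bind to the first segment this agent can reach.
                    context.set_binding(segment['id'], 'ovs',
                                        cap_port_filter=True)
                    return
        # Fall through without binding; another driver may succeed, or the
        # port's binding:vif_type ends up as BINDING_FAILED.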
Type Drivers in Havana
The following are supported segmentation types in ML2 for the Havana release:
• local
• flat
• VLAN
• GRE
• VXLAN
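Each type driver reads its own configuration section. A hedged sketch of the Havana-era sections follows; the ranges and physical network names are illustrative.

# ml2_conf.ini - per-type configuration sections (illustrative values)
[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
vni_ranges = 1001:2000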
Mechanism Drivers in Havana
The following ML2 MechanismDrivers exist in Havana:
• Arista
• Cisco Nexus
• Hyper-V Agent
• L2 Population
• Linuxbridge Agent
• Open vSwitch Agent
• Tail-f NCS
Before ML2 L2 Population MechanismDriver
“VM A” wants to talk to “VM G.” “VM A” sends a broadcast packet, which is replicated to the entire tunnel mesh.
[Diagram: VMs A through I spread across Hosts 1-4 in a full tunnel mesh; the broadcast from VM A on Host 1 is flooded to every other host.]
With ML2 L2 Population MechanismDriver
The ARP request from “VM A” for “VM G” is intercepted and answered using a pre-populated neighbor entry. Traffic from “VM A” to “VM G” is encapsulated and sent to “Host 4” according to the bridge forwarding table entry.
[Diagram: the same VMs and hosts; a proxy ARP entry on Host 1 answers the request locally, and a single tunnel carries the traffic from Host 1 to Host 4.]
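Enabling this typically means listing the L2 Population driver after the agent driver and switching the feature on in the L2 agent. A hedged sketch, with option names as used in Havana-era deployments:

# ml2_conf.ini - add the L2 Population driver alongside the agent driver
[ml2]
mechanism_drivers = openvswitch,l2population

# Open vSwitch agent configuration on each host
[agent]
tunnel_types = vxlan
l2_population = True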
ML2 Futures: Deprecation Items
• The future of the Open vSwitch and Linuxbridge plugins:
  • Planned for deprecation in Icehouse
  • ML2 supports all their functionality
  • ML2 works with the existing OVS and Linuxbridge agents
  • No new features being added to the OVS and Linuxbridge plugins in Icehouse
• Migration tool being developed
Plugin vs. ML2 MechanismDriver?
• Advantages of writing an ML2 Driver instead of a new monolithic plugin:
  • Much less code to write (or clone) and maintain
  • New Neutron features supported as they are added
  • Support for heterogeneous deployments
• Vendors integrating new plugins should consider an ML2 Driver instead
• Existing plugins may want to migrate to ML2 as well
ML2 With Current Agents
• Existing ML2 Plugin works with existing agents
• Separate agents for Linuxbridge, Open vSwitch, and Hyper-V
[Diagram: the Neutron Server running the ML2 Plugin and its API connects over the network to a Linuxbridge Agent on Host A, a Hyper-V Agent on Host B, and Open vSwitch Agents on Hosts C and D.]
ML2 With Modular L2 Agent
• Future direction is to combine the open source agents
• Have a single agent which can support Linuxbridge and Open vSwitch
• Pluggable drivers for additional vSwitches, Infiniband, SR-IOV, ...
[Diagram: the Neutron Server running the ML2 Plugin and its API connects over the network to a Modular Agent on each of Hosts A through D.]
What the Demo Will Show
• ML2 running with multiple MechanismDrivers (configuration sketch below)
  • openvswitch
  • cisco_nexus
• Booting multiple VMs on multiple compute hosts
• Hosts are running Fedora
• Configuration of VLANs across both virtual and physical infrastructure
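For a setup like this, both drivers are listed in the server configuration. A hedged sketch follows; the Cisco Nexus driver additionally needs its own switch-specific section with credentials and host-to-port mappings, which is omitted here, and the VLAN range is illustrative.

# ml2_conf.ini - run the OVS agent driver and the Cisco Nexus driver together
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,cisco_nexus

[ml2_type_vlan]
network_vlan_ranges = physnet1:100:199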
ML2 Demo Setup
[Diagram: Host 1 runs nova api, nova compute, neutron server, neutron ovs agent, neutron dhcp, and neutron l3 agent; Host 2 runs nova compute and neutron ovs agent. VM1 on Host 1 and VM2 on Host 2 attach to br-int, which connects through br-eth2 and eth2 on each host to Cisco Nexus switch ports eth2/1 and eth2/2.]
• The ML2 OVS MechanismDriver adds the VLAN on the VIF for VM1 and on the br-eth2 ports of Host 1, and likewise for VM2 on Host 2.
• The ML2 Cisco Nexus MechanismDriver trunks the VLAN on switch ports eth2/1 and eth2/2.
• VM1 can ping VM2 … we’ve successfully completed the standard network test.