Cisco Nexus 1000V Technical Overview
Agenda • Introduction • System Overview • Switching Overview • Policy Management • Advanced Features • Network Management • Troubleshooting & Diagnostics • Design Examples • Installation
Transparency in the Eye of the Beholder With virtualization, VMs have a transparent view of their resources…
Transparency in the Eye of the Beholder …but it's difficult to correlate network and storage back to virtual machines
Transparency in the Eye of the Beholder Scaling globally depends on maintaining transparency while also providing operational consistency
Scaling Server Virtualization: Networking Challenges
• Security & Policy Enforcement: applied at the physical server, not the individual VM; impossible to enforce policy for VMs in motion
• Operations & Management: lack of VM visibility, accountability, and consistency; inefficient management model and inability to effectively troubleshoot
• Organizational Structure: muddled ownership as the server admin must configure the virtual network; organizational redundancy creates compliance challenges
VN-Link Brings VM-Level Granularity
Problems:
• VMotion may move VMs across physical ports—policy must follow
• Impossible to view or apply policy to locally switched traffic
• Cannot correlate traffic on physical links coming from multiple VMs
VN-Link:
• Extends the network to the VM
• Consistent services
• Coordinated, coherent management
Cisco Nexus 1000V: Software-Based VN-Link
• Industry's first 3rd-party vNetwork Distributed Switch for VMware vSphere
• Built on Cisco NX-OS
• Compatible with all switching platforms
• Maintains the vCenter provisioning model unmodified for server administration; allows network administration of the virtual network via the familiar Cisco NX-OS CLI
• With the Cisco Nexus 1000V: policy-based VM connectivity, mobility of network & security properties, and a non-disruptive operational model
Cisco Nexus 1000V Components
Virtual Supervisor Module (VSM)
• CLI interface into the Nexus 1000V
• Leverages NX-OS 4.0(4a)
• Controls multiple VEMs as a single network device
Virtual Ethernet Module (VEM)
• Replaces the VMware virtual switch
• Enables advanced switching capability on the hypervisor
• Provides each VM with dedicated “switch ports”
Cisco Nexus 1000V ‘Virtual Chassis’
pod5-vsm# show module
Mod  Ports  Module-Type                      Model        Status
---  -----  -------------------------------  -----------  ----------
1    0      Virtual Supervisor Module        Nexus1000V   active *
2    0      Virtual Supervisor Module        Nexus1000V   ha-standby
3    248    Virtual Ethernet Module          NA           ok
Single Chassis Management
• A single switch from a control plane and management plane perspective
• Protocols such as CDP and SNMP operate as a single switch
Upstream-Switch# show cdp neighbor
Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone
Device ID     Local Intrfce   Holdtme   Capability   Platform      Port ID
N1KV-Rack10   Eth 1/8         136       S            Nexus 1000V   Eth2/2
N1KV-Rack10   Eth 2/10        136       S            Nexus 1000V   Eth3/2
Virtual Supervisor Module Options
VSM - Physical Appliance (2HCY09)
• Cisco-branded physical server
• Hosts 4 VSM virtual appliances
• Deployed in pairs for redundancy
VSM - Virtual Appliance
• ESX virtual appliance
• Supports 64 VEMs
• Installable via ISO or OVA file
Cisco Nexus 1000V Scalability
• A single Nexus 1000V supports:
 • 2 Virtual Supervisor Modules (HA)
 • 64* Virtual Ethernet Modules
 • 512 active VLANs
 • 2048 ports (Eth + vEth)
 • 256 port channels
• A single Virtual Ethernet Module supports:
 • 216 vEth ports
 • 32 physical NICs
 • 8 port channels
* 64 VEMs pending final VMware/Cisco scalability testing
** Overall system limits are lower than the VEM limit x 64
Cisco Nexus 1000V Component Communication
• Two distinct virtual interfaces are used to communicate between the VSM and VEM
• Control
 • Extended AIPC, similar to that used within a physical chassis (6k, 7k, MDS)
 • Carries low-level messages to ensure proper configuration of the VEM
 • Maintains a 2-second heartbeat between the VSM and the VEM (timeout 6 seconds)
 • Maintains synchronization between the primary and secondary VSMs
• Packet
 • Carries network packets from the VEM to the VSM, such as CDP or IGMP control traffic
• Separate VLANs are recommended for control and packet
• Requires Layer 2 connectivity between the VSM and VEMs
Cisco Nexus 1000V Component Communication (cont.)
• Communication between the VSM and vCenter uses the VMware VIM API over SSL
• The connection is set up on the VSM
• Requires installation of a vCenter plug-in (downloaded from the VSM)
• Once established, the Nexus 1000V is created in vCenter
pod5-vsm# show svs connections
connection VC:
    hostname: phx2-dc-pod5-vc
    ip address: 10.95.5.158
    protocol: vmware-vim https
    certificate: default
    datacenter name: Phx2-Pod5
    DVS uuid: df 11 38 50 0a 95 83 4e-95 69 d6 a7 f4 76 4a 7f
    config status: Enabled
    operational status: Connected
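The connection above is defined on the VSM. A minimal configuration sketch, reusing the IP address and datacenter name from the output above (adjust to your environment; exact prompts may vary by release):
n1000v(config)# svs connection VC
n1000v(config-svs-conn)# protocol vmware-vim
n1000v(config-svs-conn)# remote ip address 10.95.5.158
n1000v(config-svs-conn)# vmware dvs datacenter-name Phx2-Pod5
n1000v(config-svs-conn)# connect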
Cisco Nexus 1000V Opaque Data
• Each Nexus 1000V requires a global setting on the VSMs and VEMs called Opaque Data
• Contains data such as the control/packet VLANs, Domain ID, and system port profiles
• The VSM pushes the opaque data to vCenter Server
• vCenter Server pushes the opaque data to each VEM as it is added
Cisco Nexus 1000V Domain
• Each VSM is assigned a unique ‘Domain ID’
• The Domain ID ensures that VEMs do not respond to commands from non-participating VSMs
• Each packet between the VSM and VEM is tagged with the appropriate Domain ID
• Domain IDs range from 1 to 4095
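A minimal sketch of the corresponding domain configuration on the VSM (Domain ID 15 matches the example above; control VLAN 260 and packet VLAN 261 are hypothetical placeholders):
n1000v(config)# svs-domain
n1000v(config-svs-domain)# domain id 15
n1000v(config-svs-domain)# control vlan 260
n1000v(config-svs-domain)# packet vlan 261
n1000v(config-svs-domain)# svs mode L2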
Distributed Data Plane
• Each Virtual Ethernet Module forwards packets independently of the others
• No address learning/synchronization across VEMs
• No concept of a crossbar/fabric between the VEMs
• The Virtual Supervisor Module is NOT in the data path
• No concept of forwarding from an ingress linecard to an egress linecard (another server)
• No EtherChannel across VEMs
Cisco Nexus 1000V Switch Interfaces
• Ethernet Port (Eth)
 • 1 per physical NIC interface
 • Specific to each module (e.g., vmnic0 = Ethx/1)
 • Up to 32 per host
• Port Channel (Po)
 • Aggregation of Eth ports
 • Up to 8 port channels per host
• Virtual Ethernet Port (vEth)
 • 1 per vNIC (including Service Console and VMKernel)
 • Notation is Veth(port number); no module number is assigned, enabling consistent naming when a VM moves
 • 216 per host
Cisco Nexus 1000V vEth Interfaces
• vEths are assigned sequentially
• VM vNICs are statically bound to a vEth
• Assignment is persistent through reboots, but may change if the vNIC is reassigned to another port profile
• vEths move between modules when a VM is moved (HA, VMotion, etc.)
• Default virtual ‘speed’ is Gigabit, as negotiated with the guest OS
• By default performance is un-gated (i.e., a 1Gb vNIC can run faster than 1Gb)
• 2048 vEths supported system-wide
Loop Prevention without STP
• Packets with a local MAC address are dropped on ingress (L2)
• No switching from physical NIC to physical NIC
• BPDUs are dropped
MAC Learning
• Each VEM learns independently and maintains a separate MAC table
• VM MACs are statically mapped; other vEths (vmknics and vswifs) are learned the same way
• No aging while the interface is up
• Devices external to the VEM are learned dynamically
VEM 3 MAC Table            VEM 4 MAC Table
VM1  Veth12  Static        VM1  Eth4/1  Dynamic
VM2  Veth23  Static        VM2  Eth4/1  Dynamic
VM3  Eth3/1  Dynamic       VM3  Veth8   Static
VM4  Eth3/1  Dynamic       VM4  Veth7   Static
Port Channels
• Standard Cisco port channels; behave like EtherChannel
• Link Aggregation Control Protocol (LACP) support
• 17 hashing algorithms available, selected either system-wide or per module (default is source MAC)
• Automated creation using port profiles
Port Channel Hashing Options
pod5-vsm(config)# port-channel load-balance ethernet ?
  dest-ip-port               Destination IP address and L4 port
  dest-ip-port-vlan          Destination IP address, L4 port and VLAN
  destination-ip-vlan        Destination IP address and VLAN
  destination-mac            Destination MAC address
  destination-port           Destination L4 port
  source-dest-ip-port        Source & Destination IP address and L4 port
  source-dest-ip-port-vlan   Source & Destination IP address, L4 port and VLAN
  source-dest-ip-vlan        Source & Destination IP address and VLAN
  source-dest-mac            Source & Destination MAC address
  source-dest-port           Source & Destination L4 port
  source-ip-port             Source IP address and L4 port
  source-ip-port-vlan        Source IP address, L4 port and VLAN
  source-ip-vlan             Source IP address and VLAN
  source-mac                 Source MAC address
  source-port                Source L4 port
  source-virtual-port-id     Source Virtual Port Id
  vlan-only                  VLAN only
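Applying one of the algorithms above might look like the following sketch (source-ip-port is an arbitrary choice, and the 'module 3' form assumes the per-module option mentioned on the previous slide is available in your release):
n1000v(config)# port-channel load-balance ethernet source-ip-port
n1000v(config)# port-channel load-balance ethernet source-ip-port module 3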
Virtual Port Channel - Host Mode (vPC-HM)
• Allows a single port channel to span multiple upstream switches using ‘subgroups’
• Forms up to two subgroups based on Cisco Discovery Protocol (CDP)
• Subgroups can be manually defined outside of a port profile
• vEths are assigned round-robin to a subgroup and then hashed within that subgroup
• Does not support LACP
• Does not require EtherChannel upstream when using source hashing
• EtherChannel is recommended upstream and is required when connecting to multiple switches (only two upstream switches are supported when using flow-based hashing)
What is a Port-Profile? • A port-profile is a container used to define a common set of configuration commands for multiple interfaces • Define once and apply many times • Simplifies management by storing interface configuration • Key to collaborative management of virtual networking resources • Why is it not like a template or SmartPort macro? • Port-profiles are ‘live’ policies • Editing an enabled profile will cause config changes to propagate to all interfaces using that profile (unlike a static one-time macro)
Port Profile Configuration
n1000v# show port-profile name WebProfile
port-profile WebProfile
  description:
  status: enabled
  capability uplink: no
  system vlans:
  port-group: WebProfile
  config attributes:
    switchport mode access
    switchport access vlan 110
    no shutdown
  evaluated config attributes:
    switchport mode access
    switchport access vlan 110
    no shutdown
  assigned interfaces:
    Veth10
• Supported commands include: port management, VLAN, PVLAN, port-channel, ACL, NetFlow, port security, and QoS
Port Profile Policy Distribution
n1000v(config)# port-profile WebServers
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shut
Overriding Port Profile Configuration
• Administrators can interact with individual switchports, overriding a port profile
• Used to isolate problems with one or two interfaces without changing the port-profile and affecting other ports
• Manual configuration always takes precedence over a port profile configuration
• Apply an override:
 n1000v(config)# int vethernet 2
 n1000v(config-if)# switchport access vlan 250
• The ‘no’ form of the command removes the override and restores the profile’s config:
 n1000v(config)# int vethernet 2
 n1000v(config-if)# no switchport access vlan
Port Profile Inheritance
• Profile inheritance allows the construction of profile hierarchies
• ‘Parent’ profiles pass configuration to ‘child’ profiles
• Only the child profiles need to be visible within vCenter
• Updates to the parent filter down to the children
• Child profiles can be updated independently
n1000v(config)# port-profile Web
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shut
n1000v(config)# port-profile Web-Gold
n1000v(config-port-prof)# inherit port-profile Web
n1000v(config-port-prof)# service-policy output Gold
n1000v(config-port-prof)# vmware port-group Web-Gold
n1000v(config)# port-profile Web-Silver
n1000v(config-port-prof)# inherit port-profile Web
n1000v(config-port-prof)# service-policy output Silver
n1000v(config-port-prof)# vmware port-group Web-Silver
Effective port profile Web-Gold: access port, VLAN 100, Gold QoS policy
Effective port profile Web-Silver: access port, VLAN 100, Silver QoS policy
Uplink Port Profiles
• Special profiles that define physical NIC properties
• Usually configured as a trunk
• Defined by adding ‘capability uplink’ to a port profile
• Uplink profiles cannot be applied to vEths; non-uplink profiles cannot be applied to NICs
• Only selectable in vCenter when adding a host or additional NICs
n1000v(config)# port-profile DataUplink
n1000v(config-port-prof)# switchport mode trunk
n1000v(config-port-prof)# switchport trunk allowed vlan 10-15
n1000v(config-port-prof)# system vlan 51, 52
n1000v(config-port-prof)# channel-group mode auto sub-group cdp
n1000v(config-port-prof)# capability uplink
n1000v(config-port-prof)# no shut
Cisco Nexus 1000V System VLANs
• System VLANs enable interface connectivity before an interface is programmed (e.g., the VEM cannot otherwise communicate with the VSM during boot)
• Required system VLANs: Control and Packet
• Recommended system VLANs: IP storage, Service Console, VMKernel, and management networks
System VLAN Guidelines
• Port profiles that contain system VLANs are ‘system port profiles’
• The system VLAN list must be a subset of the allowed VLAN list on trunk ports
• There must be only one system VLAN on an access port (the access VLAN)
• The ‘no system vlan’ command can be given only when no interface is using the profile
• Once a system profile is in use by at least one interface, you can only add to the list of system VLANs, not delete VLANs from the list
• For a profile with system VLANs, the ‘no port-profile SysProfile’, ‘no vmware port-group’, and ‘no state enabled’ commands can be given only when no interface is using that profile
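A minimal sketch of a system port profile for a Service Console access port, following the single-system-VLAN rule above (VLAN 70 is a hypothetical placeholder):
n1000v(config)# port-profile ServiceConsole
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 70
n1000v(config-port-prof)# system vlan 70
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# no shut
n1000v(config-port-prof)# state enabled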
Automated Port Channel Configuration
• Port channels can be formed automatically using a port profile
• Interfaces belonging to different modules cannot be added to the same channel-group (e.g., Eth2/3 and Eth3/3)
• The ‘auto’ keyword indicates that interfaces inheriting the same uplink port-profile will automatically be assigned a channel-group
• Each interface in the channel must have consistent speed/duplex
• The channel-group does not need to exist and will be created automatically
n1000v(config)# port-profile Uplink
n1000v(config-port-prof)# channel-group auto
Access Control List Overview • ACLs provide traffic filtering mechanisms • Provides filtering for ingress and egress VM traffic for additional network security • Permit/Drop traffic based on ACL policies • ACL types supported: • IPv4 and MAC ACLs • Ingress and Egress • Supported on Eth and vEth interfaces • Configured via port profiles or directly on the interface
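A hedged sketch of an IPv4 ACL applied through a port profile (the WebOnly ACL name is hypothetical; WebProfile reuses the profile name from the earlier port-profile example):
n1000v(config)# ip access-list WebOnly
n1000v(config-acl)# permit tcp any any eq 80
n1000v(config-acl)# deny ip any any
n1000v(config)# port-profile WebProfile
n1000v(config-port-prof)# ip port access-group WebOnly in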
Port Security Overview
• Port security secures a port by limiting and identifying the MAC addresses that can access it
• Secure MACs can be manually configured or dynamically learned
• Two security violation types are supported: Addr-Count-Exceed violation and MAC-Move violation
• Port security can be applied to vEths, but cannot be applied to physical interfaces
• Three types of secure MACs: static, sticky, and dynamic
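A minimal port-security sketch applied to a vEth port profile (the SecureVM profile name, VLAN, and address limit are hypothetical):
n1000v(config)# port-profile SecureVM
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 110
n1000v(config-port-prof)# switchport port-security
n1000v(config-port-prof)# switchport port-security maximum 2
n1000v(config-port-prof)# switchport port-security mac-address sticky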
Cisco Nexus 1000V Private VLANs
• Private VLANs divide a normal VLAN into sub-L2 domains
• Consist of a primary VLAN and one or more secondary VLANs
• Used to segregate L2 traffic without wasting IP address space (smaller subnets)
• Secondary VLAN access is restricted by setting ‘community’ or ‘isolated’ status
PVLAN Definitions
• Primary VLAN: the VLAN carrying downstream traffic from the router(s) to the host ports
• Secondary VLAN: either an isolated VLAN or a community VLAN; a port assigned to an isolated VLAN is an isolated port, and a port assigned to a community VLAN is a community port
• Isolated VLAN: communicates only with the primary VLAN
• Community VLAN: communicates within the community and with the primary VLAN
PVLAN Promiscuous Ports
• A promiscuous port can communicate with all isolated ports and community ports, and vice versa
• Promiscuous ports are the boundary between the PVLAN domain and the rest of the network
• Secondary VLANs are remapped to the primary VLAN at the promiscuous port
• Nexus 1000V supports promiscuous trunk ports and promiscuous access ports
• Most deployments will use a promiscuous trunk port
PVLAN Topology Examples
• Regular trunk port to the upstream switch
 • The N1KV uplink is defined as a regular trunk port
 • PVLAN configuration is defined in the upstream switch; the PVLAN extends into the upstream switch
 • An SVI promiscuous port is defined in the upstream switch
• Promiscuous trunk port to the upstream switch
 • The N1KV uplink is defined as a promiscuous trunk
 • The PVLAN ends at the promiscuous trunk port; no PVLAN configuration is needed in the upstream switch
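A hedged sketch of a basic PVLAN setup on the Nexus 1000V (VLAN numbers and profile names are hypothetical; a promiscuous access port is shown for simplicity, and some releases may require a PVLAN feature to be enabled first):
n1000v(config)# vlan 101
n1000v(config-vlan)# private-vlan isolated
n1000v(config)# vlan 100
n1000v(config-vlan)# private-vlan primary
n1000v(config-vlan)# private-vlan association 101
n1000v(config)# port-profile IsolatedVMs
n1000v(config-port-prof)# switchport mode private-vlan host
n1000v(config-port-prof)# switchport private-vlan host-association 100 101
n1000v(config)# port-profile PromiscUplink
n1000v(config-port-prof)# capability uplink
n1000v(config-port-prof)# switchport mode private-vlan promiscuous
n1000v(config-port-prof)# switchport private-vlan mapping 100 101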
Cisco Nexus 1000V Quality of Service • Nexus 1000V provides traffic classification, marking and policing • Police traffic to/from VMs • Mark traffic leaving the ESX host • Can be configured multiple ways • Individual Eths or vEths • Port-Channels • Port Profiles • Policies can be applied on input or output • Statistics per policy (input/output) per interface • Nexus 1000V does not implement queuing or full traffic shaping
QoS Classification Support
• Classification support based on:
 • Access-group: ACL reference
 • Class-map (hierarchical classes possible)
 • CoS (L2 header)
 • Discard-class: internal QoS value
 • DSCP: from the IP TOS byte
 • IP RTP: UDP port list
 • Packet length: IP datagram size; inclusive ranges
 • Precedence: 3-bit value from within the DSCP field
 • QoS-group: internal QoS value
QoS Marking Support
• Support for marking:
 • CoS (L2 header)
 • Discard-class: internal QoS value
 • DSCP: in the IP TOS byte
 • Precedence: 3-bit value from within the DSCP field
 • QoS-group: internal QoS value
• Packets are only marked when leaving a VEM; intra-VEM traffic is not marked
QoS Feature Overview: Policing • Standard MQC configuration • Traffic categorized into • Conforming traffic • Exceeding traffic • Violating traffic • Policer Actions • Set various fields • Markdown DSCP • Transmit or Drop
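A hedged MQC sketch tying classification, marking, and policing together (the VM-Web class and WebOnly ACL names are hypothetical, Silver reuses the policy name from the inheritance example, and the policer rate is an arbitrary example):
n1000v(config)# class-map type qos match-any VM-Web
n1000v(config-cmap-qos)# match access-group name WebOnly
n1000v(config)# policy-map type qos Silver
n1000v(config-pmap-qos)# class VM-Web
n1000v(config-pmap-c-qos)# set cos 2
n1000v(config-pmap-c-qos)# police cir 100 mbps bc 200 ms conform transmit violate drop
n1000v(config)# port-profile WebProfile
n1000v(config-port-prof)# service-policy input Silver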