nSwitching: Virtual Machine Aware Relay Hardware Switching to improve intra-NIC Virtual Machine Traffic • Author(s): Bardgett, J. (Harris Corp., Melbourne, FL, USA); Zou, C. • Published in: Communications (ICC), 2012 IEEE International Conference on • Date of Conference: 10-15 June 2012
Outline • Introduction • Background • Proposed nSwitch design • Evaluation • Conclusion
Introduction • Traditional data center network switching architecture involves: • Definition of switching platforms • Port bandwidth • Physical medium connectivity • Virtual local-area network (VLAN) • Internet Protocol (IP) addressing • Fail-over mechanisms • Port bonding • Quality of service (QoS) • Security • What changes in cloud computing? • With the introduction of improved switching protocols and virtualization, increased utilization of hardware has imposed many design challenges.
Introduction cont. • It is important to consider the frequent switching of frames between VMs on the same machine. • Some proposed solutions: • vSwitch: a virtual switch implemented as hypervisor-integrated software for VM-to-VM switching • increases CPU load • VM-VM traffic is not transparent to the external network • IEEE 802.1Qbg (HP, IBM) and 802.1Qbh (Cisco): permit reflective relay (hairpin turn) • require modification of the NIC and a reflective-relay upgrade to the external network hardware to switch VM-VM frames that originate and terminate on the same physical Ethernet port • We propose the nSwitch architecture to improve VM-VM switching performance for traffic within the same computer, across multiple CPUs and sockets. • nSwitching is compatible with the SR-IOV specification without any Ethernet frame alteration.
Background • [Figure: pSwitch frame forwarding and port address learning] A frame from port x is looked up by destination MAC (DMAC) in the forwarding database (FDB); on a miss it is flooded to all ports except port x; if the destination belongs to port x itself, the frame is filtered; otherwise it is forwarded to the owning port. VM1 and VM2 run above the hypervisor; the pSwitch sits in hardware.
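As a reference point for the designs below, here is a minimal sketch of the conventional learning-switch behaviour summarized in the figure: learn the source MAC on the ingress port, look the destination up in the FDB, flood on a miss, and filter frames whose destination was learned on the ingress port (no reflective relay). The `Frame` and `PSwitch` names are illustrative, not taken from the paper.

```python
# Minimal sketch of the pSwitch behaviour in the figure above: port address
# learning, FDB lookup, flooding on a miss, and filtering of frames whose
# destination lives behind the ingress port (no hairpin turn).

from dataclasses import dataclass

@dataclass
class Frame:
    src_mac: str
    dst_mac: str

class PSwitch:
    def __init__(self, ports):
        self.ports = list(ports)
        self.fdb = {}                          # forwarding database: MAC -> port

    def receive(self, frame, ingress_port):
        self.fdb[frame.src_mac] = ingress_port          # port address learning
        egress = self.fdb.get(frame.dst_mac)
        if egress is None:                              # DMAC not in FDB: flood
            return [p for p in self.ports if p != ingress_port]
        if egress == ingress_port:                      # belongs to port x: filter
            return []
        return [egress]                                 # forward to owning port
```

The filter step is the key limitation for virtualized servers: frames between two VMs behind the same physical port are simply dropped by a standard bridge, which is what 802.1Qbg/802.1Qbh and nSwitching address in different ways.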
vSwitch • The vSwitch makes monitoring of protocols or bandwidth usage complicated or impossible. • Open vSwitch does provide rate limiting, but not QoS (e.g., 802.1p). • Concerns such as limited I/O bandwidth and the additional skill development required of server administrators make managing the vSwitch complex. • In addition, the vSwitch can cause very high CPU loads because switching is done in software.
IEEE 802.1Qbg • Defines a virtual Ethernet port aggregator (VEPA) that aggregates virtual machine packets on the server before the resulting single stream is transmitted to the switch. • In VEPA, all frames from the VMs are forwarded out to the adjacent switch. • Reflective relay (hairpin turn) is enabled on that switch (see the sketch below). • VM-VM traffic is transparent to the network. • Requires modification of the NIC and switch.
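The behavioural change VEPA relies on can be sketched by altering only the filter step of a standard learning switch: with reflective relay enabled, a frame whose destination was learned on the ingress port is sent back out that same port instead of being dropped. This is an illustrative sketch, not the 802.1Qbg state machine.

```python
# Illustrative sketch of reflective relay (hairpin turn): the only change
# relative to a standard bridge is that a frame whose destination is learned
# on the ingress port is sent back out that port instead of being filtered.

class ReflectiveRelayBridge:
    def __init__(self, ports, hairpin=True):
        self.ports = list(ports)
        self.fdb = {}                  # MAC -> port
        self.hairpin = hairpin         # True models an 802.1Qbg-capable bridge

    def receive(self, src_mac, dst_mac, ingress_port):
        self.fdb[src_mac] = ingress_port
        egress = self.fdb.get(dst_mac)
        if egress is None:
            return [p for p in self.ports if p != ingress_port]   # flood
        if egress == ingress_port:
            return [ingress_port] if self.hairpin else []         # hairpin vs. filter
        return [egress]
```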
IEEE 802.1Qbh (withdrawn) • The frames from each VM are tagged with an identifier called a VN-Tag. • The switch has a virtual interface (VIF) mapped to each identifier/VN-Tag (see the sketch below). • From a switching point of view, the switch treats the virtual and physical interfaces the same.
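A toy sketch of the idea described above: the switch keeps one virtual interface per VN-Tag and makes forwarding decisions on VIFs exactly as it would on physical ports. The tag values and interface names here are placeholders, not the actual 802.1Qbh frame format.

```python
# Toy sketch: map each VN-Tag to a virtual interface (VIF) and switch on VIFs
# the same way a bridge switches on physical ports. Tag values and VIF names
# are illustrative placeholders.

class VNTagBridge:
    def __init__(self):
        self.vif_by_tag = {}     # VN-Tag -> VIF name
        self.fdb = {}            # MAC -> VIF (or physical interface)

    def register_vif(self, vn_tag, vif_name):
        self.vif_by_tag[vn_tag] = vif_name

    def receive(self, src_mac, dst_mac, vn_tag):
        vif = self.vif_by_tag[vn_tag]       # the tagged frame arrives "on" a VIF
        self.fdb[src_mac] = vif             # learn the source on that VIF
        return self.fdb.get(dst_mac)        # None means the destination is unknown
```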
Proposed nSwitch design • We present two designs of nSwitch which reflect VM-VM traffic on the same computer. • These designs differ in terms of implementation complexity and functionality. • Design 1 is a single-Ethernet-port NIC. • Design 2 has two Ethernet ports. • Both designs support multiple CPUs and multiple-socket logic boards.
A. nSwitch Design for Single Port SR-IOV NIC • [Figure: single-port nSwitch architecture; Table A associates a vMAC (e.g., ab-cd-ef-12-34-56) with a VF]
Pseudo Code for nSwitching in the Synopsys (SR-IOV) core • Initial state • VF 0,3 associated with VM3; • PF 0 associated with the PCIe routing function, allocating bandwidth, data path, and switching functionality between VMs; • Create table space (Table A) to associate a MAC address with a given VF • Initialization of VM interfaces: • The vMAC offered to each VM has a consistent MAC address OUI based on the PF number • (e.g., VF 0,3 assigns vMAC3 the OUI F0-F0-F0, and VF 0,4 assigns the same OUI to vMAC4); • Insert the MAC address into Table A and associate it with the given VF • Steady state • Upon receipt of a frame from a VF, compare the source and destination OUI and prioritize based on the 802.1p marking from the VM. • Case 1: equal source and destination OUI, look up the destination VF and route the frame to that VF for the associated VM; • Case 2: unequal source and destination OUI, send to the PCIe port; • Case 3: follow SR-IOV for receipt of a frame from the PCIe port
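The steady-state pseudocode can be read as the following executable sketch. The class and function names are illustrative (the real logic would live in the SR-IOV core, not in host software), the OUI value F0-F0-F0 is the slide's own example, and 802.1p prioritization is not modelled.

```python
# Executable sketch of the nSwitching steady-state logic described above.
# Table A maps a vMAC to its VF; all vMACs behind one PF share an OUI, so an
# OUI match means source and destination VMs sit behind the same PF.

OUI_LEN = 8   # "f0-f0-f0" -> first 8 characters of a dashed MAC string

class NSwitchCore:
    """Illustrative model of the per-PF switching logic (Table A plus OUI test)."""

    def __init__(self):
        self.table_a = {}            # Table A: vMAC -> VF number

    def assign_vmac(self, vf, vmac):
        # Initialization of VM interfaces: record the vMAC offered to the VM
        # behind this VF.
        self.table_a[vmac.lower()] = vf

    def forward(self, src_mac, dst_mac):
        # Steady state: decide where a frame received from a VF goes.
        # (802.1p prioritization from the pseudocode is not modelled here.)
        src_oui, dst_oui = src_mac.lower()[:OUI_LEN], dst_mac.lower()[:OUI_LEN]
        if src_oui == dst_oui and dst_mac.lower() in self.table_a:
            # Case 1: same OUI, destination VM is behind the same PF:
            # look up its VF in Table A and switch locally on the NIC.
            return ("VF", self.table_a[dst_mac.lower()])
        # Case 2: different OUI, destination is outside this NIC: hand the frame
        # to the PCIe port / external network. (Case 3, frames arriving from the
        # PCIe port, follows the normal SR-IOV receive path and is not modelled.)
        return ("PCIe", None)

# Example values from the slide: VF 0,3 and VF 0,4 share OUI F0-F0-F0; the
# remaining MAC bytes and the external MAC are placeholders.
core = NSwitchCore()
core.assign_vmac(vf=3, vmac="F0-F0-F0-00-00-03")
core.assign_vmac(vf=4, vmac="F0-F0-F0-00-00-04")
assert core.forward("F0-F0-F0-00-00-03", "F0-F0-F0-00-00-04") == ("VF", 4)       # local VM-VM
assert core.forward("F0-F0-F0-00-00-03", "00-11-22-33-44-55") == ("PCIe", None)  # external
```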
B. nSwitch Design for Multiple Port SR-IOV NIC • [Figure: dual-port nSwitch architecture; the PCIe link is labeled 8.0 GigaTransfers/second]
C. Benefits of nSwitching • The addition of nSwitching to SR-IOV will reduce CPU loads and eliminate the need for bandwidth between the NIC and pSwitch for inter-VM traffic internal to the server. • Compared with the software-based vSwitch, there are many benefits to switching in hardware by using nSwitching: • Eliminates the CPU utilization increase caused by inter-VM I/O traffic and removes the NIC bandwidth constraint. • Enables application of access control lists (ACLs) and quality of service (802.1p) without a CPU performance hit. • Enables VM-VM frame monitoring and control using the MAC address Organizationally Unique Identifier (OUI). • This eliminates the CPU workload problems created by inter-VM switching in the hypervisor or vSwitch, and the bandwidth, latency, and reliability problems created by switching in the pSwitch.
Evaluation • Software, hardware, platform profiling tools, and VMs with several operating systems were used to evaluate the switching methods. • In this paper, we compare the existing vSwitch with an approximation of 802.1Qbh and the proposed nSwitch. • Investing capital in a new silicon core would be cost-prohibitive without the intent to produce and sell the product; thus, a real implementation is beyond the scope of this paper.
A. Testing Software, Hardware, Profiling Tools and VMs • Software: • Citrix(r) XenServer(tm) 5.6 FP1 with Open vSwitch was chosen for accelerated I/O virtualization and a paravirtualized guest. • The VM operating systems used were Red Hat 6 Beta and Ubuntu 10.10 Maverick Meerkat. • Hardware: • Directed I/O, Virtualization Technology for Directed I/O (VT-d), and SR-IOV were integrated in the main board, Ethernet card, and processor. • The hardware was specially built by us, as these features are not yet combined in a single platform. • Platform profiling tools: • Linux top, dstat, md5sum for load, and CPU Limit. • Xen uses Open vSwitch. • The Red Hat VMs were given 1 GB of RAM and 4 GB of hard drive. • VMs also used Ubuntu 10.10.
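As an illustration of the kind of measurement the profiling tools above provide, here is a minimal sketch that samples /proc/stat around a load run (md5sum over a file, as used for load generation on the slide) to estimate overall CPU utilization. The paper's actual methodology used top and dstat directly; this script and its file path are illustrative assumptions.

```python
# Minimal sketch: estimate overall CPU utilization around a load run by
# sampling /proc/stat before and after (the paper read top/dstat directly;
# the md5sum target path below is a placeholder).

import subprocess

def cpu_times():
    with open("/proc/stat") as f:
        fields = [float(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait jiffies
    return idle, sum(fields)

def utilization_during(cmd):
    idle0, total0 = cpu_times()
    subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
    idle1, total1 = cpu_times()
    return 100.0 * (1.0 - (idle1 - idle0) / (total1 - total0))

if __name__ == "__main__":
    print("CPU utilization: %.1f%%" % utilization_during(["md5sum", "/tmp/testfile"]))
```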
B. vSwitch: Bandwidth, Delay and CPU Load with 2 VMs • [Chart: measured results; the annotation highlights a 20% increase]
D. Proposed nSwitch approximation: Bandwidth, Delay and CPU Load testing • [Chart: measured results; the annotation notes a delay of 0.009 ms = 9 us]
Conclusion • We have presented a method of using SR-IOV functions in the nSwitching design and proposed that it is feasible to investigate a detailed implementation of nSwitching in existing SR-IOV core structures. • nSwitch is shown to reduce CPU utilization relative to the vSwitch and to decrease latency. • Compared with 802.1Qbh or Qbg, inter-VM transmission speed is not limited by the Ethernet port speed. • One of the primary benefits of nSwitching is that it eliminates the CPU load created by switching in the hypervisor, as well as the changes to the switch infrastructure required by other edge switching technologies. • Discussion questions: Is VM-VM traffic transparent? Is the evaluation sufficient? Is the algorithm correct?