
New Approach to OVS Datapath Performance

Jun Xiao, founder of CloudNetEngine, presents a new OVS datapath approach, including performance comparisons and a Q&A session. Learn about VNIC emulation, paravirtualization, multiple queues, VNIC offloading, PNIC H/W acceleration, overlay awareness, conntrack, and more.


Presentation Transcript


  1. New Approach to OVS Datapath Performance. Jun Xiao, Founder of CloudNetEngine

  2. Agenda • VM virtual network datapath evolution • Technical deep dive on a new OVS datapath • Performance comparisons • Q & A

  3. VM virtual network datapath evolution • VNIC emulation • VNIC paravirtualization • VNIC/PNIC multiple queues / load balancing • VNIC offloading and PNIC H/W acceleration • OVS Kernel DP: overlay, overlay-awareness offloading, stateful actions (i.e. conntrack) • OVS-DPDK DP: very high packet rate processing

  4. Why a new approach to OVS datapath performance? • VNIC emulation • VNIC paravirtualization • VNIC/PNIC multiple queues / load balancing • VNIC offloading and PNIC H/W acceleration • OVS Kernel DP: overlay, overlay-awareness offloading, stateful actions (i.e. conntrack) • OVS-DPDK DP: very high packet rate processing • CPU efficiency is very important!

  5. A new approach to OVS datapath performance • VNIC emulation • VNIC paravirtualization • VNIC/PNIC multiple queues / load balancing • VNIC offloading and PNIC H/W acceleration • Overlay and overlay-awareness offloading • Stateful actions, i.e. conntrack • Very high packet rate processing • … all handled by a uniform OVS DP

  6. Technical deep dive on CloudNetEngine virtual switch

  7. Design principles • Datapath needs to be as reliable as possible • High performance for all typical workloads • High throughput in terms of both BPS and PPS • CPU efficiency is very critical • Ease of integration with various virtual networking solutions • Ease of maintenance

  8. CloudNetEngine virtual switch architecture • APIs: OpenFlow APIs, OVSDB APIs, ODP APIs, Adv APIs, CDP APIs • Control plane: ovs-vswitchd* (dpif-netlink), ovsdb-server • CDP (CloudNetEngine Data Path): scheduler, adaptive poll, mem mgmt, timer mgmt, classifier*, flow cache*, FWD engine*, multi queue, overlay, security group, QoS, offloading, sniffer, net chain • Built on DPDK*
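  To make the roles of the flow cache, classifier, and FWD engine concrete, here is a minimal C sketch of the lookup order such datapaths conventionally use: exact-match flow cache first, wildcarded classifier on a miss, slow-path upcall as a last resort. Every type and function name below is an illustrative assumption; the actual CDP interfaces are not public.

```c
/* Illustrative sketch only: struct layouts and function names are
 * assumptions, not CDP's real interfaces. It shows the conventional
 * fast-path order implied by the slide: flow cache -> classifier ->
 * slow-path upcall, with the FWD engine applying matched actions. */
#include <stddef.h>

struct packet     { void *data; };     /* parsed headers + metadata */
struct flow_entry { void *actions; };  /* match result: action list */

/* Stubs standing in for the components named on the slide. */
static struct flow_entry *flow_cache_lookup(const struct packet *p)
{ (void)p; return NULL; }              /* exact-match cache */
static struct flow_entry *classifier_lookup(const struct packet *p)
{ (void)p; return NULL; }              /* wildcarded lookup */
static void fwd_engine_execute(struct packet *p, struct flow_entry *f)
{ (void)p; (void)f; }                  /* overlay/QoS/offload actions */
static void upcall_to_vswitchd(struct packet *p)
{ (void)p; }                           /* first packet of a new flow */

static void datapath_receive(struct packet *pkt)
{
    struct flow_entry *f = flow_cache_lookup(pkt);  /* fastest path */
    if (f == NULL)
        f = classifier_lookup(pkt);                 /* cache miss */
    if (f != NULL)
        fwd_engine_execute(pkt, f);
    else
        upcall_to_vswitchd(pkt);                    /* slow path */
}
```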

  9. Performance practices • Packet handle layout, lazy metadata reset • Improved instructions per cycle • Load-balanced RXQ processing • Inline filtering for packet monitoring • CPU efficiency: • Hybrid polling + RX interrupt • Packet group metadata • Zero copy • S/W or H/W offloading depending on system runtime configuration
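  The "hybrid polling + RX interrupt" item can be illustrated with public DPDK APIs: busy-poll a queue while traffic is flowing, and fall back to interrupt-driven sleep once it goes idle. A minimal sketch, assuming an illustrative IDLE_SPINS threshold and a placeholder process_burst() hook; this is not CNE's actual implementation.

```c
/* Hybrid polling + RX interrupt sketch using public DPDK APIs.
 * IDLE_SPINS and process_burst() are illustrative assumptions. */
#include <rte_ethdev.h>
#include <rte_interrupts.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32
#define IDLE_SPINS 1024  /* empty polls before sleeping (assumed tuning) */

static void process_burst(struct rte_mbuf **pkts, uint16_t n)
{
    /* Placeholder: a real datapath would classify and forward here. */
    for (uint16_t i = 0; i < n; i++)
        rte_pktmbuf_free(pkts[i]);
}

static void hybrid_poll_loop(uint16_t port, uint16_t queue)
{
    struct rte_mbuf *pkts[BURST_SIZE];
    struct rte_epoll_event ev;
    uint32_t idle = 0;

    /* One-time: attach this queue's RX interrupt to the per-thread epoll fd. */
    rte_eth_dev_rx_intr_ctl_q(port, queue, RTE_EPOLL_PER_THREAD,
                              RTE_INTR_EVENT_ADD, NULL);

    for (;;) {
        uint16_t n = rte_eth_rx_burst(port, queue, pkts, BURST_SIZE);
        if (n > 0) {
            idle = 0;            /* traffic is hot: keep busy-polling */
            process_burst(pkts, n);
            continue;
        }
        if (++idle < IDLE_SPINS)
            continue;

        /* Queue went idle: arm the interrupt and block until the NIC
         * signals new packets, trading a little latency for CPU cycles.
         * (Production code would re-poll once after arming to close the
         * race with a packet that arrived in between.) */
        rte_eth_dev_rx_intr_enable(port, queue);
        rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1);
        rte_eth_dev_rx_intr_disable(port, queue);
        idle = 0;
    }
}
```

  This is the CPU-efficiency trade the slide describes: polling gives the lowest latency under load, while the interrupt fallback returns cycles to the host when a queue is quiet.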

  10. Extensibility • A lightweight and efficient framework (net chain) to plug in new features • It is RCU protected, so updating a net chain has no performance penalty on the datapath • A net chain can use packet group metadata to very quickly decide whether it is applicable to the input packet vector or not
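  A minimal sketch of what "RCU protected" means for the net chain, using userspace-rcu (liburcu): datapath readers traverse the chain lock-free, while a control-plane update swaps the head pointer and waits out a grace period before freeing the old hook. The struct names and the applies() metadata check are illustrative assumptions, not CDP's actual interfaces.

```c
#include <urcu.h>       /* userspace-rcu: rcu_read_lock, synchronize_rcu, ... */
#include <stdbool.h>
#include <stdlib.h>

struct pkt_group;       /* packet vector + precomputed group metadata */

struct net_hook {
    bool (*applies)(const struct pkt_group *);  /* cheap metadata test */
    void (*run)(struct pkt_group *);
    struct net_hook *next;
};

static struct net_hook *chain_head;  /* RCU-protected list head */

/* Datapath side: lock-free traversal inside an RCU read section.
 * (Reader threads must call rcu_register_thread() once at startup.) */
static void chain_run(struct pkt_group *grp)
{
    rcu_read_lock();
    for (struct net_hook *h = rcu_dereference(chain_head);
         h != NULL; h = rcu_dereference(h->next)) {
        /* Group metadata lets a hook skip the whole vector at once. */
        if (h->applies(grp))
            h->run(grp);
    }
    rcu_read_unlock();
}

/* Control side: publish the new head, then reclaim the old hook only
 * after every in-flight reader has left its read-side section. */
static void chain_pop_front(void)
{
    struct net_hook *old = chain_head;
    if (old == NULL)
        return;
    rcu_assign_pointer(chain_head, old->next);
    synchronize_rcu();  /* grace period: readers never block or stall */
    free(old);
}
```

  Readers never take a lock or see a half-updated chain, so an update pays only the grace-period cost on the control path, matching the slide's claim that chain updates carry no datapath penalty.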

  11. Performance comparisons

  12. Performance test configuration
  Guest H/W: 4 vCPUs / 2 vNICs / 4 GB memory for NFV tests; 1 vCPU / 1 vNIC / 1 GB memory for non-NFV tests; for NFV tests, virtio mrg_rxbuf=off and all other offload flags enabled; for non-NFV tests, all virtio offload flags enabled; vNICs use default queues.
  Guest S/W: buildroot kernel 4.4.3 x86_64; testpmd in io forwarding mode for NFV tests; iperf 3.1.1 for TCP tests; netperf 2.7.0 for TCP_RR tests.
  Host H/W: CPU Xeon E5-2620 v3 @ 2.40 GHz (6 physical cores, 12 logical cores); NIC 82599ES 10-Gigabit; 16 GB memory.
  Host S/W: Ubuntu 16.04 x86_64 + KVM; QEMU 2.5.1; 1 GB hugepages; all QEMU instances set CPU affinity.
  Virtual switches under test: Native OVS (OVS 2.6, kernel module bundled with Linux kernel 4.4.0); OVS-DPDK (OVS 2.6, DPDK v16.11); CNE vSwitch (CNE vSwitch 1.0).

  13. NFV test topology • Bi-directional traffic, each direction with 250 concurrent flows • 0.5% PDR (partial drop rate) • [Diagram: TRex traffic generator on Host2 drives traffic through the vswitch on Host1 into a VM running testpmd with vnic0 and vnic1]

  14. [Chart: NFV 64-byte 0.5% PDR test throughput, MPPS (higher is better)]

  15. [Diagram: TCP single-host test topology: two VMs on Host1 connected through the vswitch]

  16. [Charts: TCP single-host test throughput, Gbps (higher is better), and CPU usage, % (lower is better)]

  17. [Charts: TCP_RR single-host test transactions per second (higher is better) and CPU usage, % (lower is better)]

  18. [Diagram: TCP/VXLAN two-host test topology: VM1 and VM2 on Host1, VM3 and VM4 on Host2, each host running a vswitch, connected over a VXLAN tunnel]

  19. [Charts: TCP/VXLAN two-host test throughput, Gbps (higher is better), and CPU usage, % (lower is better)]

  20. [Charts: TCP_RR/VXLAN two-host test throughput, TPS (higher is better), and CPU usage, % (lower is better)]

  21. Demo: CNE vSwitch integration with OVN/OpenStack • Controller node: n-api, c-api, q-svc, ovn-nb, ovn-sb, ... • Network node: l3-agent, dhcp-agent, ovn-controller, ovs_vswitchd, ovsdb_server, OVS Kernel DP • Compute nodes (OVS Kernel DP): n-cpu, c-vol, ovn-controller, ovs_vswitchd, ovsdb_server • Compute nodes (CNE): n-cpu, c-vol, ovn-controller, ovs_vswitchd, ovsdb_server, CDP • Networks: EXT network, MGMT && CTRL network, data network

  22. Q & A • www.cloudnetengine.com • info@cloudnetengine.com • Twitter: @cloudnetengine
