VINI: Virtual Network Infrastructure


Presentation Transcript


  1. VINI: Virtual Network Infrastructure. Nick Feamster (Georgia Tech); Andy Bavier, Mark Huang, Larry Peterson, Jennifer Rexford (Princeton University)

  2. VINI Overview
  Bridge the gap between “lab experiments” and live experiments at scale.
  • Runs real routing software
  • Exposes realistic network conditions
  • Gives control over network events
  • Carries traffic on behalf of real users
  • Is shared among many experiments
  [Figure: a spectrum from small-scale experiment to live deployment, with simulation and emulation at the small-scale end and VINI between emulation and live deployment]

  3. Goal: Control and Realism
  • Control
    • Reproduce results
    • Methodically change or relax constraints
  • Realism
    • Long-running services attract real users
    • Connectivity to real Internet
    • Forward high traffic volumes (Gb/s)
    • Handle unexpected events
  [Table from slide: Topology: arbitrary, emulated (control) vs. actual network (realism); Traffic: synthetic or traces vs. real clients and servers; Network events: injected faults and anomalies vs. events observed in an operational network]

  4. Overview
  • VINI characteristics
    • Fixed, shared infrastructure
    • Flexible network topology
    • Expose/inject network events
    • External connectivity and routing adjacencies
  • PL-VINI: prototype on PlanetLab
  • Preliminary experiments
  • Ongoing work

  5. Fixed Infrastructure

  6. Shared Infrastructure

  7. Arbitrary Virtual Topologies

  8. Exposing and Injecting Failures

  9. Carry Traffic for Real End Users

  10. Participate in Internet Routing
  [Figure: BGP adjacencies between VINI and neighboring networks, carrying traffic between a client c and a server s]

  11. PL-VINI: Prototype on PlanetLab
  • First experiment: Internet In A Slice
    • XORP open-source routing protocol suite (NSDI ’05)
    • Click modular router (TOCS ’00, SOSP ’99)
  • Clarify issues that VINI must address
    • Unmodified routing software on a virtual topology
    • Forwarding packets at line speed
    • Illusion of dedicated hardware
    • Injection of faults and other events

  12. PL-VINI: Prototype on PlanetLab
  • PlanetLab: testbed for planetary-scale services
  • Simultaneous experiments in separate VMs
  • Each has “root” in its own VM, can customize
  • Can reserve CPU, network capacity per VM
  [Figure: a PlanetLab node running a Node Mgr, Local Admin, and VM1 … VMn on top of a Virtual Machine Monitor (VMM, “Linux++”)]

  13. XORP: Control Plane
  • Goal: run real routing protocols on virtual network topologies
  • BGP, OSPF, RIP, PIM-SM, IGMP/MLD

  14. User-Mode Linux: Environment
  • Interface ≈ network
  • PlanetLab limitation: a slice cannot create new interfaces
  • Run routing software in a UML environment
  • Create virtual network interfaces in UML
  [Figure: XORP (routing protocols) inside UML with virtual interfaces eth0 through eth3]
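The slide's workaround (routing software inside UML with its own virtual NICs) can be sketched with ordinary Linux tap devices. The snippet below is illustrative only: PL-VINI's real setup wires the UML NICs to Click through a UmlSwitch element, and the device names and addresses here are invented.

```python
#!/usr/bin/env python3
"""Sketch: create virtual NICs for a user-space routing environment.
Illustration only; PL-VINI's actual setup connects the UML NICs
(eth0 through eth3) to Click via a UmlSwitch element. Device names
and addresses below are invented."""
import subprocess

def sh(*args):
    """Run an iproute2 command and fail loudly on error."""
    subprocess.run(args, check=True)

# One tap device per virtual link the routing software should see.
for i, addr in enumerate(["10.0.1.1/30", "10.0.2.1/30"]):
    dev = "tap%d" % i
    sh("ip", "tuntap", "add", "dev", dev, "mode", "tap")   # virtual NIC
    sh("ip", "addr", "add", addr, "dev", dev)              # per-link subnet
    sh("ip", "link", "set", dev, "up")
```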

  15. Click: Data Plane
  • Performance
    • Avoid UML overhead
    • Move to kernel, FPGA
  • Interfaces → tunnels
    • Click UDP tunnels correspond to UML network interfaces
  • Filters
    • “Fail a link” by blocking packets at the tunnel
  [Figure: Click packet forwarding engine below UML, with a UmlSwitch element, tunnel table, and filters; control traffic goes up to XORP in UML, data traffic stays in Click]
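Click is configured in its own element language, so as a language-neutral illustration of the "UDP tunnel plus fail-a-link filter" idea, here is a hedged Python sketch of a user-space tunnel endpoint. The peer address, port, and the deliver() callback are placeholders, not part of PL-VINI.

```python
#!/usr/bin/env python3
"""Sketch of a user-space UDP tunnel endpoint with a 'fail the link' switch.
Illustration only: PL-VINI does this inside Click; the peer address, port,
and the deliver() callback are placeholders."""
import socket

PEER = ("198.51.100.7", 40000)    # remote tunnel endpoint (example address)
link_up = True                    # flip to False to emulate a link failure

tun = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tun.bind(("0.0.0.0", 40000))      # this node's tunnel endpoint

def send_to_peer(frame):
    """Called by the local forwarding engine with an outgoing packet."""
    if link_up:
        tun.sendto(frame, PEER)   # encapsulate in UDP and ship it
    # else: drop silently, which is exactly what a failed link looks like

def receive_loop(deliver):
    """Pull packets out of the tunnel and hand them to the data plane."""
    while True:
        frame, _ = tun.recvfrom(65535)
        if link_up:
            deliver(frame)        # e.g. inject into the forwarding engine
```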

  16. Intra-domain Route Changes
  [Figure: virtual network topology annotated with IGP link weights, with traffic flowing from a server s to a client c]
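To make the route-change experiment concrete, the sketch below recomputes shortest paths, as an OSPF-style IGP would, before and after a link failure. The four-node topology and its weights are invented for illustration and are not the topology in the slide's figure.

```python
#!/usr/bin/env python3
"""Sketch: how intra-domain (OSPF-style) routes change when a link fails.
The topology and weights are invented for illustration."""
import heapq

def shortest_paths(graph, src):
    """Plain Dijkstra over a dict {node: {neighbor: weight}}."""
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

graph = {
    "A": {"B": 700, "C": 856},
    "B": {"A": 700, "D": 260},
    "C": {"A": 856, "D": 639},
    "D": {"B": 260, "C": 639},
}
print(shortest_paths(graph, "A"))        # before the failure: D reached via B

# "Fail" the B-D link and recompute: traffic to D shifts to the A-C-D path.
del graph["B"]["D"], graph["D"]["B"]
print(shortest_paths(graph, "A"))        # after the failure
```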

  17. Ping During Link Failure
  [Plot: ping measurements over time, annotated with the link going down, routes converging, and the link coming back up]

  18. Close-Up of TCP Transfer
  • PL-VINI enables a user-space virtual network to behave like a real network on PlanetLab
  [Plot: close-up of a TCP transfer, annotated with slow start and the retransmission of a lost packet]

  19. Challenge: Attracting Real Users
  • Could have run experiments on Emulab
  • Goal: operate our own virtual network
    • Carrying traffic for actual users
    • We can tinker with routing protocols
    • Attracting real users

  20. Conclusion
  • VINI: controlled, realistic experimentation
  • Installing VINI nodes in NLR, Abilene
  • Download and run Internet In A Slice: http://www.vini-veritas.net/

  21. TCP Throughput
  [Plot: TCP throughput over time, annotated with the link going down and coming back up, plus a zoomed-in view]

  22. Ongoing Work
  • Improving realism
    • Exposing network failures and changes in the underlying topology
    • Participating in routing with neighboring networks
  • Improving control
    • Better isolation
    • Experiment specification

  23. Resource Isolation • Issue: Forwarding packets in user space • PlanetLab sees heavy use • CPU load affects virtual network performance

  24. Performance is bad • User-space Click: ~200Mb/s forwarding

  25. VINI should use Xen

  26. Experimental Results
  • Is a VINI feasible?
  • Click in user space: ~200 Mb/s forwarded
  • Latency and jitter comparable between the native network and IIAS on PL-VINI
  (Speaker notes: say something about running on just PlanetLab? Don’t spend much time talking about CPU scheduling…)

  27. Low latency for everyone? • PL-VINI provided IIAS with low latency by giving it high CPU scheduling priority

  28. Internet In A Slice
  • XORP: run OSPF, configure the FIB
  • Click: FIB, tunnels, inject faults
  • OpenVPN & NAT: connect clients and servers
  [Figure: clients (C) and servers (S) attached at the edge of the overlay]

  29. PL-VINI / IIAS Router
  • Blue: topology
    • Virtual net devices
    • Tunnels
  • Red: routing and forwarding
    • Data traffic does not enter UML
  • Green: enter and exit the IIAS overlay
  [Figure: XORP in UML with eth0 through eth3 and a UmlSwitch, connected via tap0 to Click, which holds the FIB, a UmlSwitch element, and an encapsulation table; control and data paths shown]

  30. PL-VINI Summary

  31. PL-VINI / IIAS Router
  • XORP: control plane
  • UML: environment
    • Virtual interfaces
  • Click: data plane
    • Performance: avoid UML overhead; move to kernel, FPGA
    • Interfaces → tunnels
    • “Fail a link”
  [Figure: XORP in UML with eth0 through eth3 above a Click packet forwarding engine with a UmlSwitch element and tunnel table; control and data paths shown]

  32. Trellis
  • Same abstractions as PL-VINI: virtual hosts and links
  • Push performance, ease of use
    • Full network-stack virtualization
    • Run XORP, Quagga in a slice
    • Support data plane in kernel
    • Approach native Linux kernel performance (15x PL-VINI)
  • Be an “early adopter” of new Linux virtualization work
  [Figure: the Trellis substrate; a virtual host runs an application in user space over a kernel FIB, with virtual NICs bridged and shaped onto EGRE tunnels]

  33. Virtual Hosts
  • Use container-based virtualization
    • Xen, VMWare: poor scalability, performance
  • Option #1: Linux Vserver
    • Containers without network virtualization
    • PlanetLab slices share a single IP address and port space
  • Option #2: OpenVZ
    • Mature container-based approach
    • Roughly equivalent to Vserver
    • Has full network virtualization

  34. Network Containers for Linux
  • Create multiple copies of the TCP/IP stack, one per network container
    • Kernel IPv4 and IPv6 routing tables
    • Physical or virtual interfaces
    • iptables, traffic shaping, sysctl.net variables
  • Trellis: marry Vserver + NetNS
    • Be an early adopter of the new interfaces
    • Otherwise stay close to PlanetLab
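Today the per-container copy of the TCP/IP stack that Trellis adopted early is exposed through Linux network namespaces. The sketch below shows the user-visible equivalent with the ip netns tooling; the namespace, device names, and addresses are placeholders, and Trellis's own Vserver + NetNS integration lives inside the kernel.

```python
#!/usr/bin/env python3
"""Sketch: a per-container network stack via Linux network namespaces.
Namespace, device names, and addresses are placeholders."""
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

NS = "vnet1"                               # one "network container"
sh("ip", "netns", "add", NS)               # fresh copy of the TCP/IP stack

# A veth pair: one end stays on the host, the other moves into the container.
sh("ip", "link", "add", "veth-host", "type", "veth", "peer", "name", "veth-c")
sh("ip", "link", "set", "veth-c", "netns", NS)

# Addresses and a default route that exist only inside this container
# (10.10.0.1 would be configured on veth-host on the host side).
sh("ip", "netns", "exec", NS, "ip", "addr", "add", "10.10.0.2/24", "dev", "veth-c")
sh("ip", "netns", "exec", NS, "ip", "link", "set", "veth-c", "up")
sh("ip", "netns", "exec", NS, "ip", "route", "add", "default", "via", "10.10.0.1")
```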

  35. Virtual Links: EGRE Tunnels
  • Virtual Ethernet links
    • Make minimal assumptions about the physical network between Trellis nodes
  • Trellis: tunnel Ethernet over GRE over IP
    • Already a standard, but no Linux implementation
  • Other approaches: VLANs, MPLS, other network circuits or tunnels
    • These fit into our framework
  [Figure: Trellis virtual host with virtual NICs attached to EGRE tunnels in the substrate]
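Mainline Linux has since gained a gretap device type that implements exactly this Ethernet-over-GRE encapsulation. The sketch below creates one end of such a virtual link; the endpoint addresses and GRE key are placeholders, and this is not the original Trellis implementation.

```python
#!/usr/bin/env python3
"""Sketch: one end of an Ethernet-over-GRE (EGRE) virtual link using the
kernel's gretap device type, which postdates the original Trellis work.
Endpoint addresses and the GRE key are placeholders."""
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

# Run on node A; node B mirrors this with local/remote swapped.
sh("ip", "link", "add", "egre0", "type", "gretap",
   "local", "192.0.2.1", "remote", "192.0.2.2", "key", "100")
sh("ip", "link", "set", "egre0", "up")
# egre0 now looks like an Ethernet NIC whose "wire" is a GRE/IP tunnel,
# so it can be bridged to a virtual host's NIC (see the bridging slides).
```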

  36. Tunnel Termination
  • Where is the EGRE tunnel interface?
  • Inside the container: better performance
  • Outside the container: more flexibility
    • Transparently change the implementation
    • Process and shape traffic between container and tunnel
    • User cannot manipulate the tunnel or shapers
  • Trellis: terminate the tunnel outside the container
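Terminating the tunnel outside the container is what lets the substrate insert shapers the user cannot touch. As a hedged illustration, the snippet below rate-limits the host-side end of the container's veth pair with a token-bucket qdisc; the device name and rate are arbitrary.

```python
#!/usr/bin/env python3
"""Sketch: shape a slice's traffic on the host side, between the container
and the tunnel, where the user cannot reach. Device name and rate are
arbitrary placeholders."""
import subprocess

# Cap what the container can push toward the tunnel at 100 Mb/s.
subprocess.run(["tc", "qdisc", "add", "dev", "veth-host", "root",
                "tbf", "rate", "100mbit", "burst", "64kb", "latency", "50ms"],
               check=True)
```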

  37. Glue: Bridging
  • How to connect virtual hosts to tunnels? Connecting two Ethernet interfaces
  • Linux software bridge
    • Ethernet bridge semantics, can create point-to-multipoint (P2M) links
    • Relatively poor performance
  • Common case: point-to-point (P2P) links
  • Trellis
    • Use the Linux bridge for P2M links
    • Create a new “shortbridge” for P2P links
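A minimal sketch of the generic, bridge-based glue, reusing the hypothetical device names from the earlier sketches: create a Linux bridge and enslave both the container-facing veth end and the EGRE tunnel device. Trellis's optimized "shortbridge" for point-to-point links is a custom kernel module and is not shown.

```python
#!/usr/bin/env python3
"""Sketch: bridge-based glue between a virtual host and its EGRE tunnel.
Device names follow the earlier sketches and are placeholders."""
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

sh("ip", "link", "add", "name", "br0", "type", "bridge")
sh("ip", "link", "set", "br0", "up")
# Enslave both ends of the virtual link: the host side of the container's
# veth pair and the Ethernet-over-GRE tunnel device.
for dev in ("veth-host", "egre0"):
    sh("ip", "link", "set", dev, "master", "br0")
```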

  38. Glue: Bridging
  • How to connect virtual hosts to EGRE tunnels? Two Ethernet interfaces
  • Linux software bridge
    • Ethernet bridge semantics, supports P2M links
    • Relatively poor performance
  • Common case: P2P links
  • Trellis: use the Linux bridge for P2M links; new, optimized “shortbridge” module for P2P links
  [Figure: the Trellis substrate with virtual NICs connected to EGRE tunnels through bridge/shortbridge and shaper elements]

  39. IPv4 Packet Forwarding
  • 2/3 of native performance, 10x faster than PL-VINI
  [Plot: forwarding rate (kpps)]

  40. Virtualized Data Plane in Hardware
  • Software provides flexibility, but poor performance and often inadequate isolation
  • Idea: forward packets exclusively in hardware
  • Platform: OpenVZ over NetFPGA
  • Challenge: share common functions, while isolating functions that are specific to each virtual network

  41. Accelerating the Data Plane • Virtual environments in OpenVZ • Interface to NetFPGA based on Stanford reference router

  42. Control Plane
  • Virtual environments
    • Virtualize the control plane by running multiple virtual environments on the host (same as in Trellis)
    • Routing table updates pass through a security daemon
    • Root user updates the VMAC-VE table
  • Hardware access control
    • VMAC-VE table / VE-ID controls access to hardware
    • Control register used to multiplex each VE to the appropriate hardware

  43. Virtual Forwarding Table Mapping

  44. Share Common Functions
  • Common functions
    • Packet decoding
    • Calculating checksums
    • Decrementing TTLs
    • Input arbitration
  • VE-specific functions
    • FIB
    • IP lookup table
    • ARP table
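As a software illustration of this split (not the NetFPGA pipeline itself), the sketch below separates the shared per-packet work of TTL and checksum handling from a per-VE longest-prefix-match lookup in that VE's own FIB. The example FIB and port names are invented.

```python
#!/usr/bin/env python3
"""Sketch of shared vs. VE-specific forwarding logic (illustration only)."""
import ipaddress

def common_forwarding_step(ttl, checksum_ok):
    """Shared by every VE: drop bad or expired packets, else decrement TTL."""
    if not checksum_ok or ttl <= 1:
        return None               # packet is dropped (or ICMP time-exceeded)
    return ttl - 1                # the header checksum is updated alongside

def fib_lookup(fib, dst):
    """VE-specific: longest-prefix match in this VE's own FIB."""
    best = None
    for prefix, next_hop in fib.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, next_hop)
    return best[1] if best else None

fib_ve1 = {"10.0.0.0/8": "port0", "10.1.0.0/16": "port1"}  # invented example
print(common_forwarding_step(64, True))       # -> 63
print(fib_lookup(fib_ve1, "10.1.2.3"))        # -> port1 (longer prefix wins)
```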

  45. Forwarding Performance

  46. Efficiency
  • 53K logic cells
  • 202 units of block RAM
  • Sharing common elements saves up to 75% over independent physical routers.

  47. Conclusion
  • Virtualization allows physical hardware to be shared among many virtual networks
  • Tradeoffs: sharing, performance, and isolation
  • Two approaches
    • Trellis: kernel-level packet forwarding (10x packet-forwarding rate improvement vs. PL-VINI)
    • NetFPGA-based forwarding for virtual networks (same forwarding rate as a NetFPGA-based router, with 75% improvement in hardware resource utilization)

  48. Accessing Services in the Cloud
  • Hosted services have different requirements
    • Too slow for an interactive service, or
    • Too costly for bulk transfer!
  [Figure: interactive and bulk-transfer services in a cloud data center, reached from the Internet through ISP1 and ISP2 via the data center router; routing updates and packets shown]

  49. Cloud Routing Today
  • Multiple upstream ISPs
    • Amazon EC2 has at least 58 routing peers in its Virginia data center
  • Data center router picks one route to a destination for all hosted services
  • Packets from all hosted applications use the same path
