VINI: Virtual Network Infrastructure
Nick Feamster (Georgia Tech)
Andy Bavier, Mark Huang, Larry Peterson, Jennifer Rexford (Princeton University)
VINI Overview
Bridge the gap between “lab experiments” and live experiments at scale. VINI:
• Runs real routing software
• Exposes realistic network conditions
• Gives control over network events
• Carries traffic on behalf of real users
• Is shared among many experiments
[Figure: spectrum of experimental platforms, from simulation and emulation through small-scale experiments and VINI to live deployment]
Goal: Control and Realism
• Control
  • Reproduce results
  • Methodically change or relax constraints
• Realism
  • Long-running services attract real users
  • Connectivity to the real Internet
  • Forward high traffic volumes (Gb/s)
  • Handle unexpected events
Control vs. realism along three dimensions:
  • Topology: arbitrary, emulated (control) vs. actual network (realism)
  • Traffic: synthetic or traces (control) vs. real clients and servers (realism)
  • Network events: injected faults and anomalies (control) vs. events observed in an operational network (realism)
Overview
• VINI characteristics
  • Fixed, shared infrastructure
  • Flexible network topology
  • Expose/inject network events
  • External connectivity and routing adjacencies
• PL-VINI: prototype on PlanetLab
• Preliminary experiments
• Ongoing work
Carry Traffic for Real End Users
[Figure: a client (c) and a server (s) exchanging traffic across the virtual network]
Participate in Internet Routing
[Figure: the virtual network maintains BGP adjacencies with neighboring networks while carrying traffic between a client (c) and a server (s)]
PL-VINI: Prototype on PlanetLab
• First experiment: Internet In A Slice
  • XORP open-source routing protocol suite (NSDI ’05)
  • Click modular router (TOCS ’00, SOSP ’99)
• Clarify issues that VINI must address
  • Unmodified routing software on a virtual topology
  • Forwarding packets at line speed
  • Illusion of dedicated hardware
  • Injection of faults and other events
PL-VINI: Prototype on PlanetLab
• PlanetLab: testbed for planetary-scale services
• Simultaneous experiments in separate VMs
• Each has “root” in its own VM, can customize
• Can reserve CPU, network capacity per VM
[Figure: a PlanetLab node, with a Virtual Machine Monitor (VMM, Linux++) hosting VM1 through VMn plus a Node Manager and Local Admin]
XORP: Control Plane
• BGP, OSPF, RIP, PIM-SM, IGMP/MLD
• Goal: run real routing protocols on virtual network topologies
[Figure: an XORP instance (routing protocols) serving as the control plane]
User-Mode Linux: Environment
• Interface ≈ network
• PlanetLab limitation: a slice cannot create new network interfaces
• Run the routing software in a User-Mode Linux (UML) environment
• Create virtual network interfaces inside UML
[Figure: XORP (routing protocols) running inside UML with virtual interfaces eth0 through eth3]
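The mechanics behind “create virtual network interfaces” can be illustrated with a short sketch. The following is a minimal, hypothetical Python example (not PL-VINI code) of opening a Linux TAP device, the same kind of virtual interface the UML environment and the overlay rely on; it assumes a Linux host with /dev/net/tun and sufficient privileges, and the interface name tap0 is just a placeholder.

```python
# Minimal sketch (not PL-VINI code): create a TAP virtual interface on Linux
# and read one Ethernet frame from it. Requires root / CAP_NET_ADMIN.
import fcntl
import os
import struct

TUNSETIFF = 0x400454CA   # ioctl that attaches this fd to a tun/tap interface
IFF_TAP = 0x0002         # TAP mode: layer-2 (Ethernet) frames
IFF_NO_PI = 0x1000       # no extra packet-information header

def create_tap(name: str = "tap0") -> int:
    """Open /dev/net/tun and attach a TAP interface with the given name."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    ifr = struct.pack("16sH", name.encode(), IFF_TAP | IFF_NO_PI)
    fcntl.ioctl(fd, TUNSETIFF, ifr)
    return fd

if __name__ == "__main__":
    tap_fd = create_tap("tap0")      # interface name is illustrative
    frame = os.read(tap_fd, 2048)    # blocks until the host sends a frame
    print(f"received {len(frame)}-byte Ethernet frame")
```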
Click: Data Plane
• Performance
  • Avoid UML overhead
  • Move to kernel, FPGA
• Interfaces → tunnels
  • Click UDP tunnels correspond to UML network interfaces
• Filters
  • “Fail a link” by blocking packets at its tunnel (see the sketch below)
[Figure: XORP in UML (eth0 through eth3) attached via a UmlSwitch element to the Click packet forwarding engine, which holds the tunnel table and filters; control traffic goes to UML, data traffic stays in Click]
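To make “fail a link by blocking packets at its tunnel” concrete, here is a small, hypothetical Python sketch (not the actual Click configuration) of a user-space UDP tunnel endpoint: it relays Ethernet frames between a local TAP device and a remote peer, and a single flag silently drops traffic to emulate a link failure. The peer address, port, and the tap_fd (which could come from the previous sketch) are placeholders.

```python
# Minimal sketch (not PL-VINI/Click code): a user-space UDP "link" that can be failed.
# Frames read from a local TAP device are encapsulated in UDP and sent to a peer;
# setting LINK_UP to False drops frames in both directions, emulating a link failure.
import os
import select
import socket

LINK_UP = True                      # flip to False to "fail" the link
PEER = ("198.51.100.2", 33000)      # placeholder remote tunnel endpoint
LOCAL_PORT = 33000

def run_tunnel(tap_fd: int) -> None:
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", LOCAL_PORT))
    while True:
        ready, _, _ = select.select([tap_fd, udp], [], [])
        for src in ready:
            if src is udp:
                frame, _ = udp.recvfrom(2048)   # frame arriving from the peer
                if LINK_UP:
                    os.write(tap_fd, frame)     # deliver it to the local stack
            else:
                frame = os.read(tap_fd, 2048)   # frame from the local stack
                if LINK_UP:
                    udp.sendto(frame, PEER)     # encapsulate and forward
            # With LINK_UP False, frames are silently discarded, so the routing
            # protocols running above eventually detect and route around the failure.
```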
Intra-domain Route Changes
[Figure: the experiment topology with per-link weights, a sender (s), and a client (c)]
Ping During Link Failure
[Figure: ping behavior over time, with annotations marking the link going down, routes converging, and the link coming back up]
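A measurement like this can be reproduced from inside a slice with a trivial probe script. The sketch below is a hypothetical Python example (not part of PL-VINI) that logs per-second ping results with timestamps so the link-down, convergence, and link-up phases become visible; the target address is a placeholder.

```python
# Minimal sketch (hypothetical): log ping results over time to observe a link failure.
import re
import subprocess
import time
from typing import Optional

TARGET = "10.0.5.2"   # placeholder: a host reached across the link being failed

def ping_once(target: str) -> Optional[float]:
    """Return the RTT in milliseconds, or None if the probe was lost."""
    out = subprocess.run(["ping", "-c", "1", "-W", "1", target],
                         capture_output=True, text=True)
    match = re.search(r"time=([\d.]+) ms", out.stdout)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    start = time.time()
    while True:
        rtt = ping_once(TARGET)
        elapsed = time.time() - start
        print(f"{elapsed:7.1f}s  {'lost' if rtt is None else f'{rtt:.1f} ms'}")
        time.sleep(1)
```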
Close-Up of TCP Transfer
[Figure: close-up of a TCP transfer, with annotations marking slow start and the retransmission of a lost packet]
PL-VINI enables a user-space virtual network to behave like a real network on PlanetLab
Challenge: Attracting Real Users
• Could have run experiments on Emulab
• Goal: operate our own virtual network
  • Carrying traffic for actual users
  • We can tinker with routing protocols
• Attracting real users
Conclusion
• VINI: controlled, realistic experimentation
• Installing VINI nodes in NLR, Abilene
• Download and run Internet In A Slice: http://www.vini-veritas.net/
TCP Throughput
[Figure: TCP throughput over time, with annotations marking link down, link up, and the zoomed-in region examined on the close-up slide]
Ongoing Work
• Improving realism
  • Exposing network failures and changes in the underlying topology
  • Participating in routing with neighboring networks
• Improving control
  • Better isolation
  • Experiment specification
Resource Isolation
• Issue: forwarding packets in user space
  • PlanetLab sees heavy use
  • CPU load affects virtual network performance
Performance is bad
• User-space Click: ~200 Mb/s forwarding
Experimental Results
• Is a VINI feasible?
  • Click in user space: ~200 Mb/s forwarded
  • Latency and jitter comparable between the native network and IIAS on PL-VINI
Low latency for everyone?
• PL-VINI provided IIAS with low latency by giving it high CPU scheduling priority
Internet In A Slice
• XORP: run OSPF, configure the FIB
• Click: FIB, tunnels, inject faults
• OpenVPN & NAT: connect clients (C) and servers (S)
[Figure: clients and servers at the edge connected through the Internet In A Slice overlay]
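The split of roles here (XORP computes routes; the forwarding engine holds a FIB that maps destinations to tunnels) can be illustrated with a small sketch. The following hypothetical Python example (not PL-VINI code) shows a FIB doing longest-prefix match from destination prefixes to UDP tunnel endpoints; the prefixes and peer addresses are made up.

```python
# Minimal sketch (hypothetical): a FIB mapping destination prefixes to UDP tunnel
# endpoints via longest-prefix match, as a routing protocol such as OSPF would
# populate it from the control plane.
import ipaddress
from typing import Dict, Optional, Tuple

class Fib:
    def __init__(self) -> None:
        # prefix -> (peer IP, UDP port) of the tunnel toward the next hop
        self.routes: Dict[ipaddress.IPv4Network, Tuple[str, int]] = {}

    def add_route(self, prefix: str, tunnel: Tuple[str, int]) -> None:
        self.routes[ipaddress.ip_network(prefix)] = tunnel

    def lookup(self, dst: str) -> Optional[Tuple[str, int]]:
        """Return the tunnel for the most specific matching route, if any."""
        addr = ipaddress.ip_address(dst)
        matches = [p for p in self.routes if addr in p]
        if not matches:
            return None
        best = max(matches, key=lambda p: p.prefixlen)
        return self.routes[best]

if __name__ == "__main__":
    fib = Fib()
    fib.add_route("10.0.0.0/8", ("198.51.100.2", 33000))   # coarse route
    fib.add_route("10.1.0.0/16", ("198.51.100.3", 33000))  # more specific route
    print(fib.lookup("10.1.2.3"))   # ('198.51.100.3', 33000)
    print(fib.lookup("10.9.9.9"))   # ('198.51.100.2', 33000)
```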
PL-VINI / IIAS Router
• Blue: topology
  • Virtual net devices
  • Tunnels
• Red: routing and forwarding
  • Data traffic does not enter UML
• Green: enter & exit the IIAS overlay
[Figure: XORP inside UML with interfaces eth0 through eth3, attached via a UmlSwitch element to Click, which holds the FIB, the encapsulation table, and a tap0 device; control traffic goes up to UML while data traffic stays in Click]
PL-VINI / IIAS Router
• XORP: control plane
• UML: environment
  • Virtual interfaces
• Click: data plane
  • Performance: avoid UML overhead; move to kernel, FPGA
  • Interfaces → tunnels
  • “Fail a link”
[Figure: XORP (routing protocols) inside UML with interfaces eth0 through eth3, a UmlSwitch element, and the Click packet forwarding engine with its tunnel table; control traffic enters UML, data traffic stays in Click]