Plugging the Hypervisor Abstraction Leaks Caused by Virtual Networking
Alex Landau, David Hadas, Muli Ben-Yehuda
IBM Research – Haifa
25 May 2010
Hypervisor leaks
• Original goal of hypervisors – a complete replica of physical hardware
  • An application running on the host should be able to run in a guest
• Host details leaked to the guest:
  • Instruction set extensions
  • Bridged networking
    • Leaked IP address, subnet mask, etc.
  • NAT
    • Not suitable for many applications
Why are leaks bad?
• Leaks become a problem for:
  • Checkpoint / restart
  • Cloning
  • Live migration
• Example:
  • Guest acquires an IP address from DHCP
  • Guest is live-migrated to a different data center
  • Guest uses the old IP address in the new network
• Current solution:
  • Defer the problem to the guests and the network equipment
  • E.g., VLANs
Packet flow today (in KVM)
[Diagram: in each guest, the application talks through the socket interface and guest network stack to a network adapter driver or virtio frontend; QEMU provides the emulated network adapter or virtio backend; the host kernel connects each VM through a TAP device and virtual network interface to the host network services (e.g., a bridge or VAN central services)]
How to avoid leaks?
• The hypervisor, not the network, is responsible for avoiding leaks
• Guests should be:
  • Offered an isolated virtual environment
  • Independent of physical network characteristics (e.g., topology)
  • Independent of physical location (e.g., IP addresses)
• Example – a guest should receive an IP address independent of:
  • The host running the guest
  • The data center containing the host
  • The network configuration of the host
Avoiding leaks – Encapsulation
• Guest produces a Layer-2 frame
• Host encapsulates it in a UDP packet
• Host finds the destination host
  • By peeking at the destination (guest) MAC address
  • And “somehow” finding the destination host
• Host transmits the UDP packet
• Receiver host receives the UDP packet
• Receiver host decapsulates the Layer-2 frame from the UDP packet
• Receiver host passes the Layer-2 frame to its guest
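A minimal C sketch of the sending side, assuming a hypothetical lookup_host_by_mac() that resolves a guest MAC address to the IP address of the host currently running that guest; the tunnel port number is likewise an assumption:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    #define ENCAP_PORT 4789   /* tunnel UDP port - an assumption */

    /* Hypothetical lookup: map a guest MAC address to the IP address of
     * the host currently running that guest (e.g., a central registry). */
    extern struct in_addr lookup_host_by_mac(const unsigned char mac[6]);

    /* Encapsulate one guest Layer-2 frame in a UDP packet and send it to
     * the host that runs the destination guest. */
    ssize_t send_encapsulated(int udp_sock, const unsigned char *frame,
                              size_t len)
    {
        /* The first 6 bytes of an Ethernet frame are the destination MAC. */
        struct sockaddr_in peer = {
            .sin_family = AF_INET,
            .sin_port   = htons(ENCAP_PORT),
            .sin_addr   = lookup_host_by_mac(frame),
        };

        /* The whole frame becomes the UDP payload; the receiving host
         * strips the UDP header and hands the frame to its guest. */
        return sendto(udp_sock, frame, len, 0,
                      (struct sockaddr *)&peer, sizeof(peer));
    }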
Proposed packet flow – Dual Stack
[Diagram: the guest application, socket interface, and guest network stack (guest stack) sit above the network adapter driver / virtio frontend driver (glue); in QEMU, the emulated network adapter or virtio backend feeds a traffic-encapsulation layer, which writes through a socket interface (the isolation point) into the host stack: host network stack and network driver in the host kernel]
Performance
• The path from guest to wire is long
• Latencies are manifested in the form of:
  • Packet copies
  • VM exits and entries
  • User/kernel mode switches
  • Host QEMU process scheduling
Large packets
• Transport and network layers are capable of up to 64KB packets
• Ethernet limit is 1500 bytes
  • Ignoring jumbo frames
• But there is no Ethernet wire between the guest and the host!
  • Set the MTU to 64KB in the guest
  • 64KB packets are transferred from guest to host
  • Inhibit TCP/UDP checksum calculation and verification
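The guest-side MTU change can be done with the standard SIOCSIFMTU ioctl; a sketch follows, where the interface name and the exact MTU value are assumptions and the virtual NIC driver ultimately decides what it accepts:

    #include <net/if.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Raise the MTU of a guest interface, e.g. set_mtu("eth0", 65535). */
    int set_mtu(const char *ifname, int mtu)
    {
        struct ifreq ifr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket works for the ioctl */
        if (fd < 0)
            return -1;

        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_mtu = mtu;

        int ret = ioctl(fd, SIOCSIFMTU, &ifr);
        close(fd);
        return ret;
    }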
Large packets – Flow
• Application writes 64KB to a TCP socket
• TCP and IP check the MTU (= 64KB) and create 1 TCP segment, 1 IP packet
• Guest virtual NIC driver copies the entire 64KB frame to the host
• Host writes the 64KB frame into a UDP socket
• Host stack creates one 64KB UDP packet
• If the packet destination is a VM on the local host:
  • Transfer the 64KB packet directly over the loopback interface
• If the packet destination is another host:
  • The host NIC segments the 64KB packet in hardware
CPU affinity and pinning
• The QEMU process contains 2 threads:
  • A CPU thread (actually, one CPU thread per guest vCPU)
  • An IO thread
• The Linux process scheduler selects which core(s) to run the threads on
• The scheduler often made wrong decisions:
  • Scheduling both threads on the same core
  • Constantly rescheduling (core 0 -> 1 -> 0 -> 1 -> …)
• Solution/workaround – pin the CPU thread to core 0 and the IO thread to core 1
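The pinning workaround maps directly onto sched_setaffinity(); a sketch follows, where the thread IDs are assumptions (e.g., read from /proc/<qemu-pid>/task) and cores 0 and 1 follow the example above:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/types.h>

    /* Pin one thread (identified by its kernel TID) to a single core. */
    static int pin_thread(pid_t tid, int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        return sched_setaffinity(tid, sizeof(set), &set);
    }

    /* Usage (TIDs are hypothetical):
     *   pin_thread(cpu_thread_tid, 0);   CPU thread -> core 0
     *   pin_thread(io_thread_tid, 1);    IO thread  -> core 1
     */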
Flow control
• The guest does not anticipate flow control at Layer-2
• Thus, the host should not provide flow control
  • Otherwise, bad effects similar to TCP-in-TCP encapsulation will happen
• Lacking flow control, the host should have large enough socket buffers
• Example:
  • Guest uses TCP
  • Host buffers should be at least the guest TCP’s bandwidth x delay product
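A sketch of sizing the host's encapsulation socket buffers with setsockopt(); the 10 Gb/s x 1 ms bandwidth-delay figure is an illustrative assumption, not a number from the talk, and net.core.rmem_max / wmem_max may cap what the kernel actually grants:

    #include <sys/socket.h>

    /* Size the send and receive buffers of the encapsulation socket to at
     * least the guest TCP bandwidth-delay product:
     * 10 Gb/s x 1 ms = 1,250,000 bytes (illustrative). */
    int size_socket_buffers(int udp_sock)
    {
        int bufsize = 1250000;

        if (setsockopt(udp_sock, SOL_SOCKET, SO_SNDBUF,
                       &bufsize, sizeof(bufsize)) < 0)
            return -1;
        return setsockopt(udp_sock, SOL_SOCKET, SO_RCVBUF,
                          &bufsize, sizeof(bufsize));
    }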
Performance results
[Charts: throughput and receiver CPU utilization]