Virtual Machine Monitors: Technology and Trends Jonathan Kaldor CS614 / F07
Virtual Machine Monitors (VMMs) • Allow users to run multiple commodity OSes on a single piece of hardware • Applications unchanged • Resources fairly distributed and multiplexed • Can use, but does not require, help from the hardware/OS • This is the main difference between the two papers
Why a VMM? • Hardware consolidation • While preserving boundaries [Animated diagram: Server A and Server B, each running its own OS and application for Windows and Linux clients, are consolidated onto a single server, where a VMM hosts Guest OS A and Guest OS B side by side]
Applications • Server consolidation • Application hosting • Application mobility • Security • Reducing need for dual-booting
VMM Organizational Types • Exokernel-like layer (Type I): multiplexes and manages hardware directly through a virtual layer • Layered on a Host OS (Type II): uses the HostOS to interface with hardware [Diagram: Type I stacks Apps / GuestOS / VM / VMM / Hardware; Type II stacks Apps / GuestOS / VM / VMM / HostOS / Hardware, with native apps running beside the VMM on the HostOS]
To Host or Not to Host • Hosted eases development • Can use HostOS drivers to interface with hardware • But performs poorly • Hybrid systems (modify HostOS for performance)
Performance • Run code directly on CPU for speed • Conflicting requirements: • VMM needs to maintain control • OS assumes it is privileged • Solution: run Guest OS code directly in less-privileged level • How to deal with the consequences?
Hardware Issues (or why no one has ever called x86 elegant, part 15,023) • Allows multiple privilege levels (rings 0–3) • The ISA can be ill-defined in a virtualized environment • Privileged instructions may fail silently or behave differently outside ring 0 • Hardware-walked page tables • Nontrivial for the VMM to exert control
(Para-) Virtualization • OS no longer has complete control over hardware • Paravirtualization • Provide alternatives to privileged instructions • Requires modifying source code of GuestOS • Binary Translation • Translate privileged instructions to virtualized alternatives while running
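The contrast between the two bullets above can be sketched in miniature (hypothetical, simplified names; a real VMM does this at the instruction level, not in Python). The same privileged operation — here, disabling interrupts — is either trapped/translated under full virtualization, or requested through an explicit hypercall by a modified guest under paravirtualization. In both cases only the guest's *virtual* interrupt flag changes:

```python
# Hypothetical sketch: one privileged operation, two virtualization styles.
# Under binary translation the VMM rewrites/traps the instruction; under
# paravirtualization the modified guest invokes the VMM directly.

class TinyVMM:
    def __init__(self):
        self.virtual_if = True  # guest's virtual interrupt-enable flag

    def emulate(self, instr):
        """Full virtualization: a translated/trapped 'cli' lands here."""
        if instr == "cli":
            self.virtual_if = False  # emulated: the real flag is untouched

    def hypercall_cli(self):
        """Paravirtualization: the modified guest calls the VMM explicitly."""
        self.virtual_if = False

vmm = TinyVMM()
vmm.emulate("cli")            # binary-translation path
assert vmm.virtual_if is False
```

Either path reaches the same state; the difference is whether the guest's source had to change to get there.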
(Para-) Virtualization, cont. • Arguments for paravirtualization • Can improve performance • Can expose important virtual/nonvirtual distinctions to the guest • Time! (e.g. virtual time vs. wall-clock time) • The architecture doesn’t necessarily work well with full virtualization
(Para-) Virtualization, cont. • Arguments for binary translation • Does not require access to source • Unrealistic at times to modify the OS • *cough*Windows*cough* • Legacy apps compatible with older OSes • Can be reasonably fast?
An example: Page Tables • VMWare: keeps a shadow copy of the page table • Detects when a change is made, makes the corresponding change to the shadow table • Translates from OS addresses to machine addresses [Diagram: Application → OS → Page Table, mirrored by VMWare’s Shadow Table; the OS’s “Add page 52” becomes “Add page 10” in the shadow table]
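The shadow-table idea can be sketched as follows (hypothetical class and names; real shadow page tables are maintained via write-protection traps on the guest's tables). The guest edits its own table in guest-physical terms; the VMM intercepts each write and mirrors it into the shadow table after translating to machine frames:

```python
# Minimal sketch of a shadow page table. The guest maps virtual pages to
# guest-physical frames; the VMM traps each update and installs the
# corresponding machine frame in the shadow table the hardware will use.

class ShadowedMMU:
    def __init__(self, gp_to_machine):
        self.gp_to_machine = gp_to_machine  # guest-physical -> machine frame
        self.guest_pt = {}                  # virtual page -> guest-physical frame
        self.shadow_pt = {}                 # virtual page -> machine frame

    def guest_map(self, vpage, gp_frame):
        """Guest OS updates its page table; the VMM traps and mirrors it."""
        self.guest_pt[vpage] = gp_frame
        # VMM side: translate the guest-physical frame to a machine frame.
        self.shadow_pt[vpage] = self.gp_to_machine[gp_frame]

mmu = ShadowedMMU(gp_to_machine={10: 700, 52: 701})
mmu.guest_map(vpage=3, gp_frame=52)
print(mmu.shadow_pt[3])  # machine frame actually installed: 701
```

The guest never sees machine frame numbers; the shadow table is the one the MMU actually walks.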
An example: Page Tables • Xen: OS tells Xen about the page table and relinquishes write control • OS tells Xen what updates it wants to make (e.g. “Add page 10”) • Xen ensures the updates are legal, and can batch them [Diagram: Application → OS → Xen → Page Table]
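Xen's validate-and-batch approach can be sketched like this (hypothetical names; the real mechanism is the `mmu_update` hypercall with a richer validation policy). The guest queues updates and submits them in one hypercall; the hypervisor checks each against ownership before applying:

```python
# Sketch of batched, validated page-table updates. The guest may only
# install mappings to machine frames it actually owns; the hypervisor
# applies a whole batch per hypercall, refusing illegal entries.

class Hypervisor:
    def __init__(self, frames_owned_by_guest):
        self.owned = frames_owned_by_guest
        self.page_table = {}               # virtual page -> machine frame

    def mmu_update(self, batch):
        """Apply a batch of (vpage, frame) updates, rejecting illegal ones."""
        applied = 0
        for vpage, frame in batch:
            if frame in self.owned:        # guest can only map its own frames
                self.page_table[vpage] = frame
                applied += 1
        return applied

hv = Hypervisor(frames_owned_by_guest={100, 101})
n = hv.mmu_update([(0, 100), (1, 101), (2, 999)])  # frame 999 not owned
print(n)  # 2 updates accepted, the illegal one refused
```

Batching amortizes the hypercall cost over many updates, which is much of Xen's performance story.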
Xen: Improving Performance • Minimize TLB flushes • Xen lives at top 64MB of every address space • Allow batch updates/requests to Xen • I/O, page tables, etc • OS-specified handlers • Need to guarantee safety
Xen I/O • Use ring structure to queue requests / responses • Enables batching, reordering • Virtual Network Interface • Rules used to correctly route packets • Avoids copying via page trading
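The ring structure above can be sketched as a fixed-size producer/consumer buffer (hypothetical class; Xen's real rings are shared-memory structures carrying both requests and responses). The guest advances a producer index; the VMM drains everything outstanding in one pass, which is what enables batching and reordering:

```python
# Sketch of a shared I/O request ring. The guest produces requests at
# req_prod; the VMM consumes them up to req_prod in a single batch.

RING_SIZE = 8

class IORing:
    def __init__(self):
        self.slots = [None] * RING_SIZE
        self.req_prod = 0   # advanced by the guest
        self.req_cons = 0   # advanced by the VMM

    def push_request(self, req):
        if self.req_prod - self.req_cons == RING_SIZE:
            return False                       # ring is full
        self.slots[self.req_prod % RING_SIZE] = req
        self.req_prod += 1
        return True

    def drain_requests(self):
        """VMM consumes every outstanding request as one batch."""
        batch = []
        while self.req_cons < self.req_prod:
            batch.append(self.slots[self.req_cons % RING_SIZE])
            self.req_cons += 1
        return batch

ring = IORing()
for block in (4, 7, 1):
    ring.push_request(("read", block))
print(ring.drain_requests())  # one batch of three requests
```

Responses flow back through a mirror-image ring; because the VMM sees whole batches, it is free to reorder them (e.g. to schedule disk reads).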
Oversubscribing Memory • “Hundreds” of OSes, each with 128MB of maximum memory • Need to efficiently allocate memory among OSes, effectively page to disk • Disk paging at VMM level can result in poor behavior
Disk Paging Policy Decisions at the VMM Level • VMM decides to take a page from the OS • OS decides to page to disk as well, and picks the same page • VMM now needs to reload the page from disk… • …solely so the Guest OS can write it back out to disk! [Animated diagram: VMM, Guest OS, main memory pages A–D, and disk; the same page bounces between memory and disk twice]
Using the OS paging algorithm • Lesson: The VMM is necessarily a poor estimator of which page to claim • Use OS paging algorithm instead • Balloon process
Using a Balloon Process • VMM asks the in-guest balloon process to “request memory” • The balloon “process needs pages badly!”, so the Guest OS frees pages using its own paging policy • The balloon reports back (“he gave me page c”) and tells the VMM to “take page c” [Animated diagram: VMM, balloon process inside the Guest OS, main memory pages A–D, and disk]
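The balloon interaction can be sketched as follows (hypothetical classes; VMware's real balloon is a guest driver that pins the pages it is granted). The key point is that the *guest's* replacement policy, not the VMM's, chooses the victims:

```python
# Sketch of ballooning. The VMM asks the in-guest balloon to inflate;
# the guest frees pages using its own policy and the balloon hands the
# resulting machine pages back to the VMM.

class Guest:
    def __init__(self, pages):
        self.in_use = dict.fromkeys(pages, "data")  # page -> contents

    def alloc_for_balloon(self, n):
        """Guest picks victims with its own policy (here: arbitrary order);
        a real guest would page out or drop caches as it saw fit."""
        victims = list(self.in_use)[:n]
        for p in victims:
            self.in_use.pop(p)
        return victims  # balloon pins these and reports them to the VMM

guest = Guest(pages=["a", "b", "c", "d"])
reclaimed = guest.alloc_for_balloon(2)
print(len(reclaimed))  # the VMM may now reuse these 2 machine pages
```

Because the balloon's pages are pinned and never touched, reclaiming them is safe: the guest has already moved anything valuable elsewhere.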
Additional Memory Tricks • Still need a paging algorithm in case ballooning fails • Potentially many copies of the same page • Detect these, remap them with copy-on-write • VMWare: 7-30% memory savings in real world
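The page-sharing trick above can be sketched with content hashing (hypothetical function; VMware additionally does a full byte comparison on hash match and breaks sharing lazily via copy-on-write):

```python
# Sketch of content-based page sharing: hash each page's contents and
# remap identical pages onto a single copy-on-write frame.

import hashlib

def share_pages(pages):
    """pages: {page_id: bytes}. Returns page_id -> shared frame id."""
    by_hash, mapping = {}, {}
    for pid, contents in pages.items():
        h = hashlib.sha256(contents).hexdigest()
        # First page seen with this content becomes the shared COW frame.
        by_hash.setdefault(h, pid)
        mapping[pid] = by_hash[h]
    return mapping

pages = {1: b"zeros", 2: b"zeros", 3: b"unique"}
print(share_pages(pages))  # pages 1 and 2 collapse onto one frame
```

A later write to a shared page triggers copy-on-write, so sharing is transparent to the guests.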
Performance [Benchmark chart] L: Native Linux, X: Xen, V: VMWare, U: User-Mode Linux
Xen Versus VMWare ESX (with a bucket of salt) From “A Performance Comparison of Commercial Hypervisors”, XenSource http://www.xensource.com/Documents/hypervisor_performance_comparison_1_0_5_with_esx-data.pdf
Conclusions • Either approach works well in practice • Small but noticeable performance penalty • Becoming a nonissue • OS support for virtualization • Microsoft Windows (?!) • Hardware support
The Future • Virtualization is probably going to become more commonplace • Hardware support will hopefully eliminate some issues • In a way, back to where we started • Resurrection of an old research idea to solve new problems