Virtual Machine Memory Access Tracing With Hypervisor Exclusive Cache
USENIX '07
Pin Lu & Kai Shen
Department of Computer Science, University of Rochester
Motivation • Virtual Machine (VM) memory allocation • Lack of OS information at the hypervisor • The existing approaches do not work well • Static allocation: inefficient memory utilization & suboptimal performance • Working set sampling (e.g., VMware ESX server): limited information to support flexible allocation; works poorly under workloads with little data reuse
Miss Ratio Curve • Miss ratio curve (MRC): the page miss ratio at each possible memory allocation size • Allows flexible allocation objectives • Derived from the reuse distance distribution • [Figure: example MRCs (curves C1 and C2); Y axis: page miss ratio, X axis: memory size, with the current allocation size marked]
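A minimal sketch (Python; the function names and toy trace are illustrative, not from the paper) of how an MRC follows from reuse distances: each access's LRU stack distance is the number of distinct pages touched since the previous access to the same page, and the miss ratio at memory size s is simply the fraction of accesses whose distance is at least s.

    from collections import Counter

    def lru_reuse_distances(trace):
        """LRU stack distance of each access: the number of distinct pages
        touched since the previous access to the same page (inf = cold miss)."""
        stack, distances = [], []            # most-recently-used page at the end
        for page in trace:
            if page in stack:
                d = len(stack) - stack.index(page) - 1
                stack.remove(page)
            else:
                d = float("inf")
            stack.append(page)
            distances.append(d)
        return distances

    def miss_ratio_curve(distances, sizes):
        """Miss ratio at size s = fraction of accesses with distance >= s
        (those would miss in an s-page LRU-managed memory)."""
        hist, total = Counter(distances), len(distances)
        return {s: sum(c for d, c in hist.items() if d >= s) / total
                for s in sizes}

    trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4]
    print(miss_ratio_curve(lru_reuse_distances(trace), sizes=[1, 2, 3, 4]))
    # {1: 1.0, 2: 1.0, 3: 0.6, 4: 0.4}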
Related Work on MRC Estimation • Geiger (Jones et al., ASPLOS 2006) • Appends a ghost buffer in addition to VM memory • Reuse distance is tracked through I/O • Dynamic Tracking of MRC for Memory Management (Zhou et al., ASPLOS 2004) & CRAMM (Yang et al., OSDI 2006) • Protecting the LRU pages • Reuse distance is tracked through memory accesses • Transparent Contribution of Memory (Cipar et al., USENIX 2006) • Periodically samples the access bits to approximate memory access traces
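A toy model (Python; the class name, interface, and constants are assumptions, not Geiger's implementation) of the ghost-buffer idea: metadata of recently evicted pages is kept in LRU order, so when a block is later read back from disk, its position in the ghost list gives a reuse distance of roughly the VM memory size plus that position.

    from collections import OrderedDict

    class GhostBuffer:
        """Sketch of a ghost buffer appended (logically) behind VM memory."""
        def __init__(self, capacity, vm_pages):
            self.capacity = capacity          # ghost entries to remember
            self.vm_pages = vm_pages          # VM memory size, in pages
            self.ghost = OrderedDict()        # evicted block -> None, MRU last

        def on_evict(self, block):
            self.ghost.pop(block, None)
            self.ghost[block] = None
            if len(self.ghost) > self.capacity:
                self.ghost.popitem(last=False)        # forget the oldest entry

        def on_disk_read(self, block):
            """Estimated reuse distance for a block read back from storage."""
            if block not in self.ghost:
                return None
            keys = list(self.ghost)                    # LRU first, MRU last
            pos_from_mru = len(keys) - keys.index(block)
            del self.ghost[block]
            return self.vm_pages + pos_from_mru

    gb = GhostBuffer(capacity=1024, vm_pages=4096)
    gb.on_evict("blk7"); gb.on_evict("blk9")
    print(gb.on_disk_read("blk7"))                     # 4098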
Estimate VM MRC with Hypervisor Cache • The hypervisor cache approach • Part of the VM memory becomes a cache managed by the hypervisor • Memory accesses are tracked through cache references • Low overhead & requires minimal VM information • [Figure: data misses flow from the Virtual Machine's memory to the Hypervisor Cache, and hypervisor cache misses go to Storage]
Outline • The Hypervisor Cache • Design • Transparency & overhead • Evaluation • MRC-directed Multi-VM Memory Allocation • Allocation Policy • Evaluation • Summary & Future Work
Design • Track the MRC within the VM memory allocation: part of the VM memory becomes the hypervisor cache • Exclusive cache for caching efficiency • Comparable miss rate: the hypervisor cache (HCache) incurs no extra misses if LRU is employed • Data admission from the VM avoids expensive storage I/O • [Figure: the VM memory split into VM direct memory plus hypervisor cache, shown at several split ratios]
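A minimal sketch (Python; the class and method names are made up, and the eviction hook assumes the notification described on the next slide) of the exclusive-cache design: pages the VM evicts are admitted without storage I/O, a hit hands the page back to the VM and removes it from the cache (keeping the two levels exclusive), and the LRU depth of each hit yields a reuse distance beyond the VM's direct memory, from which the extended MRC is built.

    from collections import OrderedDict, Counter

    class ExclusiveHCache:
        """Toy exclusive hypervisor cache that records the depth of each hit."""
        def __init__(self, capacity, vm_direct_pages):
            self.capacity = capacity
            self.vm_direct_pages = vm_direct_pages
            self.cache = OrderedDict()        # block -> data, MRU at the end
            self.hit_depths = Counter()       # LRU depth of each cache hit
            self.misses = 0                   # requests that went to storage

        def on_vm_evict(self, block, data):
            """VM evicted a page: admit its data directly (no storage I/O)."""
            self.cache.pop(block, None)
            self.cache[block] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)         # drop the LRU block

        def on_vm_read_miss(self, block):
            """VM missed in its direct memory; serve from the cache if present."""
            if block in self.cache:
                keys = list(self.cache)
                self.hit_depths[len(keys) - keys.index(block)] += 1
                return self.cache.pop(block)           # stay exclusive
            self.misses += 1
            return None                                 # caller reads storage

        def extended_mrc(self, sizes):
            """Miss ratio the VM would see at total sizes >= its direct memory."""
            total = sum(self.hit_depths.values()) + self.misses
            return {s: (self.misses + sum(c for d, c in self.hit_depths.items()
                                          if self.vm_direct_pages + d > s)) / total
                    for s in sizes}

    hc = ExclusiveHCache(capacity=2, vm_direct_pages=4)
    hc.on_vm_evict("p1", b"..."); hc.on_vm_evict("p2", b"...")
    hc.on_vm_read_miss("p1")                            # hit at depth 2
    print(hc.extended_mrc(sizes=[4, 5, 6]))             # {4: 1.0, 5: 1.0, 6: 0.0}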
Cache Correctness • Cache contents need to be correct, i.e., cached data must match its storage location • Challenging because the hypervisor has very limited information • VM data eviction notification • The VM OS notifies the hypervisor about a page eviction/release • Two-way mapping tables: page → storage location and the reverse • On each VM I/O request, mappings are inserted into both tables • On each VM page eviction, data is admitted after consulting the mapping tables
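An illustrative sketch (Python; names are assumptions) of the two-way mapping tables: every VM I/O request records the page ↔ storage-block association in both directions, and a page-eviction notification only yields a storage location for admission when the two tables still agree; otherwise the data is not admitted.

    class MappingTables:
        """Toy page <-> storage-block mapping tables consulted on eviction."""
        def __init__(self):
            self.page_to_block = {}
            self.block_to_page = {}

        def on_vm_io(self, page, block):
            """A VM read/write ties this memory page to this storage block."""
            old_block = self.page_to_block.pop(page, None)
            if old_block is not None:                  # drop stale reverse entry
                self.block_to_page.pop(old_block, None)
            old_page = self.block_to_page.pop(block, None)
            if old_page is not None:
                self.page_to_block.pop(old_page, None)
            self.page_to_block[page] = block
            self.block_to_page[block] = page

        def on_vm_eviction(self, page):
            """Storage block to file the evicted page under, or None to skip."""
            block = self.page_to_block.get(page)
            if block is None or self.block_to_page.get(block) != page:
                return None                            # stale/ambiguous mapping
            return block

    tables = MappingTables()
    tables.on_vm_io(page=0x1000, block=42)
    print(tables.on_vm_eviction(0x1000))               # 42: safe to admit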
Design Transparency & Overhead • The current design is not transparent • Explicit page eviction notification from the VM OS • The changes are small and fit well with para-virtualization • Reuse-time inference techniques (Geiger) are not appropriate: the page may have already been modified, so it is too late to admit it from the VM • System overhead • Cache and mapping table management • Minor page faults • Page eviction notifications
System Workflow • More complete page miss rate info ← smaller VM direct memory (larger hypervisor cache) • The cache can be kept permanently (no step 3), or dismantled via step 3 if the overhead is not tolerable
Prototype Implementation • Hypervisor: Xen 3.0.2 with VM OS Linux 2.6.16 • Page eviction added as a new type of VM I/O request • Hypervisor cache populated through ballooning • HCache and mapping tables maintained in the Xen0 back-end driver • Page copying used to transfer data • [Figure: the XenU front-end sends read, write, and eviction requests to the Xen0 back-end, which manages the cache & tables and reads/writes Storage]
Hypervisor Cache Evaluation • Goals • Evaluate caching performance, overhead & MRC prediction accuracy • VM Workloads • I/O bound: SPECweb99, keyword searching, TPC-C-like • CPU bound: TPC-H-like
Throughput Results • Total VM memory is 512MB • Hypervisor cache sizes: 12.5%, 25%, 50%, and 75% of total VM memory
CPU Overhead Results • Total VM memory is 512MB • Hypervisor cache sizes: 12.5%, 25%, 50%, and 75% of total VM memory
Outline • The Hypervisor Cache • Design • Transparency and overhead • Evaluation • MRC-directed Multi-VM Memory Allocation • Allocation Policy • Evaluation • Summary & Future Work
MRC-directed Multi-VM Memory Allocation • More complete VM MRCs via the hypervisor cache • Provides detailed miss ratios at different memory sizes • Flexible VM memory allocation policies • Isolated Sharing Policy • Maximize system-wide performance, e.g., minimize the geometric mean of all VMs' miss ratios • Constrain individual VM performance degradation, e.g., no VM suffers more than α% extra miss ratio
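A rough sketch (Python; the exhaustive search, the data shapes, the example MRCs, and the relative reading of the α% constraint are assumptions, only the objective and constraint themselves come from the slide) of isolated sharing: pick the allocation that minimizes the geometric mean of the predicted per-VM miss ratios, discarding any allocation under which some VM's miss ratio exceeds its current (baseline) miss ratio by more than α.

    from itertools import product
    from math import prod

    def isolated_sharing(mrcs, baselines, total, step, alpha):
        """mrcs: per-VM {size: predicted miss ratio}; baselines: miss ratio at
        the current allocation; alpha: allowed relative degradation (e.g. 0.05)."""
        sizes = range(step, total + 1, step)
        best, best_obj = None, float("inf")
        for alloc in product(sizes, repeat=len(mrcs)):
            if sum(alloc) != total:
                continue
            ratios = [mrc[s] for mrc, s in zip(mrcs, alloc)]
            # isolation constraint: no VM degrades by more than alpha
            if any(r > b * (1 + alpha) for r, b in zip(ratios, baselines)):
                continue
            obj = prod(ratios) ** (1 / len(ratios))    # geometric mean
            if obj < best_obj:
                best, best_obj = alloc, obj
        return best, best_obj

    mrc_a = {256: 0.9, 512: 0.5, 768: 0.2}              # made-up example MRCs
    mrc_b = {256: 0.6, 512: 0.55, 768: 0.5}
    print(isolated_sharing([mrc_a, mrc_b], baselines=[0.5, 0.55],
                           total=1024, step=256, alpha=0.05))
    # ((512, 512), ~0.524): shifting memory to VM A would degrade VM B too much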
Isolated Sharing Experiments • Base allocation of 512MB for each VM; objective: minimize the geometric mean of miss ratios • Isolation constraint at 5% ⇒ mean miss ratio of 0.85 • Isolation constraint at 25% ⇒ mean miss ratio of 0.41
Comparison with VMware ESX Server • ESX server policy vs. isolated sharing with 0% tolerance • Both work well when the VM working set (around 330MB) fits in the VM memory (512MB) • Add a noisy background workload that slowly scans through a large dataset • The VM MRC identifies that the VM does not benefit from extra memory • The ESX server estimates the working set size at 800MB, preventing memory reclamation
Summary and Future Work • Summary • VM MRC estimation via Hypervisor Cache • Features, design and implementation • MRC-directed multi-VM memory allocation • Future Work • Improving the transparency of HCache • Reducing the overhead of HCache • Generic hypervisor buffer cache