Our work on virtualization. Chen Haogang, Wang Xiaolin {hchen, wxl}@pku.edu.cn, Institute of Network and Information Systems, School of Electrical Engineering and Computer Science, Peking University, 2008.11
Agenda • Work at PKU • Remote paging for VM • Transparent paravirtualization • Virtual resource sharing • Cache management in multi-core and virtualization environments
REMOCA: Hypervisor Remote Disk Cache • Motivation • Improve paging performance for memory-intensive or I/O-intensive workloads by utilizing free memory resources on another physical machine • Solution • The remote memory plays the role of a storage cache between a VM's virtual memory and its virtual disk devices • In most cases, network latency is much lower than disk latency (by one to two orders of magnitude)
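A minimal sketch of the read path this design implies, in C. The helpers remote_cache_lookup(), remote_cache_remove() and vdisk_read() are hypothetical stand-ins, not REMOCA's actual interfaces; the point is only that a hit in remote memory replaces a much slower disk access.

```c
/* Sketch of a REMOCA-style read path (illustrative, not the REMOCA code).
 * A block missing from guest memory is first looked up in remote memory
 * (one LAN round trip) before falling back to the virtual disk. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

/* Placeholder back-ends so the sketch compiles on its own. */
static bool remote_cache_lookup(uint64_t blk, void *buf) { (void)blk; (void)buf; return false; }
static void remote_cache_remove(uint64_t blk)            { (void)blk; }
static void vdisk_read(uint64_t blk, void *buf)          { (void)blk; memset(buf, 0, BLOCK_SIZE); }

/* Hypervisor-side handler for a guest disk read that missed guest memory. */
static void handle_block_read(uint64_t blk, void *guest_buf)
{
    uint8_t buf[BLOCK_SIZE];

    if (remote_cache_lookup(blk, buf)) {
        /* Remote hit: serve from another machine's memory and drop the
         * remote copy, since the block now lives in guest memory and the
         * cache is kept exclusive (see the design slide). */
        remote_cache_remove(blk);
        memcpy(guest_buf, buf, BLOCK_SIZE);
        return;
    }

    /* Remote miss: pay the full disk latency. The remote cache is only
     * populated later, when the block is evicted from guest memory. */
    vdisk_read(blk, buf);
    memcpy(guest_buf, buf, BLOCK_SIZE);
}

int main(void)
{
    uint8_t page[BLOCK_SIZE];
    handle_block_read(42, page);
    printf("read block 42 (%u bytes)\n", (unsigned)BLOCK_SIZE);
    return 0;
}
```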
REMOCA: Design • Local module: a ghost buffer • REMOCA is an exclusive cache • Remote module: the memory service
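A ghost buffer holds block identifiers ordered by recency, without the data itself. The sketch below is an illustrative metadata-only LRU (the names and the linear-scan implementation are assumptions, not REMOCA's code); it records evictions from guest memory and reports a block's LRU depth when the block is accessed again.

```c
/* Ghost buffer sketch: an LRU list of block IDs only (no data), letting the
 * hypervisor reason about which evicted blocks the remote cache holds. A
 * real implementation would pair a hash table with a doubly-linked list
 * instead of the linear scan used here. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GHOST_CAP 8   /* tiny for demonstration; think thousands in practice */

static uint64_t ghost[GHOST_CAP]; /* ghost[0] = most recently evicted */
static int ghost_len = 0;

/* Record that `blk` was evicted from guest memory (and therefore pushed to
 * the remote cache). Drops the oldest entry when full. */
static void ghost_record_eviction(uint64_t blk)
{
    if (ghost_len < GHOST_CAP)
        ghost_len++;
    memmove(&ghost[1], &ghost[0], (ghost_len - 1) * sizeof(ghost[0]));
    ghost[0] = blk;
}

/* On a re-access, return the block's LRU depth (0 = most recent) and remove
 * it, keeping the ghost buffer exclusive of what is back in guest memory.
 * Returns -1 if the block is not tracked. */
static int ghost_record_access(uint64_t blk)
{
    for (int i = 0; i < ghost_len; i++) {
        if (ghost[i] == blk) {
            memmove(&ghost[i], &ghost[i + 1],
                    (ghost_len - i - 1) * sizeof(ghost[0]));
            ghost_len--;
            return i;
        }
    }
    return -1;
}

int main(void)
{
    ghost_record_eviction(10);
    ghost_record_eviction(20);
    ghost_record_eviction(30);
    printf("depth of block 10: %d\n", ghost_record_access(10)); /* prints 2 */
    return 0;
}
```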
Summary of REMOCA • REMOCA can effectively alleviate the impact of thrashing behavior and significantly improve performance for real-world I/O-intensive applications (evaluated with a 768 MB remote cache) • Future work • Cluster-wide memory balancing • Predicting the miss ratio before allocating remote cache space (see the sketch below)
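The slides only state the goal of predicting the miss ratio before allocation. One standard way to do this is Mattson-style stack-distance analysis over the ghost buffer's depth information; the sketch below shows that approach as an assumption, not as a description of REMOCA itself.

```c
/* Miss-ratio prediction sketch: each re-access contributes its LRU stack
 * depth d to a histogram; a cache of S blocks would have hit every access
 * with d < S. Illustrative only. */
#include <stdio.h>

#define MAX_DEPTH 1024

static unsigned long depth_hist[MAX_DEPTH]; /* hits observed at each LRU depth */
static unsigned long total_accesses;

static void record_depth(int d)
{
    total_accesses++;
    if (d >= 0 && d < MAX_DEPTH)
        depth_hist[d]++;
    /* d == -1 (untracked) or beyond MAX_DEPTH counts as a cold miss. */
}

/* Predicted miss ratio for a remote cache of `size` blocks. */
static double predicted_miss_ratio(int size)
{
    unsigned long hits = 0;
    for (int d = 0; d < size && d < MAX_DEPTH; d++)
        hits += depth_hist[d];
    return total_accesses ? 1.0 - (double)hits / (double)total_accesses : 0.0;
}

int main(void)
{
    /* Toy trace: LRU depths observed for six accesses. */
    int depths[] = { 0, 3, 1, 7, 2, -1 };
    for (unsigned i = 0; i < sizeof depths / sizeof depths[0]; i++)
        record_depth(depths[i]);
    printf("miss ratio with 4-block cache: %.2f\n", predicted_miss_ratio(4));
    return 0;
}
```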
Agenda • Work at PKU • Remote paging for VM • Transparent paravirtualization • Virtual resource sharing • Cache management in multi-core and virtualization environments
Transparent paravirtualization • Some limitations of current hardware-assisted virtualization • Too many VM exits incur significant overhead • Most VM exits are related to page faults or I/O operations • (Figure: reasons and counts of VM exits)
Top 10 trapping instructions by VM-exit frequency in KVM-54 (io: I/O operation, pf: page fault, rd cr: read control register, clts and hlt: x86 instructions, ot: others)
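As an illustration of the bookkeeping behind such a breakdown, the sketch below tallies exits per reason. The enum values and names are labels drawn from the slide's categories, not real VMX/SVM exit codes or KVM data structures (KVM exposes similar counters through tools such as kvm_stat).

```c
/* Per-reason VM-exit accounting sketch (illustrative categories only). */
#include <stdio.h>

enum exit_reason { EXIT_IO, EXIT_PF, EXIT_RD_CR, EXIT_CLTS, EXIT_HLT, EXIT_OTHER, EXIT_NR };

static const char *reason_name[EXIT_NR] = { "io", "pf", "rd cr", "clts", "hlt", "ot" };
static unsigned long exit_count[EXIT_NR];

/* Called from a (hypothetical) exit handler with the decoded reason. */
static void account_exit(enum exit_reason r)
{
    exit_count[r < EXIT_NR ? r : EXIT_OTHER]++;
}

static void dump_exit_stats(void)
{
    unsigned long total = 0;
    for (int i = 0; i < EXIT_NR; i++)
        total += exit_count[i];
    for (int i = 0; i < EXIT_NR; i++)
        printf("%-6s %10lu (%5.1f%%)\n", reason_name[i], exit_count[i],
               total ? 100.0 * exit_count[i] / total : 0.0);
}

int main(void)
{
    /* Toy workload dominated by I/O and page-fault exits. */
    for (int i = 0; i < 700; i++) account_exit(EXIT_IO);
    for (int i = 0; i < 250; i++) account_exit(EXIT_PF);
    for (int i = 0; i < 30; i++)  account_exit(EXIT_HLT);
    dump_exit_stats();
    return 0;
}
```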
Hot instruction detection and translation • How to reduce VM exits • Paravirtualization • Xen and KVM apply paravirtualization to improve performance • The need to modify guest source code limits its applicability • Transparent paravirtualization • Detecting hot instructions • An efficient mechanism catches 97% of exits with the top 64 instructions (see the sketch below) • Replacing hot instructions • New, possibly complex assisting mechanisms must be introduced into the VMM to make the replacement safe and feasible • Implanting replaced instructions into the guest OS • Adaptive code implantation
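A sketch of the detection step referenced above: count VM exits per guest instruction address and keep the top 64 sites. The hash table, sizes, and names below are illustrative choices, not KVM or project code.

```c
/* Hot-instruction detection sketch: per-RIP exit counters plus top-K
 * selection. Illustrative data structures only. */
#include <stdint.h>
#include <stdio.h>

#define TABLE_SIZE 4096          /* open-addressing table, power of two */
#define HOT_LIMIT  64            /* number of instruction sites to patch */

struct site { uint64_t rip; unsigned long exits; };
static struct site table[TABLE_SIZE];

/* Called on every VM exit with the faulting guest instruction pointer. */
static void count_exit(uint64_t rip)
{
    uint64_t h = (rip >> 2) & (TABLE_SIZE - 1);
    while (table[h].exits && table[h].rip != rip)
        h = (h + 1) & (TABLE_SIZE - 1);      /* linear probing */
    table[h].rip = rip;
    table[h].exits++;
}

/* Copy the HOT_LIMIT most frequent sites into `hot` (descending order).
 * Returns how many distinct sites were found. */
static int pick_hot_sites(struct site *hot)
{
    int n = 0;
    for (int k = 0; k < HOT_LIMIT; k++) {
        int best = -1;
        for (int i = 0; i < TABLE_SIZE; i++) {
            int taken = 0;
            for (int j = 0; j < n; j++)
                if (hot[j].rip == table[i].rip) taken = 1;
            if (table[i].exits && !taken &&
                (best < 0 || table[i].exits > table[best].exits))
                best = i;
        }
        if (best < 0) break;
        hot[n++] = table[best];
    }
    return n;
}

int main(void)
{
    for (int i = 0; i < 1000; i++) count_exit(0xc0100000);   /* hot site */
    for (int i = 0; i < 3; i++)    count_exit(0xc0200000);   /* cold site */
    struct site hot[HOT_LIMIT];
    int n = pick_hot_sites(hot);
    printf("%d hot sites, hottest at %#llx with %lu exits\n",
           n, (unsigned long long)hot[0].rip, hot[0].exits);
    return 0;
}
```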
Implementation in KVM
Transparent Memory Paravirtualization • A new memory virtualization mechanism • Transforms the guest OS page table so that it maps guest virtual addresses directly to host physical addresses • The transformed guest page table, called the direct page table, is registered directly with the hardware MMU • A process using a direct page table is called a para-virtualized process • The guest OS keeps an independent view of its own physical address space for its memory management • When the guest OS accesses the direct page table, it expects guest physical addresses rather than the host physical addresses actually present in the direct page table
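The last two bullets amount to a two-way translation of page-table entries. The sketch below illustrates the idea with a toy 4-frame guest address space; the p2m/m2p helpers and the simplified PTE layout are assumptions for illustration, not the actual TMP implementation.

```c
/* Direct page table sketch: guest-written PTEs carry guest frame numbers,
 * the MMU-visible copy must carry host frames, and reads by the guest get
 * the guest view back. The p2m mapping and 4-frame space are made up. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PTE_FLAGS  0xFFFULL
#define NR_GUEST_FRAMES 4

/* Guest-frame -> host-frame mapping maintained by the VMM. */
static uint64_t p2m[NR_GUEST_FRAMES] = { 0x1a0, 0x1a1, 0x2c7, 0x09f };

static uint64_t gfn_to_hfn(uint64_t gfn) { return p2m[gfn]; }

static uint64_t hfn_to_gfn(uint64_t hfn)
{
    for (uint64_t g = 0; g < NR_GUEST_FRAMES; g++)
        if (p2m[g] == hfn) return g;
    return (uint64_t)-1;
}

/* Guest writes a PTE: swap the guest frame for the host frame before the
 * entry lands in the hardware-visible direct page table. */
static uint64_t make_direct_pte(uint64_t guest_pte)
{
    return (gfn_to_hfn(guest_pte >> PAGE_SHIFT) << PAGE_SHIFT)
           | (guest_pte & PTE_FLAGS);
}

/* Guest reads a PTE back: present the guest frame it expects, not the host
 * frame actually stored in the direct page table. */
static uint64_t make_guest_view(uint64_t direct_pte)
{
    return (hfn_to_gfn(direct_pte >> PAGE_SHIFT) << PAGE_SHIFT)
           | (direct_pte & PTE_FLAGS);
}

int main(void)
{
    uint64_t guest_pte  = (2ULL << PAGE_SHIFT) | 0x067;  /* gfn 2, present+rw */
    uint64_t direct_pte = make_direct_pte(guest_pte);
    printf("guest pte %#llx -> direct pte %#llx -> guest view %#llx\n",
           (unsigned long long)guest_pte,
           (unsigned long long)direct_pte,
           (unsigned long long)make_guest_view(direct_pte));
    return 0;
}
```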
Transparent Memory Paravirtualization (Figure: direct page table structure of the new memory virtualization mechanism)
Evaluation (figures)
Transparent Paravirtualization • Future work • TMP evaluation • Impact on cache hits • Compare with EPT/NPT and shadow page tables • Compare with KVM para-MMU and Xen para-MMU • Transparent MMU extension • Linux and Windows • Emulate all guest OS page faults • From TMP to transparent para-I/O • Other hot instructions • Limitations of transparent paravirtualization • Security vs. performance
Agenda • Work at PKU • Remote paging for VM • Transparent paravirtualization • Virtual resource sharing • Cache management in multi-core and virtualization environments
Virtual resource sharing • Motivation • In a homogeneous environment, how can we achieve a high degree of resource sharing while preserving isolation? • Example: network classroom @ PKU Zhongzhi • Teaching Windows, MS Office, or VC++ programming • About 30 students per class • Homogeneous OS, software, data, and application instances
Virtual resource sharing • Limitations of current solutions • Terminal server: poor isolation • Preferably, each student runs a separate OS instance • VM live clone: cannot provide data persistency • Content-based sharing: high scanning overhead • Difference Engine (OSDI '08): unable to share during OS or application startup • Goals • Fast startup of VMs and applications • Accurate resource sharing • Low management overhead
Virtual resource sharing • Solution: a bottom-up approach • Start from disk sharing • Map identical disk blocks to a single storage location • Manage a shared disk cache within the VMM • Replace disk reads with page remapping • Fast application startup • Challenges • How to discover identical disk blocks? CoW disk / CAS • How to handle sharable application data, especially "zero pages"? (see the sketch below)
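A minimal sketch of the block-deduplication idea in the list above: content-addressed storage with a zero-block special case. The FNV-1a hash and fixed-size map are illustrative simplifications; a real CAS layer would use a cryptographic digest, or verify contents byte-for-byte on a hash match, before sharing a block.

```c
/* Content-addressed block store sketch: identical blocks map to one storage
 * slot, and all-zero blocks share a single reserved slot. Illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096
#define MAP_SIZE   1024          /* power of two */
#define ZERO_SLOT  0             /* one shared location for all-zero blocks */

struct entry { uint64_t digest; int slot; int used; };
static struct entry map[MAP_SIZE];
static int next_free_slot = 1;   /* slot 0 reserved for the zero block */

static uint64_t fnv1a(const uint8_t *p, size_t n)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < n; i++) { h ^= p[i]; h *= 1099511628211ULL; }
    return h;
}

static int is_zero_block(const uint8_t *b)
{
    for (size_t i = 0; i < BLOCK_SIZE; i++)
        if (b[i]) return 0;
    return 1;
}

/* Return the storage slot for this block, reusing an existing slot when an
 * identical block has been stored before. */
static int store_block(const uint8_t *block)
{
    if (is_zero_block(block))
        return ZERO_SLOT;

    uint64_t d = fnv1a(block, BLOCK_SIZE);
    uint64_t i = d & (MAP_SIZE - 1);
    while (map[i].used && map[i].digest != d)
        i = (i + 1) & (MAP_SIZE - 1);        /* linear probing */
    if (!map[i].used) {
        map[i].used = 1;
        map[i].digest = d;
        map[i].slot = next_free_slot++;      /* write the data once, here */
    }
    return map[i].slot;
}

int main(void)
{
    uint8_t a[BLOCK_SIZE] = { 'A' }, b[BLOCK_SIZE] = { 'A' }, z[BLOCK_SIZE] = { 0 };
    printf("a -> %d, b -> %d (shared), zero -> %d\n",
           store_block(a), store_block(b), store_block(z));
    return 0;
}
```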
Agenda • Work at PKU • Remote paging for VM • Transparent paravirtualization • Virtual resource sharing • Cache management in multi-core and virtualization environments
Cache management in Multi-Core • Motivation • Current VMMs cannot make efficient use of the cache hierarchy on a multi-core platform • Objectives • Explore new compilation and profiling techniques to analyze and predict the memory access behavior of a program • Implement cache-aware memory allocation and CPU scheduling in the VMM • Dynamic memory balancing among VMs
Cache management in Multi-Core • Lower-level cache partitioning • Avoids cache contention among concurrent VMs • Uses the page-coloring technique (see the sketch below) • Restricts the number of cache sets that a VM can use • Transparent to the guest OS
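A sketch of the page-coloring arithmetic behind the list above: physical pages whose frames map to the same group of cache sets share a "color", so giving each VM disjoint colors partitions a physically indexed shared cache. The cache geometry below (4 MB, 16-way, 4 KB pages) is an example, not a claim about the hardware used in this work.

```c
/* Page-coloring sketch: compute the number of colors from the cache
 * geometry and restrict each VM to a disjoint range of colors. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT  12
#define CACHE_BYTES (4u << 20)   /* example: 4 MB shared last-level cache */
#define CACHE_WAYS  16

/* Number of distinct page colors = bytes covered by one way / page size. */
static unsigned num_colors(void)
{
    return (CACHE_BYTES / CACHE_WAYS) >> PAGE_SHIFT;
}

/* Color of a physical page frame number. */
static unsigned page_color(uint64_t pfn)
{
    return (unsigned)(pfn % num_colors());
}

/* Allocation filter: a VM restricted to colors [first, first+count) may only
 * be given physical pages whose color falls in that range. */
static int vm_may_use_page(uint64_t pfn, unsigned first, unsigned count)
{
    unsigned c = page_color(pfn);
    return c >= first && c < first + count;
}

int main(void)
{
    printf("colors available: %u\n", num_colors());            /* 64 here */
    printf("pfn 0x12345 -> color %u\n", page_color(0x12345));
    /* VM A gets colors 0-31, VM B gets colors 32-63: no shared cache sets. */
    printf("pfn 0x12345 usable by VM A (colors 0-31)? %d\n",
           vm_may_use_page(0x12345, 0, 32));
    return 0;
}
```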
Cache management in Multi-Core • Challenges • Predicting the performance impact on the application before partitioning • Online profiling and dynamic re-partitioning • Reducing page migration overhead • Cooperating with VM scheduling, especially CPU allocation and migration • New micro-architectures • Example: Intel Nehalem, with a 256 KB private L2 per core and a shared L3
Thanks! Q&A Discussion