
Memory Buddies: Exploiting Page Sharing for Smart Colocation in Virtualized Data Centers


Presentation Transcript


  1. Memory Buddies: Exploiting Page Sharing for Smart Colocation in Virtualized Data Centers Timothy Wood, Gabriel Tarasuk-Levin, Prashant Shenoy, Peter Desnoyers*, Emmanuel Cecchet, and Mark D. Corner University of Massachusetts, Amherst *Northeastern University

  2. Server Placement in Data Centers • Virtualization improves resource utilization by consolidating servers • But how to determine which servers to place together? • Must consider many resource constraints: • CPU, Disk, Network, Memory

  3. Why Memory? • CPU scheduling is fine grain • Easily share among many users • Work conserving, so no waste • Memory is much less flexible • Allocated on large time scales • Being wrong (paging) is disastrous • Memory is an expensive resource • Memory capacity is increasing more slowly than CPU power

  4. Content-Based Page Sharing • If two VMs have identical pages in memory, just keep one copy • Supported by the VMware ESX platform • Experimental tests in Xen, further support planned • Potential benefits • 33% in the VMware ESX paper • 65% with subpage sharing (Difference Engine) • [Diagram: VM 1 and VM 2 page tables mapped onto physical RAM. 1) Hypervisor detects identical pages 2) Copy-on-Write references created for shared pages]
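A minimal toy sketch of the mechanism (not the ESX implementation): collapse pages with identical content to one canonical copy, keyed by a content hash. The names here (`PAGE_SIZE`, `share_pages`) are invented for illustration; a real hypervisor marks shared mappings copy-on-write and verifies full page contents on a hash match.

```python
# Toy sketch of content-based page sharing (not the ESX implementation).
# Pages with identical content are collapsed to one canonical copy.
import hashlib

PAGE_SIZE = 4096  # bytes per page, as on x86

def share_pages(vms):
    """vms: dict of vm_id -> list of page contents (bytes).
    Returns a (vm_id, page index) -> canonical page id mapping,
    plus the number of physical pages actually needed."""
    canonical = {}   # content hash -> canonical page id
    mapping = {}     # (vm_id, page index) -> canonical page id
    for vm_id, pages in vms.items():
        for i, page in enumerate(pages):
            # Full hash used here; a real system verifies the bytes on a match.
            digest = hashlib.sha1(page).digest()
            if digest not in canonical:
                canonical[digest] = len(canonical)
            mapping[(vm_id, i)] = canonical[digest]
    return mapping, len(canonical)

# Two VMs with one identical page need 3 physical pages, not 4:
vms = {"vm1": [b"A" * PAGE_SIZE, b"B" * PAGE_SIZE],
       "vm2": [b"A" * PAGE_SIZE, b"C" * PAGE_SIZE]}
_, phys = share_pages(vms)
print(phys)  # 3
```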

  5. But what if… • Pages change over time, breaking sharing • If memory is being overcommitted, this can lead to hotspots • [Diagram: a shared page A is modified to A*, splitting the copy-on-write copy in physical RAM]

  6. What’s the problem? • Only get a benefit if VMs on a machine actually have pages to share! • [Diagram: VMs with matching pages (A–C and D–F) end up on different hosts, so neither Host 1 nor Host 2 can share them]

  7. Where to place a VM? • How do you figure out which VMs to place together? • Meet resource constraints • Maximize sharing • Why is placement hard in large data centers? • Many applications from different clients • Many software stacks / platforms • Workloads change over time • [Diagram: Here or there? Or there or there or there…?]

  8. Memory Buddies Goals • Efficiently analyze the memory contents of multiple VMs to determine sharing potential • Find more compact VM placement schemes • Respond quickly to changing conditions to prevent memory hotspots Bonus! Traces released at traces.cs.umass.edu

  9. Outline • Motivation • Memory Fingerprinting & Comparison • Sharing-aware Colocation • Hotspot Mitigation • Implementation & Evaluation • Related Work & Summary

  10. Memory Fingerprints • Hypervisor creates a hash for each page • Check hash table to see if the page is sharable • Record these hashes to create a fingerprint • Hash lists are big • 32 bits per 4 KB page = 1 MB per 1 GB of RAM • Need to forward fingerprint to other hosts • Comparison of lists is relatively slow • [Diagram: VM 1’s pages A and B hash to 0x11223344 and 0x55667788]
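A minimal sketch of building such a hash-list fingerprint. The 32-bit hash here is a truncated SHA-1 chosen for the example, not the hypervisor's actual page hash:

```python
# Sketch: build a hash-list fingerprint for a VM's memory.
# A 32-bit hash per 4 KB page gives 1 MB of fingerprint per 1 GB of RAM.
import hashlib

PAGE_SIZE = 4096

def fingerprint(memory: bytes) -> list[int]:
    """Return a sorted list of 32-bit page hashes (the VM's fingerprint)."""
    hashes = []
    for off in range(0, len(memory), PAGE_SIZE):
        page = memory[off:off + PAGE_SIZE]
        # Truncate a strong hash to 32 bits; the real hash is whatever
        # the hypervisor's page-sharing scanner uses.
        hashes.append(int.from_bytes(hashlib.sha1(page).digest()[:4], "big"))
    return sorted(hashes)  # sorted so two lists can be compared by a linear merge
```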

  11. Bloom Filter Fingerprints • Bloom filter is a probabilistic data structure • Stores keys by setting some bits to 1: Insert(key) sets h1(key)=1 and h2(key)=1 • False positive chance at lookup from hash collisions • Very space efficient • Tradeoff between filter size and accuracy • [Diagram: page hashes from VM 1 inserted into an M-bit filter]
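A small sketch of a Bloom-filter fingerprint under stated assumptions: m bits, k bit positions per key derived by double hashing (one common construction; the paper's parameters and hash choices may differ):

```python
# Sketch of a Bloom-filter fingerprint: insert each page hash by setting
# k bits. Parameters m (bits) and k (hash functions) trade size vs. accuracy.
import hashlib

class BloomFingerprint:
    def __init__(self, m: int = 1 << 20, k: int = 2):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, key: int):
        # Double hashing: derive k bit positions from two base hashes.
        d = hashlib.sha1(key.to_bytes(8, "big")).digest()
        h1 = int.from_bytes(d[:8], "big")
        h2 = int.from_bytes(d[8:16], "big") | 1  # odd, so never zero
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def insert(self, key: int):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def contains(self, key: int) -> bool:
        # May return a false positive, never a false negative.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))
```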

  12. Fingerprint Comparison • Hash list comparison: sort each list and then step through • Bloom Filter: simple method is the dot product of the bit vectors • Bloom Sharing Equation corrects for the expected number of false matches in each filter • Impressively accurate! • [Diagram: dot product of two bit vectors = 4]
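The slide names the Bloom Sharing Equation without stating it. One standard estimator in the same spirit, which likewise corrects for bits set by chance, estimates each filter's cardinality from its occupancy, n ≈ -(m/k)·ln(1 - t/m), then applies inclusion-exclusion using the bitwise OR of the two filters. This is a textbook sketch and may differ in detail from the paper's exact equation:

```python
# Sketch: estimate shared pages from two Bloom filters of equal size m
# with k hash functions. This textbook estimator may differ in detail
# from the paper's Bloom Sharing Equation.
import math

def popcount(bits: bytearray) -> int:
    return sum(bin(b).count("1") for b in bits)

def cardinality(bits: bytearray, m: int, k: int) -> float:
    # Standard Bloom occupancy estimate: n ~= -(m/k) * ln(1 - t/m),
    # where t is the number of set bits.
    t = popcount(bits)
    return -(m / k) * math.log(1 - t / m)

def estimate_shared(bits_a: bytearray, bits_b: bytearray, m: int, k: int) -> float:
    # |A ∩ B| = |A| + |B| - |A ∪ B|; the union filter is the bitwise OR.
    union = bytearray(x | y for x, y in zip(bits_a, bits_b))
    return max(0.0, cardinality(bits_a, m, k) + cardinality(bits_b, m, k)
                    - cardinality(union, m, k))
```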

  13. Eval: Fingerprinting • 4 GB RAM VMs • Hash: 4 sec • Sorted: 0.3 sec • Bloom: 0.02 sec • Bloom fingerprints are 10% the size, still < 1% error • Bloom filters are smaller and 10 to 100 times faster

  14. Outline • Motivation • Memory Fingerprinting & Comparison • Sharing-aware Colocation • Hotspot Mitigation • Implementation & Evaluation • Related Work & Summary

  15. Sharing Aware Placement • Where to place a freshly started VM? • Use staging area to find initial placement • Find feasible hosts • Estimate sharing potential • Migrate VM • Done! • [Diagram: the staging host compares the new VM’s bit-vector fingerprint against those of Hosts 1–3]
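A hedged sketch of this placement step, with invented names (`place_vm`, the host dict fields): among hosts with enough free memory, pick the one whose resident VMs' fingerprints promise the most sharing. `estimate_shared` here stands in for the fingerprint comparison sketched earlier.

```python
# Sketch of sharing-aware placement: among hosts with enough free
# capacity, pick the one whose resident VMs promise the most sharing.
# All names are illustrative, not the Memory Buddies API.

def place_vm(new_vm, hosts, estimate_shared):
    """new_vm: (fingerprint, mem_demand); hosts: list of dicts with
    'free_mem' and 'fingerprints' (one per resident VM).
    Returns the chosen host, or None if no host is feasible."""
    fp, demand = new_vm
    feasible = [h for h in hosts if h["free_mem"] >= demand]
    if not feasible:
        return None  # may need to power on another server
    # Score each feasible host by total estimated sharing with its VMs.
    return max(feasible,
               key=lambda h: sum(estimate_shared(fp, other)
                                 for other in h["fingerprints"]))
```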

  16. Consolidation & Hotspot Mitigation • Resource usage changes over time • Sharing may not last forever • Periodically consolidate servers • Identify candidates (least loaded hosts) • Match to destinations (hosts with best sharing) • Migrate VMs • Disable unnecessary servers • Hotspot Mitigation • Monitor memory usage to detect hotspots • VMs may run out of memory if sharing stops • Redistribute VMs to rebalance
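A rough sketch of the consolidation loop described above: `choose_dest` stands in for the fingerprint-based matching, and moving a VM between lists stands in for a live migration. These names are placeholders, not the system's actual interfaces.

```python
# Sketch of periodic consolidation: drain the least-loaded hosts onto
# hosts with the best estimated sharing, then disable empty servers.

def consolidate(hosts, choose_dest):
    """hosts: dicts with 'load' and 'vms'; choose_dest(vm, hosts)
    returns the best-sharing feasible host, or None."""
    for host in sorted(hosts, key=lambda h: h["load"]):  # least loaded first
        others = [h for h in hosts if h is not host]
        for vm in list(host["vms"]):
            dest = choose_dest(vm, others)
            if dest is not None:
                host["vms"].remove(vm)   # stands in for live migration
                dest["vms"].append(vm)
    # Hosts left with no VMs can be powered down.
    return [h for h in hosts if h["vms"]]
```

Hotspot mitigation runs the same machinery in reverse: when a host's memory use spikes because sharing broke, migrate VMs away to rebalance.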

  17. Offline Planning Tool • Inputs: host resources, resource traces, and memory fingerprints from Hosts 1…N • Dynamic programming based bin-packing tool • Finds subsets of VMs that can be placed together and maximize sharing • Outputs: number of hosts required, VM-to-host mapping, and estimated sharing per host
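The slide's tool is dynamic-programming based; as a simpler hedged stand-in, a greedy first-fit-decreasing packer that credits each VM's estimated sharing with VMs already on a host illustrates the same idea. All names (`pack`, `shared`) are invented for the sketch.

```python
# Greedy stand-in for the DP bin-packing tool: pack VMs onto hosts,
# shrinking each VM's effective demand by its estimated sharing with
# the VMs already placed there.

def pack(vms, host_capacity, shared):
    """vms: list of (vm_id, mem_demand); shared(a, b): estimated pages
    shared between two VMs. Returns a list of hosts (dicts of VMs)."""
    hosts = []  # each host: {"vms": [...], "used": pages}
    for vm_id, demand in sorted(vms, key=lambda v: -v[1]):  # biggest first
        for h in hosts:
            saving = sum(shared(vm_id, other) for other in h["vms"])
            if h["used"] + demand - saving <= host_capacity:
                h["vms"].append(vm_id)
                h["used"] += demand - saving
                break
        else:
            hosts.append({"vms": [vm_id], "used": demand})
    return hosts
```

The number of hosts required is then `len(hosts)`, and each entry gives the VM-to-host mapping with its estimated post-sharing memory use.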

  18. Outline • Motivation • Memory Fingerprinting & Comparison • Sharing-aware Colocation • Hotspot Mitigation • Implementation & Evaluation • Related Work & Summary

  19. Implementation • Memory Tracer • Tool used to gather data for the trace study • Runs on Linux, OS X, and Windows • Calculates 32-bit hashes for each page in memory • Sends either a hash list or a Bloom filter to the control node • Works on physical systems or in VMs

  20. Implementation • Nucleus • Collects memory fingerprints for each VM • Sends data to control plane • Control Plane • Gathers VM statistics and makes migration decisions based on sharing • Interacts with VMware Virtual Infrastructure to manage VMs

  21. Eval: Trace Study

  22. Eval: App Placement • Try to place as many VMs as possible onto a set of 4 hosts • Four app types (TPC-W, OFBiz, RUBiS, SpecJBB) -- data contents different for each VM instance • Sharing Oblivious: place on first host with sufficient capacity • [Graph: VMs per host across Hosts 1–4; Sharing Aware fits 20 VMs, Sharing Oblivious fits 17 VMs]

  23. Outline • Motivation • Memory Fingerprinting & Comparison • Sharing-aware Colocation • Hotspot Mitigation • Implementation & Evaluation • Related Work & Summary

  24. Related Work • Waldspurger, OSDI 2002 • CBPS in VMware ESX Server • Gupta, et al., OSDI 2008 • Increase sharing potential by looking at parts of pages • VM memory provisioning • Zhao & Wang (yesterday) have a good list!

  25. Summary • Hypervisors already support page sharing… • Memory Buddies makes it more useful • Identifies sharing opportunities across data center • Migrates VMs to maximize sharing • Uses efficient memory fingerprinting techniques to scale to large data centers • Traces will be online (soon) at: • http://traces.cs.umass.edu • Macbooks, Linux servers, and more! • Questions? twood@cs.umass.edu
