

  1. Active Memory Sharing Overview Carol Hernandez, Power Firmware Architect

  2. Outline
  • Technology Overview
    • What is Active Memory Sharing
    • Value Proposition
    • Requirements
    • Configuration
    • Major Sub-Systems
  • Deployment Considerations
    • Usage and Cost Savings
    • Performance
    • Methodology for Deployment
  • Memory Utilization Improvement Use Cases
    • Time Zone Variant Workloads
    • High Availability Scenario
    • Physical Over-commitment

  3. What is Active Memory Sharing
  Active Memory Sharing intelligently flows memory from one partition to another for increased utilization and flexibility of memory usage.
  • Memory virtualization enhancement for Power Systems
    • Latest innovation in PowerVM virtualization: extends resource optimization to include memory
    • Can improve overall memory utilization in much the same way micro-partitioning improves CPU utilization
  • A pool of physical memory is dynamically allocated amongst logical partitions as needed to optimize overall memory usage in the pool
  • Blends Power Systems hardware, firmware, and software enhancements to optimize resources
    • Supports over-commitment of logical memory, with overflow going to VIOS-managed paging devices
    • Two paging VIOS partitions can be used for redundancy
    • Compatible with Live Partition Mobility
  • More efficient utilization of memory through collaboration with the operating system
    • Enables fine-grained sharing of physical memory and automated expansion and contraction of a partition's physical memory footprint
    • Supports OS collaborative memory management to reduce hypervisor paging
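  To make the pooling idea concrete, here is a minimal Python sketch of how a shared pool might back the partitions' logical memory and spill the remainder to paging devices. It is illustrative only, with made-up numbers and a simple proportional split; it is not the PowerVM hypervisor's actual algorithm.

    # Toy model: split each partition's working set into pool-resident
    # and paged GB. Not the real hypervisor policy.
    def back_logical_memory(pool_gb, working_sets_gb):
        """working_sets_gb: {partition: currently referenced GB}."""
        demand = sum(working_sets_gb.values())
        if demand <= pool_gb:
            # Everything actively referenced fits: no hypervisor paging.
            return {p: (ws, 0.0) for p, ws in working_sets_gb.items()}
        scale = pool_gb / demand  # proportional share of the pool
        return {p: (ws * scale, ws * (1 - scale))
                for p, ws in working_sets_gb.items()}

    # Three partitions sharing a 16 GB pool: demand is 20 GB, so 20%
    # of each working set overflows to the VIOS-managed paging devices.
    print(back_logical_memory(16, {"aix1": 6, "linux1": 5, "ibmi1": 9}))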

  4. Active Memory Sharing Value Proposition
  [Figure: three charts of Memory Usage (GB) over Time, showing workloads with offset peaks sharing the pool]
  Dynamically optimize memory across virtual images to improve memory utilization.
  • Dynamically adjusts the memory available to multiple virtual images on a physical system based on their workload activity levels:
    • Different workload peaks due to time zones
    • Mixed workloads with different time-of-day peaks (e.g., CRM by day, batch at night)
    • Ideal for highly consolidated workloads with low or sporadic memory requirements
  • Increases memory utilization in an autonomic manner
    • Memory is automatically re-allocated between participating partitions
    • No user intervention required after set-up
    • Saves minutes to hours compared to manually moving memory between partitions with DLPAR
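  A back-of-the-envelope model of the time-zone case, with made-up workload profiles: sizing each partition for its own peak versus sizing one shared pool for the concurrent peak.

    hours = range(24)
    us_crm = [12 if 9 <= h < 17 else 4 for h in hours]           # daytime peak, GB
    eu_batch = [12 if (h < 4 or h >= 22) else 4 for h in hours]  # nighttime peak, GB

    dedicated = max(us_crm) + max(eu_batch)                # each sized for its own peak
    shared = max(a + b for a, b in zip(us_crm, eu_batch))  # pool sized for concurrent peak

    print(f"dedicated memory needed: {dedicated} GB")      # 24 GB
    print(f"shared pool needed:      {shared} GB")         # 16 GB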

  5. Active Memory Sharing Requirements
  • Available with PowerVM Enterprise Edition at no additional cost
  • System requirements:
    • IBM Power Systems server or blade with POWER6 processors
    • Virtual I/O Server (VIOS) 2.1.1
    • Firmware level eFW 3.4.2
    • HMC V7.3.4.2
  • Operating systems supported:
    • AIX 6.1 TL3
    • IBM i 6.1 plus PTFs
    • SUSE Linux Enterprise Server 11
  • Partition configuration requirements:
    • Shared processors only; dedicated processors are not supported
    • All I/O must be virtualized through VIOS; dedicated I/O, including HEA and HCA, is not supported
    • 4K pages only; 64K or larger pages are not supported*
  *The Linux kernel emulates 64K pages
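  For scripted environments, the minimum levels above could be encoded as data for a pre-deployment sanity check. A minimal sketch; the helper and its names are mine, not an IBM tool.

    # Minimum levels from this slide (assumed encoding).
    MIN_VIOS = (2, 1, 1)   # VIOS 2.1.1
    MIN_AIX = (6, 1, 3)    # AIX 6.1 TL3

    def at_least(version, minimum):
        """Compare a dotted version string against a minimum tuple."""
        return tuple(int(x) for x in version.split(".")) >= minimum

    print(at_least("2.1.2", MIN_VIOS))  # True
    print(at_least("1.5.2", MIN_VIOS))  # False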

  6. Active Memory Sharing Configuration
  [Diagram: a system with 350 GB of physical memory; dedicated-processor LPARs Finance (70 GB) and Planning (60 GB) and a VIOS (5 GB) use dedicated memory, while a 210 GB Shared Memory Pool backs three shared memory LPARs of 105 GB each, for a total defined memory of 450 GB, with overflow paged to VIOS-managed paging devices Disk1 through Disk4]
  • Shared Memory Pool
    • Specify desired and maximum pool size
    • Assign paging devices and paging VIOS
    • Single or redundant paging VIOSes
  • Shared Memory Partition attributes
    • Min, max, and assigned memory refer to logical memory
    • I/O entitled memory: maximum amount of physical memory available for I/O mapping
    • Memory weight: the partition's priority for obtaining physical pages
    • Paging VIOSes: single or redundant; primary and secondary paging VIOS (optional)
  • DLPAR memory operations change logical memory
  • Partition Mobility is supported among AMS-capable systems
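  The partition attributes above can be read as a small record with a couple of invariants; the min <= assigned <= max ordering follows from the slide, while bounding I/O entitled memory by the assigned memory is my assumption. Field names are mine.

    from dataclasses import dataclass

    @dataclass
    class SharedMemoryPartition:
        min_logical_gb: int        # logical memory bounds for DLPAR
        assigned_logical_gb: int
        max_logical_gb: int
        io_entitled_gb: int        # physical memory reserved for I/O mapping
        memory_weight: int         # priority for obtaining physical pages

        def validate(self):
            assert self.min_logical_gb <= self.assigned_logical_gb <= self.max_logical_gb
            # Assumed invariant: I/O entitlement cannot exceed assigned memory.
            assert self.io_entitled_gb <= self.assigned_logical_gb

    lpar = SharedMemoryPartition(min_logical_gb=4, assigned_logical_gb=8,
                                 max_logical_gb=16, io_entitled_gb=1,
                                 memory_weight=128)
    lpar.validate()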

  7. Active Memory Sharing Major Sub-Systems
  • Virtualization Control Point (VCP) user interface
    • Create the Shared Memory Pool
    • Create shared memory partitions
    • Change Shared Memory Pool configuration and partition attributes
    • Switch partitions between dedicated and shared memory
    • Profile I/O entitled memory usage
  • Firmware and OS interfaces
    • Paging VIOS interface to manage paging devices and allocate them to shared memory partitions
    • Hypervisor interface to create and manage shared memory partitions
    • Client interface for DLPAR operations and dynamic partition attribute changes
  [Diagram: 32 GB of physical memory split into a 16 GB Shared Memory Pool, 9 GB of dedicated memory, 5.5 GB free, and 1.5 GB of hypervisor memory; the VCP manages three shared memory partitions (AIX, Linux, and IBM i, each running a Collaborative Memory Manager), a dedicated memory AIX partition (8 GB), and a 1 GB paging VIOS whose vSCSI server reaches Fibre Channel paging devices; the hypervisor's Shared Memory Manager drives page in/out through the VASI stream and page loaning through the CMMs]

  8. Active Memory Sharing Major Sub-Systems (cont.)
  • Shared Memory Manager (SMM)
    • Guarantees physical memory is available for I/O operations
    • Manages and allocates the pool's physical memory among shared memory partitions, using:
      • Page stealing based on OS page usage hints, memory weight, and page usage statistics
      • A page loaning mechanism
      • Hypervisor paging (when all else fails)
  • Paging VIOS partition
    • Helps the SMM move partition page frames between the Shared Memory Pool and a paging device
    • Page in/out requests are received through the VASI stream
  • Operating system
    • Manages the partition's I/O entitlement across device drivers and provides page usage hints to the hypervisor
    • Dynamically changes the partition's memory footprint in response to hypervisor page loaning requests (CMM: Collaborative Memory Manager)
  (Diagram as on the previous slide)
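  As an illustration of the stealing order described above, here is a toy victim selector that prefers pages the OS has hinted are unused and, failing that, pages from partitions with lower memory weight. The real SMM also weighs page usage statistics and loaning; this is not its actual policy.

    def pick_victim(pages):
        """pages: list of (partition, memory_weight, os_hint_unused).

        Hinted-unused pages are stolen first; ties go to the lowest weight.
        """
        return min(pages, key=lambda p: (not p[2], p[1]))

    candidates = [
        ("aix1", 128, False),
        ("linux1", 64, False),
        ("ibmi1", 200, True),   # OS hinted this page is unused
    ]
    print(pick_victim(candidates))  # ('ibmi1', 200, True): the hint wins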

  9. Outline
  • Technology Overview
    • What is Active Memory Sharing
    • Active Memory Sharing Value Proposition
    • Active Memory Sharing Requirements
    • Active Memory Sharing Major Sub-Systems
    • Active Memory Sharing Configuration
  • Deployment Considerations
    • Usage and Cost Savings
    • Performance
    • OS and VIOS
    • Methodology for Deployment
  • Memory Utilization Improvement Use Cases
    • Time Zone Variant Workloads
    • High Availability Scenario
    • Physical Over-commitment

  10. Deployment Considerations: Usage and Cost Savings
  • Usage
    • AMS provides the most benefit when the aggregate memory working sets of all partitions running concurrently can be backed by the physical memory in the pool:
      • Variable workloads that peak at different times across the partitions
      • Workloads with low average memory residency requirements
      • Active/inactive partition scenarios
    • AMS provides limited benefit and is not recommended for:
      • Workloads with high, sustained memory residency requirements
      • Response-time and performance-sensitive workloads
      • Workloads with a high degree of load variation
    • To understand the benefits of AMS, customers should run test trials of the new AMS functions before deploying in a production environment
      • A white paper and LBS are available to assist customers with their set-up and optimization
  • Cost savings
    • A reduction in real memory requirements may reduce the cost of the system configuration, depending on the specific workloads and performance requirements
    • AMS allows creation of more partitions than would otherwise be possible: only actively referenced memory needs to stay resident in a workload's memory footprint
    • AMS can save the time and money of a system administrator who would otherwise be manually reallocating memory
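  The usage guidance above reduces to a rule of thumb; here is one hedged formulation (the inputs and thresholds are mine, not an IBM sizing tool):

    def ams_fit(pool_gb, concurrent_working_set_gb, latency_sensitive):
        """Rough go/no-go derived from the slide's guidance."""
        if latency_sensitive:
            return "not recommended: response-time sensitive workload"
        if concurrent_working_set_gb <= pool_gb:
            return "good fit: concurrent working sets backed by the pool"
        return "limited benefit: sustained physical over-commitment"

    print(ams_fit(pool_gb=64, concurrent_working_set_gb=56, latency_sensitive=False))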

  11. Deployment Considerations: Performance
  [Chart: memory over elapsed time (minutes) for four partitions; as Partition 1 becomes active it gains 1.6 GB taken from the inactive Partition 3, while the active Partitions 2 and 4 keep their applications running at full speed]
  • Performance depends on the characteristics and usage model of the workloads that share the memory pool, the memory configuration, and the over-subscription level
  • Switching latency varies with utilization across the shared memory partitions, the configured memory, and the paging devices
    • When a large amount of memory is moved, there is a ramp-up latency at the destination partition
  • When memory demand increases, the shared memory pool can be grown dynamically to avoid paging and improve performance
    • Latency has to be monitored to know when to initiate a DLPAR memory add to the shared pool
  • High-performance paging devices are required to minimize the performance impact
    • Solid-state devices and FAStT are recommended
  Example: memory bandwidth workload
  • Partition 2 and Partition 4 workload performance is protected
  • Partition 3's workload is idle, but the memory is not released by the application
  • When Partition 1 starts, memory is removed from Partition 3 until Partition 1 runs at full speed
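  The "monitor latency, then grow the pool" loop might look like this in outline. The threshold, step size, and the read_page_rate/dlpar_add stubs are all assumptions, not IBM interfaces.

    PAGE_RATE_LIMIT = 1000   # hypervisor page-ins/sec treated as harmful (assumed)
    GROW_STEP_GB = 4

    def manage_pool(read_page_rate, dlpar_add, pool_gb, max_pool_gb):
        """Grow the shared pool by DLPAR when paging exceeds the threshold."""
        if read_page_rate() > PAGE_RATE_LIMIT and pool_gb + GROW_STEP_GB <= max_pool_gb:
            dlpar_add(GROW_STEP_GB)     # dynamic memory add to the shared pool
            pool_gb += GROW_STEP_GB
        return pool_gb

    # Stubbed example: a 2500 pages/sec reading triggers one 4 GB add.
    pool = manage_pool(lambda: 2500, lambda gb: print(f"add {gb} GB"), 16, 32)
    print(pool)  # 20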

  12. Methodology for Deployment
  • Baseline: dedicated memory partitions
    • Determine the memory capacity needed for the workloads in each partition
  • Base overhead: AMS with the same physical memory as the dedicated memory scenario
    • The Shared Memory Pool has enough physical memory to cover the capacity determined in the baseline measurements
  • Logical overcommit: workloads peak at different times
    • The Shared Memory Pool has enough physical memory to cover the peaks at their different time periods
    • Frequent load changes may increase latency; additional memory may have to be added to the Shared Memory Pool to meet response-time criteria
  • Physical overcommit: workloads peak concurrently
    • The Shared Memory Pool cannot back all the memory in use at once
    • If performance is not at an acceptable level, go back to logical overcommit
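  Written out as a simplified procedure, the staged methodology advances one stage at a time and settles on the last stage that met the response-time criteria; measure is a stub for running the workload in that configuration.

    STAGES = ["baseline-dedicated", "ams-base-overhead",
              "logical-overcommit", "physical-overcommit"]

    def choose_configuration(measure):
        accepted = None
        for stage in STAGES:
            if not measure(stage):
                break                # e.g. physical overcommit misses targets
            accepted = stage         # keep the most aggressive passing stage
        return accepted

    # If everything but physical overcommit meets the criteria, we fall
    # back to logical overcommit, as the slide prescribes.
    print(choose_configuration(lambda s: s != "physical-overcommit"))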

  13. Outline
  • Technology Overview
    • What is Active Memory Sharing
    • Active Memory Sharing Value Proposition
    • Active Memory Sharing Requirements
    • Active Memory Sharing Major Sub-Systems
    • Active Memory Sharing Configuration
  • Deployment Considerations
    • Usage and Cost Savings
    • Performance
    • OS and VIOS
    • Methodology for Deployment
  • Memory Utilization Improvement Use Cases
    • Time Zone Variant Workloads
    • High Availability Scenario
    • Physical Over-commitment

  14. Active Memory Sharing: Time Zone Variant Workloads (No DLPAR)

  15. Active Memory Sharing: High Availability Scenario
  [Diagram: three production systems P1, P2, and P3, each running 10 LPARs in 100 GB; a dedicated-memory backup system (BU1) needs 300 GB to host all 30 LPARs, while an AMS backup system (BU1') hosts the same 30 LPARs in 120 GB]
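  Reading the figure's numbers as arithmetic: a dedicated-memory backup must hold every failed-over LPAR's full allocation, while an AMS backup only needs its pool to cover what the mostly idle standbys actually touch. The 4 GB idle resident set below is my assumption to make the figure's 120 GB come out.

    systems, lpars_per_system = 3, 10
    gb_per_lpar = 10                 # 100 GB per production system

    dedicated_backup = systems * lpars_per_system * gb_per_lpar    # 300 GB
    gb_per_idle_lpar = 4             # assumed resident set of a standby LPAR
    ams_backup = systems * lpars_per_system * gb_per_idle_lpar     # 120 GB

    print(dedicated_backup, ams_backup)  # 300 120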

  16. Active Memory Sharing: Physical Overcommitment

  17. Questions?
  Thank you.
  Carol Hernandez
  carolh@us.ibm.com
