VIR320 Virtualizing Microsoft Exchange Server with Hyper-V Matt McSpirit Senior Product Marketing Manager Microsoft Corporation @mattmcspirit
What We Will Cover • Why Microsoft Virtualization for Microsoft Server Applications? • Base Hypervisor Performance • ESG Performance Results • Exchange Server Virtualization Best Practices • Links to Reference Material
Trends – Changing Market Landscape • Virtualization is exploding, resulting in VM proliferation and impacting OS share • The number of physical server shipments used for virtualization will grow to 1.7M+ in 2012 at a CAGR of 15% • 19% of physical server shipments will be used for virtualization, increasing from 11.7% in 2007 [Chart: IDC Server Virtualization Forecast — VM density]
Why Microsoft Virtualization for Microsoft Server Applications
Microsoft Virtualization for Server Applications • Business-critical applications: line-of-business (LOB) custom applications, enterprise applications, and Microsoft server applications (communication, databases, collaboration) • Virtualization and management platform: Microsoft Virtualization — Windows Server 2008 R2 Hyper-V & System Center
Microsoft Virtualization: The Best Choice for Microsoft Server Applications • Built for Windows: built-in virtualization with one-stop support, deep application knowledge, large partner ecosystem, increased deployment options, virtualization-friendly licensing* • Complete management solution: physical & virtual management, cross-platform & hypervisor support • Low cost, complete solution: a comparable solution can cost almost 4 times more†, lower ongoing costs *Only available with Microsoft Virtualization. †Based on a comparison of Microsoft® System Center Server Management Suite Datacenter with VMware® vSphere Enterprise Plus with VMware vCenter Server. Assumes a five-host configuration, 2 processors on each host, 2 years of support costs for both products, and no operating system costs included. The Microsoft solution can use either the free Microsoft Hyper-V Server 2008 R2 hypervisor or an existing Windows Server 2008 R2 hypervisor. Based on Microsoft estimated retail prices and published VMware prices available at https://www.vmware.com/vmwarestore as of 08/04/2009 for purchases in the United States. Actual reseller prices may vary.
Windows Server 2008 R2 SP1 • Many of the scenarios discussed leverage features new to R2 and R2 SP1 • Live Migration & HA • New processor support: improved performance & lower costs • Enhanced scalability: greater VM density & lower TCO • Networking enhancements: improved performance & 10 GbE ready • Dynamic VM capabilities • Enhancements to Server Core: easier management & lower TCO • Dynamic Memory & RemoteFX in SP1
Hyper-V Configuration Guidelines • Hyper-V Root Configuration • Plan for 1GB+ memory reserve for the management OS in the root partition • Plan for one dedicated NIC for management purposes • Plan (ideally) for one dedicated NIC for live migration • Separate LUNs/arrays for management OS, guest OS VHDs, and VM storage • Management OS and VHD LUNs should employ RAID to provide data protection and performance • This is a challenge for blades with 2 physical disks • Hyper-V Guest Configuration • Fixed-size VHDs for the virtual OS • Account for page file consumption in addition to OS requirements: OS VHD size (minimum 15GB) + VM memory size = minimum VHD size • Account for space needed by additional files per VM • Example for SQL: OS VHD size + VM memory size + data files + log files
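As a minimal sketch of the sizing rule above (the helper name and example values are illustrative, not from the deck):

```python
MIN_OS_VHD_GB = 15  # minimum OS VHD size cited in the guidance above

def min_os_vhd_gb(vm_memory_gb: int, os_requirement_gb: int = MIN_OS_VHD_GB) -> int:
    """Minimum OS VHD size: the OS footprint (at least 15 GB) plus the VM's
    memory size, leaving room for the page file."""
    return max(os_requirement_gb, MIN_OS_VHD_GB) + vm_memory_gb

# Example: a guest with 16 GB of RAM needs at least a 31 GB fixed OS VHD.
print(min_os_vhd_gb(16))  # -> 31
```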
Dynamic Memory in SP1 • Overview • A memory management enhancement for Hyper-V • Enables customers to dynamically grow and shrink the memory of a VM • Available as a feature in Windows Server 2008 R2 SP1 • Benefits • Better consolidation ratios with predictable performance • Enables linear scalability for deployment environments • Designed for production use • Recommendations for use with server applications are still being evaluated • With the SP1 release, we will meet all the Gartner/Burton Group requirements for an enterprise-ready hypervisor
How does it work? • VM memory configuration parameters: • Initial (what the VM will boot with) • Maximum (what the VM can grow to) • Memory is pooled and dynamically distributed across VMs • Memory is dynamically allocated/removed based on VM usage with no service interruption • Guest enlightened: guests & Hyper-V work together • Memory is added and removed via synthetic memory driver (memory VSC) support
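A toy Python model of the Initial/Maximum behavior described above; this is purely illustrative and not Hyper-V's actual balancing algorithm:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    initial_mb: int    # memory the VM boots with (Initial)
    maximum_mb: int    # ceiling the VM may grow to (Maximum)
    demand_mb: int     # current guest-reported demand
    assigned_mb: int = 0

def balance(vms, host_pool_mb):
    """Assign each VM its Initial allocation, then distribute the remaining
    host memory toward demand, never exceeding any VM's Maximum."""
    for vm in vms:
        vm.assigned_mb = vm.initial_mb
    spare = host_pool_mb - sum(vm.assigned_mb for vm in vms)
    # Serve the VMs with the largest unmet demand first.
    for vm in sorted(vms, key=lambda v: v.demand_mb - v.assigned_mb, reverse=True):
        want = min(vm.demand_mb, vm.maximum_mb) - vm.assigned_mb
        grant = max(0, min(want, spare))
        vm.assigned_mb += grant
        spare -= grant
    return vms

vms = [VM("mbx1", 4096, 16384, 12288), VM("cas1", 2048, 8192, 3072)]
for vm in balance(vms, host_pool_mb=16384):
    print(vm.name, vm.assigned_mb)  # mbx1 grows to 12288, cas1 to 3072
```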
How is it different? • There is a difference between supporting memory overcommit as a feature and actually overcommitting memory • We warned of the dangers of overcommitting memory • We pushed back against the use of overcommit from a marketing perspective • Dynamic Memory allows for predictable, consistent performance even after overcommitting memory • The key is avoiding the performance cliff that is possible with the VMware solution • The features VMware has do NOT provide significantly more scale than Hyper-V with Dynamic Memory does (despite what VMware will tell our customers)
How does VMware do it? • Memory ballooning and un-ballooning • Set VM memory, reservation, and limit • Memory is added and removed as necessary, between the reservation and limit • Memory compression • Page sharing • Deduplicates identical memory pages across VMs • Effectiveness is reduced by ASLR and SLAT (large pages) • Hypervisor memory paging • Pages VM memory to disk at the hypervisor level when physical memory is completely committed • Several of these mechanisms (compression and hypervisor paging in particular) are only used after all physical memory is committed, which neither VMware nor Microsoft recommends!
Project Virtual Reality Check • Available at www.virtualrealitycheck.net • Performed by Ruben Spruijt and Jeroen van de Kamp • Not sponsored by any one company, although VMware and Citrix have assisted the site • Results focus exclusively on running and replacing Terminal Server workloads on vSphere, Hyper-V, and XenServer • Results are not for redistribution or validation, although they are public • Phase II results are from February 2010 and show a significant increase in vSphere performance
Project VRC Results • Enabling EPT/RVI results in a significant increase in the capacity of VMs running TS • vSphere – 90% increase • XenServer – 95% increase • Hyper-V – 154% increase • When scaling x86 TS VMs without Hyper-Threading, vSphere is 5% better than both Xen and Hyper-V • When scaling x86 TS VMs with Hyper-Threading, Xen and Hyper-V are 15% better than vSphere • When scaling up to 100 TS sessions, response times for all three hypervisors are fairly equal • Beyond 100 sessions, vSphere response times increase with each new session • When scaling x64 TS VMs, Xen and Hyper-V are within 13.6% of bare metal and are 27% better than vSphere
2010 ESG Paper • Third-party performance validation white paper, sponsored by Microsoft • Key points • Hyper-V is easy to install and get running for administrators familiar with Windows • Hyper-V provides high availability with transparent and automatic failover of virtual machines (VMs) • Hyper-V Live Migration provides zero-downtime migration of VMs between Hyper-V servers • Most importantly, the report shows that Hyper-V performance versus physical is excellent: 95% to 99% of physical disk performance and 89% to 98% of physical performance on the tested workloads
2010 ESG Paper • All this strongly supports the position that Hyper-V performance is not a bottleneck or blocker to implementation and that customers can run almost all their workloads on Hyper-V • Publicly available, and you can distribute it to your customers/clients • http://www.enterprisestrategygroup.com/2010/07/microsoft-hyper-v-r2-scalable-native-server-virtualization-for-the-enterprise/ • http://www.infostor.com/index/articles/display/5976242552/articles/infostor/esg-lab-review/2010/july-2010/microsoft-hyper-v.html
Hyper-V Fixed Virtual Hard Disk Performance Overhead Analysis
Microsoft/Intel iSCSI Test • Used Windows Server 2008 R2, Intel Xeon 5500 processors, and Intel 10 Gbps Ethernet adapters • Reached over one million IOPS over a single 10 Gbps Ethernet link using a software iSCSI initiator on native hardware • Reached over 700,000 IOPS over a single 10 Gbps Ethernet link using a software iSCSI initiator from the guest OS on Hyper-V
Microsoft/Intel iSCSI test • [Charts: in-guest VM performance and native performance]
Microsoft/Intel iSCSI test • iSCSI performance can be as good as or better than either Fibre Channel or FCoE; iSCSI is a compelling option for those who don't want to invest in FC • Windows Server and Hyper-V are not a limiting factor for IO performance; there shouldn't be any significant concern around IO when virtualizing with Hyper-V • More details (along with a great Photoshop picture) at http://gestaltit.com/all/tech/storage/stephen/microsoft-and-intel-push-one-million-iscsi-iops/
ESG Lab Summary 2011 • Available online: http://www.microsoft.com/virtualization/en/us/solution-business-apps.aspx • Useful links, resources, case studies, white papers and webcasts
2011 ESG Test Lab - Physical • Workloads: SharePoint, SQL Server, Exchange, utilities • Servers: HP BL680c, up to 24 cores and 128 GB RAM per blade • SAN connectivity: 2 x 4 Gbps FC per server • Storage: EMC CX4-960 with 155 15K RPM FC disk drives in RAID-10 pools — Data (88), Logs (16), OS (24), Apps (16) http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
2011 ESG Test Lab - Virtual • Four Exchange VMs • Application: Exchange 2010 • VM configuration: 4 vCPU, 4 GB RAM, Microsoft Windows Server 2008 R2 SP1 • Hypervisor: Microsoft Hyper-V R2 • Physical OS: Windows Server 2008 R2 SP1 • SAN: virtual machine images on fixed VHD; Exchange data and logs on fixed VHD http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
Microsoft Exchange Server Jetstress 2010 • Scale from 5,000 to 20,000 simulated Exchange mailboxes • 5,000 mailboxes per VM • 250 MB mailboxes • 0.15 IOPS per mailbox • 2 databases/2 copies per VM http://download.microsoft.com/download/1/7/F/17FB551C-0905-4A04-AB46-2EBA616CFDF3/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20SharePoint%20Mar%2011_Wide.pdf
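A quick back-of-the-envelope check of the Jetstress profile above (the input numbers are from the slide; the arithmetic is ours):

```python
mailboxes_per_vm = 5_000
iops_per_mailbox = 0.15
vms = 4  # scaling from 5,000 to 20,000 mailboxes

per_vm_iops = mailboxes_per_vm * iops_per_mailbox  # 750 IOPS per VM
total_iops = per_vm_iops * vms                     # 3,000 IOPS at 20,000 mailboxes
print(per_vm_iops, total_iops)
```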
2011 Exchange Workload Results http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
Exchange Tested Solutions • Microsoft and participating server and storage vendors • Key design decision points for common scenarios • Cost-effective resource planning (server, storage, network) • Database Availability Group (DAG) and multi-site designs for HA • Documented results of exhaustive performance tests • 500 Mailboxes in a Single Site Running Hyper-V on Dell Servers • 9000 Mailboxes in Two Sites Running Hyper-V on Dell M610 Servers, Dell EqualLogic Storage, and F5 Load Balancing Solutions • 15000 Mailboxes in Two Sites Running Hyper-V on Unisys ES7000 Servers and Hitachi Adaptable Modular Storage 2000 Family • 16000 Mailboxes in a Single Site Deployed on IBM and Brocade Hardware • 20000 Mailboxes in Two Sites Running Hyper-V on Dell R910 Servers, EMC CLARiiON Storage, and Brocade Network Solutions • 32400 Mailboxes in Three Sites Running Hyper-V on Cisco Unified Compute System Blade Servers and EMC CLARiiON Storage http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
Exchange Tested Solutions with Hyper-V http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
Exchange Tested Solutions on Hyper-V http://technet.microsoft.com/en-us/library/gg513522.aspx http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4D83E5AC-7D9A-47C6-A843-A5520D118FC4 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6FD7E23F-30C8-4FB7-BCA3-A5DE9AA9A8E4 http://www.emc.com/collateral/hardware/white-papers/h7337-exchange-unified-cisco-hyper-v-wp.pdf http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
2011 Exchange Results Summary • 20,000 simulated Exchange 2010 users on a single server in four Hyper-V R2 VMs with Jetstress • Exchange tested solutions from 9,000 through 32,000 mailboxes • The 32,000 mailbox solution was deployed in 36 Hyper-V R2 VMs over three sites http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
2011 Exchange Results Summary • A single Hyper-V R2 enabled server scaled to meet the IO performance requirements of 20,000 Exchange 2010 mailboxes. • Exchange Tested Solutions virtualized with Hyper-V R2 met the scalability and performance requirements of 9,000 through 32,000 highly available Exchange 2010 mailboxes. • The performance, scalability and low overhead of Hyper-V R2 can be used to reduce costs and improve the manageability, flexibility, and availability of consolidated Exchange applications. http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
Why Virtualize Exchange • Take advantage of virtualization capabilities to optimize server utilization • Consolidate under-utilized servers onto a single virtualized host • Lower costs by reducing space needs and power consumption • Rapid provisioning of a mobile infrastructure [Diagram: datacenter hosts running Exchange 2010 CAS & HUB (NLB), MBX (DAG), UM, file & print, database, DC, and management server VMs]
Updated Support Guidance • Support for virtualized Exchange servers since Exchange Server 2007 SP1 • Exchange 2010 release continued support for virtualization • Expanding support scenarios • Release of Exchange 2010 Virtualization Guidance whitepaper
Support Guidance (continued) • Storage • Fixed-size VHD, SCSI pass-through, or iSCSI • Dynamic or thin disks are not supported • Differencing or delta disks are not supported • Disk space should be 15GB + VM memory size • Other considerations • Run only the management software in the parent partition; no other roles should be running • VM snapshots are not supported • A 2:1 vCPU-to-physical-CPU ratio is supported, but 1:1 is best • Current guidance is to not use any memory management or oversubscription
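The support rules above can be expressed as a simple configuration check; a hedged sketch (function and field names are ours, not a Microsoft tool):

```python
def check_guest_config(storage_type: str, vcpu_ratio: float,
                       os_disk_gb: int, vm_memory_gb: int) -> list:
    """Return a list of support issues for a proposed Exchange guest,
    per the guidance above."""
    issues = []
    if storage_type not in {"fixed_vhd", "scsi_passthrough", "iscsi"}:
        issues.append("Use fixed VHD, SCSI pass-through, or iSCSI; "
                      "dynamic/thin and differencing disks are unsupported.")
    if vcpu_ratio > 2:
        issues.append("vCPU:pCPU ratios above 2:1 are unsupported (1:1 is best).")
    if os_disk_gb < 15 + vm_memory_gb:
        issues.append("Disk space should be at least 15 GB + VM memory size.")
    return issues

print(check_guest_config("fixed_vhd", 1.0, 31, 16))  # -> [] (supported)
```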
Deployment Recommendations • The Exchange application is not ‘virtualization aware’ • Core Exchange design principles still apply • Design for performance, reliability, and capacity • Design for usage profiles (CAS/MBX) • Design for message profiles (Hub/Edge) • Virtualization design principles now apply • Design for performance, reliability, and capacity • Virtual machines should be sized specific to the Exchange role (Edge, Hub, CAS, MBX, multi-role) • Hosts should be sized to accommodate the guests they will support
Best Practices: Capacity, Sizing and Performance
Sizing Process Overview • Start with the physical server sizing process • Calculator & TechNet guidance • Account for virtualization overhead • Determine VM placement • Account for VM migration if planned • Size root servers, storage, and network infrastructure
Guest Sizing Rules of Thumb • Size the Mailbox role first • CPU ratios for other roles are derived from Mailbox role sizing (see the sketch below) • Mailbox role performance is key to user experience • High availability design significantly impacts sizing • Don’t oversubscribe resources • Size based on anticipated peak workload; don’t under-provision physical resources • Don’t forget network needs
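An illustrative sketch of deriving other roles from Mailbox sizing: the 3:4 CAS:MBX and 1:7 Hub:MBX core ratios below are commonly cited Exchange 2010 planning numbers, not figures from this deck, so verify them against current TechNet guidance:

```python
def size_roles(mailbox_cores: int) -> dict:
    """Derive CAS and Hub core counts from Mailbox cores using assumed
    Exchange 2010 planning ratios (verify against TechNet)."""
    return {
        "MBX": mailbox_cores,
        "CAS": round(mailbox_cores * 3 / 4),      # assumed 3:4 CAS:MBX
        "HUB": max(1, round(mailbox_cores / 7)),  # assumed 1:7 Hub:MBX (no AV)
    }

print(size_roles(8))  # -> {'MBX': 8, 'CAS': 6, 'HUB': 1}
```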
Guest Sizing for Unified Messaging • Newly supported for virtualization • Requires Exchange 2010 SP1 (or greater) • The role is susceptible to poor voice quality and/or latency if undersized • Requires a minimum of 4 virtual processors • UM must be able to utilize physical processors on demand • Consider network requirements (low latency, sufficient bandwidth) to meet UM needs • Tests show that a 4 vCPU/16 GB VM can handle 40 concurrent calls with Voice Mail Preview enabled and 65 calls without
Root Server Sizing • Root server storage sizing includes space for the OS & required hypervisor components, plus connectivity to storage for guest VMs • Don’t forget about high availability of storage if required (multi-path HBAs or iSCSI NICs, redundant paths, etc.) • Network sizing is critical: number of interfaces and bandwidth • Consider app connectivity, storage networking, heartbeats, CSV, VM migration
Root Server Sizing • CPU sizing should include root needs plus per-guest overhead • Follow hypervisor vendor recommendations • Memory sizing should not assume oversubscription • Follow hypervisor vendor recommendations • Provide memory for root plus sum of running VM requirements • Memory for Hyper-V root = the larger of 512MB or the per-VM value (summed for running VMs) of 32MB for the first 1GB of virtual RAM + 8MB for each additional GB of virtual RAM • Example: 8 VMs running, each with 32GB RAM. Root requires 8 * (32MB + 8MB*31) = 2240MB
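The root memory reserve formula above, as a small sketch (the helper name is ours):

```python
def root_reserve_mb(vm_memory_gb_list: list) -> int:
    """Hyper-V root reserve: the larger of 512 MB or the sum, over running
    VMs, of 32 MB for the first GB of virtual RAM + 8 MB per additional GB."""
    per_vm_mb = [32 + 8 * (gb - 1) for gb in vm_memory_gb_list]
    return max(512, sum(per_vm_mb))

# The slide's example: 8 running VMs, each with 32 GB of RAM.
print(root_reserve_mb([32] * 8))  # -> 2240 (MB)
```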
Virtual Processors • Scale up CPU on VMs as much as possible • Prefer 1 x 4 vCPU machine over 4 x 1 vCPU machines: take advantage of Exchange scalability • Don’t oversubscribe CPUs unless consolidating with P2V or a similar scenario • Generally assume 1 logical CPU == 1 virtual CPU; don’t count hyperthreaded (SMT) logical processors toward capacity
Best Practices: Server Deployment
Locating Virtual Machines • VM placement is important for high availability • Don’t co-locate VMs holding copies of the same DAG database on the same physical host • Exchange is unaware of VM location relative to other VMs • There is no path correction in transport to avoid data loss • Ensure peak workload can run in standard VM locations • It is OK to move VMs temporarily for maintenance, assuming high availability requirements are met and the current workload can be serviced
Storage Decisions • Exchange performance and health are highly dependent on the availability and performance of storage • Many options for presentation of storage to VMs • VHD • FC • iSCSI, FCoE • DAS • Optimize for performance and general design goals • We recommend looking for options that enable large mailboxes at low cost
Storage Decisions • Exchange storage should be on spindles separate from the guest OS VHD physical storage • Exchange storage must be fixed VHD, SCSI pass-through, or iSCSI • Preference is to use SCSI pass-through to host queues, databases, and log file streams • Hyper-V Live Migration suggests Cluster Shared Volumes with fixed VHD (shorter “blackout” period) • FC/SCSI HBAs must be configured in the root OS, with LUNs presented to VMs as pass-through or VHD • Internet SCSI (iSCSI) • Standard best practices for iSCSI-connected storage apply (dedicated NIC, jumbo frames, offload, etc.) • An iSCSI initiator in the guest is supported, but account for reduced performance • Exchange storage must be block-level • Network attached storage (NAS) volumes are not supported