Building the Foundation: Server Virtualisation and Management
Julius Davies, Datacenter Technology Specialist, Microsoft UK, Julius.Davies@microsoft.com
Clive Watson, Datacenter Technology Specialist, Microsoft UK, Clive.Watson@microsoft.com
Where are we today? How can we optimise?
Thin Provisioning
• The Guest OS needs to see 100GB, but may only consume a fraction of that.
• With Fixed VHDs, a 100GB VHD would consume the full 100GB on the SAN.
• With Dynamic VHDs, the physical space consumed is only equal to that actually consumed by the Guest OS.
• Performance Whitepaper: Link here
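A minimal sketch of creating a dynamic (thin-provisioned) VHD, using the later built-in Hyper-V PowerShell module (Windows Server 2012 onwards; in the R2 timeframe this was done through Hyper-V Manager or diskpart). The file path is a hypothetical example.

    # Dynamic VHD: the guest sees 100GB, but the file starts near-empty
    # and only grows as the guest actually writes data.
    New-VHD -Path "C:\VMs\data.vhd" -SizeBytes 100GB -Dynamic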
Dynamic Storage
• Flexible solution for adjusting available VM storage without downtime
• Utilises the SCSI Controller for Hot-Add and Hot-Remove of VHDs/pass-through disks (PTDs) – see the sketch below
• Each VM can have up to 4 SCSI Controllers
• Each SCSI Controller can have up to 64 disks attached
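A hedged sketch of the hot-add operation. The cmdlets shown ship with the later built-in Hyper-V module (Windows Server 2012 onwards); in the R2 timeframe the same operation was performed through Hyper-V Manager, WMI, or SCVMM. The VM name and VHD path are hypothetical.

    # Hot-add a VHD to a running VM's SCSI controller 0 (no downtime).
    Add-VMHardDiskDrive -VMName "SQLVM01" -ControllerType SCSI `
        -ControllerNumber 0 -Path "C:\VMs\data.vhd"
    # Hot-remove works the same way in reverse.
    Get-VMHardDiskDrive -VMName "SQLVM01" -ControllerType SCSI |
        Where-Object { $_.Path -eq "C:\VMs\data.vhd" } |
        Remove-VMHardDiskDrive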
Hyper-V Networking
• 3 types of virtual network: Private, Internal, External
• Private = VM-to-VM only
• Internal = VM-to-VM and VM-to-Host
• External = VM-to-VM, VM-to-Host, and VM-to-VM across Hosts
• Each VM can have up to 12 vNICs: 8 Synthetic and 4 Legacy (PXE-capable), each with a different VLAN ID if required
• Teaming support is provided by the NIC vendor: Intel = PROSet, Broadcom = BACS, HP = NCU
• Best practice: install/enable Hyper-V first, then install the networking utilities
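A sketch of the three network types and VLAN tagging, again using the later built-in Hyper-V module (in the R2 timeframe the equivalent was the Virtual Network Manager GUI or WMI). The switch names, VM name, and adapter name are hypothetical.

    # Private: VM-to-VM traffic only; no host or external access.
    New-VMSwitch -Name "PrivateNet" -SwitchType Private
    # Internal: VM-to-VM plus VM-to-Host.
    New-VMSwitch -Name "InternalNet" -SwitchType Internal
    # External: bound to a physical NIC, so it also reaches other hosts.
    New-VMSwitch -Name "ExternalNet" -NetAdapterName "Ethernet 2"
    # Tag a VM's vNIC with a VLAN ID.
    Set-VMNetworkAdapterVlan -VMName "WebVM01" -Access -VlanId 20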
Hyper-V Networking for Clusters
Great guide here: http://technet.microsoft.com/en-us/library/ff428137(WS.10).aspx
Best practice suggests:
• 1 Network for Host Management
• 1 Network for Cluster Heartbeat
• 1 Network for Cluster Shared Volumes
• 1 Network for Live Migration
• 1 Network for Virtual Machine Traffic
• If using iSCSI: 2 Networks for iSCSI Storage with MPIO
The above numbers represent networks, not ports; you may wish to team certain ports to provide resiliency. Restricting each network to its role can also be scripted – see the sketch below.
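A hedged sketch of assigning cluster network roles with the FailoverClusters PowerShell module that shipped in WS2008 R2. The network names are hypothetical; the Role values (0 = no cluster traffic, 1 = cluster traffic only, 3 = cluster and client traffic) are the module's standard encoding.

    Import-Module FailoverClusters
    # Management network: cluster and client traffic.
    (Get-ClusterNetwork "Mgmt").Role = 3
    # Heartbeat/CSV network: internal cluster traffic only.
    (Get-ClusterNetwork "Cluster").Role = 1
    # iSCSI network: keep cluster traffic off it entirely.
    (Get-ClusterNetwork "iSCSI").Role = 0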
High Availability – Clustering
• 2 Hyper-V R2 Nodes in a Failover Cluster; each Node has 2 VMs running, and the VMs are stored on the SAN.
• Node 1 fails, taking its 2 VMs down with it.
• Failover Clustering in Hyper-V R2 ensures that those VMs restart on Node 2 of the Hyper-V Cluster.
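A hedged sketch of making a VM highly available from PowerShell with the WS2008 R2 FailoverClusters module. The VM name is hypothetical, and the exact parameter name is as I recall the R2-era documentation; treat it as an assumption.

    Import-Module FailoverClusters
    # Register an existing VM as a clustered (highly available) role.
    # Parameter name per R2-era docs; treat it as an assumption.
    Add-ClusterVirtualMachineRole -VirtualMachine "WebVM01"
    # Confirm the cluster now owns the VM's group.
    Get-ClusterGroup "WebVM01"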
Cluster Shared Volumes
• Enables multiple nodes to concurrently access a single, 'truly' shared LUN
• Provides VMs complete transparency with respect to which node actually owns a LUN
• Guest VMs can be moved without requiring any drive ownership changes
• No dismounting and remounting of volumes is required
Cluster Shared Volumes in practice
• We've set up a WS2008 R2 Cluster and created 4 LUNs on the SAN.
• We've made the LUNs available to the Cluster.
• In the Failover Clustering MMC, we mark the LUNs as CSVs.
• Each Node in our Cluster then sees a consistent namespace for accessing the LUNs: C:\ClusterStorage\Volume1 through C:\ClusterStorage\Volume4.
• We can now drop as many VMs on each CSV as we like.
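A minimal sketch of the same steps using the FailoverClusters module from WS2008 R2; the clustered disk resource name is hypothetical.

    Import-Module FailoverClusters
    # Promote a clustered disk to a Cluster Shared Volume.
    Add-ClusterSharedVolume -Name "Cluster Disk 1"
    # List CSVs and their C:\ClusterStorage\VolumeN mount points.
    Get-ClusterSharedVolume | Format-List *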
Live Migration
• 2 Hyper-V R2 Nodes in a Failover Cluster; each Node has 2 VMs running, and the VMs are stored on the SAN.
• We decide we'd like to Live Migrate a VM from Node 1 to Node 2.
• Live Migration in Hyper-V R2 ensures that VMs are migrated with no downtime.
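A hedged sketch of triggering that move from PowerShell with the WS2008 R2 FailoverClusters module; the VM group and node names are hypothetical.

    Import-Module FailoverClusters
    # Live-migrate the VM's cluster group to the other node, no downtime.
    Move-ClusterVirtualMachineRole -Name "WebVM01" -Node "Node2" -MigrationType Live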
Dynamic Memory
• Automatic, dynamic balancing of memory between running VMs
• Understands the needs of the Guest OS
• Available as part of WS2008 R2 SP1 at no cost
"On the hardware I was testing with, I saw an increase from 64 VMs (Windows 7 on Hyper-V R2) to 133 VMs (Windows 7 on Hyper-V R2 SP1). We also ran performance testing against this, so this wasn't a case of 'let's see how many VMs we can fire up'." – Matt Evans, Quest Software
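A sketch of a Dynamic Memory configuration, using the later built-in Hyper-V module (Windows Server 2012 onwards; in the SP1 timeframe this was set per-VM in Hyper-V Manager). The VM name and sizes are hypothetical.

    # Enable Dynamic Memory: start at 1GB, balloon between 512MB and 4GB.
    Set-VMMemory -VMName "Win7VM01" -DynamicMemoryEnabled $true `
        -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB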
RemoteFX
• Not a replacement for RDP! An enhancement to the graphical capabilities of RDP 7.1
• vGPU (WDDM): a single GPU shared by multiple Hyper-V Guests
• Host-Side Rendering: apps run at full speed on the host
• Intelligent Screen Capture & Hardware-Based Encode: screen deltas sent to the client based on network/client availability
• Bitmap Remoting & Hardware-Based Decode: full range of client devices – HW and SW manifestations by design
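A hedged sketch of attaching a RemoteFX vGPU to a VM. The cmdlets come from the later built-in Hyper-V module (the capability itself arrived with SP1 and was configured through Hyper-V Manager); the VM name and resolution are hypothetical.

    # Give a VM a RemoteFX 3D adapter backed by the host's physical GPU.
    Add-VMRemoteFx3dVideoAdapter -VMName "VDIVM01"
    # Optionally cap the maximum resolution presented to the guest.
    Set-VMRemoteFx3dVideoAdapter -VMName "VDIVM01" -MaximumResolution "1920x1200"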
Hyper-V R2 SP1 – Summary
• Business Continuity – High Availability & Live Migration
• Host Scalability – 64 Cores & 1TB RAM
• VM Scalability – 64GB RAM & 4 vCPUs per VM
• Density – Dynamic Memory included with SP1
• Power Efficiency – Core Parking & many power improvements
• Dynamic Storage – Add/remove disks with no downtime
• Thin-Provisioned VHDs – Use less storage
• Networking Improvements – NIC teaming via the NIC vendor, Jumbo Frames, TCP Offload, VMQ, VLANs, etc.
• Familiarity – Based on Windows, managed through Windows and System Center
• Hardware Optimised – Takes advantage of the latest h/w innovations (e.g. SLAT)
• Huge HCL – http://www.windowsservercatalog.com
• OS Support – In-lifecycle Windows Server/Client & Linux (SUSE/RHEL/CentOS)
How can we better manage?
SCVMM 2008 R2 SP1 – Architecture
Components: VMM Server, Admin Console, Library Server, SQL Database, Self-Service Portal 1.0
SCVMM 2008 R2 SP1 – Features
• Multi-Hypervisor management
• P2V & V2V conversion
• Live Migration support
• Quick Storage Migration
• OpsMgr Integration: unlocks PRO capabilities
• Rapid Provisioning
• Intelligent Placement
• Library & Web Portal
• AD Integration
• Granular Management
• PowerShell (sketched below)
• Maintenance Mode
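A hedged sketch of the VMM 2008 R2 PowerShell layer; the snap-in and cmdlet names are as I recall the product's interface, and the server and VM names are hypothetical.

    # Load the VMM 2008 R2 snap-in and connect to the VMM server.
    Add-PSSnapin "Microsoft.SystemCenter.VirtualMachineManager"
    $vmm = Get-VMMServer -ComputerName "vmm01.contoso.com"
    # Intelligent Placement: rate every managed host for this VM.
    $vm = Get-VM -Name "WebVM01"
    Get-VMHostRating -VM $vm -VMHost (Get-VMHost)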
SCVMM 2012 – Key Pillars
Pillars: Deployment, Fabric, Services, Cloud
Capabilities highlighted: HA VMM Server, Upgrade, Custom Properties, PowerShell, Hyper-V Bare-Metal deployment, Cluster Creation, Dynamic Optimization, Hyper-V Management, VMware Management, XenServer Management, Power Management, Network Management, Storage Management, Monitoring Integration, Service Lifecycle, App Deployment, Image-Based Servicing, Cloud Capacity & Capability, Delegation & Quota, Self-Service, App Owner Usage
SCVMM 2012 in action:
• System Center Virtual Machine Manager 2012: Fabric Management for the Private Cloud – http://www.msteched.com/2010/Europe/MGT306
• System Center Virtual Machine Manager 2012: Service Lifecycle Management for the Private Cloud – http://www.msteched.com/2010/Europe/MGT206
How can we provide self-service? Self-Service Portal v2
Datacenter and Line-of-Business Administrators / End Users & Consumers
• Datacenter Administrator (alongside Procurement and Security): 24x7 infrastructure management & monitoring
• LOB Administrator: flexible, SLA-driven management
• End Users/Consumers: focused solutions
VMM Self-Service Portal 2.0 – Workflow
• Step 1 – Configuration and Extensibility: pool infrastructure assets into the toolkit; extend Virtual Machine actions through the Extensibility UI
• Step 2 – Onboarding and Infrastructure Request: onboard a Business Unit; create an Infrastructure Request (i.e. request a sandbox)
• Step 3 – Approval/Provisioning: verify asset availability and capacity; assign assets; approve the Infrastructure Request and provision
• Step 4 – Self-Service VM Provisioning: manage the environment; manage VMs; access reports
VMM Self-Service Portal 2.0 – Example Topology
• A shared resource pool of Storage, Network, and Compute serves business units such as Sales, Finance, Legal, HR, and Infrastructure.
• Infrastructure Service A (Production Environment) and Infrastructure Service B (Test/Dev Environment) each contain Web Front End and Reporting Server service roles.
• Production connects to the Corporate Network; Test/Dev connects to the Development Network.
• Resources per service: network access, storage allocation & quotas, access control.
Learn More
• Hyper-V: http://www.microsoft.com/hyperv
• Private Cloud: http://microsoft.com/privatecloud
• Application Virtualisation: http://microsoft.com/appv
• System Center: http://www.microsoft.com/systemcenter
© 2008 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.