WSV 315: Best Practices & Implementing Hyper-V on Clusters
Jeff Woolsey
Principal Group Program Manager
Windows Server, Hyper-V
Agenda
• Hyper-V Architecture
• Hyper-V Security
• Server Core: Introducing SCONFIG
• Enabling Hyper-V with Server Core
• New Processor Capabilities and Live Migration
• Hyper-V R2 & SCVMM 2008 R2: Live Migration, HA, and Maintenance Mode
• Designing a Windows Server 2008 Hyper-V & System Center Infrastructure
• SCVMM 2008 R2
• Microsoft Hyper-V Server 2008 R2
• Best Practices & Tips and Tricks
Hyper-V Architecture
[Architecture diagram: the Windows hypervisor runs at Ring -1 on "Designed for Windows" server hardware. Above it, the parent partition runs Windows Server 2008 with the VM service, WMI provider, and per-VM worker processes in user mode, and the VSPs plus IHV drivers in kernel mode. Child partitions (Windows Server 2003/2008, Linux, or non-hypervisor-aware OSes) communicate with the parent over VMBus via VSCs, or fall back to emulation. Components are provided by Microsoft, the OS vendor, ISVs/IHVs/OEMs, or Microsoft/XenSource for the Linux VSCs.]
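The WMI provider in the parent partition is the programmatic entry point to this stack. As a minimal sketch using the documented Hyper-V V1 WMI interface (root\virtualization namespace), here is how the parent and its child partitions can be enumerated from PowerShell:

```powershell
# Enumerate the parent partition and all child partitions (VMs) through
# the Hyper-V V1 WMI interface in the root\virtualization namespace.
$systems = Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem

foreach ($cs in $systems) {
    # The physical host appears as one Msvm_ComputerSystem whose Caption
    # is "Hosting Computer System"; the rest are virtual machines.
    "{0,-30} {1}" -f $cs.ElementName, $cs.Caption
}
```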
Security Assumptions
• Guests are untrusted
• Trust relationships
  • Parent must be trusted by the hypervisor
  • Parent must be trusted by children
• Code in guests can run in all available processor modes, rings, and segments
• Hypercall interface will be well documented and widely available to attackers
• All hypercalls can be attempted by guests
• Guests can detect that they are running on a hypervisor
  • We'll even give you the version
• The internal design of the hypervisor will be well understood
Security Goals
• Strong isolation between partitions
  • Protect confidentiality and integrity of guest data
• Separation
  • Unique hypervisor resource pools per guest
  • Separate worker processes per guest
  • Guest-to-parent communications over unique channels
• Non-interference
  • Guests cannot affect the contents of other guests, the parent, or the hypervisor
  • Guest computations protected from other guests
  • Guest-to-guest communications not allowed through VM interfaces
Isolation
• We're serious:
  • No sharing of virtualized devices
  • Separate VMBus per VM to the parent
  • No sharing of memory
    • Each VM has its own address space
  • VMs cannot communicate with each other, except through traditional networking
  • Guests can't perform DMA attacks because they're never mapped to physical devices
  • Guests cannot write to the hypervisor
  • Parent partition cannot write to the hypervisor
Windows Server Core
• Windows Server is frequently deployed for a single role
  • Earlier Windows Server releases required deploying and servicing the entire OS
• Server Core: minimal installation option
  • Provides essential server functionality
  • Command-line interface only, no GUI shell
• Benefits
  • Less code results in fewer patches and a reduced servicing burden
  • Low-surface-area server for targeted roles
• Windows Server 2008 feedback: love it, but… steep learning curve
  • Windows Server 2008 R2 introduces SCONFIG
Windows Server Core
• Server Core: CLI
Installing Hyper-V Role on Core
• Install Windows Server and select the Server Core installation
Enable SCONFIG
• Log on and type sconfig
Rename Computer
• Type 2, then enter the computer name and password when prompted
Join Domain
• Type 1, then D (domain) or W (workgroup), and provide the name and password
Add Domain Account
• Type 3, then provide the <username> and <password> when prompted
Add Hyper-V Role
• ocsetup Microsoft-Hyper-V
• Restart when prompted
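Taken together, a minimal Server Core + Hyper-V bring-up looks like the sketch below. SCONFIG drives the same operations through its numbered menu; the equivalent one-shot commands are shown for reference. The computer, domain, and account names are hypothetical placeholders, and the rename and domain join each require a restart before continuing:

```powershell
# Minimal Server Core + Hyper-V bring-up (Windows Server 2008 R2 Core).
# SRV-HV01, CONTOSO, and hvadmin are hypothetical placeholders; run one
# step at a time, restarting where the tool requires it.

# Rename the computer (SCONFIG option 2).
netdom renamecomputer $env:COMPUTERNAME /NewName:SRV-HV01

# Join the domain (SCONFIG option 1); * prompts for the password.
netdom join $env:COMPUTERNAME /Domain:CONTOSO /UserD:CONTOSO\hvadmin /PasswordD:*

# Add a domain account to the local Administrators group (SCONFIG option 3).
net localgroup Administrators CONTOSO\hvadmin /add

# Add the Hyper-V role, then restart when prompted.
ocsetup Microsoft-Hyper-V
```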
64 Logical Processor Support
• Overview
  • 4x improvement over Hyper-V R1
  • Hyper-V can take advantage of larger scale-up systems with a greater amount of compute resources
  • Support for up to 384 concurrently running virtual machines and up to 512 virtual processors PER SERVER:
    • 384 single-virtual-processor VMs, OR
    • 256 dual-virtual-processor VMs (512 virtual processors), OR
    • 128 quad-virtual-processor VMs (512 virtual processors), OR
    • any combination, so long as you're running at most 384 VMs and at most 512 virtual processors
Processor Compatibility Mode
• Overview
  • Allows live migration across different CPU versions within the same processor family (i.e., Intel-to-Intel and AMD-to-AMD)
  • Does NOT enable cross-platform migration from Intel to AMD or vice versa
  • Configured on a per-VM basis
  • Abstracts the VM down to the lowest common denominator in terms of instruction sets available to the VM
• Benefits
  • Greater flexibility within clusters
  • Enables migration across a broader range of Hyper-V host hardware
Forward & Backward Compatibility
• How does it work?
  • When a VM is started, the hypervisor exposes guest-visible processor features
  • With processor compatibility enabled, the guest processor is normalized and the following processor features are "hidden" from the VM
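In the Hyper-V R2 WMI interface this surfaces as a boolean on the VM's processor settings. Below is a minimal read-only sketch, assuming the R2 root\virtualization schema and its LimitProcessorFeatures property; the VM name is a hypothetical placeholder, and actually changing the value goes through Msvm_VirtualSystemManagementService.ModifyVirtualSystemResources, which is omitted here:

```powershell
# Inspect a VM's processor compatibility setting via the Hyper-V R2 WMI
# interface. VM01 is a placeholder name.
$vmName = "VM01"
$ns = "root\virtualization"

$vm = Get-WmiObject -Namespace $ns -Class Msvm_ComputerSystem |
      Where-Object { $_.ElementName -eq $vmName }

# Settings object for the VM (snapshots expose settings objects too,
# so this sketch just takes the first match).
$vssd = Get-WmiObject -Namespace $ns -Query `
    "ASSOCIATORS OF {$($vm.__PATH)} WHERE ResultClass = Msvm_VirtualSystemSettingData" |
    Select-Object -First 1

$proc = Get-WmiObject -Namespace $ns -Query `
    "ASSOCIATORS OF {$($vssd.__PATH)} WHERE ResultClass = Msvm_ProcessorSettingData"

# $true means processor compatibility mode is enabled for this VM.
"LimitProcessorFeatures = $($proc.LimitProcessorFeatures)"
```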
Frankencluster
• Hardware:
  • 4 generations of Intel VT processors
  • 4-node cluster using 1 Gb/E iSCSI
• Test:
  • Created a script to continuously live migrate VMs every 15 seconds
• Result: 110,000+ migrations in a week!
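The slide doesn't show the test script itself, but a loop in that spirit is easy to sketch with the R2 FailoverClusters PowerShell module; the cluster group and node names below are hypothetical placeholders:

```powershell
# Continuously live migrate a clustered VM between two nodes every 15 s.
# Group and node names are hypothetical placeholders.
Import-Module FailoverClusters

$group = "Virtual Machine VM01"
$nodes = @("NODE1", "NODE2")
$i = 0

while ($true) {
    # Move-ClusterVirtualMachineRole performs the live migration.
    $target = $nodes[$i % $nodes.Count]
    Move-ClusterVirtualMachineRole -Name $group -Node $target
    $i++
    Start-Sleep -Seconds 15
}
```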
More on Processor Compatibility
• What about application compatibility?
  • How do applications work with these processor features hidden?
  • Do any applications not work?
• What about performance?
• What's the default setting?
Cluster Shared Volumes
• All servers "see" the same storage
CSV Compatibility
• No special hardware requirements
• No file type restrictions
• No directory structure or depth limitations
• No special agents or additional installations
• No proprietary file system
  • Uses well-established, traditional NTFS
• Doesn't suffer from VMFS limitations, such as:
  • VMFS limited to 2 TB LUNs
• It just works…
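Because CSV is plain NTFS underneath, enabling it is a one-time cluster-level switch plus adding disks. A minimal sketch with the R2 FailoverClusters module follows; "Cluster Disk 1" is a placeholder for an existing clustered disk resource:

```powershell
# Enable Cluster Shared Volumes on a Windows Server 2008 R2 failover
# cluster, then add an existing cluster disk to CSV.
Import-Module FailoverClusters

# One-time, cluster-wide opt-in (exposed as a cluster property in R2).
(Get-Cluster).EnableSharedVolumes = "Enabled"

# Promote an existing clustered disk resource to a shared volume.
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# CSV volumes then appear under C:\ClusterStorage on every node.
Get-ClusterSharedVolume
```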
CSV & Live Migration • Create VM on target server Copy memory pages from the source to the target via Ethernet • Final state transfer • Pause virtual machine • Move storage connectivity from source host to target host via Ethernet • Run new VM on source; Delete VM on target Host 1 Host 2 Blue = Storage Yellow = Networking Shared Storage
Failover Cluster Configuration Program (FCCP)
• New for Windows Server 2008 Failover Clustering
• Customers have the flexibility to design failover cluster configurations
  • If the server hardware and components are logo'd and the configuration passes the cluster validation tool, it's supported!
• Or customers can identify cluster-ready servers via the FCCP
  • OEMs have pre-tested these configurations and list them on the web
  • Microsoft recommends customers purchase FCCP-validated servers
  • Look for solutions with this tagline: "Validated by Microsoft Failover Cluster Configuration Program"
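The cluster validation tool referenced above is also exposed as a cmdlet in the R2 FailoverClusters module; a minimal sketch, with hypothetical node names:

```powershell
# Run cluster validation against prospective (or existing) cluster nodes.
# Node names are placeholders; the cmdlet produces an HTML report.
Import-Module FailoverClusters
Test-Cluster -Node NODE1, NODE2, NODE3, NODE4
```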
Hyper-V Networking
• Two 1 Gb/E physical network adapters at a minimum:
  • One for management
  • One (or more) for VM networking
• Dedicated NIC(s) for iSCSI
• Connect the parent to a back-end management network
• Only expose guests to Internet traffic
Hyper-V Network Configurations
• Example 1:
  • Physical server has 4 network adapters
  • NIC 1: assigned to parent partition for management
  • NICs 2/3/4: assigned to virtual switches for virtual machine networking
  • Storage is non-iSCSI, such as:
    • Direct attach
    • SAS or Fibre Channel
Each VM on its own Switch…
[Architecture diagram: NIC 1 is reserved for parent-partition management; NICs 2, 3, and 4 each back a dedicated virtual switch (VSwitch 1/2/3), and VM 1, VM 2, and VM 3 each connect to their own switch over VMBus via VSCs, with the hypervisor at Ring -1 beneath the parent and child partitions.]
Hyper-V Network Configurations
• Example 2:
  • Server has 4 physical network adapters
  • NIC 1: assigned to parent partition for management
  • NIC 2: assigned to parent partition for iSCSI
  • NICs 3/4: assigned to virtual switches for virtual machine networking
Now with iSCSI…
[Architecture diagram: as before, but NIC 1 handles parent management, NIC 2 is dedicated to iSCSI in the parent partition, and NICs 3 and 4 back VSwitch 2 and VSwitch 3 for virtual machine networking.]
New in R2: Core Deployment
• There's no GUI in a Core deployment, so how do I configure which NICs are bound to switches and which are kept separate for the parent partition?
No Problem…
• Hyper-V R2 Manager includes an option to set bindings per virtual switch
Networking: Chimney Support
• TCP/IP Offload Engine (TOE) support
• Overview
  • TCP/IP traffic in a VM can be offloaded to a physical NIC on the host computer
• Benefits
  • Reduce CPU burden
  • Networking offload to improve performance
  • Live Migration is supported with full TCP offload
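On the host, TCP Chimney offload is toggled globally with netsh. These are the standard Windows Server 2008 R2 commands; whether offload actually engages per connection still depends on the adapter and driver:

```powershell
# Enable TCP Chimney offload globally on the host, then verify the
# setting. Run from an elevated prompt; netsh is a native command.
netsh int tcp set global chimney=enabled
netsh int tcp show global
```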
Networking
• Virtual Machine Queue (VMQ) Support
• Overview
  • NIC can DMA packets directly into VM memory
  • VM device buffer gets assigned to one of the queues
  • Avoids packet copies in the VSP
  • Avoids route lookup in the virtual switch (VMQ queue ID)
  • Allows the NIC to essentially appear as multiple NICs on the physical host (queues)
• Benefits
  • Host no longer has the device DMA data into its own buffer, resulting in a shorter path length for I/O (performance gain)