vSphere 5.0 – What’s New (and some on SRM 5 too…)
Licensing – get it out of the way: it is based on an annual average.
In 2011, VMware is introducing a major upgrade of the entire cloud infrastructure stack – the Cloud Infrastructure Launch. New releases, by layer:
• vCloud Director: vCloud Director 1.5
• vShield Security: vShield 5.0
• vCenter Management: SRM 5.0
• vSphere: vSphere 5.0
Agenda – vSphere 5.0 What’s New
• Welcome
• vSphere Core
• CLI
• Image Builder & Auto Deploy
• Platform Enhancements
• What’s New in vCenter Server
• What’s New in Availability
• Networking Enhancements
• Storage Enhancements
• vSphere Storage Appliance (VSA)
vSphere Core
vSphere 5.0 CLI Components
• ESXi Shell – the rebranded Tech Support Mode; local and remote (SSH)
• vCLI – available for Linux and Windows
  • ‘esxcli’ command set – local and remote CLI; new and improved in 5.0
  • ‘vicfg’ command set – remote CLI only
  • Other commands: vmware-cmd, vmkfstools, etc.
• vMA – the vCLI appliance
• PowerCLI – Windows CLI tool
(A few esxcli examples follow.)
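For a flavor of the restructured esxcli namespace, a few hedged examples – run locally in the ESXi Shell, or remotely by adding --server and credential options; any names shown are placeholders:

    # List the filesystems (VMFS, NFS) visible to this host
    esxcli storage filesystem list

    # The same namespace/verb layout applies across the command set:
    esxcli network ip interface list
    esxcli software vib list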
Composition of an ESXi Image: the core hypervisor, drivers, CIM providers, and plug-in components.
Describing ESXi Components
• VIB – “VMware Infrastructure Bundle”
  • The software packaging format used for ESXi; often referred to as a “software package”
• Used for all components: the ESXi base image, drivers, CIM providers, and other components
• A VIB can specify its relationships with other VIBs: the VIBs it depends on and the VIBs it conflicts with
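As an illustration of working with VIBs on a live host via esxcli (the depot path below is hypothetical):

    # List every VIB installed on the host, with name, version, and vendor
    esxcli software vib list

    # Install or update VIBs from an offline depot bundle
    esxcli software vib install -d /vmfs/volumes/datastore1/oem-driver-depot.zip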
What is Auto Deploy?
• A new host deployment method introduced in vSphere 5.0
• Based on PXE boot
• Works with Image Builder, vCenter Server, and Host Profiles
• How it works:
  • PXE boot the server
  • The ESXi image profile is loaded into host memory via the Auto Deploy server
  • Configuration is applied using an Answer File / Host Profile
  • The host is placed in, and connected to, vCenter
• Benefits:
  • No boot disk
  • Quickly and easily deploy large numbers of ESXi hosts
  • Share a standard ESXi image across many hosts
  • The host image is decoupled from the physical server
  • Recover a host without recovering hardware or restoring from backup
(A PowerCLI sketch of defining a deploy rule follows.)
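A minimal PowerCLI sketch of wiring this together, assuming the Image Builder and Auto Deploy snap-ins are loaded and a vCenter connection is open; the depot URL, image profile name, host profile, cluster, and IP range are all placeholder values:

    # Register a software depot and pick an image profile from it
    Add-EsxSoftwareDepot http://example.com/depot/index.xml
    $ip = Get-EsxImageProfile -Name "ESXi-5.0.0-standard"

    # Rule: hosts booting from this IP range get this image, host profile, and cluster
    $rule = New-DeployRule -Name "ProdHosts" -Item $ip, "HostProfile1", "ClusterB" `
        -Pattern "ipv4=192.168.1.10-192.168.1.50"
    Add-DeployRule $rule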
Auto Deploy Example – Initial Boot
(Components in the diagram sequence: a DHCP/TFTP server; the Auto Deploy server with its rules engine and “waiter”; depots of ESXi, driver, and OEM VIBs composed into image profiles; host profiles; and vCenter Server.)
Provisioning a new host:
1) PXE boot the server – the host sends a DHCP request and fetches a gPXE image via TFTP
2) Contact the Auto Deploy server – the host issues an HTTP boot request
3) The rules engine determines the image profile, host profile, and cluster (e.g., Image Profile X, Host Profile 1, Cluster B)
4) The image is pushed to the host and the host profile is applied; both are cached on the Auto Deploy server
5) The host is placed into its cluster (Cluster B)
Auto Deploy Example – Subsequent Reboot
• When an Auto Deploy host reboots, it PXE boots again and the Auto Deploy server serves the cached image profile and host profile. (A sketch of the DHCP configuration behind step 1 follows.)
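For context, a hedged sketch of the DHCP side of step 1, assuming an ISC dhcpd server and the gPXE bootloader shipped with Auto Deploy; all addresses are placeholders:

    # /etc/dhcp/dhcpd.conf fragment (ISC dhcpd; addresses are examples)
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.10 192.168.1.50;
        next-server 192.168.1.2;                  # TFTP server address
        filename "undionly.kpxe.vmw-hardwired";   # gPXE image pointing at the Auto Deploy server
    }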
New Virtual Machine Features
• vSphere 5.0 supports the industry’s most capable virtual machines (on the original slide, items requiring hardware version 8 are shown in italics)
• VM scalability: 32 virtual CPUs per VM, 1TB of RAM per VM – 4x the previous capabilities!
• Richer desktop experience: 3D graphics, client-connected USB devices, USB 3.0 devices, Smart Card readers for VM console access
• Broader device coverage: VM BIOS boot-order configuration API and PowerCLI interface, EFI BIOS
• Other new features: UI for multi-core virtual CPUs, extended VMware Tools compatibility, support for Mac OS X servers
(A PowerCLI sketch of the new maximums follows.)
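A hedged PowerCLI sketch of exercising the new limits – the VM name is a placeholder, the VM must be powered off, and 32 vCPUs / 1TB RAM require virtual hardware version 8:

    # Upgrade the virtual hardware to version 8 (VM must be powered off)
    Get-VM -Name "BigVM" | Set-VM -Version v8 -Confirm:$false

    # Scale the VM up to the new vSphere 5.0 maximums (1TB = 1048576 MB)
    Get-VM -Name "BigVM" | Set-VM -NumCpu 32 -MemoryMB 1048576 -Confirm:$false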
What’s New in vCenter Server
Current Use Case
• The vSphere Web Client is tailored to meet the needs of VM administrators in the first release. This includes:
• VM management: VM provisioning; edit VM, VM power operations, snapshots, and migration; VM resource management
• View all vSphere objects (hosts, clusters, datastores, folders, etc.)
• Basic health monitoring
• Viewing the VM console remotely
• Search through large, complex environments; save search queries and quickly re-run them to find detailed information
• vApp management: vApp provisioning, vApp editing, vApp power operations
Component Overview
• The vCenter Server Appliance (VCSA) consists of:
• A pre-packaged 64-bit application running on SLES 11, distributed with sparse disks
• A built-in enterprise-level database, with optional support for a remote Oracle database
• Limits are the same for vCenter Server and the VCSA:
  • Embedded DB: 5 hosts / 50 VMs
  • External DB: <300 hosts / <3,000 VMs (64-bit)
• A web-based configuration interface
Configuration • Complete configuration is possible through a powerful web-based interface!
What’s New in Availability
Release Enhancement Summary
• Complete rewrite of vSphere HA
  • Provides a foundation for increased scale and functionality
  • Eliminates common issues (e.g., DNS resolution)
• Multiple communication paths
  • Can leverage storage as well as the management network for communications
  • Enhances the ability to detect certain types of failures and provides redundancy
• IPv6 support
• Enhanced error reporting – one log file per host eases troubleshooting efforts
• Enhanced user interface
• Enhanced deployment mechanism
(A one-line PowerCLI example of enabling HA follows.)
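None of this changes how HA is switched on; for reference, a minimal PowerCLI sketch (the cluster name is a placeholder):

    # Enable vSphere HA on an existing cluster
    Get-Cluster -Name "ProdCluster" | Set-Cluster -HAEnabled:$true -Confirm:$false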
vSphere HA Primary Components
• Every host runs an agent, referred to as ‘FDM’ or Fault Domain Manager
• One of the agents within the cluster is elected to assume the role of Master
  • There is only one Master per cluster during normal operations
• All other agents assume the role of Slaves
• There is no more Primary/Secondary concept with vSphere HA
(Diagram: an FDM agent on each of ESX01–ESX04, with the cluster managed by vCenter.)
vCenter Communications
• vCenter communicates primarily with the Master
• Once a Master is elected and contacts vCenter, vCenter sends a compatibility list to the Master. The Master saves this to a local disk, then pushes it out to the other hosts in the cluster.
• vCenter also communicates with the Master to update changes to VM states and configuration information
• vCenter may communicate with the Slaves in certain situations, such as:
  • Scanning for an existing Master
  • If the Master states that it cannot reach a Slave – in this case, vCenter will try to contact the Slave to determine why
  • When powering on an FT Secondary VM
  • When a host is reported isolated or partitioned
Storage-Level Communications
• One of the most exciting new features of vSphere HA is its ability to use a storage subsystem for communication
• The datastores used for this are referred to as ‘Heartbeat Datastores’
• This provides increased communication redundancy
• Heartbeat datastores are used as a communication channel only when the management network is lost, such as in the case of isolation or network partitioning
Failure Scenarios – Network Partition
• Occurs when:
  • The Master can see the heartbeat datastores
  • The Master cannot reach hosts over the management network
• Results in:
  • A Master in each partition
  • VMs in the other partition are monitored via the storage subsystem and restarted after a host or VM failure
  • vCenter will only report the state of one of the Masters
• When the situation is resolved, the Masters communicate and one is chosen to remain the Master
Failure Scenarios – Host Network Isolation
• Occurs when:
  • The host is partitioned from the Master and sees no vSphere HA network traffic
  • The host cannot ping the isolation address
• Results in:
  • The isolation response is applied (if configured and the Master can restart the VMs)
  • VMs left running are monitored via the storage subsystem and restarted as needed
• Note: the default isolation response has been changed to “Leave Powered On”
(A PowerCLI sketch of tuning the related advanced options follows.)
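A hedged PowerCLI sketch of tuning the behaviors above through HA advanced options. The option names das.isolationaddress0 and das.heartbeatDsPerHost are real vSphere HA options; the cluster name and values are placeholders:

    $cluster = Get-Cluster -Name "ProdCluster"

    # Add an extra address for hosts to ping when checking for isolation
    New-AdvancedSetting -Entity $cluster -Type ClusterHA `
        -Name "das.isolationaddress0" -Value "192.168.1.1" -Confirm:$false

    # Increase the number of heartbeat datastores per host (the default is 2)
    New-AdvancedSetting -Entity $cluster -Type ClusterHA `
        -Name "das.heartbeatDsPerHost" -Value 4 -Confirm:$false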
Networking Section vSphere 5.0 – What’s New
New Networking Features
• Two broad categories of features:
• Network discovery and visibility/monitoring features:
  • LLDP
  • NetFlow
  • Port Mirroring
• I/O consolidation (10GbE) related features:
  • New traffic types: user-defined network resource pools (VM traffic) and host-based replication traffic
  • 802.1p tagging (QoS)
• TCP/IP stack improvements – vmknics will see:
  • Higher throughput with small messages
  • Better IOPS scaling for iSCSI traffic
What is Network I/O Control (NETIOC)?
• Network I/O Control is a traffic-management feature of the vSphere Distributed Switch (vDS)
• In consolidated I/O (10GbE) deployments, this feature allows customers to:
  • Allocate shares and limits to different traffic types
  • Provide isolation – one traffic type should not dominate others
  • Guarantee service levels when different traffic types compete
• Enhanced Network I/O Control – vSphere 5.0 builds on previous versions of the feature by providing:
  • User-defined network resource pools
  • A new host-based replication traffic type
  • QoS tagging
(Diagram: NETIOC architecture – traffic types such as per-tenant VM traffic (“Coke VM”, “Pepsi VMs”), HBR, vMotion, FT, management, NFS, and iSCSI flow through vNetwork Distributed Portgroups and their teaming policies into the vNetwork Distributed Switch, where a shaper and schedulers apply load-based teaming, with limit enforcement per team and shares enforcement per uplink.)
vStorage – What’s New (Storage Track)
Introduction to VMFS-5
• Enhanced scalability
  • Increases the size limits of the filesystem and supports much larger single-extent VMFS-5 volumes
  • Support for single-extent 64TB datastores
• Better performance
  • Uses the VAAI locking mechanism (ATS) for more tasks
• Easier to manage, with less overhead
  • Space reclamation on thin-provisioned LUNs
  • Smaller sub-blocks
  • Unified block size
VMFS-3 to VMFS-5 Upgrade
• The upgrade to VMFS-5 is clearly displayed in the vSphere Client under the Configuration -> Storage view
• It is also displayed in the Datastores -> Configuration view
• Upgrades are non-disruptive (see the esxcli sketch below)
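The same upgrade can also be driven from the command line; a hedged sketch using the 5.0 esxcli namespace, where the volume label is a placeholder and VMs can keep running during the upgrade:

    # Check the current VMFS version of the volume
    vmkfstools -Ph /vmfs/volumes/datastore1

    # Upgrade the VMFS-3 volume to VMFS-5 in place
    esxcli storage vmfs upgrade --volume-label=datastore1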
VAAI – Introduction
• vStorage APIs for Array Integration = VAAI
• VAAI’s main purpose is to leverage array capabilities:
  • Offloading tasks to reduce overhead
  • Benefiting from enhanced array mechanisms
• The “traditional” VAAI primitives have been improved
• Multiple new primitives have been introduced
• Support for NAS!
(Diagram: a non-VAAI copy moves data up through the array, fabric, and hypervisor and back again, while a VAAI copy is offloaded to the array and moves directly between LUN 01 and LUN 02.)
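To check whether an array actually supports these offloads, the 5.0 esxcli namespace exposes VAAI status per device; a short sketch, where the device identifier is hypothetical:

    # Show hardware-acceleration (VAAI) support for every attached device
    esxcli storage core device vaai status get

    # Or for a single device
    esxcli storage core device vaai status get -d naa.60003ff44dc75adc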
Storage vMotion – Introduction
• In vSphere 5.0, a number of new enhancements were made to Storage vMotion:
• Storage vMotion now works with virtual machines that have snapshots, which means coexistence with other VMware products and features such as VCB, VDR, and HBR
• Storage vMotion supports the relocation of linked clones
• Storage vMotion has a new use case – Storage DRS – which uses Storage vMotion for storage maintenance mode and storage load balancing (space or performance)
(A PowerCLI sketch follows.)
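For reference, a minimal PowerCLI sketch of a Storage vMotion – the VM and datastore names are placeholders, and the VM stays powered on throughout:

    # Relocate a running VM's storage to another datastore
    Get-VM -Name "WebVM" | Move-VM -Datastore (Get-Datastore -Name "GoldTier01")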
Storage vMotion Architecture Enhancements
(Diagram: beneath the guest OS and VMM, a new mirror driver sits in the VMkernel; while a userworld datamover copies the disk from source to destination, the mirror driver mirrors the guest’s in-flight writes to both datastores, so the migration completes in a single copy pass.)
What does Storage DRS provide?
• Storage DRS provides the following:
  • Initial placement of VMs and VMDKs based on available space and I/O capacity
  • Load balancing between datastores in a datastore cluster via Storage vMotion, based on storage space utilization
  • Load balancing via Storage vMotion based on I/O metrics, i.e., latency
• Storage DRS also includes affinity/anti-affinity rules for VMs and VMDKs:
  • VMDK affinity – keep a VM’s VMDKs together on the same datastore (this is the default affinity rule)
  • VMDK anti-affinity – keep a VM’s VMDKs separate on different datastores
  • Virtual machine anti-affinity – keep VMs separate on different datastores
• Affinity rules cannot be violated during normal operations
Datastore Cluster
• An integral part of SDRS is the ability to group datastores into a datastore cluster
• Datastore cluster without Storage DRS – simply a group of datastores
• Datastore cluster with Storage DRS – a load-balancing domain, similar to a DRS cluster
• A datastore cluster without SDRS is just a datastore folder; it is the functionality provided by SDRS that makes it more than just a folder
(Diagram: four 500GB datastores aggregated into a 2TB datastore cluster.)
Storage DRS Operations – Initial Placement
• Initial placement applies to VM/VMDK create, clone, and relocate operations
• When creating a VM, you select a datastore cluster rather than an individual datastore and let SDRS choose the appropriate datastore
• SDRS selects a datastore based on space utilization and I/O load
• By default, all the VMDKs of a VM will be placed on the same datastore within a datastore cluster (VMDK affinity rule), but you can choose to have VMDKs assigned to different datastore clusters
(Diagram: a 2TB datastore cluster of four 500GB datastores with 300GB, 260GB, 265GB, and 275GB available.)
Storage DRS Operations – Load Balancing
• SDRS triggers on space usage and latency thresholds:
  • The algorithm makes migration recommendations when the I/O response time and/or space utilization thresholds have been exceeded
  • Space utilization statistics are constantly gathered by vCenter; the default threshold is 80%
  • The I/O load trend is currently evaluated every 8 hours based on the past day’s history; the default threshold is 15ms
• Load balancing is based on I/O workload and space, which ensures that no datastore exceeds the configured thresholds
• Storage DRS performs a cost/benefit analysis!
• For I/O load balancing, Storage DRS leverages Storage I/O Control functionality
(A sketch of the trigger logic follows.)
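A hedged PowerShell sketch of the trigger logic described above – purely illustrative, not the actual SDRS algorithm; the $datastores collection and the AvgLatencyMs property are hypothetical stand-ins:

    # Illustrative only: default thresholds from the slide
    $spaceThreshold   = 0.80   # 80% space utilization
    $latencyThreshold = 15     # 15 ms I/O response time

    foreach ($ds in $datastores) {
        $spaceUtil = ($ds.CapacityGB - $ds.FreeSpaceGB) / $ds.CapacityGB
        if ($spaceUtil -gt $spaceThreshold -or $ds.AvgLatencyMs -gt $latencyThreshold) {
            # SDRS would now run its cost/benefit analysis before
            # recommending a Storage vMotion off this datastore
            Write-Output ("Candidate for rebalancing: " + $ds.Name)
        }
    }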
So what does it look like? Load Balancing
• Each recommendation shows the utilization before and after the move
• There’s always the option to override the recommendations
What are the vStorage APIs for Storage Awareness (VASA)?
• VASA is an extension of the vSphere Storage APIs – vCenter-based extensions. It allows storage arrays to integrate with vCenter for management functionality via server-side plug-ins, or Vendor Providers.
• This in turn allows a vCenter administrator to be aware of the topology, capabilities, and state of the physical storage devices available to the cluster.
• VASA enables several features:
  • For example, it delivers system-defined (array-defined) capabilities that enable Profile-Driven Storage
  • It also provides array-internal information that helps several Storage DRS use cases work optimally with various arrays
Storage Capabilities & VM Storage Profiles
(Diagram: storage capabilities are surfaced by VASA or user-defined; a VM Storage Profile references those capabilities; the profile is then associated with a VM, whose placement is reported as Compliant or Not Compliant.)