Introducing AutoCache 2.0, December 2013
Company Profile
• Team
  • Rory Bolt, CEO - NetApp, EMC, Avamar, Quantum
  • Clay Mayers, Chief Scientist - Kofax, EMC
  • Rich Pappas, VP Sales/Bus-Dev – DDN, Storwize, Emulex, Sierra Logic
• Vision
  • I/O intelligence in the hypervisor is a universal need
  • Near-term value is in making use of flash in virtualized servers
  • Remove I/O bottlenecks to increase VM density, efficiency, and performance
  • Must have no impact on IT operations; no risk to deploy
  • A modest amount of flash in the right place makes a big difference
• Product
  • AutoCache™ hypervisor-based caching software for virtualized servers
Solution: AutoCache
• I/O caching software that plugs into ESXi in seconds
• Inspects all I/O
• Uses a PCIe flash card or SSD to store hot I/O
• Read cache with write-through and write-around (sketched below)
• Transparent to VMs
• No guest OS agents
• Transparent to storage infrastructure
[Diagram: VMware ESXi host with flash cache; CPU utilization chart]
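The read-cache behavior above can be summarized in a few lines. The sketch below is illustrative only and assumes a plain dictionary standing in for the flash device; FlashReadCache, backing_read, and backing_write are hypothetical names, not Proximal Data's API:

```python
class FlashReadCache:
    """Minimal sketch of a read cache with write-through / write-around semantics."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}                        # LBA -> data held on flash (simplified: no eviction)

    def read(self, lba, backing_read):
        if lba in self.blocks:                  # hit: serve hot data from flash
            return self.blocks[lba]
        data = backing_read(lba)                # miss: fetch from shared storage
        if len(self.blocks) < self.capacity:    # admit data that is actually being read
            self.blocks[lba] = data
        return data

    def write(self, lba, data, backing_write):
        backing_write(lba, data)                # write-through: shared storage stays authoritative
        if lba in self.blocks:
            self.blocks[lba] = data             # keep any cached copy coherent
        # write-around: uncached blocks are not admitted on write, so write-heavy
        # data that is never read back neither pollutes the cache nor wears the flash
```

Because shared storage always holds the authoritative copy, the cache can be discarded at any time without data loss, which is what keeps the approach transparent to vMotion and the rest of the storage infrastructure.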
AutoCache Results
• Up to 2-3X VM density improvement
• Business-critical apps accelerated
• Transparent to ESXi value-adds like vMotion, DRS, etc.
• Converts a modest flash investment into huge value
Simple to Deploy
• Buy flash device, download PD software
  • Single 'vib' to install AutoCache
• Install the flash-based device
  • Power off the server to install a PCIe card, then power on
  • -or- partition an SSD
• Global cache relieves the I/O bottleneck in minutes
  • All VMs accelerated regardless of OS, without the use of agents
  • Reporting engine displays the results over time
[Chart: performance over time, annotated where Proximal Data is turned on]
Uniquely Designed for Cloud Service Providers
• Broad support
  • Any backend datastore
  • Effectively any flash device, plus Hyper-V soon
• Adaptive caching optimizes over time
• Takes the "shared" environment into consideration
  • Latencies and cache access affect other guests
• Easy retrofit
  • No reboot required
• Pre-warm to maintain performance SLA on vMotion
• Role-Based Administration
PD vMotion Innovation
1. AutoCache detects the vMotion request
[Diagram: source host and target host running VMware ESXi, each with a flash cache, attached to shared storage]
PD vMotion Innovation
2. Pre-warm: VM metadata is sent to the target host, which fills its cache in parallel from shared storage
Key benefit: minimizes the time to accelerate the moved VM on the target host
[Diagram: source host and target host running VMware ESXi; target cache pre-warming from shared storage]
PD vMotion Innovation
3. Upon the vMotion action, AutoCache atomically and instantly invalidates the VM's metadata on the source host
Key benefits: eliminates the chance of cache coherency issues and frees up source-host cache resources (the full pre-warm flow is sketched below)
[Diagram: source host cache invalidated at cutover; target host cache already warmed from shared storage]
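The three steps above amount to shipping only metadata (the hot block addresses) and refilling the target cache from shared storage, which is always authoritative for a read cache. A minimal sketch, with HostCache and handle_vmotion as illustrative names rather than the actual AutoCache interfaces:

```python
class HostCache:
    """Illustrative per-host read cache keyed by (vm_id, lba)."""

    def __init__(self):
        self.blocks = {}

    def hot_block_addresses(self, vm_id):
        return [lba for (vm, lba) in self.blocks if vm == vm_id]

    def admit(self, vm_id, lba, data):
        self.blocks[(vm_id, lba)] = data

    def invalidate(self, vm_id):
        # Drop all cache state for the VM in one step on the source host
        self.blocks = {k: v for k, v in self.blocks.items() if k[0] != vm_id}


def handle_vmotion(vm_id, source, target, storage_read):
    # 1. Source host detects the vMotion request and exports only metadata:
    #    the hot block addresses, not the cached data itself.
    hot = source.hot_block_addresses(vm_id)
    # 2. Target host pre-warms its cache in parallel from shared storage,
    #    which remains the authoritative copy for a read cache.
    for lba in hot:
        target.admit(vm_id, lba, storage_read(vm_id, lba))
    # 3. On cutover, the source host invalidates the VM's cache state,
    #    so stale data can never be served after the move.
    source.invalidate(vm_id)
```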
Role-Based Administration
• Creates specific access rights for the AutoCache vCenter plug-in
• Enables customers to modify:
  • Host-level cache settings
  • VM cache settings
• AutoCache retains statistics for a month
[Diagram: CSP infrastructure hosting Customer A and Customer B VMs on VMware ESXi]
RBA in Practice
• CSP creates a vCenter account for the customer
• With RBA, the CSP can also grant AutoCache rights to customer accounts, allowing the customer to control caching for their VMs
• Enables varying degrees of rights for the customer (a sketch of such a model follows below)
  • One user at the customer might see all VMs
  • Another might see a subset of VMs
  • Yet another might see some VMs, but only have rights to certain aspects
    • Say, could turn the cache on/off, but not change caching settings on a device
• Usage statistics are available for the last month, and may be exported for billing purposes
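As a rough illustration of what such per-VM rights can look like, here is a minimal role model; the permission strings, class names, and VM identifiers are invented for the example and are not the AutoCache vCenter plug-in's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class CacheRole:
    """Illustrative RBA role: which VMs a user can see and what they may change."""
    visible_vms: set = field(default_factory=set)   # VM identifiers visible to this user
    rights: set = field(default_factory=set)        # e.g. {"toggle_vm_cache", "edit_host_settings"}

    def can(self, action, vm_id=None):
        if vm_id is not None and vm_id not in self.visible_vms:
            return False
        return action in self.rights

# A customer user who may enable/disable caching on two of their own VMs,
# but has no rights over host-level (device) cache settings.
tenant_user = CacheRole(visible_vms={"vm-101", "vm-102"}, rights={"toggle_vm_cache"})
assert tenant_user.can("toggle_vm_cache", "vm-101")
assert not tenant_user.can("edit_host_settings")
```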
Pricing and Availability
• AutoCache 2.0 is available now from resellers
  • CMT, Sanity Solutions, Champion, CDW, AllSystems, BuyOnline, Pact Informatique, Commtech, etc.
  • Direct reps support the channel in target markets in the US
  • OEM partnerships coming in September
• Support for
  • ESXi 4.1, 5.0, 5.1, 5.5 (when available); Hyper-V in 2013
  • PCIe cards from LSI, Micron, Intel, and server vendors
  • SSDs from Micron, Intel, and server vendors
• AutoCache suggested retail price
  • Prices start at $1,000 per host for cache sizes under 500 GB
Summary: The Proximal Data Difference
• Innovative I/O caching solution
• Specifically designed for virtualized servers and flash
• Dramatically improved VM density and performance
• Fully integrated into VMware utilities and features
• Transparent to IT operations
• Simple to deploy
• Low risk
• Cost effective
The simplest, most cost-effective use of Enterprise Flash
Thank You
Outline
• Brief Proximal Data Overview
• Introduction to FLASH
• Introduction to Caching
• Considerations for Caching with FLASH in a Hypervisor
• Conclusions
Considerations…
• Most caching algorithms were developed for RAM caches
  • No consideration for device asymmetry
  • Placing data in a read cache that is never read again hurts both performance and device lifespan
• Hypervisors have very dynamic I/O patterns
  • vMotion affects I/O load and raises coherency issues
  • Adaptive algorithms are very beneficial
• Must consider the "shared" environment
  • Latencies and cache access affect other guests
  • Quotas/allocations may have unexpected side effects
• Hypervisors are I/O blenders
  • The individual I/O patterns of guests are aggregated; devices see a blended average
• Write-around provides the best performance/wear trade-off (see the sketch below)
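One common way to respect flash's read/write asymmetry is an admission filter that only caches a block after it has missed twice within a recent window, so one-shot reads never cost flash write cycles. A minimal sketch of that generic technique (it is not necessarily how AutoCache's adaptive algorithm works):

```python
from collections import OrderedDict

class SecondTouchAdmission:
    """Admit a block to the flash read cache only on its second recent miss."""

    def __init__(self, window_size):
        self.recent_misses = OrderedDict()          # LBA -> None, ordered by recency
        self.window_size = window_size

    def should_admit(self, lba):
        if lba in self.recent_misses:
            del self.recent_misses[lba]             # second miss within the window: admit
            return True
        self.recent_misses[lba] = None              # first miss: remember, but do not admit
        if len(self.recent_misses) > self.window_size:
            self.recent_misses.popitem(last=False)  # forget the oldest recorded miss
        return False
```

A sequential scan that touches each block once never passes the filter, while genuinely hot blocks are admitted on their second miss, saving both flash wear and cache space.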
Complications of Write-Back Caching
• Writes from VMs fill the cache
  • Cache wear is increased
  • The cache ultimately flushes to disk
• The cache withstands write bursts
  • The cache overruns when disk flushes can't keep up
  • If you are truly write-bound, a cache will not help
• A write-back cache handles write bursts and benchmarks well, but it is not a panacea (see the estimate below)
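A back-of-the-envelope view of the overrun point: dirty data accumulates at the write rate minus the destage rate, so a write-back cache only buys time. The numbers below are purely illustrative:

```python
def time_until_overrun(cache_gb, write_mb_s, flush_mb_s):
    """Seconds until a write-back cache fills at the given sustained rates.
    Returns None if the flush rate keeps up (no overrun)."""
    backlog_rate = write_mb_s - flush_mb_s      # MB/s of dirty data piling up
    if backlog_rate <= 0:
        return None
    return cache_gb * 1024 / backlog_rate

# Illustrative numbers: a 400 GB write-back cache, VMs writing a sustained
# 500 MB/s, backing storage able to destage 200 MB/s.
t = time_until_overrun(cache_gb=400, write_mb_s=500, flush_mb_s=200)
print(f"Cache full after ~{t / 60:.0f} minutes; after that, writes run at disk speed")
```

In this example the cache absorbs roughly 23 minutes of burst; a workload that writes faster than the disks indefinitely is write-bound, and no cache size fixes that.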
Complications of Write-Back Caching (continued)
1. Write I/O is mirrored on the destination over a new, dedicated I/O channel for write-back cache sync
2. The write is acknowledged by the mirrored host
• In either case, network latency limits performance
[Diagram: source host and mirrored host running VMware ESXi; shared storage with a performance tier; existing HA storage infrastructure]
Disk Coherency…
• Cache flushes MUST preserve write ordering to preserve disk coherency (see the sketch below)
• Hardware copies must flush caches
• Hardware snapshots do not reflect the current system state without a cache flush
• Consistency groups must now take the write-back cache state into account
• How is backup affected?
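A standard way to keep the backing disk crash-consistent is to destage dirty blocks in the order their writes were acknowledged, so the disk always holds a prefix of the acknowledged write stream. The sketch below shows that generic idea, not any particular product's implementation:

```python
from collections import deque

class OrderedWriteBack:
    """Illustrative write-back cache that destages in acknowledgement order."""

    def __init__(self):
        self.dirty = deque()                 # (lba, data) in the order writes were acked

    def write(self, lba, data):
        self.dirty.append((lba, data))       # acknowledge immediately; data lives only in cache
        return "ack"

    def destage_one(self, disk_write):
        # Destaging out of order could leave the disk with write N applied but
        # write N-1 missing, a state the application never produced.
        if self.dirty:
            lba, data = self.dirty.popleft()
            disk_write(lba, data)

    def flush_all(self, disk_write):
        # A hardware snapshot or copy is only consistent after a full flush.
        while self.dirty:
            self.destage_one(disk_write)
```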
Outline
• Brief Proximal Data Overview
• Introduction to FLASH
• Introduction to Caching
• Considerations for Caching with FLASH in a Hypervisor
• Conclusions
Evaluating Caching
• Results are entirely workload-dependent
  • Benchmarks are good for characterizing devices
  • It is VERY hard to simulate production with benchmarks
  • Run your real workloads for meaningful results
  • Run your real storage configuration for meaningful results
• Steady state is different from initialization
  • Large caches can take days to fill (see the estimate below)
• Beware caching claims of 100s or 1000s of times improvement
  • It is possible, just not probable
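To see why large caches can take days to fill: a read cache warms only at the rate of admitted read misses, not at the flash device's raw bandwidth. A rough estimate with invented numbers:

```python
def days_to_fill(cache_gb, miss_mb_s, admit_fraction=1.0):
    """Rough time to warm a read cache that fills only on admitted read misses."""
    fill_mb_s = miss_mb_s * admit_fraction
    return cache_gb * 1024 / fill_mb_s / 86400

# Illustrative: a 1.4 TB cache warming from ~10 MB/s of unique read misses
# takes about 1.7 days to approach steady state.
print(f"{days_to_fill(cache_gb=1400, miss_mb_s=10):.1f} days")
```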
FLASH Caching in Perspective
• Flash will be pervasive in the enterprise
  • Ten years in the making, but deployment is just beginning now
• Choose the right amount in the right location
  • Modest flash capacity in the host as a read cache: the best price/performance and the lowest risk/impact
  • More flash capacity in the host as a write-back cache can help for specific workloads, but at substantial cost/complexity/operational impact
  • Large-scale, centralized write-back flash cache in arrays that leverage existing HA infrastructure and operations: highest cost, highest performance, medium complexity, low impact to IT