
Windows Server Scalability And Virtualized I/O Fabric For Blade Server


Presentation Transcript


  1. Windows Server Scalability And Virtualized I/O Fabric For Blade Server
     Chris Pettey, CTO, NextIO
     Son VoBa, Program Manager, Microsoft Corporation, Windows Server Platform Architecture

  2. Agenda
     • What is Shared I/O?
     • Value of Shared I/O
     • Architecture for I/O Virtualization
     • User experience

  3. What Is Shared I/O?

  4. Dedicated I/O
     • Independent operating systems (OS)
     • Each OS owns a physical adapter
     • Each OS owns the system components used to communicate with the adapter
       • Example: chipset and PCI Express bus
     [Diagram: OS #1, OS #2, and OS #3, each attached to its own I/O controller]

  5. Shared I/O
     • Independent operating systems (OS)
     • Each OS owns a virtualized I/O controller
       • Corresponds to a shared physical I/O controller
     • Virtualization enablers control access to shared devices
     [Diagram: OS #1, OS #2, and OS #3 attached through virtualization enablers to shared I/O controllers]

  6. Characteristics Of Shared I/O
     • Multiple, independent operating systems
       • OSs do not coordinate with each other
       • Each OS has a virtual set of resources to control
       • Virtual resources behave and interact in ways that resemble physical resources
     • Single resources accessed by one or more OSs
       • System resources, e.g., the chipset, act as access points to I/O
       • PCI Express provides the connectivity from CPU and chipset to I/O
       • I/O devices are accessed simultaneously by each OS
     • Virtualization enablers control shared components (see the sketch below)
       • Isolate each OS for protection
       • Provide a virtual view of each physical shared resource
       • Manage functions for physical devices
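  The isolation role of the virtualization enablers can be sketched in a few lines of C. Everything below is hypothetical and for illustration only: an enabler records which OS owns each virtual controller and refuses accesses from any other OS. None of the names are from a real product.

      /* Hypothetical model of the ownership/isolation rule above. */
      #include <stdbool.h>
      #include <stdint.h>

      #define MAX_VIRTUAL_CTRLS 8

      struct virtual_ctrl {
          uint8_t owner_os;   /* OS assigned this virtual controller */
          bool    in_use;
      };

      struct shared_ctrl {
          /* One physical device presents many virtual views. */
          struct virtual_ctrl vc[MAX_VIRTUAL_CTRLS];
      };

      /* Isolation: an OS may touch only the virtual controller it owns. */
      bool access_allowed(const struct shared_ctrl *c, uint8_t os_id,
                          unsigned idx)
      {
          if (idx >= MAX_VIRTUAL_CTRLS || !c->vc[idx].in_use)
              return false;
          return c->vc[idx].owner_os == os_id;
      }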

  7. Value Of Shared I/O

  8. Shared I/O = Lower Cost
     [Diagram: a dedicated-I/O blade chassis in which every blade carries its own Ethernet, Fibre Channel, and InfiniBand adapters over PCIe, feeding separate Ethernet, Fibre Channel, and InfiniBand switches]

  9. Shared I/O = Lower Cost
     • Lower acquisition cost
     • Lower TCO
     • Lower maintenance cost
     • Lower upgrade cost
     [Diagram: blades connected over PCIe to a shared I/O PCI Express switch, which feeds shared Ethernet, Fibre Channel, and InfiniBand controllers, with an open slot labeled "New Technology?"]

  10. Dedicated I/O Blade Servers
      • 10 Ethernet LOM chips
      • 10 Fibre Channel or InfiniBand daughter cards
      • 4 switches: 2 Ethernet switches plus 2 Fibre Channel or 2 InfiniBand switches

  11. Shared I/O Blade Servers
      • 2 shared Ethernet chips
      • 2 shared Fibre Channel chips
      • 2 shared InfiniBand chips
      • 2 shared I/O PCI Express switches

  12. Flexibility = Future Proof
      [Diagram: blades connected over PCIe to a shared I/O PCI Express switch serving shared 1G Ethernet, 10G Ethernet, 4G Fibre Channel, InfiniBand, and SAS controllers, with an open slot labeled "New Technology?"]

  13. Architecture For I/O Virtualization

  14. Dedicated I/O Blade Server Design
      • Multiple intra-chassis fabrics
      • Fixed I/O configuration at order time
      • Internal versus external switch compatibility concerns
      • Management tools must contemplate multiple fabrics
      [Diagram: a blade server chassis in which each blade has its own Ethernet NIC and Fibre Channel HBA on PCIe, connected through internal Ethernet and Fibre Channel switches to the enterprise network and FC fabric]

  15. Shared I/O Blade Server Design
      • Increased flexibility
      • Simplified management
      • Higher performance
      • Lower cost
      [Diagram: a blade server chassis with a standard PCI Express mid-plane; blades connect over an enhanced PCI Express protocol to a shared I/O PCI Express switch, which drives a shared Ethernet NIC to the enterprise network and a shared Fibre Channel HBA to the FC fabric]

  16. Shared I/O In Blades
      • Blades run independently
        • Software for OS, applications, etc. is unique to each blade
        • Blades have independent PCIe hierarchies
      • The PCIe hierarchy is virtualized
        • Root complex, BIOS, etc. utilize PCIe as if it were dedicated
      • Shared I/O components present virtualized components
        • Switches present multiple, virtual switches
        • Controllers present multiple, virtual controllers
      • I/O sharing happens through hardware components
        • Switches and controllers combine to enable sharing
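  One way to picture the virtualized hierarchy is that the shared switch answers each blade's configuration reads from a per-blade view, so each blade's BIOS and OS enumerate what looks like a dedicated PCIe fabric. The C sketch below is illustrative only; blade_cfg_read32() and struct virtual_view are invented names, not a real interface.

      #include <stdint.h>

      /* Invented per-blade view of the shared switch. */
      struct virtual_view {
          uint16_t vendor_id;
          uint16_t device_id;
      };

      static struct virtual_view view_for_blade[10]; /* one per blade slot */

      /* A blade's config read is answered from its own view, never
       * from the raw physical switch registers. */
      uint32_t blade_cfg_read32(unsigned blade, uint16_t offset)
      {
          const struct virtual_view *v = &view_for_blade[blade];
          if (offset == 0x00)                 /* vendor/device ID word */
              return (uint32_t)v->device_id << 16 | v->vendor_id;
          /* ... other registers likewise served from the per-blade view ... */
          return 0xFFFFFFFFu;                 /* unimplemented: all-ones */
      }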

  17. Shared I/O In Virtual Machine
      • Each OS runs independently
      • Virtual I/O adapters appear as physical components
      • The Virtualization Enabler is a combination of software and hardware
        • Hypervisor software
        • PCIe root complex enablement, e.g., Address Translation and Protection Table (ATPT) technology
      • The physical system is a PCIe I/O solution
      • Physical I/O comprises a single PCIe fabric and dedicated I/O adapters
      [Diagram: Virtual Machines #1 and #2, each with its own OS and virtual I/O, layered over a virtualization enabler, the physical system, and physical I/O]
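  The ATPT bullet can be made concrete: on every device-issued DMA, hardware looks the address up in a per-assignment table, blocks unmapped or write-protected accesses, and otherwise rewrites the address. The table layout below is invented for illustration; real ATPT hardware is implementation-specific.

      #include <stdbool.h>
      #include <stdint.h>

      struct atpt_entry {
          uint64_t guest_base;   /* address range the device may use */
          uint64_t host_base;    /* corresponding physical address */
          uint64_t length;
          bool     writable;
      };

      /* Returns true and fills *host only if the DMA is permitted. */
      bool atpt_translate(const struct atpt_entry *tbl, unsigned n,
                          uint64_t guest_addr, bool is_write, uint64_t *host)
      {
          for (unsigned i = 0; i < n; i++) {
              const struct atpt_entry *e = &tbl[i];
              if (guest_addr >= e->guest_base &&
                  guest_addr < e->guest_base + e->length) {
                  if (is_write && !e->writable)
                      return false;           /* protection violation */
                  *host = e->host_base + (guest_addr - e->guest_base);
                  return true;
              }
          }
          return false;                       /* no mapping: block the DMA */
      }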

  18. Blade & Virtual Machine Relationship
      Similarities
      • Each OS is independent
      • Legacy OSs are fully supported
      • PCIe I/O system: a single PCIe adapter is used by multiple OSs
      • Virtual adapters appear as PCIe devices
      Differences
      • A Virtual Machine utilizes a software hypervisor for virtualization; Blades utilize a PCIe switch
      • A Virtual Machine may use traditional or Shared I/O enabled adapters; Blades use Shared I/O enabled adapters

  19. Blade & Virtual Machine Combination
      • Blade and Virtual Machine I/O sharing are complementary
      • Component solutions for Blades benefit Virtual Machines
      • A single management model can service both Blades and Virtual Machines
      [Diagram: a blade server chassis in which each blade hosts OS #1–#3 over virtualization enablers, all connected through a shared I/O PCI Express switch to a shared Ethernet NIC (enterprise network) and a shared Fibre Channel HBA (FC fabric)]

  20. Component Impact for Shared I/O

  21. User Experience

  22. Legacy OS And Applications
      • Shared I/O fully supports legacy OS and application software
        • Applies equally to Blades and Virtual Machines
        • The Blade solution fully supports all Microsoft OSs
      • No guest OS software is introduced
        • Legacy software is fully contained
        • No new “intermediate” drivers or 3rd-party software
      • Guest OS and applications function normally, as in a stand-alone server
      • Migration from a single server to Shared I/O is seamless

  23. Devices And Drivers
      • Blades support legacy devices dedicated to a single blade
      • Virtual Machine enables software virtualization of legacy devices
      • Blades and Virtual Machine enable Shared I/O enabled devices to be shared by multiple OSs
      [Diagram: a blade server chassis mixing dedicated Ethernet NICs and Fibre Channel HBAs with shared ones reached through the shared I/O PCI Express switch]

  24. Management Model
      • Each OS manages its own virtual environment
        • Blades = chipset, virtual PCIe switch, and virtual adapters
        • Virtual Machine = virtual chipset and virtual adapters
      • A single control point manages common components
        • Blades = switch firmware for the switch and Shared I/O adapters
        • Virtual Machine = hypervisor for all hardware components
      • Blade and Virtual Machine management can be staged
        • Blade management partitions the switch and adapters into virtual PCIe systems
        • The hypervisor partitions a virtual PCIe system and chipset into Virtual Machines
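  A minimal sketch of the staged partitioning described above, with invented types: the switch-level tool assigns switch ports and shared-adapter functions to a blade's virtual PCIe system before any hypervisor subdivides it further. Nothing here is a real management API.

      #include <stdint.h>

      /* Invented type: one virtual PCIe system carved from the shared switch. */
      struct pcie_partition {
          uint8_t  blade_id;        /* blade that owns this virtual system */
          uint32_t port_mask;       /* switch ports assigned to it */
          uint32_t adapter_fn_mask; /* shared-adapter functions assigned */
      };

      /* Record an assignment; a real tool would also program the switch
       * firmware so hardware enforces the partition boundaries. */
      void partition_assign(struct pcie_partition *p, uint8_t blade,
                            uint32_t ports, uint32_t fns)
      {
          p->blade_id        = blade;
          p->port_mask       = ports;
          p->adapter_fn_mask = fns;
      }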

  25. Common Management
      [Diagram: a Switch Management Tool controls the shared I/O PCI Express switch and shared I/O adapters in the physical system, while per-VM Device Management Tools inside Virtual Machines #1 and #2 manage their virtual I/O through the virtualization enabler]

  26. Common Management Capabilities
      • Discovery of devices and capabilities
        • Discovery, enumeration, etc. of PCIe devices
      • Partitioning of resources
        • Assigning virtual resources to an OS
      • Programming of shared functions
        • Setting operational parameters for physically shared functions, e.g., 10/100/1000 Ethernet link speed
      • Device-specific functions
        • Support for vendor-specific device management functions
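  The discovery bullet corresponds to a classic configuration-space enumeration loop. The C below is a hedged sketch: cfg_read32() stands in for whatever configuration access the platform provides (it is not a real API), and only function 0 of each device slot is probed.

      #include <stdint.h>
      #include <stdio.h>

      /* Assumed platform hook for config-space access; not a real API. */
      extern uint32_t cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn,
                                 uint16_t off);

      /* Probe every device slot on one bus and report what answers. */
      void enumerate_bus(uint8_t bus)
      {
          for (uint8_t dev = 0; dev < 32; dev++) {
              uint32_t id = cfg_read32(bus, dev, 0, 0x00); /* IDs at 0x00 */
              if ((id & 0xFFFFu) == 0xFFFFu)
                  continue;                 /* all-ones: empty slot */
              printf("bus %u dev %u: vendor %04x device %04x\n",
                     (unsigned)bus, (unsigned)dev,
                     (unsigned)(id & 0xFFFFu), (unsigned)(id >> 16));
          }
      }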

  27. Common Management Interface
      • WS-Management
        • A DMTF Preliminary Standard defining a web-services-based protocol
        • Data-model neutral
        • A suitable management protocol for both virtual devices and the switch
      • DMTF CIM
        • Provides a consistent abstraction of devices and their virtual environments
        • Independent of I/O virtualization techniques and implementations
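  As a rough illustration of what a WS-Management exchange looks like on the wire, the C sketch below prints a skeletal WS-Transfer Get envelope. The resource URI is a made-up placeholder, and a real request also carries a:To, a:MessageID, and a:ReplyTo headers; only the namespace URIs and the Get action are standard values.

      #include <stdio.h>

      int main(void)
      {
          /* Hypothetical resource URI, for illustration only. */
          const char *resource = "http://example.org/hardware/SharedIOAdapter";
          printf(
              "<s:Envelope xmlns:s=\"http://www.w3.org/2003/05/soap-envelope\"\n"
              "            xmlns:a=\"http://schemas.xmlsoap.org/ws/2004/08/addressing\"\n"
              "            xmlns:w=\"http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd\">\n"
              "  <s:Header>\n"
              "    <a:Action>http://schemas.xmlsoap.org/ws/2004/09/transfer/Get</a:Action>\n"
              "    <w:ResourceURI>%s</w:ResourceURI>\n"
              "  </s:Header>\n"
              "  <s:Body/>\n"
              "</s:Envelope>\n", resource);
          return 0;
      }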

  28. Shared I/O and Industry Standards
      • The PCI-SIG is standardizing elements of I/O virtualization
        • Covers Virtual Machines and Blades
        • Focused on I/O devices for PCIe
        • Microsoft and NextIO are active participants
      • Shared I/O leverages existing standards
        • Protocols for Ethernet, FC, SAS, etc. are unchanged
        • Management models from the DMTF, etc. are leveraged

  29. PCIe Switch In Blade Server Designs
      • WinHEC Technical Session
      • WinHEC Microsoft Pavilion

  30. Technology Demo
      • Shared I/O solution for Blade Servers
        • PCI Express switch with virtualization support
        • Virtualized Fibre Channel HBA
      • Unmodified blade chassis
        • Dell PowerEdge 1855 (Intel Xeon)
        • FSC PRIMERGY BX630 (AMD Opteron)
      • Legacy support
        • No change to OS or legacy driver
        • No change to chipset, blades, or chassis

  31. Technology Demo
      • Insert PCIe pass-thru cards instead of Fibre Channel cards
      • Note: the Fibre Channel midplane becomes a PCI Express midplane
      [Diagram: blades of a PowerEdge 1855 or PRIMERGY BX630 chassis keep their Ethernet controllers and Ethernet switch, while PCI Express pass-thru cards route each blade's Fibre Channel traffic to a NextIO PCIe switch and a shared 4G Fibre Channel controller in place of the Fibre Channel switch]

  32. Shared I/O Value
      • Reduced component cost
      • Reduced TCO
      • Increased flexibility
      • Increased performance
      • Zero impact to legacy software
      • Evolutionary solution starting in 2007

  33. Call To Action
      • Plan for Shared I/O in your next Blade Server design
      • Visit the NextIO demo at the Microsoft Pavilion
      • Attend virtualization and management sessions
        • Device Virtualization Architecture (VIR040)
        • How to Use the WMI Interfaces with Windows Virtualization (VIR043)
        • Hypervisor, Virtualization Stack, and Device Virtualization Architectures (VIR047)
        • PCIe Address Translation Services and I/O Virtualization (VIR071)
        • Windows Virtualization Best Practices and Future Hardware Directions (VIR124)
        • Storage Management Directions (STO085)
        • Windows Server Manageability Directions and Updates (SER120)

  34. Additional Resources
      • Web resources
        • PCI-SIG (www.pcisig.com)
        • Distributed Management Task Force (www.dmtf.org)

  35. © 2006 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
