Deploying OpenStack with Cisco Compute, Network and Storage Duane DeCapite, OpenStack Product Management Ashok Rajagopalan, UCS Product Management November 2013
OpenStack @ Cisco
COMMUNITY PARTICIPATION
• Code contributions and blueprints across core services: networking model, compute service and dashboard, HA, scheduling
• OpenStack Foundation Board member
CISCO OPENSTACK ENGINEERING
• Automation (Puppet) and architectures (HA) for production deployment and operational support
• Quantum/Neutron/Nova plug-ins for Cisco product lines: UCS, Nexus, CSR 1000V
• Scalable networking services: FWaaS, LBaaS, VPNaaS
CUSTOMERS
• Private and public clouds
• Extend the cloud model for rapid provisioning of network services, bare metal, and intelligent workload placement
• Drive innovation through real-world use cases
Innovation in Cloud Computing through OpenStack's Network Service and Cisco's Open Network Environment (SDN)
• API-driven open cloud platform: applications each see their own logical data center, spanning physical and virtual resources
• OpenStack Compute (Nova), OpenStack Networking (Neutron), OpenStack Storage (Swift, Cinder, Ceph)
• Programmable infrastructure: platform APIs, virtual overlays, controllers and agents
• One Platform Kit (onePK) on ISR G2 and ASR 1000
• CSR 1000V: VXLAN gateway, OpenStack integration, service chaining
• Cisco ONE Controller software and OpenFlow agents
Comprehensive Cisco Integrated Solution for OpenStack
• Operational efficiency with UCS and networking integrations with OpenStack
• Pre-defined reference configurations and performance-optimized solutions
• SaaS applications and grid scale-out applications
Lighthouse customers in production with Cisco OpenStack solutions
Cisco UCS Leadership and Momentum
• As of Q3 FY13, UCS revenue reached a $2B annualized run rate
• In Q3 FY13, data center revenue was $515M, growing 77% Y/Y
• As of May 2013, there are over 23,000 unique UCS customers, representing 89% Y/Y growth
• More than half of all Fortune 500 companies have invested in UCS
• Over 500 customers have booked over $1 million in UCS solutions, and over 1,200 have booked over $500,000
• Over 3,400 channel partners are actively selling UCS worldwide, with over 1,700 UCS-specialized partners in the channel worldwide
• As of CY12 Q4, Cisco is one of the top 5 server vendors and #2 in blade servers, based on worldwide revenue share¹
• 73 world-record performance benchmarks to date
Source: 1 IDC Worldwide Quarterly Server Tracker, Q1 2013, May 2013, revenue share
UCS Compute Portfolio: Performance Optimized for Bare-Metal, Virtualized, and Cloud Applications
Cisco UCS: many server form factors, one system. Industry-leading compute without compromise, spanning scale-out, enterprise performance, and intensive/mission-critical workloads.
Rack servers
• UCS C220 M3: versatile, general-purpose enterprise infrastructure and application server
• UCS C240 M3: ideal platform for big data, ERP, and database applications
• UCS C22 M3: entry rack server for distributed and web infrastructure applications
• UCS C24 M3: entry, expandable rack server for storage-intensive workloads
• UCS C260 M2: mission-critical, 2-socket extended-memory server for large, memory-intensive applications
• UCS C420 M3: enterprise-class, 4-socket server for large, memory-intensive bare-metal and virtualized applications
• UCS C460 M2: mission-critical, 4-socket server for large, CPU-intensive applications
Blade servers
• UCS B200 M3: optimal choice for VDI, private cloud, or dense virtualization/consolidation workloads
• UCS B22 M3: entry blade server for IT infrastructure and web applications
• UCS B230 M2: density-optimized, CPU- and memory-intensive 2-socket blade for bare-metal and virtualized applications
• UCS B420 M3: enterprise-class, 4-socket blade for large, memory-intensive bare-metal and virtualized applications
• UCS B440 M2: mission-critical, 4-socket blade for large, CPU-intensive bare-metal and virtualized applications
Unified Management: Blade and Rack Servers Managed as a Cohesive Resource Pool
UCS Manager: a single unified system for blade and rack servers, and a major market transformation in unified server management.
Example service profile (HR_App1):
• vNIC1: MAC 08:00:69:02:01:2E, HR_WEB_VLAN (ID 50)
• vNIC2: MAC 08:00:69:02:01:2F, HR_DB_VLAN (ID 210)
• HBA 1 and 2: WWN 5080020000075740 / 5080020000075741, VSAN ID 12
• Boot order: SAN
• BIOS settings: Turbo on, Hyper-Threading on
UCS Manager and service profiles bring unified device management (network policy, storage policy, server policy) to both B-Series blade and C-Series rack-optimized servers: add capacity without complexity.
Scaling the Cisco Cloud Architecture
• Single rack: 16 servers
• Single domain (UCS Manager): up to 10 racks, 160 servers
• Multiple domains (UCS Central, L2/L3 switching): up to 10,000 nodes
Cisco UCS OpenStack Solution Accelerator Paks Compute-intensive Mixed-use Storage-intensive
OpenStack Compute-Intensive Solutions Pak (High-Density)
• 2 control nodes: C220 M3
• 2 compute nodes: C220 M3
• 2 storage nodes: C220 M3
Services across the nodes: nova-api, keystone-api, quantum-api, glance-api, cinder-api, horizon-UI, KVM hypervisor, network node, CEPH Deploy, CEPH MON/MDS/RADOS, CEPH object storage devices
OpenStack Mixed-Use Solutions Pak (Mixed-Workload)
• 2 control nodes: C220 M3
• 4 compute nodes: C220 M3
• 2 storage nodes: C240 M3
Services across the nodes: nova-api, keystone-api, quantum-api, glance-api, cinder-api, horizon-UI, KVM hypervisor, network node, CEPH Deploy, CEPH MON/MDS/RADOS, CEPH object storage devices
OpenStack Storage-Intensive Solutions Pak (Storage-Intensive)
• 2 control/storage nodes: C240 M3
• 6 compute/storage nodes: C240 M3
Services across the nodes: nova-api, keystone-api, quantum-api, glance-api, cinder-api, horizon-UI, KVM hypervisor, network node, CEPH Deploy, CEPH MON/MDS/RADOS, CEPH object storage devices
Nova: How It Works Today
1. Client API calls reach nova-api
2. nova-api consults nova-schedule to place the instance
3. nova-network (will be replaced by Neutron) and nova-volume (will be replaced by Cinder) provide networking and block storage
4. nova-compute provisions the instance on hypervisors or bare-metal nodes
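As a rough illustration of the flow above, here is a minimal Python sketch of the request path from nova-api through the scheduler to a compute host. This is a toy simulation, not OpenStack source; class and method names such as pick_host() are hypothetical stand-ins.

```python
# Illustrative sketch of the Nova request flow; not actual OpenStack code.

class Scheduler:
    """Stand-in for nova-schedule: picks the least-loaded hypervisor."""
    def pick_host(self, hosts):
        return min(hosts, key=lambda h: h["instances"])

class Compute:
    """Stand-in for nova-compute: boots the instance on the chosen host."""
    def boot(self, host, name):
        host["instances"] += 1
        return {"name": name, "host": host["name"], "status": "ACTIVE"}

def nova_api(name, hosts):
    """Stand-in for nova-api: accepts the client call, then delegates."""
    host = Scheduler().pick_host(hosts)   # scheduling decision
    return Compute().boot(host, name)     # provisioning on the target

hosts = [{"name": "hv1", "instances": 3}, {"name": "hv2", "instances": 1}]
vm = nova_api("web-01", hosts)
print(vm["host"])  # hv2, the least-loaded hypervisor
```

The real scheduler weighs many more factors (filters, weights, capabilities), but the shape of the dispatch is the same: API in front, scheduler choosing, compute executing.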
Nova Bare-Metal with UCS Manager: Blueprint (CDN)
Same flow as standard Nova, with UCS Manager in the path:
1. Client API calls reach nova-api
2. nova-schedule selects a target
3. UCS Manager (UCS Edition) creates a server profile based on the request parameters
4. UCS Manager returns an identity for storage in the OpenStack DB
5. nova-network (will be replaced by Neutron) and nova-volume (will be replaced by Cinder) provide network and volume services
6. nova-compute provisions the bare-metal node
Cisco Unified Fabric: Continuous Market Leadership
DC technology leader:
• 40,000+ Cisco NX-OS customers
• 1,500+ Cisco FabricPath customers
• 11M+ 10GE ports shipped
• 11,000+ Cisco FEX customers
Data center switching leader:
• #1 market share by revenue in Q3 2012 for DC Ethernet switching at 71.7%*
• #1 market share by revenue in Q3 2012 for FCoE SAN switching at 87.3%**
*Source: Infonetics, Q3 2012 DC Network Equipment Report, December 2012
**Source: Dell'Oro, SAN Switching, November 2012
Data current as of December 2012. Subject to change without notice.
Cisco Unified Fabric Innovations
• Cisco NX-OS: from hypervisor to core
• Cisco DCNM: single pane of management
• Portfolio: Cisco Nexus 7000, 6000, 5000, 4000, 3000, 2000, Nexus 1010, Nexus 1000V
Delivering to your data center needs:
• Resilient, high-performance, scalable fabric
• Workload mobility within/across DCs
• Secure separation/multitenancy
• LAN+SAN convergence
• Operational efficiency
Cisco Nexus Plugin Diagram http://docwiki.cisco.com/wiki/OpenStack:Grizzly-Nexus-Plugin
Cisco Nexus Plugins for Neutron: Benefits
• Automated VLAN provisioning: configure VLANs on the Nexus switch
• Layer 3 gateway: map a Nexus switch virtual interface (SVI) to a tenant VLAN
• Scalability with a top-of-rack (ToR) Nexus as the default Layer 3 gateway: eliminates the configuration and bottleneck of a host-based software L3 forwarding agent
• Multi-homed host deployments: virtual port channel (vPC) for high availability (HA) and link optimization to multiple Nexus switches
• Hardware- and software-based networking: performance benefits of a hardware-based ToR switch (Nexus 3000, 5000, 6000, 7000) plus the flexibility of software-defined networking with the Nexus 1000V
Nexus Switch as Layer 3 Gateway
• Flat networking traffic, VLAN traffic across nodes, and GRE or VXLAN tunnels across nodes
• An SVI is configured on the Nexus for L3 forwarding and as the external gateway
• Removes the bottleneck of a generic server-based network node using Linux iptables
Topology: a cloud controller node (nova-api, nova-scheduler, keystone, mysql, rabbit) and a network node (neutron-server, dhcp-agent, plugin agents) on the management network; compute nodes (nova-compute, plugin agents, L2 bridge/OVS) and the Nexus plugin on the data network. The API network is typically routable to enable public access from the Internet.
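To make the "automated VLAN provisioning" idea concrete, here is a hedged sketch of generating the NX-OS commands a plugin might push to create a tenant VLAN and its SVI. The command strings follow standard NX-OS syntax, but the helper itself is illustrative, not actual plugin code.

```python
# Illustrative helper, not Cisco plugin source: build the NX-OS CLI lines
# for a tenant VLAN and its switch virtual interface (the L3 gateway).

def svi_config(vlan_id, gateway_cidr):
    """Return NX-OS commands creating VLAN `vlan_id` with an SVI gateway."""
    return [
        f"vlan {vlan_id}",
        f"interface vlan {vlan_id}",
        f"  ip address {gateway_cidr}",
        "  no shutdown",
    ]

cmds = svi_config(210, "10.0.210.1/24")
print("\n".join(cmds))
```

In the real plugin these commands would be delivered to the switch over its management interface (e.g. SSH, as configured later in this deck), rather than printed.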
Service Chaining with Nexus 1000V
• Foundation of the virtual services architecture
• vPath service insertion/chaining
• VXLAN overlay networking
Topology: a cloud controller node (nova-api, nova-scheduler, keystone, mysql, rabbit) with the Nexus 1000V VSM on the management network; multiple network nodes (neutron-server, dhcp-agent, l3-agent, plugin agents) and compute nodes (nova-compute, plugin agents, Nexus 1000V) on the data network. The API network is typically routable to enable public access from the Internet.
CSR 1000V Routing
• Network or compute node(s) host the CSR 1000V
• The CSR provides per-tenant isolation and full IOS capabilities, including VPN, BGP, OSPF, MPLS, etc.
Topology: a cloud controller node (nova-api, nova-scheduler, keystone, mysql, rabbit) with the Nexus 1000V VSM on the management network; a network node (quantum-server, dhcp-agent, plugin agents) and compute nodes (nova-compute, plugin agents) on the data network. The API network is typically routable to enable public access from the Internet.
New OpenStack Services from Cisco Advanced Services
Strategy and Assessment (Available Now)
• Problems solved: Is OpenStack the correct platform for my business? What are my key requirements for OpenStack?
• Key deliverables: strategy assessment with high-level roadmap and architecture; prioritization of use cases
• Key benefits: understand the role of OpenStack in your DC/cloud strategy
Design & Deployment (December 2013)
• Problems solved: lack of OpenStack skill sets; how to add production safety, availability, and scale to my OpenStack deployment
• Key deliverables: pre-defined design; test plan; knowledge transfer; network scale and high-availability design; storage integration; cell deployment design
• Key benefits: pre-defined design; rapid installation and test; experiment with an OpenStack installation in your data center environment
Validation (Available Now)
• Key benefits: accelerate production readiness; optimally deployed on Cisco hardware
Optimization (December 2013)
• Key deliverables: design review; software upgrade procedures; day-2 support for customized deployments; custom application assistance; topology and requirements evolution
• Key benefits: ensure deployment evolution; targeted support expertise for your customized solution
Cisco OpenStack Installer
To run the install script, copy and paste the following on your command line (as root, with your proxy set if necessary):
curl -s -k -B https://raw.github.com/CiscoSystems/grizzly-manifests/multi-node/install_os_puppet | /bin/bash
With a proxy, use:
https_proxy=http://proxy.example.com:80/ curl -s -k -B https://raw.github.com/CiscoSystems/grizzly-manifests/multi-node/install_os_puppet > install_os_puppet
chmod +x install_os_puppet
./install_os_puppet -p http://proxy.example.com:80/
High Availability Option
The Cisco OpenStack High-Availability Guide differs from the upstream OpenStack High Availability Guide by providing an active/active, highly scalable model for OpenStack deployments. The architecture uses the following components to provide high availability to OpenStack services: Galera Cluster for MySQL, RabbitMQ clustering, RabbitMQ mirrored queues, HAProxy, and Keepalived.
http://docwiki.cisco.com/wiki/COE_Grizzly_Release:_High-Availability_Manual_Installation_Guide
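The key property of the active/active model is that HAProxy spreads API requests across all controllers at once, so no node sits idle as a standby. A toy round-robin balancer makes the idea concrete; this is a conceptual sketch, not HAProxy itself, and the backend names are hypothetical.

```python
# Conceptual sketch of active/active load balancing: every backend takes
# its turn, unlike active/passive where one node only serves on failover.

from itertools import cycle

class RoundRobinProxy:
    """Toy stand-in for HAProxy's roundrobin balance algorithm."""
    def __init__(self, backends):
        self._pool = cycle(backends)

    def route(self):
        return next(self._pool)

proxy = RoundRobinProxy(["controller-1", "controller-2"])
targets = [proxy.route() for _ in range(4)]
print(targets)  # ['controller-1', 'controller-2', 'controller-1', 'controller-2']
```

In the real deployment, Keepalived holds a virtual IP in front of HAProxy so the balancer itself is not a single point of failure.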
Summary and Next Steps
• Cisco offers a complete compute, networking, and storage solution for OpenStack
• Cisco provides Advanced and Technical Services to help migrate from pilot to production
• Please let us know how we can help you with OpenStack by contacting us at Openstack-support@cisco.com
• More information can be found at www.cisco.com/go/OpenStack
Deployment Automation of OpenStack on UCS
Step 1: configuring nodes using the Python SDK
• Pre-configure UCS (the only point of user touch): hostname/IP address, logical credentials, resource-allocation preferences
• Provision UCS servers: chassis/server discovery, service profile association, PXE boot devices deployed
• Host OS install: PXE boot for initial OS install; RHEL 6.4 installation on bare-metal servers
Step 2: Cobbler/Puppet-based node subscription
• Event listener registers nodes, updates the newly added node info in Puppet, and updates the Cobbler database
• Puppet apply: sync all the plugins from the Puppet master
• OpenStack handover: add hosts/system in OpenStack; inventory of nova nodes on the controller; VM provisioning; OpenStack services deployment
Cobbler/Puppet-Based Node Subscription
1. Read conf file
2. Apply policies
3. Update Puppet/Cobbler DB
4. PXE boot
5. Puppet sync (glance, scheduler, API daemons)
Roles: the build node drives the process; the control node and compute nodes (nova-compute, libvirtd) are provisioned.
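The five steps above can be sketched as a simple pipeline. This is an illustrative simulation only; the function name subscribe_node and the log messages are hypothetical stand-ins for the real Cobbler and Puppet operations.

```python
# Illustrative pipeline for the build-node workflow; not real Cobbler or
# Puppet APIs. Each appended message corresponds to one numbered step.

def subscribe_node(conf, log):
    log.append(f"read conf for {conf['hostname']}")        # 1. read conf file
    log.append("policies applied")                          # 2. apply policies
    log.append("puppet/cobbler DB updated")                 # 3. update DBs
    log.append(f"PXE boot {conf['hostname']}")              # 4. PXE boot
    log.append("puppet sync: glance, scheduler, API daemons")  # 5. puppet sync
    return log

log = subscribe_node({"hostname": "compute-01"}, [])
print(len(log))  # 5
```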
OpenStack Neutron Architecture Clients Neutron (Formerly Quantum) Service Networks
Getting Started with Cisco Nexus Plugins for Neutron • OpenStack Module Structure • /neutron/plugins/cisco/ - Contains the Network Plugin Framework • /client - CLI module for core and extensions API • /common - Modules common to the entire plugin • /conf - All configuration files • /db - Persistence framework • /models - Class(es) which tie the logical abstractions to the physical topology • /nexus - Nexus-specific modules • /test/nexus - A fake Nexus driver for testing the plugin https://wiki.openstack.org/wiki/Cisco-quantum
Edit ../neutron/conf/neutron.conf • core_plugin = neutron.plugins.cisco.network_plugin.PluginV2 • [keystone_authtoken] • auth_host = <authorization host's IP address> • auth_port = 35357 • auth_protocol = http • admin_tenant_name = service • admin_user = <keystone admin name> • admin_password = <keystone admin password> https://wiki.openstack.org/wiki/Cisco-quantum
Configure Database, vSwitch & VLAN Parameters
• /neutron/plugins/cisco/cisco_plugins.ini file
• mysql -u<mysqlusername> -p<mysqlpassword> -e "create database neutron_l2network"
• vswitch_plugin=neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
• /neutron/plugins/openvswitch/ovs_neutron_plugin.ini
• [OVS]
• bridge_mappings = physnet1:br-eth1
• network_vlan_ranges = physnet1:1000:1100
• tenant_network_type = vlan
https://wiki.openstack.org/wiki/Cisco-quantum
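A value like "physnet1:1000:1100" packs a physical network name and a VLAN range into one string. The sketch below shows how such an entry can be parsed and validated; the parsing code is illustrative, not the plugin's own.

```python
# Hedged sketch: parse a network_vlan_ranges entry of the form
# <physical_network>:<vlan_min>:<vlan_max> into a validated tuple.

def parse_vlan_range(entry):
    physnet, lo, hi = entry.split(":")
    lo, hi = int(lo), int(hi)
    if not 1 <= lo <= hi <= 4094:          # valid 802.1Q VLAN IDs
        raise ValueError(f"invalid VLAN range {lo}:{hi}")
    return physnet, lo, hi

print(parse_vlan_range("physnet1:1000:1100"))
# ('physnet1', 1000, 1100)
```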
Configure Nexus Switch Credentials
• /neutron/plugins/cisco/cisco_plugins.ini file
• [NEXUS_SWITCH:1.1.1.1]
• # Hostname and port of the node
• compute-1=1/1
• # Hostname and port of the node
• compute-2=1/2
• # Port number where SSH runs on the Nexus switch, e.g. 22 (default)
• ssh_port=22
• # Provide the Nexus credentials if you are using Nexus switches; otherwise this is ignored.
• username=admin
• password=mySecretPasswordForNexus
https://wiki.openstack.org/wiki/Cisco-quantum
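Since this section is standard INI syntax, it can be read with Python's configparser to recover the host-to-switch-port mappings and credentials. The reader below is a hedged sketch following the snippet above, not code from the plugin.

```python
# Illustrative reader for a [NEXUS_SWITCH:<ip>] section; the keys that are
# not ssh_port/username/password are host-to-port mappings.

import configparser

SAMPLE = """
[NEXUS_SWITCH:1.1.1.1]
compute-1 = 1/1
compute-2 = 1/2
ssh_port = 22
username = admin
password = mySecretPasswordForNexus
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)

section = cfg["NEXUS_SWITCH:1.1.1.1"]
host_ports = {k: v for k, v in section.items()
              if k not in ("ssh_port", "username", "password")}
print(host_ports)  # {'compute-1': '1/1', 'compute-2': '1/2'}
```

Note that configparser lowercases option names by default, which is harmless here since the sample keys are already lowercase.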