May Triangle OpenStack Meetup Organizers: Mark T. Voelker, Arvind Somya, Amy Lewis 2013-05-30
What’s Happening Tonight? • 4:30pm: Welcome & Introductions • 4:45pm: “What’s New In Grizzly” • 5:00pm: “OpenStack Automation with Puppet” • 5:30pm: Open Forum – Q&A • 5:45(ish)pm: Pizza! * All times “-ish”
Who Are These People? • A few introductions are in order…
IRC: markvoelker Twitter: @marktvoelker GitHub: markvoelker Mark T. Voelker • Technical Leader/Developer/Manager/”That Guy” • Systems Development Unit at Cisco Systems • Led one of the Cisco dev teams working on Quantum in its initial release • Currently working on: OpenStack solutions, Big Data, Massively Scalable Data Centers
IRC: asomya Twitter: @ArvindSomya GitHub: asomya Arvind Somya • Software Engineer • Data Center Group/Office of the Cloud CTO at Cisco • Developed the initial representation of Quantum in Horizon • Currently working on: Quantum
Twitter: @CommsNinja LinkedIn: amyhlewis YouTube: engineersunplugged Amy Lewis • Community Evangelist for Data Center Virtualization • Social Media Strategist at Cisco • Creator of Engineers Unplugged • Currently working on: listening to and developing the technologist community across various platforms and in real life (gasp!).
Everyone Else • You people: • Are OpenStack developers, OpenStack deployers, and OpenStack newbies • …are hopefully here for the Triangle OpenStack Meetup. Otherwise, you’re in the wrong place. • Introductions?
A Few Notes Before We Start… • We have WebEx! • Tonight’s talks will be broadcast/recorded via WebEx. Feel free to tune in! We’ll also post content after we wrap up tonight. • We want content! • Interested in giving a talk next time? Contact Mark, Arvind, or Amy! • We want feedback! • Help us shape future Triangle OpenStack Meetups by answering a few questions when we’re done. • Mark your calendars! • Proposed date for next meetup: Monday, July 1
Grizzly: What’s New? Mark T. Voelker Technical Leader, Cisco Systems May Triangle OpenStack Meetup 2013-05-30
Grizzly: Some Figures • Release date: April 4, 2013 • Contributors: 517 (up ~56%) • New features: ~230 • Growth by lines of code: 35% • Patches merged: ~7,620 • New networking drivers: 5 • New block storage drivers: 10 • New docs contributors: 27 • Release notes: https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly • Next release name and date: Havana, Oct. 17 • Next design summit: Nov. 5-8 in Hong Kong Stats referenced from: http://www.slideshare.net/laurensell/openstack-grizzly-release
Grizzly: What’s New? With numbers like those… Tonight’s list of new features won’t be comprehensive (or anywhere close), but it should be enough to whet your appetite.
What’s New: Nova Cells • “Cells” are a way to manage distributed clusters within an OpenStack cloud, allowing for greater scalability and some resource isolation • Originated at Rackspace (in production since 8/1/2012) • Cells provide a way to create isolated resource pools within an OpenStack cloud—similar in some respects to AWS Availability Zones • OpenStack had a “zone” concept dating back to Bexar. • Through Diablo, zones shared nothing and communicated via the OpenStack public API • Zones were broken by the introduction of Keystone and were removed in Essex • Cells replace the old zone functionality • More information on cells: • The blueprint • The Grizzly OpenStack Compute Admin Guide • Chris Behrens’s cells presentation from the Grizzly Design Summit
Nova Cells: How They Work • Compute resources are partitioned into hierarchical pools called “cells”: • Each top-level “API cell” has a nova-api service, AMQP broker, DB, and nova-cells service • Each “child” cell has all the normal nova services except for nova-api • Each child cell has its own database server, AMQP broker, etc. • Glance/Keystone are global • The nova-cells service provides communication between cells. • Also selects cells for new instances…cell scheduling != host scheduling • Host scheduling decisions are made within a cell • The future of cells • Other options besides AMQP for inter-cell communication (pluggable today, but only one option available) • More cell scheduler options (currently random)
More About Cells • Today, cells primarily address scalability and geographic distribution concerns rather than providing complete resource isolation • Cells can be nested (e.g. “grandchild cells”) • Cells are optional…small deployments aren’t forced to use them • Each child cell database has only the data for that cell • API cells have a subset of all child data (instances, quotas, migrations) • Quotas must be disabled in child cells…quota management happens on the API cell
Nova “No-DB” architecture • Each nova-compute service used to have direct access to a central database • Scalability concern • Security concern • Upgrade concern • In Grizzly, most direct DB access by the nova-compute service was eliminated • Some information is now conveyed over the RPC system (AMQP) • Other lookups go through the new nova-conductor service, which essentially proxies database calls (or proxies calls to RPC services) on behalf of compute nodes • More information in the blueprint
Quantum: New Plugins • Upgrades to existing plugins • New plugins introduced (both lists appeared as tables on the original slide)
Quantum • Multihost distribution of L3/L4 and DHCP services • Improved handling of security groups and overlapping IPs • Simplified configuration requirements for the metadata service • v2 API support for XML and pagination • Introduction of Load Balancing as a Service (LBaaS) • API model and pluggable framework established • Tenant and cloud admin APIs • Basic reference implementation with HAProxy • Vendor plugins to come in Havana
Horizon Slick new network topology visualization
Horizon • Vastly improved networking support • Visualization • Support for routers and load balancers • Simplified floating IP workflow • Direct image upload to Glance • Makes uploading images easier/faster, but some constraints • Live migration support
Keystone • PKI tokens replace UUID tokens as the default format • Allows offline validation and improved performance • API v3 • Domains provide namespace isolation and role management • RBAC improvements • Trusts enable delegation of roles between users • External authentication made simpler via CGI-style REMOTE_USER params
Cinder • Fibre Channel attach support • Support for multiple backends behind a single manager, plus scheduler improvements • New drivers (listed in a table on the original slide)
Swift • User container quotas • CORS (cross-origin resource sharing) support for easier integration with web/HTML5 apps • Bulk operations support • StatsD updates
Much, much, much more • Nova: https://launchpad.net/nova/+milestone/2013.1 • Quantum: https://launchpad.net/quantum/+milestone/2013.1 • Keystone: https://launchpad.net/keystone/+milestone/2013.1 • Horizon: https://launchpad.net/horizon/+milestone/2013.1 • Swift: https://launchpad.net/swift/grizzly/1.8.0 • Glance: https://launchpad.net/glance/+milestone/2013.1 • Cinder: https://launchpad.net/cinder/+milestone/2013.1 • Grizzly release notes: https://wiki.openstack.org/wiki/ReleaseNotes/Grizzly • Grizzly Overview: http://www.openstack.org/software/grizzly/
OpenStack Automation with Puppet Mark T. Voelker Technical Leader, Cisco Systems May Triangle OpenStack Meetup 2013-05-30
Meet Puppet • Puppet is open source software designed to manage the configuration and state of IT systems of all sizes. • It is primarily used on servers, but can also work with other types of devices (like switches). • It is *not* a baremetal installer, but it can handle most tasks once an OS is installed, including software installation, configuration, and maintenance. • It is written and backed by Puppet Labs. • Puppet Labs offers a commercial, supported version of Puppet called Puppet Enterprise, which adds scalability and management features.
Why Puppet • Because it beats the heck out of managing a pile of bash scripts. • The Puppet DSL is designed to be easy to use and even easier to read. • Puppet allows you to describe the state of your systems and store those descriptions in a single place; you don’t have to configure systems individually. • Puppet lets you codify many systems administration tasks. • Puppet can be used to ensure compliance. • If a rogue change alters a configuration you manage, Puppet will change it back. • It can also provide auditability, showing when changes were made.
Puppet Alternatives • A pile of bash scripts (shown as an image on the original slide)
Puppet Basics: Terminology • Puppet is a declarative language, meaning you describe the state you want the system to be in (not the actions you want to take). • A manifest is essentially a Puppet “program”…it’s what you write to make stuff happen to your infrastructure, where “stuff” includes things like: • Installing/removing packages • Adding or modifying configuration files • Starting/stopping/restarting services • Setting file permissions or modes • A module is a self-contained bundle of Puppet code and data. Generally, you’ll write one module to accomplish a given goal. • Such as “install and configure Apache and make sure it’s always running.” • Generally includes manifests, templates, and other data. • Treated as source code and (frequently) shared on the Puppet Forge.
Puppet Basics: Terminology • Resource types define the attributes and actions of a kind of thing • Such as: a file, a host, a service, a package, or a cron job. • Somewhat analogous to programming language variable types (int, struct, float, char, etc.) • Providers supply the low-level functionality of a given type. • For example, a “package” resource has providers for apt, yum, PyPI, etc. • Different providers might support different features for the same resource type. • There are many kinds of types and providers built into Puppet, but you can also write your own (with a bit of Ruby).
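To illustrate the type/provider split, here is a minimal sketch (not from the slides; the package name and provider choice are arbitrary examples):

```puppet
# A "package" resource type. Puppet normally picks a suitable provider
# (apt, yum, etc.) for the platform, but you can force one explicitly.
package { 'htop':
  ensure   => installed,
  provider => 'apt',   # use apt rather than letting Puppet choose
}
```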
Puppet Basics: Modes of Operation • Standalone Mode • Puppet operating on a single machine • Good for learning and small deployments • Client/Server (aka “Master/Agent”) Mode • A server acts as a “master” where modules and manifests live • Each managed node runs an “agent” which periodically checks in with the master to see if any changes need to be applied. • Communication is via SSL (see caveats), and the master scales horizontally behind load balancers. • Makes it easy to manage lots of nodes by only touching one • Master can be run with a built-in server, or can be run via Phusion Passenger or similar tools for greater scalability. • The most common mode in production. • Massively Scalable Mode • Not really one mode at all: you define how Puppet code is distributed • Usually involves rsync, git, or shared filesystems and cron • Invokes Puppet in standalone mode, but you provide the glue that determines how code gets to the managed nodes. • Allows you to sidestep the Puppet master as a bottleneck.
A Simple Manifest (shown as a screenshot on the original slide) • Installs the openssh-server package (before we place a config file) • Creates an sshd config file by copying one we had in /root and sets the mode • Makes sure the sshd service is always running, and restarts it if we make any changes to sshd_config
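A sketch of what that manifest might look like, reconstructed from the slide’s annotations (the config file source path under /root is an assumption):

```puppet
# Install the openssh-server package before we place a config file
package { 'openssh-server':
  ensure => present,
  before => File['/etc/ssh/sshd_config'],
}

# Create the sshd config file by copying one we had in /root,
# and set the mode
file { '/etc/ssh/sshd_config':
  ensure => file,
  mode   => '0600',
  source => '/root/sshd_config',   # assumed location
}

# Make sure the sshd service is always running, and restart it
# if we make any changes to sshd_config
service { 'sshd':
  ensure    => running,
  enable    => true,
  subscribe => File['/etc/ssh/sshd_config'],
}
```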
More Puppet Basics: Facts • Facts are information about the specific system a given Puppet agent is running on. • They are collected by a program called Facter that ships with Puppet itself. • Facts can be inserted in manifests as variables. • Puppet supports a variety of facts already, but you can add more with a bit of Ruby.
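As a small illustrative sketch (not from the slides), facts can drive conditional logic in a manifest; `$::osfamily` and `$::ipaddress` are standard built-in Facter facts:

```puppet
# Facts collected by Facter appear as variables in manifests.
# The SSH service name differs by OS family, so branch on the fact.
case $::osfamily {
  'Debian': { $ssh_service = 'ssh' }
  default:  { $ssh_service = 'sshd' }
}

# Facts can also be interpolated directly into strings.
notify { "This node's primary IP is ${::ipaddress}": }
```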
Get Your Hands Dirty Puppet has very good “getting started” training online! http://docs.puppetlabs.com/learning/ Some other resources to check out: • Look for “Pro Puppet” and “Puppet 2.7 Cookbook”, at your favorite tech book library. • Puppet has IRC channels where you can ask questions. • Puppet has documentation.
Puppet and OpenStack • Puppet Labs has been an active participant in the OpenStack community, as have Puppet users • Stop by the #puppet-openstack channel on IRC • Check out the Google Group • Say “hi” to Dan Bode • Many OpenStack clouds are deployed with Puppet • Such as Rackspace’s public cloud, eNovance, Morph Labs, Cisco WebEx, and clouds built with PackStack • Puppet is also used to manage portions of the OpenStack community’s project infrastructure • Puppet modules for OpenStack are maintained on StackForge • StackForge is a way for projects related to OpenStack to make use of OpenStack project infrastructure • Puppet modules are mirrored to GitHub at: https://github.com/stackforge/puppet-openstack
IRC: bodepd Twitter: @bodepd GitHub: bodepd Dan Bode • Puppet Labs integration specialist • Frequent OpenStack Design Summit speaker and community guy • Co-author of “Puppet Types and Providers” (O’Reilly) • Recently did a workshop on installing OpenStack with Puppet at the Havana Design Summit
How it Works: The Abridged Version • Start by reading over requirements and notes here. • Install Puppet 2.7.12 or higher and configure a Puppet Master. • Install the modules. • Edit site.pp to provide information about your environment. • This is where you define things like where your compute, storage, and control nodes are. • Run puppet agents on each host. • Go get coffee. • Cloud!
Anatomy • puppet-openstack is the “root” module • Probably the only one you need to really touch • Intended to make bootstrapping an OpenStack environment fast and easy • It provides the site.pp file where you define your infrastructure (IP addresses, etc) • Individual OpenStack components handled by their own modules (you may or may not use all of them) • puppet-nova • puppet-swift • puppet-quantum • puppet-glance • puppet-cinder • puppet-horizon • puppet-keystone
…but what about initial baremetal provisioning? • Using the StackForge Puppet modules assumes that you have an operating system and Puppet installed on all of the servers you want to participate in your cloud. • Remember, Puppet doesn’t do baremetal provisioning…i.e. loading an operating system on a freshly unboxed server. • Probably fine if your deployment is small, but baremetal provisioning becomes more time consuming with more nodes. • So how can you handle baremetal? Several options… • PXE booting with Kickstart (Red Hat derivatives) or preseeding (Debian derivatives) • Razor • Cobbler
Meet Cobbler • A simple (~15k lines of Python code) tool for managing baremetal deployments • Flexible usage (API, CLI, GUI) • Allows you to define systems (actual machines) and profiles (what you want to do with them) • Provides hooks for Puppet so you can then do further automation once the OS is up and running • Provides control for power (via IPMI or other means), DHCP/PXE (for netbooting machines), preseed/kickstart setup, and more.
Putting It All Together • Cobbler + Puppet + OpenStack (shown as logos on the original slide)
Cisco OpenStack Installer • In our labs (and at some of our customer sites), we deploy OpenStack using Cobbler and Puppet with the Cisco OpenStack Installer. • Installs OpenStack with Quantum networking using the Open vSwitch driver (so it works on almost any hardware). • Also installs some basic monitoring utilities (Nagios, collectd, graphite) • Open source, freely available • Documentation/install instructions here: http://docwiki.cisco.com/wiki/OpenStack • Video walk-through here: • Part 1: Build Server Deployment • http://www.youtube.com/watch?v=sCtL6g1DPfY • Part 2: Controller and Compute Node Deployment • http://www.youtube.com/watch?v=RPUmxdI4M-w • Part 3: Quantum Network Setup and VM Creation • http://www.youtube.com/watch?v=Y0qjOsgyT90
The Basics • Start with a single Ubuntu 12.04 machine (can be virtual or physical). • Download base manifests and set up site.pp. • Run “puppet apply” to turn your Ubuntu machine into a “build node” • Build node is now a Puppet master, a Cobbler server, and a Nagios/Graphite host. • Use Cobbler on the build node to PXE boot a Control Node • Control node runs most of the OpenStack “control” services (e.g. API servers, nova-scheduler, glance-registry, Horizon, etc) • Use Cobbler on the build node to PXE boot as many compute nodes as you like
So what’s in site.pp? • Mostly information about your physical nodes • NIC, MAC, and IP address info (for PXE booting, etc.) • NTP and proxy server info (if necessary) • Passwords for databases • Let’s take a look…
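A hedged sketch of the kind of information site.pp carries — the hostnames, addresses, and passwords below are illustrative placeholders, and the class parameters are simplified from what the real modules accept:

```puppet
# Illustrative site.pp fragment. All values are placeholders.
$ntp_server          = 'ntp.example.com'
$mysql_root_password = 'changeme'

node 'control01.example.com' {
  # Control node: API servers, nova-scheduler, Horizon, etc.
  class { 'openstack::controller':
    public_address => '192.168.1.10',
  }
}

node 'compute01.example.com' {
  # Compute node: runs nova-compute and the hypervisor.
  class { 'openstack::compute':
    internal_address => '192.168.1.21',
  }
}
```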
Abbreviated Demo: Building a Compute Node • Building a multi-node cloud takes some time and the pizza is on its way, so let’s look at an abbreviated demo. • We’ll assume that you’ve downloaded the Puppet modules to your build node and applied them. • We’ll also assume you’ve booted your control node with Cobbler and let Puppet set it up • We’ll now use Cobbler to boot up a new compute node.
Questions? http://www.cisco.com/go/openstack