Demystifying Puppet - Ajeet S Raina
What is Puppet? • Infrastructure as Code • Puppet is an open source configuration management utility • It is written in Ruby and released as free software under the GPL • Built to be cross-platform • A declarative language • A model-driven architecture
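The declarative style means you state the desired end state and Puppet works out the platform-specific commands. A minimal sketch, with an illustrative 'deploy' user:

  user { 'deploy':
    ensure => present,
    shell  => '/bin/bash',
  }

The same declaration converges on any supported OS; Puppet decides which underlying tool (useradd, pw, dscl, and so on) does the work.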
The Puppet Model • A simple client-server model
Puppet Language • Declarative Language vs. Procedural Language
Resource Abstraction Layer • Handles resource management when agents connect • Handles the "how" by knowing how different platforms and OSes manage certain types of resources • Each resource type has a number of providers • A provider contains the "how" of managing packages using a specific package management tool • When agents connect, Puppet uses a tool called Facter to return information about that agent, including which OS it is running • Puppet chooses the appropriate package provider for that OS and reports success or failure
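For example, a single package declaration works across platforms because the RAL picks a provider from Facter's facts; a minimal sketch (the package name is illustrative):

  package { 'openssh-server':
    ensure => installed,
    # The RAL normally selects the provider (yum, apt, pkg, windows, ...)
    # from the node's facts; it can be pinned explicitly if needed:
    # provider => 'apt',
  }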
Facter • A system inventory tool • Returns facts about each agent (hostname, IP address, OS and version information) • These facts are gathered when the agent runs • Facts are sent to the Puppet master and automatically made available as variables to Puppet • How to run facter? See the example below • Facts are made available as variables that can be used in your Puppet configuration • Helps Puppet understand how to manage particular resources on an agent
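Facter can be run by hand on any node to see what the agent will report; a minimal sketch of typical usage (fact names can vary slightly between Facter versions):

  facter                             # list all facts for this node
  facter operatingsystem ipaddress   # query specific facts
  # In manifests, the same facts are available as variables, e.g. $::operatingsystem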
Transactional Layer • It's the Puppet engine, after all.
Puppet Code • Resources - The core of the Puppet language is declaring resources; each resource is an individual configuration item • Files - Physical files to serve to your agents • Templates - Template files that you can use to populate files • Modules - Portable collections of resources. Reusable, sharable units of Puppet code • Classes - Modules can contain many Puppet classes: groups of resource declarations and conditional statements • Manifests - Puppet code is saved in files called manifests, which are in turn stored in structured directories called modules • Pre-built Puppet modules can be downloaded from the Puppet Forge, and most users will write at least some of their own modules.
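A minimal sketch of how these pieces fit together in one module; the ntp module name, file paths, and template are illustrative:

  # modules/ntp/manifests/init.pp
  class ntp {
    package { 'ntp':
      ensure => installed,
    }
    file { '/etc/ntp.conf':
      ensure  => file,
      content => template('ntp/ntp.conf.erb'),  # modules/ntp/templates/ntp.conf.erb
      require => Package['ntp'],
    }
    service { 'ntpd':
      ensure    => running,
      enable    => true,
      subscribe => File['/etc/ntp.conf'],
    }
  }

A node then picks up this behaviour simply by declaring include ntp in its manifest.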
Puppet Tools & Integration • Searching/Installing Modules from the Puppet Forge
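Forge modules are searched and installed with the puppet module subcommand; the apache module below is only an example:

  puppet module search apache
  puppet module install puppetlabs-apache
  puppet module list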
Managing Windows through Puppet • Installing 7-Zip through Puppet • Installing Chrome through Puppet
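On Windows the same package type drives MSI installers; a minimal sketch in which the display name, installer path, and silent-install switch are assumptions for your environment:

  package { '7-Zip 9.20 (x64 edition)':   # must match the DisplayName in Add/Remove Programs
    ensure          => installed,
    source          => 'C:\\temp\\7z920-x64.msi',
    install_options => ['/quiet'],
  }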
Problem Statement - How would you approach this? • All servers that are physical with 4 CPUs: deploy ESX. • All servers that are virtual with 1 CPU and 4GB of memory: deploy CentOS, and hand the system off to puppetmaster.cse.com for management. • All servers that are virtual with 32GB of memory: deploy Debian, and hand the system off to puppetmaster.cse.com for management.
Razor - A Rapid Bare-Metal Provisioning Tool • A software tool for rapid provisioning of OS and hypervisors, on both physical and virtual servers • Policy-based bare-metal provisioning lets you inventory and manage the lifecycle of your physical machines • Automatically discovers bare-metal hardware, dynamically configures operating systems and/or hypervisors, and hands nodes off to PE for workload configuration • Two major components: the Razor Server (Ruby, MongoDB, Node.js) and the Razor Microkernel (~20MB Linux kernel, Facter, MCollective)
Razor Workflow • Discovery (tags, matcher rules) • Models (defining OS templates, ...) • Policies (rules that apply models to nodes based on discovery) • Broker (configuration management)
How Razor Works • When a new node appears, Razor discovers its characteristics by booting it with the Razor microkernel and inventorying its facts. The node is tagged based on its characteristics. Tags contain a match condition: a Boolean expression that has access to the node's facts and determines whether the tag should be applied to the node or not.
Virtual Environment for Testing Razor • Install PE in Your Virtual Environment • Install and Configure dnsmasq DHCP/TFTP Service • Temporarily Disable SELinux to Enable PXE Boot • Edit the dnsmasq Configuration File to Enable PXE Boot (see the sketch below) • Install the Razor Server • Load iPXE Software • Verify the Razor Server • Install and Set Up the Razor Client • Set Up Razor Provisioning • Include Repos • Include Brokers • Include Tasks • Create Policies • Identify and Register Nodes • Load the Microkernel into the Razor Image
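A minimal sketch of the PXE-related dnsmasq settings for this lab; the file location, TFTP root, and bootstrap file names are assumptions that must match your environment:

  # /etc/dnsmasq.d/razor.conf (illustrative)
  enable-tftp
  tftp-root=/var/lib/tftpboot
  # iPXE sets DHCP option 175; chainload undionly.kpxe once, then serve the Razor bootstrap script
  dhcp-match=IPXEBOOT,175
  dhcp-boot=net:IPXEBOOT,bootstrap.ipxe
  dhcp-boot=undionly.kpxe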
Razor Tags • A tag consists of a unique name and a rule • The tag matches a node if evaluating its rule against the node's facts results in true • The syntax for rule expressions is defined in lib/razor/matcher.rb
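A minimal sketch of creating such a tag with the razor client; the tag name and the two-processor rule are illustrative:

  razor create-tag --name small \
    --rule '["=", ["fact", "processorcount"], "2"]'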
A Razor Policy • Policies orchestrate repos, brokers, and tasks to tell Razor what bits to install, where to get the bits, how they should be configured, and how to communicate between a node and PE • Because policies contain a good deal of information, it's handy to save them in a JSON file that you reference when you create the policy • Example: apply the policy to the first 20 nodes with no more than two processors that boot
A Razor Policy • Create a file called policy.json and copy the template text into it (a sample is sketched below) • Edit the options in the policy.json template with information specific to your environment • Apply the policy by executing: razor create-policy --json policy.json
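A minimal sketch of what policy.json might contain for the two-processor example; the repo, task, broker, tag, and password values are assumptions for your environment:

  {
    "name": "centos-for-small",
    "repo": "centos-6.7",
    "task": "centos",
    "broker": "pe",
    "enabled": true,
    "hostname": "host${id}",
    "root_password": "secret",
    "max_count": 20,
    "tags": ["small"]
  }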
Razor Workflow in Action • Step 1 - A fresh Razor with no new nodes • Step 2 - Create a new VM; it retrieves a DHCP IP and loads the microkernel
Create a new Repo • Step 3 - The Razor API server is being contacted.
Razor Workflow in Action • Step 4 - The Razor server shows a new node registered.
Razor Workflow in Action • Step 5 - Razor Tag: create a new tag so the node gets tagged based on its characteristics.
Razor Workflow in Action • Step 6 - Check the characteristics of the newly tagged node. Count = 1 shows that the new node was tagged successfully.
Razor Workflow in Action • Step 7 - Verify the Razor tag by name.
Razor Workflow in Action • Step 8 - Creating a repo for the new VM to be deployed
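A minimal sketch of the create-repo call; the repo name, ISO URL, and task are illustrative:

  razor create-repo --name centos-6.7 \
    --iso-url http://mirror.example.com/CentOS-6.7-x86_64-minimal.iso \
    --task centos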
Razor Workflow in Action • Step 9 - Creating a broker (Puppet Enterprise for configuration management)
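A minimal sketch of the create-broker call, reusing the puppetmaster.cse.com host from the earlier problem statement; the broker name is illustrative:

  razor create-broker --name pe \
    --broker-type puppet-pe \
    --configuration server=puppetmaster.cse.com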
Razor Workflow in Action • Step 10 - Creating a policy for the new node
Razor Workflow in Action • Step 11 - A new node starts loading as per the specified policy.
Razor Workflow in Action • Step 12 - Verify the node2 policy attached through the Puppet master.
Razor Workflow in Action • Step 13 - The new OS comes up, showing that it has been installed through Razor.
Problem Statements - How would you address these? • Deploy version 1.2.3 of my application to all 3000 systems • Deploy version 1.2.5rc2 of my application to all 340 development systems • Restart the Apache service on all the systems in the North America zones • What systems are online right now? • Run Puppet on all systems, ensuring that at most 10 runs are happening at once • Upgrade the Hadoop version from 0.1 to 1.1 on all 2500 nodes
MCollective - Puppet's Orchestration Framework • A framework to build server orchestration or parallel job execution systems • Uses a publish-subscribe middleware philosophy: real-time discovery of network resources using metadata rather than hostnames; a messaging pattern where senders of messages (publishers) do not program the messages to be sent directly to specific receivers (subscribers). Instead, published messages are characterized into classes, without knowledge of what, if any, subscribers there may be. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of what, if any, publishers there are. • Uses a broadcast paradigm for request distribution: all servers get all requests at the same time, requests have filters attached, and only servers matching the filter act on the request. There is no central asset database to go out of sync; the network is the only source of truth.
MCollective - Architecture • An MCollective client can send requests to any number of servers, using a security plugin to encode and sign the request and a connector plugin to publish it. It can also receive replies from servers and format the response data for a user or some other system. Example: the mco command-line client.
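A minimal sketch of how the mco client answers the earlier problem statements; it assumes the service and puppet agent plugins are installed and that nodes expose a zone fact:

  mco ping                                # which systems are online right now?
  mco service httpd restart -F zone=na    # restart Apache only where the zone fact is 'na'
  mco puppet runall 10                    # run Puppet everywhere, at most 10 concurrent runs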