
Puppet Professional Certification – PPT206

Course

Intro Video


Elle Krout

Content Team Lead in Content

Elle is a Course Author at Linux Academy and Cloud Assessments with a focus on DevOps and Linux. She's a SaltStack Certified Engineer and particularly enjoys working with configuration management. Prior to working as a Course Author, she was Linux Academy's technical writer for two years, producing and editing written content; before that, she worked in cloud hosting and infrastructure. Outside of tech, she likes cats, video games, and writing fiction.

Length

16:21:36

Difficulty

Advanced

Videos

65

Hands-on Labs

16

Quizzes/Exams

1

Course Details

This course serves as preparation for learners who seek to pass the 206 Puppet Professional certification exam offered by PuppetLabs. It focuses on Puppet 5.5/Puppet Enterprise 2018.1 Long-Term Support (LTS) and later releases. We will be learning from the ground up, starting with the installation process for Puppet Enterprise (PE) and then working through each component as we would when applying Puppet to an actual production environment. We will cover everything from basic module authoring to setting up multi-master environments that allow us to push code across massive infrastructures and manage our hosts with features such as Code Manager, Bolt, and PuppetDB. Upon completion of this course, learners will be able to tackle any DevOps assignments that must be orchestrated with Puppet or require configuration management.

Syllabus

Introduction

Welcome to the Course!

About This Course

00:04:15

Lesson Description:

Let's cover the objectives of this course! In this video, we'll go over the general structure of our course syllabus and review how we'll navigate the Puppet infrastructure for best results. Our overall course topics include:

- Background information and concepts
- Installation
- Module authoring from start to finish (including tests)
- Node management strategies
- Orchestration options (Puppet orchestrator and Bolt)
- PuppetDB and the Puppet Query Language (PQL)
- Troubleshooting strategies

About the Training Architect

00:00:59

Lesson Description:

Hey there! I'm Elle, and I'll be your instructor. In this short video, I'll introduce myself. I'm looking forward to learning with you!

The Exam Syllabus

00:03:00

Lesson Description:

Puppet's exam syllabus looks a little different from our overall course syllabus. In this video, we'll review the exam syllabus and see how we've taken the overall topics and broken them down into this very course, so we can build our knowledge of Puppet from the ground up.

Puppet Basics

Concepts

Idempotence

00:07:35

Lesson Description:

Before we start using Puppet, there are a few core concepts we need to cover. One of these concepts is idempotency: an idempotent operation enforces changes the first time it is run, but makes no further changes on subsequent runs. Bash scripts, for example, are not idempotent — if we had a script that installed sysstat:

#!/bin/bash

apt install sysstat -y
The apt install sysstat -y line will run every single time we use the script, even if sysstat already exists on the system. Puppet works differently, however, because Puppet is idempotent. Let's take a look at some Puppet code that installs sysstat — don't worry if it doesn't make sense yet:
class sysstat {
  package { 'sysstat':
    ensure => installed,
  }
}
If we were to run this code against a server managed by Puppet initially — or perform a "Puppet run" — it would install the sysstat package:
# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for ellejaclyn2c.mylabserver.com
Info: Applying configuration version '1554750720'
Notice: /Stage[main]/Sysstat/Package[sysstat]/ensure: created
Notice: Applied catalog in 21.95 seconds
Pay attention to the line Notice: /Stage[main]/Sysstat/Package[sysstat]/ensure: created, which signals that we performed the task from our Puppet code and installed sysstat. But what happens if we run this a second time?
# sudo puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Retrieving locales
Info: Loading facts
Info: Caching catalog for ellejaclyn2c.mylabserver.com
Info: Applying configuration version '1554750791'
Notice: Applied catalog in 15.94 seconds
Notice how that line is missing — that's because Puppet understands that sysstat has already been installed, and it doesn't need to try to install it again. This idempotency is a defining trait of all configuration management systems — of which Puppet is one. A configuration management system creates a layer of abstraction between the servers you need to manage and the commands you would otherwise run to get the end state you desire. Instead, we describe the desired outcome of our system using whatever language the configuration management tool requires; this description is then used against every server that requires that specific setup, regardless of operating system or distribution. So that sysstat Puppet code, above? We can use that on CentOS as well as Ubuntu. In Puppet, this is due to something called the "resource abstraction layer," which we'll cover in our next lesson.

Wrap Up

We learned some core traits of configuration management systems, including:

- Configuration management systems are idempotent
- An idempotent operation will make changes the first time it is run, but not on any repeated runs
- In Puppet, our end state descriptions are idempotent
- Ideally, this description can be used to manage servers regardless of OS or distribution

Now that we have an understanding of these concepts, we can build off this to explore how Puppet achieves idempotency, and see what the language we will be using to actually configure our servers looks like.
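For comparison, here is a minimal sketch (assuming an apt-based system, like the script above) of the guard we would have to write ourselves to make that Bash script behave idempotently, something Puppet handles for us automatically:

#!/bin/bash
# Hand-rolled idempotency: only install sysstat if dpkg doesn't already know about it.
# We have to write and maintain this check ourselves; Puppet's resources do it for us.
if ! dpkg -s sysstat >/dev/null 2>&1; then
  apt install sysstat -y
fi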

Resource Abstraction

00:05:20

Lesson Description:

As a configuration management tool, Puppet needs to take our end state descriptions and somehow convert them to work across multiple platforms. To do this, it utilizes something called the resource abstraction layer (RAL), which allows us to write simplified code that Puppet can then take and adapt as needed. After all, think of all the ways our servers can differ: We have different init systems, different package managers, even different commands to add and remove users. Now let's once more consider the sysstat installation code provided earlier:

class sysstat {
  package { 'sysstat':
    ensure => installed,
  }
}
Notice package in the second line: this is our resource type. Part of the resource abstraction layer, a resource type lets us describe an action regardless of the underlying server. So while this example was written on an Ubuntu 18.04 server, we can just as easily use it on other distributions. In fact, we can get a full list of all the package managers the package resource can use by running the following command:
# puppet describe package

...

Providers
---------
    aix, appdmg, apple, apt, aptitude, aptrpm, blastwave, dnf, dpkg, fink,
    freebsd, gem, hpux, macports, nim, openbsd, opkg, pacman, pip, pip3,
    pkg, pkgdmg, pkgin, pkgng, pkgutil, portage, ports, portupgrade,
    puppet_gem, rpm, rug, sun, sunfreeware, tdnf, up2date, urpmi, windows,
    yum, zypper
Puppet is able to perform this level of abstraction through the use of providers, which are underlying Puppet code created to translate our end states into workable commands for the managed servers. So while we'll never have to manually reference, say, yum, in our descriptions, that's still the manager that installs any packages we need on any servers whose package manager is yum — it's just that Puppet is figuring out the commands and running them for us. At the time of this writing, Puppet is on version 6.4, and there are 11 core resource types provided with our Puppet master. An additional 13 are added when we install the puppet-agent on our managed servers. In contrast, the oldest version of Puppet that the Puppet Professional exam tests on is 5.5, which contains 48 built-in resource types, the majority of which were Nagios-specific and have since been deprecated. We'll be touching on most of the primary 24 resource types as we go through this course, but don't worry about the details yet — we're still taking a high-level look at Puppet, although we're about to dive deeper.

Wrap Up

In this lesson, we took a high-level look at how Puppet handles configuration management, learning:

- What the resource abstraction layer is and how it simplifies our work
- How the resource abstraction layer works through the use of resource types
- What a resource type looks like
- How providers exist beneath our resource types to ensure our systems are configured using the correct tools

Now that we know how Puppet works on a basic level, we want to take a step deeper to look at how we'll set up our systems to work with Puppet and how Puppet actually goes about taking our end states and making them a reality.

Resources

- Resource Types List - Puppet 6.4
- Resource Types List - Puppet 5.5
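As a quick illustration of the RAL in action, the puppet resource command uses these same types and providers to inspect the system interactively. A representative run might look like the following (the version string shown is hypothetical and will vary by system):

# puppet resource package sysstat
package { 'sysstat':
  ensure => '12.0.3-2',
}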

Internals

Architecture

00:10:59

Lesson Description:

Puppet uses a "master-agent" setup, where a primary server — known as the Puppet master — stores configuration information that is queried by the agents, which are our managed nodes. This setup is consistent across both versions of Puppet — open source and enterprise — with the caveat that in versions prior to Puppet 6, we had the option of running a standalone Puppet architecture using the Puppet apply application. To break down everything that Puppet contains, we're going to start with the components found in all Puppet setups, then add on our additional Puppet Enterprise features.

Master (Open Source and Enterprise)

All Puppet masters comprise the Puppet Server and the certificate authority. The Puppet Server runs on the Java Virtual Machine and compiles our Puppet code for use on our agents, while the certificate authority lets us manage our master and agent certificates so Puppet is only communicating with hosts we deem safe. We also have the option to add PuppetDB (automatically added in Enterprise), which collects Puppet-generated data and enables some advanced features. PuppetDB can also be accessed by other services that may need Puppet data.

Agent (Open Source and Enterprise)

The Puppet agent application needs to be on all nodes we manage with Puppet — including our masters! The Puppet agent runs as a background service and periodically queries the server for compiled Puppet code to enforce changes with. Puppet agents also contain Facter, which records facts about our system that are then passed to the master. We can use these facts in our Puppet code.

Master of Masters (Puppet Enterprise)

PE adds the concept of a "Master of Masters" — a Puppet master that contains all the additional features of PE. This includes the Puppet Server and certificate authority, as well as role-based access control, orchestration for both multi-tiered applications and on-demand Puppet runs, a node classifier that lets us assign code to managed systems, and a web-based console that lets us manage much of the previously mentioned tooling. The Master of Masters also contains Code Manager and a file sync client, which are used to ensure our Puppet code is consistent across a multi-master setup.

Compile Masters

Using compile masters is entirely optional, depending on the size of your setup. Using only a Master of Masters, PE supports up to 4,000 nodes, which we can increase by 1,500 to 3,000 nodes per additional compile master, depending on our PuppetDB limitations. A compile master contains the Puppet Server to compile code and the file sync server to receive and transmit Puppet code to the correct directory.

Wrap Up

In this lesson, we learned the difference between Puppet Open Source and Puppet Enterprise, and saw how PE builds off open source Puppet to provide a full-featured platform for managing our nodes. We also learned that:

- A Puppet master coordinates the desired end state of a server with the Puppet agent
- The Puppet Server is the component of the Puppet master that compiles our Puppet code for use
- All nodes managed by Puppet run the Puppet agent application, including the Puppet master itself
- Puppet Enterprise uses a "Master of Masters," which contains additional tools that allow for orchestration, code management, role-based access control, and more
- Puppet Enterprise compile masters run similarly to an open source Puppet master, but also contain a file sync server that receives updated Puppet code from Code Manager

Now, this might seem like a lot of information at once — because it very much is. However, we'll get more in-depth with each of these tools as we move through the course, starting in the next lesson by discussing catalog compilation.

The Lifecycle of a Puppet Run

00:08:27

Lesson Description:

At a high level, we understand that Puppet's resource abstraction layer takes our code and translates it so the agent can configure itself, but how does a Puppet run actually work? When a Puppet agent queries a master with a puppet agent -t, it begins a catalog compilation process. First, the agent initiates a handshake wherein certificate information is shared. Assuming the connection to the master is successful, the master then creates a node object — a combination of information about a node, such as environment, parameters, and classes. Once created, the node object is sent to the agent node. The agent then makes any changes to the system it needs to enforce this node object, such as switching environments. Oftentimes this object is blank, and the node is configured based on its own parameters. With the configuration finished, the agent requests a catalog from the master and sends along all facts recorded by Facter for the master to use. The master then evaluates the main manifest, which is a mapping of modules — or end-state configurations — to nodes. Once it matches the agent to a set of modules, the master evaluates any variables and data in those modules using facts about the agent and adds them to the catalog. Any modules referenced by other modules but not in the main manifest are also evaluated and added to the catalog. The catalog is then sent to the agent, which performs any changes described in the modules. If enabled, the agent will send a report to the master when finished.

Wrap Up

We went through all the steps that occur during a Puppet run, including:

- The agent and master complete a handshake and create a connection
- The agent provides the master with facts about its system from Facter
- The master provides the agent with a node object, which it uses to update parameters such as environment, if needed
- The master provides the agent with a compiled catalog
- The agent applies the catalog and returns a report (if reporting is enabled)

And now that we know what Puppet's doing under the hood, we can start to get hands-on and set up our environment.
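If you would like to see the kind of data the agent submits, Facter can be queried directly on any node that has the puppet-agent installed. The fact names below are standard structured facts; the values will differ per node:

$ facter os.family        # the OS family that providers key off of
$ facter networking.ip    # the node's primary IP address
$ facter                  # dump every fact Facter knows about this node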

Installation

Puppet Enterprise

Master of Masters

00:16:29

Lesson Description:

We can now finally get hands-on with Puppet! To start, we have to set up our Puppet environment. In this lesson, we'll be installing Puppet Enterprise 2018.1.7 on an Ubuntu 18.04 server (size large, if you're using a Playground server), setting up what will work as both our sole master and our Master of Masters. Most often, the Master of Masters is set up using a monolithic install, with the Puppet master, PuppetDB, and the web console all on the same server, and that is what we'll be doing here. For a monolithic Master of Masters, we need to follow these system requirement guidelines:

Node volume | Cores | RAM | /opt/ | /var/ | EC2 instance
< 10 | 2 | 6 GB | 20 GB | N/A | m5.large
Up to 4,000 | 16+ | 32 GB+ | 100 GB | 10 GB | c4.4xlarge

This changes, however, if we're using compile masters. In a compile master setup, the Master of Masters needs to meet the following requirements:

Node volume | Cores | RAM | /opt/ | /var/ | EC2 instance
4,000 - 20,000 | 16 | 32 GB | 150 GB | 10 GB | c4.4xlarge

With each compile master meeting these:

Node volume | Cores | RAM | /opt/ | /var/ | EC2 instance
1,500 - 3,000 | 4 | 8 GB | 30 GB | 2 GB | m5.large

Installing Puppet Enterprise

We first want to set up our /etc/hosts file so our FQDN resolves to our loopback address:

$ sudo $EDITOR /etc/hosts

127.0.0.1 USERNAME#c.mylabserver.com localhost
Save and exit. We now want to pull down the Puppet Enterprise installer for PE 2018.1.7:
$ wget --content-disposition 'https://pm.puppetlabs.com/puppet-enterprise/2018.1.7/puppet-enterprise-2018.1.7-ubuntu-18.04-amd64.tar.gz'
Expand the .tar file:
$ tar -xf puppet-enterprise-2018.1.7-ubuntu-18.04-amd64.tar.gz
Move into the new directory:
$ cd puppet-enterprise-2018.1.7-ubuntu-18.04-amd64/
Now we can run the installer:
$ sudo ./puppet-enterprise-installer
From here, we're prompted with the option to choose either the text-mode or graphical-mode installer. Type 1 to choose text-mode. We'll be taken to our default text editor, where we're presented with a default pe.conf file. This is where we'll specify any configurations we have for our Puppet install. At a minimum, we have to provide a password for the admin console and the FQDN for our server. However, this is also where we can supply and custom settings for PuppetDB, the web console, the Puppet master itself, and orchestrator. We can also set up a split installation here, with the Puppet master, console, and PuppetDB all in separate servers. Save and exit the file. When prompted to continue, type n to exit. Let's now re-run the installation command to view the graphical install:
$ sudo ./puppet-enterprise-installer
How to proceed? [1]: 2
After an initial installation process, we're prompted to visit port 3000 of our server over HTTPS. Switch to your web browser and move to the web installer. Click Let's get started. From here, we're taken to a prompt where we're asked to set our admin password, just as we were in the text-mode installer, and any DNS aliases for Puppet. Do so, then click Continue. We're now asked to review the setup settings, and a link to the generated pe.conf file is provided. Click Continue again. Puppet now checks to see if our server meets installation requirements — it should, save for a warning about the size of the /opt directory. Since we're only working with two nodes, we can ignore this warning and Deploy now. The installation process now starts. Puppet Enterprise will take about 8 minutes to install. Once finished, you can log in as admin using the password you provided.

Wrap Up

In this lesson, we grabbed a demo version of Puppet Enterprise 2018.1.7 for us to use with 10 nodes, learning:

- The system requirements for various Puppet setups
- How to use both the text-mode and graphical-mode installer
- What a pe.conf file is and how we can alter our Puppet Enterprise installation by providing one
- How to log in to our Puppet console

Next, we'll set up our agent nodes so our Puppet master can start pulling the strings.
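For reference, a minimal pe.conf for the text-mode installer might look like the snippet below. Only the console password and master FQDN are strictly required, and the values shown here are placeholders:

# pe.conf (HOCON format)
"console_admin_password": "ChangeMePlease"
"puppet_enterprise::puppet_master_host": "USERNAME#c.mylabserver.com"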

Restarting the Master of Masters

00:02:40

Lesson Description:

This lesson is specifically geared toward anyone running their setup for this course on our Cloud Playground. Anyone using an environment where the Master of Masters is turned off while not studying may also find it useful. When restarting the server that hosts your Puppet Enterprise setup, you may find the server itself is accessible before the Puppet Enterprise console is ready. This means that when you visit your mylabserver.com link, you'll be met with a broken page — either an error from your web browser or an Nginx bad gateway error. Most of the time, you just have to wait a couple of minutes. However, if there was an error during startup, you can manually restart your Puppet services by running the following series of commands, in order:

# sudo systemctl stop pe-puppetdb
# sudo systemctl stop pe-puppetserver
# sudo systemctl stop pe-console-services
# sudo systemctl start pe-puppetdb
# sudo systemctl start pe-puppetserver
# sudo systemctl start pe-console-services
Note that you may have to wait a few minutes for everything to spin up when restarting this way, as well.

Puppet Agents

00:12:02

Lesson Description:

With our Master of Masters (henceforth referred to as just the master) set up, all we have left is to add some nodes to manage. We'll be using two different distros for our nodes: CentOS 7 and Ubuntu 18.04. Before we can install the Puppet agent on our nodes, however, we need to update the /etc/hosts file on both servers, mapping the private IP of the master to its FQDN:

PRIVATEIP   USERNAME#c.mylabserver.com
Ubuntu 18.04

Since our Ubuntu server shares its architecture with the master, all we have to do to install the agent service is log in to our PE console and navigate to the Unsigned Certs page. Copy the provided command and run it on the Ubuntu 18.04 server:
# curl -k https://USERNAME#c.mylabserver.com:8140/packages/current/install.bash | sudo bash
When the installation process is complete, refresh the Unsigned Certs page. We're presented with the FQDN of our Ubuntu node, its fingerprint, and the option to accept or reject the certificate. Check the fingerprint against the server itself by running the following on the terminal:
# sudo puppet agent --fingerprint
If it matches, Accept the server.

CentOS 7

To add the agent to servers of a different architecture than our master, we have to do a little work first. From the console, move to the Classification page. This takes us to a list of node groups that Puppet has already created for us. Expand the PE Infrastructure group, and click PE Master. If we check the Matching nodes tab, we can see that our Puppet master is the only node this group contains. We want to give it the ability to provide an installation script for our CentOS 7 server. To do this, we have to go to the Configuration tab. From the Configuration tab, search for pe_repo::platform::el_7_x86_64 in the Add new class input. Click Add class. When we make any changes in our PE console, a blue bar will appear at the bottom of the console asking us to commit our changes. Think of it as just like Git — our master will presumably have access to swaths of our infrastructure, so we want to be certain of any changes we make. Commit 1 change to add the new class to our Puppet master. We now need to force a Puppet run against our master. While we can just run a sudo puppet agent -t from a terminal accessing our master, we can also do this from the console itself. Click on Nodes, then click on the node with your master's FQDN. Select Run Puppet at the top of the screen, then Run, leaving all additional options unchecked. Once the Puppet run is finished, return to the Unsigned certs page and copy the curl command. Run it against your CentOS 7 server:
# curl -k https://USERNAME#c.mylabserver.com:8140/packages/current/install.bash | sudo bash
Then, just as we did before, confirm the fingerprints match:
# sudo puppet agent --fingerprint
And Accept the server. From here, the agent service will automatically request a Puppet run every thirty minutes.

Wrap Up

In this lesson, we learned how to:

- Install the Puppet agent on a server of the same architecture as the master
- Add additional architecture types for installation
- View the certificate fingerprint on a node
- Accept certificates from the PE console

We also learned that the agent process will query our master for a catalog every thirty minutes. In our next lesson, we'll be looking at how to further configure our Puppet setup depending on our needs.
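If you want to confirm that cadence on an agent node, both of the following are safe to run; the run interval defaults to 30 minutes unless it has been overridden:

# sudo systemctl status puppet
# sudo puppet config print runinterval --section agent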

Configuration: All Nodes

00:14:24

Lesson Description:

Use an agent node for this lesson.

Although we could jump right into Puppet and get started now, we need to take a detour into our /etc/puppetlabs directory to look at our configuration options. This directory contains subdirectories for the majority of our Puppet services, and within these directories are conf.d files that contain HOCON-formatted settings. We won't need to work with most of these settings; however, the /etc/puppetlabs/puppet directory is a special case:

$ cd /etc/puppetlabs/puppet/
$ ls
auth.conf      classifier.yaml  hiera.yaml            puppet.conf    routes.yaml
autosign.conf  fileserver.conf  hiera.yaml.dpkg-dist  puppetdb.conf  ssl
This contains our puppet.conf file, which is the primary file used for configuring Puppet. Other files in this directory are also important — the puppetdb.conf file, for example — but they are kept separate either due to complicated data structures or limitations of the puppet.conf file itself. The puppet.conf file has "about 200" options we can set. Puppet has compiled a list of the settings that will be most important to "90%" of users (barring some self-explanatory ones, like those that set port numbers). We can extrapolate that these settings will be the ones most frequently used on the exam. Let's go ahead and open our puppet.conf file and take a look. There are some default settings already provided: server, which is the FQDN of our Puppet master; and certname, which is the name of the Puppet agent itself, as Puppet knows it. The certname can be changed to anything; it does not need to be the FQDN. Beyond this, however, there are about 200 additional settings for this file. Puppet specifically calls out a few of them, and these "core" settings are the ones we're going to review:

- environment: Sets the default environment for the server; defaults to production
- sourceaddress: Used in multi-homed environments to define which address should be used for Puppet
- runinterval: How often the agent requests a catalog from the master; defaults to 30m
- waitforcert: Defines whether or not the agent will attempt to reconnect should there be no response from the master

We also have some options specifically related to the agent's behavior during a Puppet run:

- noop: Sets the Puppet agent so it only checks whether changes to the server would be made, but does not make the changes
- priority: Sets the "nice"ness of the service to prevent taking resources away from other tasks
- report: Sets whether the agent returns a report to the master
- tags: Lets us define which Puppet classes are run by defining which tags they should have
- trace: Prints a stack trace during a Puppet run
- profile: Enables experimental performance profiling
- graph: Creates a .dot graph file
- show_diff: Saves a diff of any files that change during the run
- usecacheonfailure: Determines what the agent does should a Puppet run fail; if set to true, the agent falls back to the last successful catalog
- ignoreschedules: Ignores the use of the schedule parameter within Puppet code
- prerun_command: Runs a command before the Puppet run
- postrun_command: Runs a command after the Puppet run

Wrap Up

This settings-heavy lesson took us through the basic structure of Puppet's configuration files and taught us:

- The default settings included in every puppet.conf
- How to assign our node to an environment
- How to overwrite certain configuration set in our Puppet code
- Ways to change the behavior of the Puppet agent during a Puppet run
- A set of debug settings

In our next lesson, we'll take a look at master-specific configuration; then it's back to some more practical tasks, as we create an administrative user for ourselves and set up some RBAC rules.

References

- Configuration Reference (Puppet 5.5)
- Configuration Reference (Latest)
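To tie a few of these together, a hypothetical agent-side puppet.conf might look like the following. The certname value is a placeholder; only server and certname come from the default install, and the [agent] settings are shown purely as examples:

[main]
# FQDN of the Puppet master
server = USERNAME#c.mylabserver.com
# how this node identifies itself; it does not have to be the FQDN
certname = agent1.mylabserver.com

[agent]
# request a catalog every 30 minutes (the default)
runinterval = 30m
# report what would change without enforcing anything
noop = true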

Configuration: Master

00:09:44

Lesson Description:

Open up the puppet.conf file on the master server for this lesson.

While our nodes' puppet.conf files are short and to the point, our master's has a few more settings, and it also makes use of the one line we ignored in the previous lesson: the [main] configuration grouping. On our Puppet master, we have two more groupings: [agent] and [master]. [agent] is where we place our values related to the Puppet agent — meaning anything we had under [main] in our last lesson could also go under [agent]. For this file, we're not concerned with our [agent] settings, however. Instead, we want to take a look at our new [main] settings and our [master] options.

[main]

We've seen some of these options before — server and certname are the same here as they are on our other nodes. Let's take a look at the other settings:

- user and group: These set the user and group that Puppet works as
- environment_timeout: This disables the caching of environment information (so Puppet checks the environment at every Puppet run)
- module_groups: Do not change this setting; it references the modules PE needs from the Puppet Forge to function

Puppet also lists the following settings as important to know for the [main] section of our master configuration:

- dns_alt_names: Lets us set alternate names for our Puppet master; also where we set up our DNS information if we're using a round-robin DNS setup
- environmentpath: Sets the location of directory environments; defaults to /etc/puppetlabs/code/environments
- basemodulepath: Sets the location of Puppet modules available to all environments; defaults to /etc/puppetlabs/code/modules

[master]

Beyond what's already configured for us, we'll only ever really need to touch this section when we're changing something related to additional Puppet features or extensions. Let's first consider the current settings, however:

- node_terminus: Sets which node terminus is used during a Puppet run; defaults to plain in open source Puppet and classifier in Puppet Enterprise
- storeconfigs and storeconfigs_backend: Let Puppet store data about any nodes and set which backend the data is sent to
- reports: Sets the report handler; this is different from report
- always_retry_plugins: When set to false, resource types and features that fail to load will not be retried
- disable_i18n: Turns off all translations of Puppet and module log messages

Our additional settings:

- catalog_terminus: Enables an optional static compiler, using up CPU cycles in exchange for better catalog compilation performance
- ca: Determines whether the master acts as the certificate authority; there can only be one CA
- ca_ttl: Sets how long our certificates remain valid; defaults to five years (5y)
- autosign: Sets the location of our autosigning rules

Wrap Up

Wrapping up our configuration lessons for this section, we took a look at our master's puppet.conf, learning about:

- How Puppet groups its configuration settings
- The configuration settings included in a basic PE install
- Additional options we can use to further configure our Puppet master depending on our needs

We'll be getting hands-on again next, this time returning to the Puppet Enterprise console to take a look at our role-based access control options.

References

- Configuration Reference (Puppet 5.5)
- Configuration Reference (Latest)
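As a sketch of how a handful of these could appear on a master (the values are illustrative, not taken from our install):

[main]
# alternate DNS names agents may use to reach this master
dns_alt_names = puppet,USERNAME#c.mylabserver.com
# where directory environments live (the default shown)
environmentpath = /etc/puppetlabs/code/environments

[master]
# point Puppet at a file of autosigning rules
autosign = /etc/puppetlabs/puppet/autosign.conf
# certificates issued by this CA stay valid for five years
ca_ttl = 5y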

RBAC

00:09:09

Lesson Description:

Puppet Enterprise's role-based access control (RBAC) lets us implement the "principle of least privilege" on our Puppet infrastructure, ensuring our users only have access to exactly what they need to succeed and nothing more. To do this, Puppet lets us create user roles that we can assign to specific users — either new users or ones pulled in from external directories.

Add a Role

Log in to the Puppet Enterprise console and expand the Access control menu item. Click on User roles. Instead of looking at an existing role, we're going to create one. Let's assume our place of employment has a tiered systems administration system where tier-1 admins are granted the ability to perform only the most basic of tasks, such as turning a service off and on again. We're going to set up a role for those users. Give the role a Name of T1 Admin and the description Can run basic admin tasks. Click Add Role. We're taken to a new page — specifically, the Member users tab. We don't have anything to do here, so switch to Permissions. We're given the ability to add abilities that our T1 admins can perform. From the Add a permission dropdown, select Tasks. There is no secondary option we can select under this, so leave Run Tasks as-is. We do need to specify what tasks our user can perform, however, so in the Enter an instance box type service. Next, under Permitted nodes, select Node group and set the node group to Production environment (production). Add the permission.

Add a User

We now want to create a user — but this one will be for us, so we're not logging in as admin all the time. Go to the Users page, then add your Full name and a desired Login name. Click Add local user. Click on your user, then click Generate password reset so we can set our user's password. Follow the link in your browser and reset your password.

Add the User to a Role

To add our user to the correct role, we have to move back to the User roles page. From there, click on Administrators. Select your user from the dropdown list and Add user. Confirm the change when asked to do so. Log out as the admin user and log back in as your own.

External Directories

Finally, RBAC offers us one more option we should at least look at: the ability to import an external directory, such as LDAP. If we go to the External directory page, we can see that to set this up, we would supply our directory information and credentials, and then set our querying information related to our existing setup.

Wrap Up

In tech, we frequently hear about the "principle of least privilege," where we ensure our users can only access exactly what they need in our systems. We learned how to implement this principle in our Puppet setup by:

- Creating a new user role through Puppet's RBAC system
- Creating a new Puppet user
- Adding a user to a role
- Viewing how to connect Puppet with an existing external directory

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Puppet Open Source

Puppet Server

00:09:31

Lesson Description:

Unlike our Puppet Enterprise master, the open source Puppet master contains substantially fewer "add-on" services, such as Code Manager and RBAC. An open source Puppet master instead features the Puppet Server, MCollective on versions 5.5 and below, and the certificate authority. We also have the option to add a resource-limited version of PuppetDB. Due to the smaller set of features, open source Puppet also has smaller system requirements, and we'll be able to get it up and running on a micro-sized Ubuntu 18.04 Playground server. Don't worry if you don't want to use up more Playground units for this course, though — we can remove these open source servers after this section, and I also provide a Vagrantfile for those who would prefer to temporarily spin up a Vagrant environment. As for open source Puppet's system requirements, they are as follows:

Nodes | Cores | RAM
< 10 | 2 | 1 GB
< 1000 | 4 | 4 GB

Now let's see how to get our open source setup up and running! First, on our Playground servers, we need to set our /etc/hosts file so our FQDN is mapped to our loopback address. We also want to make sure our server will respond to the alias puppet:

$ sudo $EDITOR /etc/hosts

127.0.0.1  USERNAME#c.mylabserver.com puppet localhost
Save and exit the file. Next, we need to add the Puppet 5 Platform repositories:
$ wget https://apt.puppetlabs.com/puppet5-release-bionic.deb
$ sudo dpkg -i puppet5-release-bionic.deb
$ sudo apt update
Then install the puppetserver package:
$ sudo apt install puppetserver
Before we start the Puppet Server, however, we need to make some changes to its init file, ensuring it will work on our resource-limited micro server. Open the file:
$ sudo $EDITOR /etc/default/puppetserver
Then look for the JAVA_ARGS line. By default, Puppet is set to use up to 2 GB RAM. We're going to drop this down to the absolute minimum, 512 MB:
JAVA_ARGS="-Xms512m -Xmx512m -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"
Save and exit the file. We can now start and enable the Puppet Server:
$ sudo systemctl start puppetserver
$ sudo systemctl enable puppetserver
Additionally, one last thing we want to do to ensure we can use our various Puppet platform commands as a superuser is add the location of our Puppet binaries to the secure path of the server:
$ sudo $EDITOR /etc/sudoers.d/extra

Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/opt/puppetlabs/bin
Save and exit. To test that everything is working, refresh the session and try to run a puppet command:
$ bash
$ sudo puppet agent --fingerprint
Wrap Up

In this lesson, we installed the open source version of Puppet by:

- Adding the Puppet 5 repositories to our host
- Installing the Puppet Server
- Configuring the Puppet Server's startup parameters
- Ensuring the Puppet Server works and can start automatically at boot

Vagrantfile

Master credentials:

- Hostname: puppetserver
- "FQDN": puppetserver.mylabserver.com
- Private IP: 192.168.50.10

Node credentials:

- Hostname: node
- "FQDN": node1.mylabserver.com
- Private IP: 192.168.50.20
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  config.vm.define "puppetserver" do |puppetserver|
    puppetserver.vm.box = "bento/ubuntu-18.04"
    puppetserver.vm.hostname = "puppetserver.mylabserver.com"
    puppetserver.vm.network "private_network", ip: "192.168.50.10"
  end

  config.vm.define "node" do |node|
    node.vm.box = "bento/centos-7.3"
    node.vm.hostname = "node1.mylabserver.com"
    node.vm.network "private_network", ip: "192.168.50.20"
  end

end

Puppet Agents

00:04:40

Lesson Description:

While open source Puppet doesn't provide us with a single-line installer like Puppet Enterprise does, the process is still fairly short and simple. Go ahead and start up a CentOS 7 server. We first need to update our /etc/hosts file so the private IP of our master is mapped to the master's FQDN and the alias puppet:

$ sudo $EDITOR /etc/hosts

PRIVATE_IP_MASTER   puppet USERNAME#c.mylabserver.com
Save and exit the file, then add the Puppet 5 Platform repository by running:
$ sudo rpm -Uvh https://yum.puppet.com/puppet5-release-el-7.noarch.rpm
We can now install the puppet-agent package:
$ sudo yum install puppet-agent
And get it up and running with:
$ sudo systemctl start puppet
$ sudo systemctl enable puppet
That's all we have to do to prepare our node to work under our Puppet master! We do still want to update our secure path, however:
$ sudo $EDITOR /etc/sudoers.d/extra

Defaults    secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/opt/puppetlabs/bin
And check that it works by noting our fingerprint:
$ bash
$ sudo puppet agent --fingerprint
Wrap Up

This short and sweet lesson showed us how to install the Puppet agent by:

- Adding the Puppet 5 Platform repository
- Installing the puppet-agent package
- Starting and enabling the service

Certificate Authority

00:11:58

Lesson Description:

Now that we know what the fingerprint of our node is, we need to figure out how to accept it without the presence of the Puppet Enterprise console. For open source, certificate management is done through the use of the puppetserver ca command — which can also be used on our Enterprise setup. To see our full list of certs, let's run:

$ sudo puppetserver ca list --all
Two certificates are returned: one for our master, and one for our node. The master's entry also has a + to the left, indicating it's been signed — a process that happens automatically upon starting the Puppet Server itself. Let's go ahead and accept our node into our Puppet infrastructure:
$ sudo puppetserver ca sign --certname USERNAME#c.mylabserver.com
We also have the option to use the --all flag to accept all pending certificate requests. There's more to the puppetserver ca command than just viewing and accepting certs, however. We also have the following options:

- puppetserver ca setup: Configures a root and intermediate signing authority
  - --subject-alt-names: Sets aliases for the master
  - --certname: Sets the certificate name for the master
  - --ca-name: Sets a common name for the signing certificate
- puppetserver ca generate: Creates a new cert signed by the intermediate authority; generally done offline to create certs for restricted nodes
  - --ca-client: Lets us create the certificate offline
  - --subject-alt-names: Sets aliases for the cert
- puppetserver ca import: Uses an existing certificate authority
  - --crl-chain: The external CA's cert chain
  - --cert-bundle: Bundle information for the CA
  - --private-key: Private key for the CA
  - --certname: Sets the certificate name for the CA
  - --subject-alt-names: Sets aliases for the CA
- puppetserver ca revoke: Removes access from a server
  - --certname: The certificate(s) to revoke
- puppetserver ca clean: Revokes the cert and also removes all associated files
  - --certname: The certificate(s) to clean

All of these commands also have two universal flags: --help, which displays information about the command and its options; and --config, which lets us define a custom puppet.conf file.

Autosigning

If we do not wish to manually approve every server that comes in, we also have the option to use an autosign.conf file, which contains a list of approved hosts whose certs can be automatically signed. These entries can contain wildcards, but only at the subdomain level. This file can also be used with Puppet Enterprise. Let's create an autosign.conf file in our configuration directory:
$ sudo $EDITOR /etc/puppetlabs/puppet/autosign.conf
And set it so our lab servers will be automatically accepted:
*.mylabserver.com
Save and exit the file.

Wrap Up

Useful with both open source and enterprise versions of Puppet, the puppetserver ca command lets us:

- Create a certificate authority for creating connections between our master and nodes
- Import an existing certificate authority into Puppet
- Sign, revoke, and completely remove certificates

We also learned how to enable and use autosigning to accept certificate requests from known servers. It is now safe to remove your open source Puppet servers; we will not use them again in this course.
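For later reference, revoking a decommissioned node's access uses the same puppetserver ca tool covered above; the certname here is hypothetical:

$ sudo puppetserver ca revoke --certname node1.mylabserver.com
$ sudo puppetserver ca clean --certname node1.mylabserver.com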

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Modules and Manifests

The Puppet Language

The Puppet Forge

00:15:20

Lesson Description:

To configure any of our nodes, we need to use a Puppet module — a module being a collection of code that configures a single specific overall item, such as Apache, nginx, or NTP. The Puppet domain-specific language used to write these modules, however, has a lot of moving parts, so to get a sense of what we'll be working with and the best practices we should follow, we're going to take a look at an existing module put together by PuppetLabs on the Puppet Forge. The Puppet Forge is a PuppetLabs-hosted repository of Puppet modules, both user-submitted and provided by PuppetLabs itself. Located at forge.puppetlabs.com, the Puppet Forge lets us search for any existing modules we may want to use. These modules have a hierarchy, however, and not every module on the Forge is created equal. To make sure we're getting a well-written and properly vetted module, we want to pay attention to what tags or labels that module has. There are five labels we should be aware of:

- Supported: A supported module is one written by PuppetLabs that works on both enterprise and open source versions of Puppet. These modules are actively maintained and stringently tested against Puppet Enterprise to ensure the highest quality of module.
- Partner: A partner module is one written by a "Puppet partner" — currently, these are all modules written by the organization that produces the tool being configured. In other words, the Sensu module is considered a partner module because while it is not written by PuppetLabs, it is written by Sensu. These modules are also rigorously tested against Puppet Enterprise, actively maintained, and held to the highest standards.
- Approved: An approved module is a user-submitted module that's "well-written" and "reliable". These modules follow best practices and are actively maintained, although not as stringently tested as supported and partner modules.
- Tasks: Modules with the tasks tag provide task support — in other words, they supply configuration options that can be run outside of the configuration management system.
- PDK: Any module that supports validation testing through the Puppet Development Kit uses the PDK label.

Let's now go ahead and search for the PuppetLabs NTP module. It should be the first result returned. We can see it has both supported and PDK tags, and if we click it, we're taken to an overview of the module itself, including what versions of Puppet it supports, supported operating systems, ratings, a README, a changelog, dependency information, and more. We can also click on the Project URL link to be taken directly to the module's GitHub repository, where we can review the code itself. To add a module from the Forge, we're offered two options: We can add it to our Puppetfile, which is something we'll address in a future lesson, or we can manually add it through the use of the puppet module command. Let's SSH into our master and do just that:

$ sudo puppet module install puppetlabs-ntp --version 7.4.0
Now, let's consider the output of this command:
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
Notice: Downloading from https://forgeapi.puppet.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/production/modules
└─┬ puppetlabs-ntp (v7.4.0)
  └── puppetlabs-stdlib (v5.2.0)
When we use the puppet module install command, the module is pulled into the production environment found under the $codedir, with the $codedir being /etc/puppetlabs/code and the full module path within the production environment being /etc/puppetlabs/code/environments/production/modules. We can also see that when we install a module, any modules it depends upon are also installed.

Wrap Up

We took a look at how to use the Puppet Forge, learning:

- What each tag in the Forge entails for a module
- How to find information about a desired module
- How to download a module from the Forge

Next, we'll be taking this NTP module and using it to break down each component of a module, then take a look at the code itself to see how it reflects Puppet best practices.
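Returning to the install above, we can also double-check what landed in that environment by listing the installed modules; the output shown is representative:

$ sudo puppet module list
/etc/puppetlabs/code/environments/production/modules
├── puppetlabs-ntp (v7.4.0)
└── puppetlabs-stdlib (v5.2.0)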

Module Structure

00:11:06

Lesson Description:

Every module we write in Puppet will be composed of the same basic parts. To get a look at these overall components, let's move into the directory for our ntp module and take a look at what's in it:

$ cd /etc/puppetlabs/code/environments/production/modules/ntp
$ ls
There are a lot of files here! Some of them we've actually already seen — things like the README.md and CHANGELOG.md files are pulled into the description of the module on the Puppet Forge itself. Others are information files we'll only have to use in certain circumstances; if we wanted to contribute to the module itself, for example, we'd take a look at the requirements within the CONTRIBUTING.md file. But what really highlights the different parts of a complete module are the directories themselves — data, manifests, spec, etc. These all store different configurations and data our module needs to run. Let's go ahead and break these down:

- data and hiera.yaml: Hiera is Puppet's data store, with the hiera.yaml file being where we store our module-specific configuration and the data directory being where we store our actual data
- examples: This directory stores examples of the module's use; for ntp, this shows us how to call the class in our main manifest
- locales and readmes: Store localization data for other languages
- manifests: Our configurations are stored in this directory; covered in detail in the next lesson
- spec: Stores integration tests for our Puppet code written in RSpec
- templates and files: Store files to be distributed to nodes; templates specifically stores template files that need to be rendered before being added to a node
- types: Stores aliases for custom resource types
- tasks: Stores tasks that can be used outside of configuration management
- lib: Stores plugins, including things like additional resource types and custom facts
- facts.d: Stores additional facts

Note that we do not always need to have every single one of these files — the ntp module we have certainly does not! Instead, when creating a module, we'll want to keep only the directories that actually store working parts of our module, giving anyone who uses it an overview of its features and needs.

Wrap Up

We went through and looked at how an entire module is structured, discovering:

- Where we store and configure our Hiera data
- Where to store files — static or otherwise — on our server
- The function of other directory structures, such as examples, spec, and types

In our next lesson, we'll move into the manifests directory and start breaking down some Puppet code.

Basics

00:07:12

Lesson Description:

Picking up where we left off, let's now look into the manifests directory we glossed over in the last video:

$ cd /etc/puppetlabs/code/environments/production/modules/ntp/manifests
$ ls
config.pp  init.pp  install.pp  service.pp
Here we have a number of manifests — or .pp files that store our Puppet code. Within each manifest is a named class. Let's look at the install.pp file as an example:
$ cat install.pp

class ntp::install {

  if $ntp::package_manage {

    package { $ntp::package_name:
      ensure => $ntp::package_ensure,
    }

  }

}
Everything in the class ntp::install block is part of our class. A class should be set up to configure one whole portion of our module. In this install.pp example, the class exists to make sure the appropriate NTP package is installed. Within our class, we can also have any number of resources. A resource is the block of code that contains and sets parameters for the resource type, which defines what kind of configuration we're setting. In other words, given the install.pp manifest, this is a class:
class ntp::install {

  if $ntp::package_manage {

    package { $ntp::package_name:
      ensure => $ntp::package_ensure,
    }

  }

}
This is a resource:
package { $ntp::package_name:
  ensure => $ntp::package_ensure,
}
And this is a resource type:
package
We can also have multiple resources in a single class, as evidenced by our config.pp file. Let's cat that out now:
$ cat config.pp
Notice the numerous instances of file resources, all under our ntp::config class. Notice, too, how we can use conditional statements within our code to further increase its versatility. Here we can do things like use Hiera to determine whether a resource runs at all, ensure pieces of code only run on certain operating system types, and avoid templating conflicts by checking for duplicate template files. (A small standalone example follows the wrap-up below.)

Wrap Up

So at this point, we know that:

- A manifest is the .pp file that stores a defined part of our module, such as configuring package management
- A class is a named container for our actual configuration descriptions
- A resource is the actual configuration instruction, created through the use of a resource type
- Any piece of Puppet code we write can be manipulated with conditional statements

Now we just need to learn how to format and otherwise style this code when we write it, and then we can start working with modules of our own!
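To make that concrete outside of the ntp module, here is a small hypothetical class that pairs two resources (the sysstat package from earlier and its Debian configuration file) and uses a conditional to keep the platform-specific piece off other systems:

class sysstat_config {

  # first resource: make sure the package is present
  package { 'sysstat':
    ensure => installed,
  }

  # a conditional keeps this Debian-specific file off other platforms
  if $facts['os']['family'] == 'Debian' {
    # second resource: manage the package's configuration file
    file { '/etc/default/sysstat':
      ensure  => file,
      content => "ENABLED=\"true\"\n",
      require => Package['sysstat'],
    }
  }

}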

Style Guide

00:12:33

Lesson Description:

Now that we know the parts of a manifest, let's take a look at how we structure these components so we craft well-written, readable Puppet code. PuppetLabs stresses three core principles of writing code in its documentation: readability, scope and simplicity, and the idea that modules are software. This means Puppet code should be easy to read and break down — kind of like what we've been doing thus far! A person — yourself or another — should be able to return to this code in months and understand the purpose of everything and how it's broken down. This is done by remembering that writing Puppet code is no different than writing any other software, and by ensuring each class is restricted to one purpose. For example, if we ask ourselves "what is the purpose of our apache.pp manifest," and we answer "to install and start Apache," then we need to refactor our code into two separate files — one to install and one to start.

The Basics

Open up the service.pp file from the ntp module:

$ cd /etc/puppetlabs/code/environments/production/modules/ntp/manifests
$ $EDITOR service.pp

# @summary
#   This class handles the ntp service.
#
# @api private
#
class ntp::service {

  if $ntp::service_manage == true {
    service { 'ntp':
      ensure     => $ntp::service_ensure,
      enable     => $ntp::service_enable,
      name       => $ntp::service_name,
      provider   => $ntp::service_provider,
      hasstatus  => true,
      hasrestart => true,
    }
  }

}
If using vim, run :set list and :set listchars+=space:. to make whitespace visible.

This file houses a single class, with a single resource, using a single resource type. We can't get much less complicated than this without creating a file that wouldn't look like one used in the wild. We can see it begins with a series of documenting comments — considered a best practice — then immediately declares our ntp::service class. Let's look at how the class ntp::service { line is formatted. Whitespace is very important in Puppet, and there always needs to be a space between the name of the class and the opening brace. Similarly, we also need a space between the resource type (service) and the opening brace, as well as between the brace and the resource name ('ntp'). The colon should be directly next to the resource name, however. Additionally, we need to be sure there are no trailing spaces on any of our lines, and all attributes end in commas. To set our attributes for a resource, we use hashrockets (=>). These must always line up.

Quotes

Puppet code should use single quotes around strings, but only if the string does not contain a variable, escape character, or single quote already. In those cases, double quotes are used. Variables should also be enclosed in braces.

init.pp Requirements

Exit the service.pp file and switch to the init.pp manifest:
$ $EDITOR init.pp
We're met with a long list of comments documenting each parameter in the module — a best practice, although not what we're here to look at. Each parameter is also declared with its data type (Boolean, String, etc.). After this, our classes are pulled in, then ordered using Class[''] references. To ensure our overall classes are run in order, we use chaining arrows. The -> ordering arrow ensures the class to its right is run after the class before it, while the ~> notifying arrow only triggers the class to its right if the prior class makes actual changes to the receiving system.

Wrap Up

We got down to the details in this lesson, looking at the requirements Puppet has for things like:

- Tabs or spaces (two spaces)
- Whitespace usage (no trailing spaces)
- Quotes (always single quotes, unless variables or strings with single quotes are involved)

We've also spent a lot of time looking at the ntp module we downloaded from the Forge, but we haven't actually run it yet. In our next lesson, we'll add the last piece of our configuration management setup by finding out how we can assign modules to nodes.
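For reference, the ordering described above looks roughly like this in the ntp module's init.pp (simplified here to just the containment and chaining):

# the component classes are contained by the main class...
contain ntp::install
contain ntp::config
contain ntp::service

# ...then ordered: install before config, and a config change notifies the service
Class['ntp::install']
-> Class['ntp::config']
~> Class['ntp::service']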

The Main Manifest

00:09:42

Lesson Description:

We've now spent quite a bit of time looking at the ntp module in an effort to gain a greater understanding of writing Puppet code. But what about actually using it? To use any module, it's not enough to simply write a bunch of classes and assign them to an init file. We need to assign our overall module to whichever nodes it should configure, as well. This can be done in two ways — via a Puppetfile or using a main manifest — but we're going to focus exclusively on the main manifest option right now, which is the only one that works with both open source and enterprise versions of Puppet. The main manifest is a manifests directory much like the one we saw in our ntp module itself, except that instead of holding our configuration code, it contains our class-to-node mappings. Each environment has its own main manifest, located in the manifests directory under the environment's root directory. So for the production environment that our ntp module is in, we would go to:

$ cd /etc/puppetlabs/code/environments/production/
And then look for the manifests directory there. However, before we look at our manifests folder directly, let's also consider the environment.conf file:
$ cat environment.conf
Notice how the entire document is commented out, although there are options for changing our main manifest location. Since we don't want to change it, however, let's just move into the existing one and take a look:
$ cd manifests
$ ls
site.pp
Much like our ntp module's manifest directory, this contains a .pp file. Let's go ahead and open it:
$ sudo $EDITOR site.pp
Like with our existing environment.conf file, this contains quite a bit of commented documentation explaining how the file works — including that we can also set global resource defaults, such as the File { backup => false } configuration, which tells Puppet not to store backups of any managed files in a filebucket. What's most interesting to us is the node default section. This is where we can assign any module we want to work across all our nodes — we can also substitute the default option with any of our node names or use a regular expression to capture multiple nodes at once. Now, to add our ntp module, we're presented with a few choices. We can add it without making any adjustments:
node default {
  include ntp
}
Or we can set additional parameters and otherwise overwrite parts of the module itself:
node default {

  notify { 'enduser-before': }
  notify { 'enduser-after': }

  class { 'ntp':
    require => Notify['enduser-before'],
    before  => Notify['enduser-after'],
  }

}
Or:
class { 'ntp':
  servers => [ 'ntp1.corp.com', 'ntp2.corp.com' ],
}
Since we don't need to make any node-specific alterations, however, we'll just go with the include ntp option:
node default {
  include ntp
}
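As an aside, and purely as a hedged sketch with made-up hostnames, node definitions aren't limited to default: we can list several quoted node names in one definition, or match a group of them with a regular expression:

node 'web01.mylabserver.com', 'web02.mylabserver.com' {
  include ntp
}

node /^db\d+\.mylabserver\.com$/ {
  include ntp
}

We'll stick with node default here.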
Save and exit the file. We can now apply the module with a:
$ sudo puppet agent -t
Wrap Up In this lesson, we assigned our NTP module to our nodes through the use of our production environment's main manifest, learning that: each environment has its own main manifest where we assign classes to nodes; there are multiple ways to assign classes; and we can overwrite parts of our class in the main manifest if we need to.

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Module Authoring: Apache

GitHub and the PDK

00:09:44

Lesson Description:

We're almost ready to begin creating our module! In fact, we will create part of our module in this very lesson! But first, we need to do two things: Install the Puppet Development Kit and create a GitHub repository for our module. Let's get started by installing the PDK first. Note that we have to add the Puppet 5 Platform repository because Puppet Enterprise uses separate, enterprise repos for install; the PDK is not housed on these.

$ wget https://apt.puppetlabs.com/puppet5-release-bionic.deb
$ sudo dpkg -i puppet5-release-bionic.deb
$ sudo apt update
$ sudo apt install pdk
Now, while we wait for the PDK to install, let's open GitHub in our browsers. Log in, then create a new public repo. We'll be creating an Apache module in the next few lessons, so a name signifying that's what the repo relates to would be best — I'll be naming mine puppet-apache. Once the repo has been created, return to the command line. To generate a directory structure for our module, we need to first move into the directory we want the module in:
$ cd /etc/puppetlabs/code/environments/production/modules
Then run:
 $ sudo pdk new module
We're now prompted with a series of questions about our module. Answer as follows: set the name to apache; fill in your Puppet Forge username (if you do not have one, leave the default); add your name as the module author; leave the license as the default; and unselect Windows, leaving RedHat based Linux and Debian based Linux selected. Type y to create the module skeleton. Now let's see what we created:
$ cd apache
$ ls
CHANGELOG.md  README.md     data      hiera.yaml     spec
Gemfile       Rakefile      examples  manifests      tasks
Gemfile.lock  appveyor.yml  files     metadata.json  templates
Notice how we have a lot of the same structures as the ntp module — although whether or not we need all these is yet to be seen. Let's go ahead and commit this to Git:
$ sudo git init
$ sudo git add .
$ sudo git commit -am "Module skeleton generated"
$ sudo git remote add origin https://github.com/elle-la/puppet-apache.git
$ sudo git push -u origin master
At this point, we want to mention another difference between this version of Puppet and older ones: the use of the PDK for module creation. In prior versions of Puppet, the puppet module generate command was used for this process. However, despite the PDK being the suggested method for creating modules in Puppet 5, I do want to note that Puppet's public practice test lists a question in which the puppet module generate command is an answer. The command does still work, and it asks a different series of questions than the PDK does:
$ sudo puppet module generate test-test
Warning: `puppet module generate` is deprecated and will be removed in a future release. This action has been replaced by Puppet Development Kit. For more information visit https://puppet.com/docs/pdk/latest/pdk.html.
   (location: /opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/face/module/generate.rb:142:in `generate')
We need to create a metadata.json file for this module.  Please answer the
    following questions; if the question is not applicable to this module, feel free
    to leave it blank.

Puppet uses Semantic Versioning (semver.org) to version modules.
What version is this module?  [0.1.0]
-->

Who wrote this module?  [test]
-->

What license does this module code fall under?  [Apache-2.0]
-->

How would you describe this module in a single sentence?
-->

Where is this module's source code repository?
-->

Where can others go to learn more about this module?
-->

Where can others go to file issues about this module?
-->

----------------------------------------
{
  "name": "test-test",
  "version": "0.1.0",
  "author": "test",
  "summary": null,
  "license": "Apache-2.0",
  "source": "",
  "project_page": null,
  "issues_url": null,
  "dependencies": [
    {
      "name": "puppetlabs-stdlib",
      "version_requirement": ">= 1.0.0"
    }
  ],
  "data_provider": null
}
----------------------------------------

About to generate this metadata; continue? [n/Y]
--> y
I do suggest running the command to generate a module yourself, just to be familiar with it. Since the PDK is the recommended method of module creation, however, that is the method we will use for the rest of this course. Wrap Up In this lesson, we: created our GitHub repo; generated the module skeleton for our future apache module; and committed this skeleton to Git.

Our First Class

00:14:08

Lesson Description:

We're going to start simple with our first written class by creating a class and manifest that will install the httpd package on a CentOS 7 node managed by Puppet. To begin, we need to first move into our apache module directory:

$ cd /etc/puppetlabs/code/environments/production/modules/apache
To create a class skeleton, we can use the PDK, which will generate a basic manifest for us:
$ sudo pdk new class install
Note that install is the name of our class; the PDK automatically namespaces it under our module, giving us apache::install. Also note that when adding a class, we must be in the root directory of our module. Had we been in the manifests directory directly, this command would have failed. Now let's take a look at what the PDK generated for us:
$ sudo $EDITOR manifests/install.pp

# @summary A short summary of the purpose of this class
#
# A description of what this class does
#
# @example
#   include apache::install
class apache::install {
}
We want to begin by updating the commented area. In this case, since we can describe our class in a few simple words, we can get rid of the longer description section entirely. We can also remove the example, since we don't expect this class to be used on its own:
# @summary
#   Installs the base Apache package.
#
Now let's look at the class itself. We're provided with the class definition (class apache::install {), but nothing else. Since we hope to manage our Apache package here, we want to add a resource declaration for the package resource type:
class apache::install {
  package { 'httpd':
  }
}
Notice that for the resource title, we provide the name of the package on CentOS 7. If we do this, we do not have to define the name attribute directly. We do, however, need to let Puppet know what we hope to do with this package. In this case, we want to make sure it's installed on our host:
class apache::install {
  package { 'httpd':
    ensure => present,
  }
}
ensure is a parameter we'll see a lot, as it can be used with any resource type. Generally, it provides the provider with information about how something is being set up. So if we were adding a directory with the file resource type instead of managing a package, we would make sure that directory is present with ensure => present just as we did here. Save and exit the file. Generally, when we write our modules, we want to test them as we go. To make this easier, let's also add an init.pp file and include our class in it. We'll be refactoring this later; for now, it's just a quick way to test as we go:
$ sudo pdk new class apache
$ sudo $EDITOR manifests/init.pp

class apache {
  include apache::install
}
Save and exit. Finally, we can assign this module to our CentOS 7 node in our main manifest, then log in to our CentOS 7 host to test. Let's first assign the module:
$ cd ../..
$ sudo vim manifests/site.pp

node USERNAME#c.mylabserver.com {

  include apache

}
Save and exit, then log in to the CentOS 7 host. To initially test our module, let's perform a dry-run with the --noop flag:
$ sudo puppet agent -t --noop
Notice how it tracks which changes would be made:
Notice: /Stage[main]/Apache::Install/Package[httpd]/ensure: current_value 'purged', should be 'present' (noop)
Notice: Class[Apache::Install]: Would have triggered 'refresh' from 1 event
Notice: Stage[main]: Would have triggered 'refresh' from 1 event
Go ahead and perform an actual Puppet run when you're confident in the changes:
$ sudo puppet agent -t
If everything runs successfully, move back into the apache directory on the master, and commit to Git:
$ cd modules/apache
$ sudo git add .
$ sudo git commit -am "Add install class with CentOS 7 support"
$ sudo git push origin master
Wrap Up Congratulations! We've written the first class of the course, learning how to use the PDK to generate a class and manifest, how to write effective comments at the manifest level, and how to create a simple class that performs a basic task in Puppet. References: vimrc file; package resource type docs

The params.pp File

00:15:13

Lesson Description:

While our current install class works wonderfully on CentOS 7, our environment also houses Ubuntu 18.04 servers, which means we need to refactor our class to work across all distros we're using. There are two ways we can do this, but we're going to start by creating a params.pp file. A params.pp file or params class initially started as a clever hack using Puppet's class inheritance behavior, and quickly became standard in instances where things like package names are OS-dependent. With the release of Hiera 5, this behavior can be replicated — somewhat cleaner — in Hiera, although using params is still considered valid. So let's go ahead and consider our install class:

$ cd /etc/puppetlabs/code/environments/production/modules/apache
$ sudo $EDITOR manifests/install.pp

class apache::install {
  package { 'httpd':
    ensure => present,
  }
}
Ideally, both attributes (remember that httpd here is shorthand for defining the name attribute) would be substituted with variables, giving us a way to change any aspect of our module based on OS, role, or otherwise. Of the two values we're using, name is OS-specific, while our ensure value will most likely depend on the purpose of the server. This is going to affect how we lay out our params file. Go ahead and exit the install class — we're not making changes here, yet. Instead, we want to generate and open up our params.pp file:
$ sudo pdk new class params
$ sudo $EDITOR manifests/params.pp
Let's update the comments:
# @summary
#   Operating system-related variables.
#
We can now define any variables we want to assign that are not OS-specific. To assign a variable, we add a $, then define the name. When naming variables, it's generally best to include the class it's associated with and the attribute it alters:
class apache::params {
  $install_ensure = 'present'
}
Next, we need to create a conditional statement to assign our next variable on a per-OS basis. We can do this with either a case or if statement, although I find case statements to be better when assigning simple logic, while if statements are best for any instances where you need more complicated logic — like working with distro version on top of having multiple distros. For this lesson, we'll be crafting a case statement that filters our variables based on the osfamily fact. We can do this in one of two ways. Option 1:
class apache::params {
  $install_ensure = 'present'

  case $facts['os']['family'] {
  }
}
Option 2:
class apache::params {
  $install_ensure = 'present'

  case $::osfamily {
  }
}
I'll be using option 2 from here on out. Let's define our two supported operating systems:
class apache::params {
  $install_ensure = 'present'

  case $::osfamily {
    'RedHat': {
    }
    'Debian': {
    }
  }
}
We now want to add any OS-specific variables. For us, that's our name value in the install class, so we'll call it install_name:
class apache::params {
  $install_ensure = 'present'

  case $::osfamily {
    'RedHat': {
      $install_name = 'httpd'
    }
    'Debian': {
      $install_name = 'apache2'
    }
  }
}
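For comparison, and purely as a hedged sketch of the if-statement approach mentioned earlier (the fail() guard is an extra safeguard, not something our module requires), the same logic could be written as:

class apache::params {
  $install_ensure = 'present'

  if $::osfamily == 'RedHat' {
    $install_name = 'httpd'
  } elsif $::osfamily == 'Debian' {
    $install_name = 'apache2'
  } else {
    fail("Unsupported OS family: ${::osfamily}")
  }
}

We'll stick with the case-based version above for our module.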
We can now go ahead and save and exit, then reopen the install class:
$ sudo $EDITOR manifests/install.pp
To use our variables in this class, we have to make use of class inheritance, specifically by modifying our class declaration to assign our variables:
class apache::install (
  $install_name   = $apache::params::install_name,
  $install_ensure = $apache::params::install_ensure,
) inherits apache::params {
We can then call the variables in our code:
class apache::install (
  $install_name   = $apache::params::install_name,
  $install_ensure = $apache::params::install_ensure,
) inherits apache::params {
  package { "${install_name}":
    ensure => $install_ensure,
  }
}
Save and exit the file. To test our changes, we want to perform a Puppet run on both the CentOS 7 and Ubuntu 18.04 nodes. So let's first update our main manifest:
$ sudo $EDITOR ../../manifests/site.pp

node rabbitheart3c.mylabserver.com {

  include apache

}
Then run the puppet agent -t command on each node:
$ sudo puppet agent -t
When you are content that everything is working, return to the master and update the GitHub repository:
$ sudo git add .
$ sudo git commit -am "Refactor install class to use params.pp"
$ sudo git push origin master
Wrap Up In this lesson, we explored one method of writing Puppet code so it works across multiple distributions, learning: the function of a params class and how it works; how to use case statements in our Puppet code; and how to use class inheritance to pull our params variables into regular classes.

Hiera

00:11:36

Lesson Description:

While using a params.pp file is absolutely considered valid, it can quickly become unwieldy as we work across distros, server roles, and with any more complicated modules. This isn't to say it can't be used — a look at Puppet Labs' existing Apache module shows us otherwise — but it also shows us why we may want to consider using Hiera instead. The params.pp file for this module is long, convoluted, and composed of numerous if statements within case statements within more if statements. If we refactor our module so it uses Hiera, however, we can provide all our logic for different operating systems and circumstances in separate, easy-to-understand YAML files. Since we want to use Hiera to sort our module's data by operating system family, we first need to consider where we store this information. Hiera data can be stored in three places — there is Hiera data relevant to the whole system (with the configuration file for this at /etc/puppetlabs/puppet/hiera.yaml), there is Hiera data that can be applied on an environmental basis (within our ../environments/ENV directories), and then there's module-specific Hiera data, which is what we're working with. We can look at an example of this from within the ntp module:

$ cat /etc/puppetlabs/code/environments/production/modules/ntp/hiera.yaml

---
version: 5

defaults:
  datadir: 'data'
  data_hash: 'yaml_data'

hierarchy:
  - name: 'Full Version'
    path: '%{facts.os.name}-%{facts.os.release.full}.yaml'

  - name: 'Major Version'
    path: '%{facts.os.name}-%{facts.os.release.major}.yaml'

  - name: 'Distribution Name'
    path: '%{facts.os.name}.yaml'

  - name: 'Operating System Family'
    path: '%{facts.os.family}-family.yaml'

  - name: 'common'
    path: 'common.yaml'
When Hiera parses this file, it "learns" that any general variables are found in data/common, with any OS family variables overwriting that, and distro variables overwriting any OS family variables, and so on up the hierarchy. We just want to worry about OS family and common for our module, so let's update our own hiera.yaml file:
$ cd /etc/puppetlabs/code/environments/production/modules/apache
$ sudo $EDITOR hiera.yaml

---
version: 5

defaults:  # Used for any hierarchy level that omits these keys.
  datadir: data         # This path is relative to hiera.yaml's directory.
  data_hash: yaml_data  # Use the built-in YAML backend.

hierarchy:
  - name: 'Operating System Family'
    path: '%{facts.os.family}-family.yaml'

  - name: 'common'
    path: 'common.yaml'
Note how we leave the common.yaml configuration. This is where we'll store our default values for things like our install_ensure setting, which is independent of OS family. Save and exit the file. Let's now add a RedHat-family.yaml file:
$ sudo $EDITOR data/RedHat-family.yaml
And add our single OS-dependent install variable in it:
---
apache::install_name: 'httpd'
Save and exit, then do the same for Debian-based servers with Debian-family.yaml:
---
apache::install_name: 'apache2'
Save and exit again. Finally, we want to add the install_ensure variable to Hiera, as well. But since that isn't distro-dependent, we can just add it to the common.yaml file, which will store our default:
$ sudo $EDITOR data/common.yaml

---
apache::install_ensure: present
We may also want to add a default value for the install_name variable here. Generally this would be the most common package name for the module, which is a bit difficult to determine for Apache. That said, since we have more Ubuntu 18.04 servers than we do CentOS 7, it makes the most sense in our use case to include an apache2 default:
---
apache::install_ensure: present
apache::install_name: 'apache2'
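Before moving on, we can optionally sanity-check what Hiera will resolve for a given node with the puppet lookup command. As a hedged example (this assumes the node has already checked in so its facts are available to the master; substitute your own node name):

$ sudo puppet lookup apache::install_name --environment production --node USERNAME#c.mylabserver.com

Adding --explain to the command shows which level of the hierarchy supplied the value.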
Now we need to update the install.pp manifest itself. Open the file:
$ sudo $EDITOR manifests/install.pp
And replace the current variables with the name of the module (apache), followed by two colons and the variable name (apache::install_name, apache::install_ensure):
class apache::install (
  $install_name   = $apache::params::install_name,
  $install_ensure = $apache::params::install_ensure,
) inherits apache::params {
  package { "${apache::install_name}":
    ensure => $apache::install_ensure,
  }
}
And remove the class inheritance setup from our class declaration:
class apache::install {
  package { "${apache::install_name}":
    ensure => $apache::install_ensure,
  }
}
That said, we do still need to declare our variables for the module — just not here. Instead of adding our parameters to each individual class, we want to make sure anyone using our module can see all the variables we use from the start — in our init.pp. We can't do this with the params.pp method, but it makes things much cleaner when using Hiera:
$ sudo $EDITOR manifests/init.pp
To do this, we do something similar to how we used inheritance with our params, but we don't need to use the inherits option. Instead, we can simply define our variables and what type of data they expect to accept:
class apache  (
  String $install_name,
  String $install_ensure,
) {
  include apache::install
}
To confirm everything is working, use puppet parser validate to check your work, then run a puppet agent -t on both nodes. Commit to GitHub when ready:
$ sudo git add .
$ sudo git commit -am "Refactored module for Hiera"
$ sudo git push origin master
Wrap Up In this Hiera-based lesson, we learned: the three places we can store Hiera data; how to change a hiera.yaml file to use facts to set data; and how to refactor a module with existing params so that it uses Hiera instead.

Files

00:14:19

Lesson Description:

When writing modules, we often also have various configuration files — or just files in general — that we need to supply to get our application or service up and running. So how do we get those files from the Puppet master to the nodes? The file resource type, of course! For our Apache module, we'll specifically be providing a custom configuration file to our nodes. We'll have two of these in our module for use — one for our Red Hat family servers, and one for our Debian-family servers. To get started, let's pull down the files for our config:

$ cd /etc/puppetlabs/code/environments/production/modules/apache

$ sudo curl https://raw.githubusercontent.com/linuxacademy/content-ppt206-extra/master/apache2.conf -o files/Debian.conf
$ sudo curl https://raw.githubusercontent.com/linuxacademy/content-ppt206-extra/master/httpd.conf -o files/RedHat.conf
Now, the easy part: Whenever we manage files with Puppet, we want to make sure whoever accesses those files on the node itself knows they are not to be touched. So, let's make a note at the top of each file that these are maintained by config management:
$ sudo $EDITOR files/Debian.conf

# This file is managed by Puppet; update module to make changes.
#
Do the same for the RedHat.conf file:
$ sudo vim files/RedHat.conf

# This file is managed by Puppet; update module to make changes.
#
Now to craft our associated class:
$ sudo pdk new class config
$ sudo $EDITOR manifests/config.pp
Let's update the comments at the top:
# @summary
#   Manages configuration files for Apache
#
Next, we want to use the file resource type to craft our resource declaration. file, like package, has a number of attributes to consider, although the ones we want to concern ourselves with are path, ensure, source, mode, owner, and group. Write the resource definition:
class apache::config {
  file { 'apache_config':
  }
}
Then supply the attributes:
class apache::config {
  file { 'apache_config':
    ensure => $apache::config_ensure,
    path   => $apache::config_path,
    source => "puppet:///modules/apache/${osfamily}.conf",
    mode   => '0644',
    owner  => 'root',
    group  => 'root',
  }
}
Notice how some of these take Hiera data, while some are hardcoded. Values should be hardcoded when the results will be the same no matter what — meaning, the mode, owner, and group values for Apache will work across distros, roles, and whatever other categorization we're using to separate our nodes. We also want to pay attention to the source setting: This does two things we haven't seen before. First, it uses the Puppet URI scheme to reference our configuration files' location on the master — see the puppet:/// prefix, which lets us reference any Puppet mount points. Also notice how we don't need to include the file directory outright when defining the location of the file. Second, we use the $osfamily fact to call the file name. This prevents us from having to set a new Hiera value to store the location of the file, keeping the code less complicated. Let's now add the Hiera variables to our data. config_path will be distro-dependent, but config_ensure we can add to our common file and nothing else. Let's start with our distro-related variable, however. Open the RedHat-family.yaml file and set the variables:
$ sudo $EDITOR data/RedHat-family.yaml

apache::config_path: '/etc/httpd/conf/httpd.conf'
Next, set the variable for Debian-based servers:
$ sudo $EDITOR data/Debian-family.yaml

apache::config_path: '/etc/apache2/apache2.conf'
Finally, change the common.yaml file:
$ sudo $EDITOR data/common.yaml

apache::config_ensure: 'file'
apache::config_path: '/etc/apache2/apache2.conf'
With that done, all that's left is to update our init.pp file and do a little refactoring. First, we can update our parameters:
class apache  (
  String $install_name,
  String $install_ensure,
  String $config_ensure,
  String $config_path,
) {
Then we want to replace our simple include with a setup that will ensure our classes are run in order. We do this by switching the include to contain. contain allows the apache class itself to create a "container" around any contained code, ensuring none of it will run outside the ordering applied to the container itself. This lets us then form relationships between our classes with our chaining arrows (see the sketch after the code below for why include alone wouldn't be enough):
) {
  contain apache::install
  contain apache::config

  Class['::apache::install']
  -> Class['::apache::config']
}
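Why contain rather than include? Here's a minimal hedged sketch; the ntp and apache::helper names are purely illustrative:

class apache {
  contain apache::install   # bound inside the apache "container"
  include apache::helper    # hypothetical class that is only included, so it floats free
}

# Elsewhere in the catalog:
Class['ntp'] -> Class['apache']
# This guarantees ntp is applied before apache::install (contained),
# but not necessarily before apache::helper (merely included).

With contain, any ordering applied to Class['apache'] also applies to the classes it contains, which is what lets the chaining arrows above actually hold.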
Save and exit the file, then test the module with both the puppet parser validate command and by forcing Puppet runs on both nodes. When finished, commit to git:
$ sudo git add .
$ sudo git commit -am "Added config class to manage configuration files"
$ sudo git push origin master
Wrap Up We got hands-on with the file resource, learning: the basic attributes of the resource type; about Puppet URIs; and how to call facts within our classes.

Metaparameters

00:21:17

Lesson Description:

Now that we have two classes written, we want to start thinking about how to order them, as well as how to ensure we get certain behaviors from our classes only at certain times. To address many of these overall class management issues, we can use metaparameters. Metaparameters are attributes that can be used with any resource type. There are 13 of these, although most people will not have to use all — or even most of them — in their everyday module writing. To demonstrate the use of metaparameters, we're going to use a service class that restarts our Apache service every time there's a change caused by our config class. Since there are no new concepts to be learned by writing this, go ahead and get it set up by adding the following files: manifests/service.pp:

# @summary
#   Allows for the Apache service to restart when triggered
class apache::service {
  service { "${apache::service_name}":
    ensure     => $apache::service_ensure,
    enable     => $apache::service_enable,
    hasrestart => true,
  }
}
data/RedHat-family.yaml:
apache::service_name: 'httpd'
data/Debian-family.yaml:
apache::service_name: 'apache2'
data/common.yaml:
apache::service_name: 'apache2'
apache::service_ensure: 'running'
apache::service_enable: true
Let's now take a look at our metaparameter options. alias alias lets us set an alternative namevar when the current one is long or otherwise unwieldy to use. We've actually already done the shorthand version of this! In our config class, we title the resource apache_config despite its namevar being defined as the $apache::config_path variable in the class itself. This makes it easier for us to reference our resources by name, especially when that name would otherwise include a variable. audit Enables resource auditing. When using the audit parameter, we supply it with the name of an attribute (or an array of attributes). Then, on each Puppet run, Puppet will track whether that attribute changed and what that change was. before While our init.pp chaining arrows are a wonderful, visual way to order our modules, we also have the option to use the before metaparameter. This attribute lets us provide a class or array of classes that cannot be run before our resource, ensuring our classes are being run in the appropriate order. consume and export consume and export are specifically useful for orchestration and when using Puppet to deploy applications that work across multiple servers. They allow us to provide data from other resources to use with the current class. For example, if we wanted to supply the user and password data provided in a mysql resource to an apache class, we would include the export metaparameter in the mysql resource and consume in the apache resource. loglevel Sets the log level for the resource. Log levels include: emerg, alert, crit, err, warning, notice, info/verbose, debug. noop Ensures the resource will be tested but not applied by default, just as when we ran the puppet agent -t --noop command. notify Expresses that the referenced resource or class depends on this resource, and triggers a refresh of that resource whenever this one makes changes. We can also see this behavior with the use of the ~> chaining arrow. require This works almost as the reverse of notify — instead of supplying the metaparameter to the resource that needs to come before other resources, require is added to resources that come after the defined resource or resources. schedule Sets a time when Puppet is permitted to manage a resource. To use this, we must first define a schedule with the schedule resource type, then use the metaparameter to set the frequency. stage By default, our classes are run in the main stage. We can set other stages to order our resources in blocks using the stage resource type, then calling the stage in a class. subscribe Any resources a class is subscribed to are run before the resource with the subscribe attribute, and the subscribing resource refreshes only when changes to the subscribed resource occur. tag Lets us supply a list of tags for our resource. Ordering Our service Class Now that we know how our metaparameters work, we need to determine which of them we should use to ensure our service class is run every time the config class makes a change on our server. Of the above parameters, notify and subscribe are our best options; however, we also have a third option: to use chaining arrows instead, as shown in our prior lesson. Whether to use chaining arrows or a metaparameter is ultimately up to you. To keep with the existing structure of our module, however, we'll be updating the init.pp file to use the ~> notifying arrow (a subscribe-based alternative is sketched after the code below):
$ sudo $EDITOR manifests/init.pp

class apache  (
  String $install_name,
  String $install_ensure,
  String $config_ensure,
  String $config_path,
  String $service_name,
  Enum["running", "stopped"] $service_ensure,
  Boolean $service_enable,
) {
  contain apache::install
  contain apache::config
  contain apache::service

  Class['::apache::install']
  -> Class['::apache::config']
  ~> Class['::apache::service']
}
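For comparison, here is a hedged sketch of what the metaparameter route would look like instead, using subscribe inside the service class (we're not committing this version):

class apache::service {
  service { "${apache::service_name}":
    ensure     => $apache::service_ensure,
    enable     => $apache::service_enable,
    hasrestart => true,
    subscribe  => Class['apache::config'],
  }
}

Either approach restarts Apache when the config class changes something; the chaining arrows just keep all of the ordering visible in one place, our init.pp.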
When content with your changes, test the server on the two nodes with puppet agent -t, and, of course, add these changes to GitHub:
$ sudo git add .
$ sudo git commit -am "Added a service class"
$ sudo git push origin master
Wrap Up In this lesson, we went through our list of available metaparameters, including: ordering parameters that let us define the way in which our code runs; auditing and debugging parameters, like audit and loglevel; parameters that make application orchestration and overall management easier, such as consume, stage, and export; and generally helpful metaparameters, like tag and alias.

Templating

00:09:52

Lesson Description:

Apache's configuration is almost never done with just an install and a configuration file — we also need to provide a virtual hosts file (or files) for Apache to use. These configuration files will most certainly be different on your various nodes, and may even need to be run multiple times with different values on the same host. This is where templating comes in. When writing Puppet templates, we have two options for our language: embedded Puppet (EPP) or embedded Ruby (ERB). We'll be focusing on the Puppet-specific EPP here. Generally, before starting to write a template, it helps to have a reference for the file itself. The base vhosts template I'm using as inspiration is this:

<VirtualHost *:80>
    ServerName subdomain.mylabserver.com
    ServerAlias subdomain
    ServerAdmin admin@mylabserver.com
    DocumentRoot /var/www/subdomain/html
</VirtualHost>
To take this and turn it into a template, let's first create our template file:
$ sudo vim templates/vhosts.conf.epp
Notice the epp suffix — this lets Puppet know we're using an embedded Puppet template. We can start just by copying in our base in its entirety:
<VirtualHost *:80>
    ServerName subdomain.mylabserver.com
    ServerAlias subdomain
    ServerAdmin admin@mylabserver.com
    DocumentRoot /var/www/subdomain/html
</VirtualHost>
When we template files with embedded Puppet, we use various < >-style tags to signify where we're either importing information such as variables, adding comments, or writing any sort of statement — most commonly if statements. Let's start by adding in placeholders for our variables. We want to give users the option to define the port, subdomain, the admin's email (optional), and the document root where we'll host our website files. To set variables, we use the <%= $VARIABLENAME %> syntax:
<VirtualHost *:<%= $port %>>
    ServerName <%= $subdomain %>.mylabserver.com
    ServerAlias <%= $subdomain %>
    ServerAdmin <%= $admin %>
    DocumentRoot <%= $docroot %>
</VirtualHost>
We can also use the <%= %> syntax to call Facter facts, which is exactly what we'll do to replace the mylabserver.com part of our file:
<VirtualHost *:<%= $port %>>
    ServerName <%= $subdomain %>.<%= $facts[fqdn] %>
    ServerAlias <%= $subdomain %>
    ServerAdmin <%= $admin %>
    DocumentRoot <%= $docroot %>
</VirtualHost>
Note that we have to use the $facts[fqdn] syntax here. Using something like $::fqdn will cause an error. We're not quite done yet, however! We want to make it so the ServerAdmin setting is optional, which means crafting an if statement around it. To use actual code in our EPP file, we encase it in plain <% %> brackets:
<% if $admin =~ String[1] { -%>
    ServerAdmin <%= $admin %>
<% } -%>
The String[1] data type tells Puppet that we want to include this part of the template only if our admin string is at least one character long (that is, not empty). We also want to pay attention to the dash (-) at the start of the closing tag. When we end our templating tags with -%> we are telling Puppet to strip any trailing whitespace from the provided data. We also have the option to open a tag with <%-, which strips any indentation from the start of the line. Additionally, we have the option to add comments to our template with the <%# opening tag. Let's add one to the top of our file:
<%# Virtual hosts template -%>
Finally, we want to take a look at how we can add parameters to our template. We'll be passing our parameters in from our associated manifest in the next lesson, so we won't be adding a parameter list to our template here, but if we did need to include one, it would look like the following:
<%- | Integer $port,
      String  $subdomain,
      String  $admin,
      String  $docroot
| -%>
Notice the use of pipes (|) to mark the boundaries of the parameter stanza. We can now save and exit the file. We'll work on implementing this in the next lesson, so no parser tests or GitHub pushes yet! Wrap Up In this lesson, we learned how to write a template, including how to: add variables to our template; use if statements and other conditionals to increase our template's flexibility; write comments to guide end users; and define parameters at the top of the file.
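For reference, pulling the snippets from this lesson together (without the optional parameter stanza, since we'll pass values in from the manifest in the next lesson), the assembled vhosts.conf.epp looks roughly like this:

<%# Virtual hosts template -%>
<VirtualHost *:<%= $port %>>
    ServerName <%= $subdomain %>.<%= $facts[fqdn] %>
    ServerAlias <%= $subdomain %>
<% if $admin =~ String[1] { -%>
    ServerAdmin <%= $admin %>
<% } -%>
    DocumentRoot <%= $docroot %>
</VirtualHost>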

Defined Types

00:19:47

Lesson Description:

With our virtual hosts template created, we now need to create the class that uses it — except it's not going to be a class at all. Instead, we want to create a defined type, which lets us leverage Puppet code to essentially create our own reusable resource type. Let's start by generating our defined type's manifest file:

$ cd /etc/puppetlabs/code/environments/production/modules/apache
$ sudo pdk new defined_type vhosts
And updating our comments:
# @summary
#   Generates virtual hosts files based on template
#
# @example
#   apache::vhosts { 'namevar': }
This time we want to leave the example in there, however — we'll substitute it with what we'll feed to our main manifest later. We now want to pull in our variables from our template at the top of the file, similar to how we added variables for our params and init classes. We'll be using these within our resource declarations as well:
define apache::vhosts (
  Integer $port,
  String[1] $subdomain,
  String $admin,
  String[1] $docroot,
) {
Notice how we can specify string length here, just as we could in our template. We now want to make sure that whatever docroot we specify exists. For this, we also use file, but instead set the ensure option to directory. We also want to add Hiera data to set our directory owner and group:
  file { "${docroot}":
    ensure => 'directory',
    owner  => $apache::vhosts_owner,
    group  => $apache::vhosts_group,
  }
We now want to add in a second file resource here, this time one that will create the subdomain.conf file in the specified virtual hosts location for our nodes — since this is something that differs between distros, we'll create a Hiera variable for the location where we place our vhosts configs, then use the subdomain variable to name the file itself:
  file { "${apache::vhosts_dir}/${subdomain}.conf":
Let's add attributes to set ownership and mode:
  file { "${apache::vhosts_dir}/${vh_sitename}.conf":
    ensure  => 'file',
    owner   => $apache::vhosts_owner,
    group   => $apache::vhosts_group,
    mode    => '0644',
And now we want to call our template. We use the content attribute for this, which can take raw text, the template() function for ERB templates, or the epp() function for EPP templates. We'll be using EPP:
    content => epp('apache/vhosts.conf.epp')
But we aren't done yet! We need to pass in our defined type's variables (the ones we already have set in the template itself) so our template can access them. This is done by calling each variable as if it were its own attribute in the content line itself:
    content => epp('apache/vhosts.conf.epp', {'port' => $port, 'subdomain' => $subdomain, 'admin' => $admin, 'docroot' => $docroot}),
Finally, we want to make sure this resource triggers a restart for the Apache service when changes are made. Let's use notify for this, making our full document look like:
define apache::vhosts (
  Integer $port,
  String $subdomain,
  String $admin,
  String[1] $docroot,
) {
  file { "${docroot}":
    ensure => 'directory',
    owner  => $apache::vhosts_owner,
    group  => $apache::vhosts_group,
  }
  file { "${apache::vhosts_dir}/${subdomain}.conf":
    ensure  => 'file',
    owner   => $apache::vhosts_owner,
    group   => $apache::vhosts_group,
    mode    => '0644',
    content => epp('apache/vhosts.conf.epp', {'port' => $port, 'subdomain' => $subdomain, 'admin' => $admin, 'docroot' => $docroot}),
    notify  => Service["${apache::service_name}"],
  }
}
Let's now add our Hiera data; the values differ by OS family, with the Debian values doubling as our common.yaml defaults:
# data/RedHat-family.yaml
apache::vhosts_dir: '/etc/httpd/conf.d'
apache::vhosts_owner: 'apache'
apache::vhosts_group: 'apache'

# data/Debian-family.yaml
apache::vhosts_dir: '/etc/apache2/sites-available'
apache::vhosts_owner: 'www-data'
apache::vhosts_group: 'www-data'

# data/common.yaml
apache::vhosts_dir: '/etc/apache2/sites-available'
apache::vhosts_owner: 'www-data'
apache::vhosts_group: 'www-data'
We also want to add these to our init.pp — although we don't need to add the defined type to the init.pp class itself:
# manifests/init.pp
class apache  (
  String $install_name,
  String $install_ensure,
  String $config_ensure,
  String $config_path,
  String $service_name,
  Enum["running", "stopped"] $service_ensure,
  Boolean $service_enable,
  String[1] $vhosts_dir,
  String[1] $vhosts_owner,
  String[1] $vhosts_group,
) {
Now to actually use our defined type! To call our define, we add it to our main manifest, declaring it like any other resource type rather than with the class syntax. Let's initially test our code on CentOS:
$ sudo $EDITOR ../../manifests/site.pp

node USERNAME#c.mylabserver.com {

  include apache

  apache::vhosts { 'puppet_project':
    port      => 80,
    subdomain => 'puppetproject',
    admin     => 'admin@mylabserver.com',
    docroot   => '/var/www/html/puppetproject',
  }

}
Next, run puppet parser validate against the vhosts manifest, then test the code on the CentOS 7 node with a puppet agent -t. Once you've confirmed everything works, we can further play with the features of the defined type. For our Ubuntu server, we're going to declare our resource type twice, once leaving out the admin address:
node USERNAME#c.mylabserver.com {

  include apache

  apache::vhosts { 'puppet_project':
    port      => 80,
    subdomain => 'puppetproject',
    admin     => 'admin@mylabserver.com',
    docroot   => '/var/www/html/puppetproject',
  }

  apache::vhosts { 'puppet_project_dev':
    port      => 8081,
    subdomain => 'puppetproject-dev',
    admin     => '',
    docroot   => '/var/www/html/puppetproject-dev',
  }

}
Save and exit. We also want to take this time to update our docstring in the vhosts.pp manifest:
# @example
# apache::vhosts { 'namevar':
#   port      => INT,
#   subdomain => STR,
#   admin     => STR,
#   docroot   => STR,
# }
Then test on the Ubuntu server with puppet agent -t. When you're content the module is working, remember to commit to GitHub!
$ sudo git add .
$ sudo git commit -am "Added virtual hosts defined type"
$ sudo git push origin master
Wrap Up In this lesson, we looked at an alternate way of using Puppet resource types by creating a defined type, which lets us: repeatedly use the resource as many times as we need; and set our own variables to use as resource attributes. At this point, we have a pretty decent start to an Apache module! We'll be looking at testing our end states for this module next, but if you still want to practice module creation, I suggest looking into ways to expand the module to remove default site configurations and enable any sites; one possible starting point is sketched below.
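If you take on that challenge, one hedged starting point for the Debian side (this resource is not part of the module we built, and it would need to be guarded so it only applies to Debian-family hosts) is symlinking the generated vhost into sites-enabled and notifying the service:

  file { "/etc/apache2/sites-enabled/${subdomain}.conf":
    ensure => 'link',
    target => "${apache::vhosts_dir}/${subdomain}.conf",
    notify => Service["${apache::service_name}"],
  }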

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:45:00

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:45:00

Unit Testing

RSpec

00:09:36

Lesson Description:

When it comes to using the PDK, we aren't just limited to the module skeletons we create. PDK also ships with RSpec, rspec-puppet, and rspec-puppet-facts, which let us create unit tests to confirm the results of our module. This, in turn, allows us to extend our infrastructure's test coverage to our Puppet modules themselves. RSpec itself is a behavior-driven testing framework that can be used for any Ruby code, although our related libraries make it customized for Puppet specifically. Now, let's get started by looking at what the PDK provided us for tests. First, we want to move into our apache module directory:

$ cd /etc/puppetlabs/code/environments/production/modules/apache
Then we want to review the spec directory that the PDK generated for us:
$ cd spec
$ ls
classes  default_facts.yml  defines  fixtures  functions  hosts  spec_helper.rb
classes and defines are the locations where we store our class tests and defined type tests, respectively. default_facts.yml is used if we want to test against certain facts, and any data provided there will override the actual facts a server is using. fixtures stores any dependency modules, which right now is just a copy of our apache module as a whole. functions and hosts store additional unit tests as well, for custom functions and host definitions, respectively; these are written in the same way class and defined type tests are written. Finally, the spec_helper.rb file contains any "helper" code for running our tests. This includes things like adding the require lines for rspec-puppet-facts, ensuring our directory structure works as intended, and making sure all of our facts are sorted. Now let's take a look at what the PDK generated for us for our class tests. Although we can separate our tests out into the different files the PDK created for us, we'll be working directly with the apache_spec.rb file as we learn the basics. If we read the file:
$ cat classes/apache_spec.rb
we're presented with:
require 'spec_helper'

describe 'apache' do
  on_supported_os.each do |os, os_facts|
    context "on #{os}" do
      let(:facts) { os_facts }

      it { is_expected.to compile }
    end
  end
end
Right now, this just tests whether the class can be compiled into a catalog (it { is_expected.to compile }), with the context "on #{os}" do portion allowing us to loop through all supported distros to perform the check. So let's see what happens if we run this test:
$ sudo pdk test unit --tests=spec/classes/apache_spec.rb

pdk (INFO): Using Ruby 2.5.3
pdk (INFO): Using Puppet 6.4.0
[?] Preparing to run the unit tests.
[?] Running unit tests.
Run options: exclude {:bolt=>true}
  Evaluated 6 tests in 8.628672553 seconds: 0 failures, 0 pending.
Puppet confirms that our tests run successfully. But right now it's only confirming that our module can compile. We want more than that. Now, let's go ahead and open our apache_spec.rb file and add more tests.
$ sudo vim classes/apache_spec.rb
We're going to test three additional things: that all our classes are contained, that the appropriate package is installed, and that the appropriate service is started and enabled. This won't cover our entire module, but it will be enough for us to cover any structures that may appear on the exam. The PDK and RSpec testing is not a huge focus of the exam; we just need to know how it works and be able to read a unit test file. So, right now our unit test looks like this:
require 'spec_helper'

describe 'apache' do
  on_supported_os.each do |os, os_facts|
    context "on #{os}" do
      let(:facts) { os_facts }

      it { is_expected.to compile }
    end
  end
end
This is specifically structured to use rspec-puppet-facts, which allows us to iterate through all of the OSes that our module supports. Without it we would have to write separate files for each operating system or OS version, every use case we would otherwise consider. These lines, in particular, create a loop using facts:
  on_supported_os.each do |os, os_facts|
    context "on #{os}" do
      let(:facts) { os_facts }
For each operating system, the loop will pull in that operating system's related facts (let(:facts) { os_facts }), then it runs any tests within the block specifically against a host with that OS. For OS-specific variables, we will have to use case or if statements, but we'll still be managing much less code than if we wrote individual context stanzas for each OS. Now let's consider:
it { is_expected.to compile }
This is how all our actual tests will be structured at the most basic level. We'll add as we consider more advanced use cases, but every test line will essentially look like:
it { DOES SOMETHING IN A CERTAIN WAY }
Most RSpec tests can be read as a sentence, answering the question "What is it expected to do?" So, let's go ahead and expand the current test, focusing first on tests we can run across all operating systems: the tests that check that we have all our classes contained. To do this, we'll write three lines using the contain_class function, instead of compile. As we progress, notice we use the contain_* format a lot. A full list of options can be found in RSpec-Puppet's docs.
require 'spec_helper'

describe 'apache' do
  on_supported_os.each do |os, os_facts|
    context "on #{os}" do
      let(:facts) { os_facts }

      it { is_expected.to contain_class('apache::install') }
      it { is_expected.to contain_class('apache::config') }
      it { is_expected.to contain_class('apache::service') }
      it { is_expected.to compile }
    end
  end
end
Save and exit. Let's see if our tests work:
$ sudo pdk test unit --tests=spec/classes/apache_spec.rb
Success! That said, notice the returned data. If our tests succeed, it will let us know there were no failures. But what does it look like if our tests fail? There are two things we want to look out for here. Let's reopen our apache_spec.rb file:
$ sudo vim classes/apache_spec.rb
and add a stray curly bracket somewhere. Save and exit, then rerun the unit test:
$ sudo pdk test unit --tests=spec/classes/apache_spec.rb
pdk (INFO): Using Ruby 2.5.3
pdk (INFO): Using Puppet 6.4.0
[?] Preparing to run the unit tests.
[?] Running unit tests.
Notice how it quits at [?] Running unit tests. with no errors. This means we messed up the syntax of our file in an unexpected way. On the other hand, if something simply isn't working, the errors will resemble:
$ sudo pdk test unit --tests=spec/classes/install_spec.rb
pdk (INFO): Using Ruby 2.5.3
pdk (INFO): Using Puppet 6.4.0
[?] Preparing to run the unit tests.
[?] Running unit tests.
Run options: exclude {:bolt=>true}
  Evaluated 12 tests in 7.41826528 seconds: 12 failures, 0 pending.
failed: rspec: ./spec/classes/install_spec.rb:11: error during compilation:
 Evaluation Error: Error while evaluating a Resource Statement,
Class[Apache::Install]: has no parameter named 'install_name'
(line: 2, column: 1) on node rabbitheart1c.mylabserver.com
  apache::install on debian-8-x86_64 should compile into a catalogue without dependency cycles
So if there's an actual issue with our code or with our class, we can just follow the feedback to troubleshoot the issue. In this case, it's because the apache::install class doesn't have our Hiera data defined, since we do that in our init.pp. Wrap Up In this lesson, we learned the function of unit tests, specifically: that Puppet uses the RSpec test framework to perform unit tests; that the PDK installed and configured the test framework for us; and how to write basic unit tests working with the rspec-puppet-facts gem.
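One aside worth knowing, consistent with the PDK commands we've been running: dropping the --tests flag runs every spec file in the module at once:

$ sudo pdk test unit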

Working Across OSs

00:07:57

Lesson Description:

At this point in our test case, we've tested for everything that works across all distros. All we have left to consider are things that we need to address on an OS-family by OS-family basis. To do this, we're going to craft a case statement:

$ cd /etc/puppetlabs/code/environments/production/modules/apache/spec
$ sudo vim classes/apache_spec.rb

describe 'apache' do
  on_supported_os.each do |os, os_facts|
    context "on #{os}" do
      let(:facts) { os_facts }

      it { is_expected.to contain_class('apache::install') }
      it { is_expected.to contain_class('apache::config') }
      it { is_expected.to contain_class('apache::service') }
      it { is_expected.to compile }

      case os_facts[:osfamily]
      when 'Debian'
      when 'RedHat'
      end
    end
  end
end
We then want to supply each case with tests that check that the Apache package is present, and that the Apache service is running, enabled, and expected to restart given certain parameters. We'll start with the package check and focus first on Debian. At its most basic, we can check for the package with:
it { is_expected.to contain_package('apache2') }
But we especially want to check that our ensure variable is set correctly, so we expand our test with the .with function:
it { is_expected.to contain_package('apache2').with(ensure: 'present') }
Notice how this still mostly works as a sentence, albeit a bit of a stilted one once we get to our parameters. We also want to mimic this same structure with the service test, only this time we're going to provide multiple parameters:
it { is_expected.to contain_service('apache2').with(ensure: 'running', enable: true, hasrestart: true) }
Giving us this resulting code for Debian:
       when 'Debian'
         it { is_expected.to contain_package('apache2').with(ensure: 'present') }
         it { is_expected.to contain_service('apache2').with(ensure: 'running', enable: true, hasrestart: true) }
Let's repeat this for RedHat:
       when 'RedHat'
         it { is_expected.to contain_package('httpd').with(ensure: 'present') }
         it { is_expected.to contain_service('httpd').with(ensure: 'running', enable: true, hasrestart: true) }
Finally, we do want to address one use case we don't have: how to pull in parameters when using a params.pp file with a module. For this, we would just call our parameters using a :params line:
let :params do
  {
    install_name: 'apache2',
    install_ensure: 'present',
  }
end
Let's save and exit the file, then run our tests one more time:
$ sudo pdk test unit --tests=spec/classes/apache_spec.rb
And we have another success! Wrap Up In this lesson, we expanded our unit tests for different operating systems, learning how to: use case statements to account for different variables on different OSes; test for specific parameters using with; and add a :params section to define any variables that use the params pattern. Looking for an extra challenge? Continue writing the unit tests for this module, specifically focusing on the config class. You'll use the same processes as we did here, just with the contain_file option; a possible starting point is sketched below.
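As a hedged starting point for that challenge, a test inside the 'Debian' branch of the case statement might look like the following (the path is the Debian value we set in Hiera):

it { is_expected.to contain_file('apache_config').with(ensure: 'file', path: '/etc/apache2/apache2.conf') }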

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:15:00

Facts and Facter

Facter Overview

00:03:22

Lesson Description:

At this point in the course, Facter isn't anything new to us — we've used facts rather substantially to manipulate our Puppet module across various distros. But we haven't taken the time to really look at Facter itself and how our facts are structured, created, and set up for us to use. As we know, we can use Facter to query for basic facts about a system. To receive a full list of these facts, all we must do is run:

$ facter
We're then promptly met with a long list of information about our host. The information is then structured in two ways: as either a flat or structured fact. Flat facts look like our timezone or virtual facts — a one-to-one key-value pair:
  timezone => UTC
  virtual => kvm
On the other hand, structured facts are organized as a hash and contain numerous key-value pairs (or nested hashes and arrays) that fall underneath an overall key. This is how the os fact that we've previously worked with functions:
$ facter os

{
  architecture => "amd64",
  distro => {
    codename => "bionic",
    description => "Ubuntu 18.04.2 LTS",
    id => "Ubuntu",
    release => {
      full => "18.04",
      major => "18.04"
    }
  },
  family => "Debian",
  hardware => "x86_64",
  name => "Ubuntu",
  release => {
    full => "18.04",
    major => "18.04"
  },
  selinux => {
    enabled => false
  }
}
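As an aside, we can also drill into a structured fact directly with dot notation; on the Ubuntu host above, for example:

$ facter os.family
Debian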
But we can go even farther here. We can also classify our facts by how they collect information, giving us core facts, custom facts, and external facts. Core facts are the ones we just listed above — they ship with Facter and we can use them without doing any special work. Custom facts are ones generated via a Ruby script that produces the necessary values. Finally, external facts are provided via static data or through an executable script — not a Ruby script we include in one of our load paths, but something like a Bash or Python script. Regardless of which kind of fact we write, all non-core facts are treated like a plugin, and the preferred way of supplying them is through modules. Specifically, custom facts are stored in MODULE/lib/facter, while external facts live in MODULE/facts.d. We can also work from the /opt/puppetlabs/facter and /etc/puppetlabs/facter directories. Now, let's get started by creating some new facts in our next lessons. Wrap Up In this quick Facter overview, we learned: how to query for facts; how flat versus structured facts look; and the different types of facts and where to find them.

External Facts

00:08:19

Lesson Description:

When it comes to providing Puppet with external facts, we have a few options, both with regard to where we store our facts and how we write them. External facts come in two flavors: basic facts, which work as simple key-value pairs (with the option to nest additional pairs into a "structured fact," as well as the option to use arrays), and executable facts, which are facts written in any non-Ruby language your host happens to support. By default, there are four places where we can store our external facts: MODULEDIR/MODULE/facts.d/, /opt/puppetlabs/facter/facts.d/, /etc/puppetlabs/facter/facts.d/, and /etc/facter/facts.d/. But none of these directories are created on their own. As we move through this lesson, we'll end up having to create both the MODULEDIR fact location and the /etc/puppetlabs location. Basic Facts To create a basic fact, we have the option of using YAML, JSON, or just plain text with the .txt file type. As noted, these can be flat key=value pairs or more structured. Either way, no actual code needs to be run for these facts to be used. Now, move into the module's directory:

$ cd /etc/puppetlabs/code/environments/production/modules/apache
Then, use the following to create a facts.d directory:
$ sudo mkdir facts.d
Since we're already familiar with YAML, let's use this as our file type:
$ sudo vim facts.d/web_server.yaml
Then, add our key-value pair:
---
web_server: Apache
If we expect to later expand our web facts, we could also set this up as a structured fact:
---
web:
  server: Apache
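Since JSON is also supported, the same flat fact could instead live in a file such as facts.d/web_server.json (a hypothetical alternative to the YAML file above):
{
  "web_server": "Apache"
}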
Executable Facts Now, let's consider writing an external fact that relies on a script. For this, we want to create a fact that will tell us how many users are on the system. To do this, we're going to just use a Bash script that will evaluate the following command:
$ who | wc -l
That said, it's not enough to just send the output of that command to STDOUT. Executable facts need to print their output in the key=value format for the fact to work. Since this fact isn't module-specific, we're going to start by creating a new facter/facts.d directory in /etc/puppetlabs:
$ sudo mkdir -p /etc/puppetlabs/facter/facts.d
$ cd /etc/puppetlabs/facter/facts.d/
Now, let's write our Bash script:
$ sudo vim user_count.sh
Let's add our interpreter information:
#!/bin/bash
Then, assign the output of our who | wc -l command to a variable:
count=$(who | wc -l)
And use the echo command to output our desired fact:
echo "user_count=$count"
Save and exit, then ensure the script is executable:
$ sudo chmod +x user_count.sh
Test the script:
$ ./user_count.sh
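Assuming, for example, that two users are currently logged in, we'd expect output along the lines of:
user_count=2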
Viewing Our Facts To see our user_count fact, all we have to do is query Facter:
$ sudo facter user_count
To view any facts added via a module, we need to add the -p flag:
$ sudo facter -p web_server
Wrap Up We created two external facts in this lesson, learning how to:
Create simple facts based on key-value pairs and more structured data
Write scripts whose output works as an external fact
Query for our new facts

Custom Facts

00:09:11

Lesson Description:

Custom facts are plugins written in Ruby that provide additional facts to our hosts. Unlike our external facts, these are distributed the same way other plugins are — they can be placed in any directory that Ruby reads as part of its load path, as long as the file sits inside a directory named facter. In our instance, we'll be adding our custom fact to our apache module. Let's move into the module's directory:

$ cd /etc/puppetlabs/code/environments/production/modules/apache
Ruby looks for the presence of a lib directory for plugins. Let's add this, along with the facter subdirectory:
$ sudo mkdir -p lib/facter
Now, we can start crafting our custom fact. Specifically, we'll be adding a fact that outputs how many processes the Apache service is using, which means we'll have to consider the differences between Red Hat and Debian-based servers. Let's begin by creating and opening our apache_processes.rb file:
$ sudo vim lib/facter/apache_processes.rb
Next, we want to define the name of the fact. This can either be a completely new fact, or we can change the behavior of an existing fact by providing that fact's name:
Facter.add(:apache_processes) do
end
As we can tell, the name of our fact in this instance is apache_processes. Now, we want to provide Facter with the code it needs to evaluate to get the results of our fact. This is done with a setcode do stanza:
Facter.add(:apache_processes) do
  setcode do
  end
end
Simpler facts can substitute this with a single line: setcode 'CMD'. Since we'll be crafting a case statement, however, we won't be using that form. We can now write the evaluation statement for the fact itself. To find how many Apache processes we have running, we need to consider our distro, so let's go ahead and craft a case statement:
Facter.add(:apache_processes) do
  setcode do
    osfamily = Facter.value(:osfamily)
    case osfamily
    when 'Debian'
    when 'RedHat'
    end
  end
end
Then, add our code. To run a Linux command through our custom fact, we use the Facter::Core::Execution.execute function:
Facter.add(:apache_processes) do
  setcode do
    osfamily = Facter.value(:osfamily)
    case osfamily
    when 'Debian'
      Facter::Core::Execution.execute('pgrep apache2 | wc -l')
    when 'RedHat'
      Facter::Core::Execution.execute('pgrep httpd | wc -l')
    end
  end
end
Finally, since this fact only works on Linux-based OSs, we can use the confine option to ensure the fact is only added to Linux-based servers, not Windows:
Facter.add(:apache_processes) do
  confine :kernel => "Linux"
  setcode do
    osfamily = Facter.value(:osfamily)
    case osfamily
    when 'Debian'
      Facter::Core::Execution.execute('pgrep apache2 | wc -l')
    when 'RedHat'
      Facter::Core::Execution.execute('pgrep httpd | wc -l')
    end
  end
end
Let's now save and exit the file. To deploy this fact on any server (that has the Linux kernel), we can use our puppet agent -t command. Notice, however, that this fact is added to all servers, not just the ones that use the Apache module. This is true of any module-specific facts, and there's no way around this. For example, we can query for our fact on our master, which doesn't use the Apache module at all:
$ sudo puppet agent -t
$ sudo facter -p apache_processes
0
Notice that it returns a 0. In contrast, should we run this on our CentOS or Ubuntu nodes, we'll get a different response:
$ sudo puppet agent -t
$ sudo facter -p apache_processes
6
Finally, one thing we haven't addressed until now, for the sake of simplicity, is that the -p flag for Facter is slated for removal in future releases. Instead, PuppetLabs suggests we use the puppet facts find command to output all Facter values. We cannot query on a single-fact basis with this, however, so to achieve a result similar to facter -p, we can use grep:
$ sudo puppet facts | grep apache_processes
Wrap Up In this lesson, we learned the components of a custom fact by:
Creating a new fact based on our Apache module
Ensuring the fact only works on Linux
Using Ruby to evaluate a Linux command and provide that result to Facter

Profiles and Roles

Profiles

00:11:50

Lesson Description:

Generally, when working with configuration management, our setup is not so fractured that we need to apply each module individually (for instance, adding a component module by hand to every server that needs Apache). Instead, we can add a layer of indirection to our modules by writing wrapper classes known as profiles and roles. Profiles group several related modules, called component modules in this setup because each one configures a single piece of the infrastructure, and each profile should configure one layer of our technology stack. Roles, on the other hand, are collections of profiles that allow us to configure an entire system. In this lesson, we're going to set up three profiles: a base profile that contains the modules we want on all of our servers; an apache profile that pulls in our own module, a vhosts definition, and a php module; and a mysql profile that configures MySQL and creates a database. But before we can do any of that, we need to pull down the necessary component modules. Let's now move into our modules directory:

cd /etc/puppetlabs/code/environments/production/modules/
And grab the motd, php, and mysql modules:
sudo puppet module install puppetlabs-motd --version 3.0.0
sudo puppet module install puppet-php --version 6.0.2
sudo puppet module install puppetlabs-mysql --version 9.0.0
Now we want to create a storage location for our profiles. Since a profile is merely a layer of indirection, it's just another class, so we can use our usual pdk commands to create the profile module:
sudo pdk new module profile
cd profile
Now let's start by creating our base profile:
sudo pdk new class base
sudo vim manifests/base.pp
The class we're creating will function in the same manner as our node definitions. We can use include for our ntp module and add a message of the day through the motd module:
# @summary
#  Core modules for all servers
#
# @example
#   include profile::base
class profile::base {
  include ::ntp

  class { '::motd':
    content => "This host is managed by Puppet!\n",
  }

}
Note: Take into consideration how we update our comments. Also, notice that to include our ntp class, we add a double colon to the front of the class name. This backs us out of the profile scope so we reference the top-level ntp class; we can think of it a little like dropping back a directory on the command line. Finally, by declaring our motd module with a class resource, we can override any default values with ones that work best for our hosts. You may find yourself using this method more frequently with generalized modules that have to work for numerous hosts, such as Puppet Forge modules. Save and exit the base class. Now let's create our apache profile:
sudo pdk new class apache
sudo vim manifests/apache.pp
For this, we can start by including our base apache module:
# @summary Basic Apache configuration
#
# @example
#   include profile::apache
class profile::apache {
  include ::apache
}
Then, we add our new php module:
  class { '::php':
    pear => true,
  }
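Taken together (with the php declaration nested inside the class body, as implied above), the finished apache profile would look like this:
# @summary Basic Apache configuration
#
# @example
#   include profile::apache
class profile::apache {
  include ::apache

  class { '::php':
    pear => true,
  }
}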
Save and exit the file. Finally, let's repeat this process to have a working mysql-server profile for our next lesson:
sudo pdk new class mysql::server
sudo vim manifests/mysql/server.pp

# @summary
#   Manages MySQL
#
# @example
#   include profile::mysql
class profile::mysql::server {
  class { '::mysql::server':
    root_password           => 'passwordhash',
    remove_default_accounts => true,
  }
}
Next, we're going to combine our profiles into a single role and check out our resulting LAMP stack! Wrap Up We've now created a layer of indirection between our component modules and what will eventually be assigned to our servers. In this lesson, we learned the following:
The purpose of roles and profiles
How to create a profile
How scoping works

Roles

00:05:42

Lesson Description:

If profiles set up specific areas of our server, then roles exist to bring these profiles together into a complete system. Roles should only contain profiles and nothing else. Similarly to profiles, roles are stored as manifests under an overall role module that we have to create. Let's do this now:

cd /etc/puppetlabs/code/environments/production/modules/
sudo pdk new module role
Remember to use the same defaults that we've set up when creating past modules. When prompted, we will want to ensure that Windows is deselected. Create the lamp role:
cd role
sudo pdk new class lamp
Now, to create our role, all we have to do is use include for the relevant profiles:
sudo vim manifests/lamp.pp

# @summary
#   Configures a full LAMP stack
#
# @example
#   include role::lamp
class role::lamp {
  include profile::base
  include profile::apache
  include profile::mysql::server
}
Save and exit the file. Let's test this out by updating our node definitions:
sudo vim ../../manifests/site.pp

node USERNAME#C.mylabserver.com {
  include role::lamp
}

node USERNAME#C.mylabserver.com {
  include role::lamp
}
Save and exit. Finally, perform a Puppet run on both servers. Note: At the time of writing, there was an issue with the mysql module on Ubuntu hosts. After applying the new role definition once, run a sudo apt --fix-broken install. Then, perform the Puppet run again. Wrap Up In this lesson, we learned the following:
The location for adding roles
That roles should only include profiles
How to apply roles to our servers

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Node Management

Classification

Groups and Classification

00:06:03

Lesson Description:

Although it may feel as if classification is a newfound concept, it's certainly not; throughout this course, we've been tackling it repeatedly. Classification is essentially the practice of configuring our nodes by mapping classes, parameters, variables, and Hiera data to them. Up until now, we've done this exclusively with our main manifest, yet we have better options with Puppet Enterprise. To start, we need to ensure that we're logged in to the PE Console, then click Classification on the left-hand side menu. We can see that the two most general node groups are All Environments and PE Infrastructure. The All Environments group has a defining feature: it's marked with an Env group tag. This tag exists because there are two kinds of groups we have access to: classification node groups and environment node groups, and it goes without saying which one our All Environments group belongs to. The primary difference between the two is that while we can assign our normal classification data of classes, parameters, and variables to classification groups, environment groups are used only to assign environments to nodes. Now, let's expand the PE Infrastructure group. Our PE infrastructure is broken up into different components, and each component gets its own child group. Parent and child groups use a concept called inheritance to apply our classifications to the nodes in our groups, and this helps us further refine how our modules behave under certain circumstances. We can think of it a little like how we overrode certain settings in our profiles and roles, but on a group or environment level. When it comes down to group inheritance, the process is like Hiera inheritance or any other scope we've seen in Puppet: the children inherit all classifications defined in the parent. If we want a child group to use different parameters, we override them in the child group, much like we did in our profiles. Classes themselves cannot be removed this way, however; if the parent group uses the mysql class, the child group will always use it as well. So, let's take a look at one of these existing groups before we do anything ourselves. We've spent quite a bit of time on our Puppet master, so let's choose that. But first, let's click its parent group, PE Infrastructure. Immediately, we can see that no nodes are assigned directly to this overall group. Instead, let's look at the Configuration page. We can see that the overall puppet_enterprise class is applied to all nodes under this parent group, and we can also see where Puppet provided our master's fqdn as a parameter override for many of the values within this class. Now, let's step back and look at the PE Master group that has our master assigned to it; when we look under Configuration, we can see that a number of related classes are assigned to this group. We already gained a little experience assigning classes to this group in our installation videos, where we added the pe_repo::platform::el_7_x86_64 class that lets our single-command installer work on CentOS. We can even see when we did this underneath the Activity tab. Wrap Up Overall, Puppet Enterprise provides us with an additional group-based classification system. Regarding this system, we learned:
The difference between an environment group and a classification group
How inheritance works across groups
How to navigate groups from within the PE console

Adding Groups

00:08:41

Lesson Description:

We now want to create two groups: a parent and a child, in keeping with Puppet's best practices for defining groups. Generally, when it comes to creating groups, we want to take our high-level business requirements into consideration, and then consider how these reflect the actual infrastructure we use within our organization. We would then define our parent groups based on these established high-level requirements, and craft our child groups based on our specific use cases and considerations. For us, let's say we work for a company that does almost all of its work from a website. This means we're going to create an overall Web Server group, plus a child group called LAMP that is specifically used to set up full LAMP stacks (whereas we would normally have separate database and web hosts). To create a group, we need to log in to our Puppet Enterprise Console and return to our Classification page. We will then click Add group.... This expands a menu that presents the option to define our parent group, provide the group name and environment, and write a description of the group itself. We also have the option to check whether the group is an environment group. For our overall web server group, we leave the Parent name as All Nodes, and then provide the Group name of Web Server. Note: There are no real capitalization or style conventions to follow for the name, but make sure it's plain text and alphanumeric. Finally, set the Environment to production. Do not check the environment group option. Add a description only if you want. Click Add when done. Now let's repeat this process to add our LAMP group. This time we will set the Parent name to Web Server and the Group name to LAMP. The rest of the settings should remain the same. If we scroll to the bottom of our list of groups, we can now see our Web Server group at the bottom. Click on it to expand the group and display our LAMP child group. Let's now click on the LAMP group itself. On the Rules tab, we have the option to either define fact-based rules to add servers to our group, or simply supply the certname under the Pin specific nodes to the group option. Adding Rules To add hosts to our group via rules, all we need to do is select a fact, choose an operator, and then set the value for that operator. To see this in action, let's add our non-master hosts to this specific group. First, select the fqdn fact from the dropdown. We now want to set our operator. These operators are the same basic operators we see in math and programming: = for exact matches, ~ for close matches, != to match anything except the provided value, and a number of additional options such as greater-than or less-than. Now, in an ideal world, our hosts would most likely have hostnames related to their purpose (web1.mylabserver.com, web2.mylabserver.com, and so on). If this were the convention we were using, we could set our operator to ~ and then provide web as our value. We can even try this with the username that our current hostnames are based on. For example, if the servers use the ellejaclyn username, we type ellejaclyn for the value and can see that there are three node matches. That's not how our playground servers are set up, though, unfortunately.
So instead, let's change our operator to != and provide our master's hostname as the value. Click Add rule, then Commit 1 change. Alternatively, we have the option of adding a host-level role fact to each of our non-master nodes and then simply using that fact to assign our nodes, but this is outside the scope of this lesson. That said, we've already covered how to add custom facts, so if you prefer this way or just want the extra challenge, feel free to add a role fact. We may also choose to pin each host by its certname. Finally, let's finish out this lesson by logging in to our master via SSH and removing the node definitions from our main manifest:

sudo vim /etc/puppetlabs/code/environments/production/manifests/site.pp
Delete both node definitions for our non-master hosts and clear out any data you have in the default definition. Wrap Up We've now created two groups in total, a parent and a child, for us to work with. In this lesson, we've also learned:
How to add either a classification or an environment group
How to assign nodes to groups using the certname
How to assign nodes to groups using fact-based rules

Defining Data

00:06:28

Lesson Description:

Let's return to our Web Server group from the previous lesson by logging in to our PE Console and viewing our Classification page. Our groups are created, and nodes assigned, but we still must complete the actual classification part where we map the classes to the group. For the group classification setup, we're going to be abandoning the roles and profiles in favor of assigning our component modules directly — which is a perfectly valid way of classifying our nodes, and will also let us demonstrate some of the more interesting classification features we have open to us. To do this, let's first click on the Configuration tab. We can now begin to add our classes. For our overall web server configuration, we're going to be specifically adding our apache module, and that same PHP module we used in our overall apache profile. To add a class, perform a search for it in the Add new class box, and then click Add class. Make sure the classes you add are specifically apache and php, not any of the subclasses they contain. Notice, however, that it truly is only our classes — should we search for our apache::vhosts defined type, it is not available to us. Should we want to access it and assign it to our groups in this method, we would need to create an additional wrapper class for it, then add that class to our init.pp as normal. Adding Parameters If we look now at our added classes, we see that the option exists to add different parameters to each. Let's click on the Parameter name dropdown for apache. Here, any of the parameters that we provide to our init.pp file are available for us to change. Let's do this by changing our install_ensure option to latest, and click Add parameter. Now, let's commit the changes we made by clicking Commit 2 changes. We also want to update our LAMP group, so let's now select this group from our Classification page. Right now, our Web Server group installs Apache and PHP — this means that our LAMP group does, too. But we still need to add MySQL, so let's use the Add new class ability to add our mysql::server class. Click Add class when done. But what if we want our LAMP stack to use different Apache parameters than the ones we have for our Web Server group? Well, this is where the Data section of our configuration comes in. Simply search for the apache module, select your desired variable, and override the parameter that way. Let's do this by setting our install_ensure variable back to present. Save the parameter, and then commit the changes. Adding Variables That said, using the parameters option for each of our classes is not the only way to provide variables. If we wanted to set any general variables for our overall group itself, instead of working with them on a class-by-class basis, we can select the Variables tab, then provide any variable we use as a key with whatever value we want. For example, this would be an ideal place to set the information for our virtual host's file, and by adding something like servername to our global variables, we could also reference it in other modules that may need that same information. Note: Our variables cannot use the :: scope indicator; therefore, we can't specify module-specific variables here, as these should be handled by adding parameters to classes.
Wrap Up In this lesson, we assigned classes to our groups, learning:
How to set class-specific parameters for each group
How to set parameters for classes assigned to the parent group
How these parameters work alongside group inheritance
How to set top-level variables that can be used across the group

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:15:00

Environments

Environment Basics

00:06:24

Lesson Description:

We've been sticking to our production environment thus far while working through this course, but any good engineer or administrator knows that we shouldn't work on production alone. Instead, we should have two or more environments so that we can test our code safely, without disrupting customers and coworkers. Puppet supports this practice by letting us divide our Puppet code itself into environments. Within Puppet, an environment is an isolated branch of modules, data, and variables that we use to configure part of our infrastructure. If we have particularly segmented teams, we can also use environments to keep each team's Puppet code separate. So how do we add an environment? All we have to do is add a new directory to our /etc/puppetlabs/code/environments path and ensure that the new environment contains both manifests and modules directories. Let's go ahead and create an environment called staging:

$ cd /etc/puppetlabs/code/environments
$ sudo mkdir -p staging/manifests staging/modules
We also have the option to add a hiera.yaml and data files if we wish to use environment-level Hiera data, and an environment.conf file should we want to update any configurations. We only have four settings for this file, and we can see a commented description of these in our production environment's environment.conf:
$ cat production/environment.conf

# modulepath = ./modules:$basemodulepath
# manifest = (default_manifest from puppet.conf, which defaults to ./manifests)
# config_version = (no script; Puppet will use the time the catalog was compiled)
# environment_timeout = (environment_timeout from puppet.conf, which defaults to 0)
    # Note: unless you have a specific reason, we recommend only setting
    # environment_timeout in puppet.conf.
Essentially, this lets us change our manifests and modules directories if we choose to do so, define an executable script that will generate a config_version (configuration version), or override the default environment timeout. For the most part we will not have to update this file — these values tend to instead be set globally in our puppet.conf file. As for assigning nodes to various environments, we have a number of options for this. We can assign an environment fact to each node, define a group in our PE Console, assign our nodes in the environment's main manifest as we did earlier in this course, or we can take a more full-featured option, which is what we'll be looking at for the rest of this course. Puppet offers us the option to use r10k and Code Manager to manage our Puppet code and its associated environments as a whole. Code Manager also allows us to sync our Puppet code across masters in any multi-master setups. Note: Remember r10k is the backend to Code Manager but can be used on its own to manage our environments as well. Wrap Up In this lesson, we learned some environment basics, including:
How to create an environment
Environment use cases
How to change our environment settings

Using a Control Repo

00:06:42

Lesson Description:

Before we set up Code Manager itself, we want to set up what is known as a control repo. A control repo is a Git repository that lets us use version control to track, maintain, and deploy our Puppet code. As we update the control repository, our respective environments will also be updated. At minimum, a control repo needs to contain a production branch, a Puppetfile, and an environment.conf file with a modified modulepath that lets us set environment-specific modules and settings. And while we can create a fresh repository and add these files ourselves, PuppetLabs suggests that most users copy the example control repository they provide, so we will do exactly that. Create a Key Pair First, we need to create a key pair that we can use along with our control repo, and since this is stored in the puppetserver configuration directory, we'll need to drop to root:

$ sudo -i
# ssh-keygen -t rsa -b 2048 -P '' -f /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa
Note: We must not include a password for this key. Now, set the key's ownership to the pe-puppet user and group:
# chown pe-puppet:pe-puppet /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa
Set Up Git and GitHub We now want to create our Git repository. We'll be using GitHub in this course, but if you have a preferred Git host, feel free to use that. Log in to GitHub, then click New to create a new repository. Name the repository something relevant; I'll be going with control-repo. Click Create Repository. We now want to associate our relevant SSH key. Click on the Settings tab, then click Deploy keys. On the command line, retrieve your public key details:
# cat /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa.pub
Add it to the Key text box on the Deploy keys page. Give the key a title, check off Allow write access, and then select Add key. Let's now copy down the control repo. Ideally, we would use a separate workstation for this, but for the sake of simplicity, let's store it in our cloud_user's home directory on our master.
# exit
$ git clone https://github.com/puppetlabs/control-repo.git
If we look in our new repo, we can see it contains all the required files, including a Puppetfile, which we'll discuss later in this section; the environment.conf and hiera.yaml configuration files; our usual data and manifests directories; related helper scripts (scripts); and what is called a site-modules directory that lets us store our roles and profiles in a more structured system.
 $ cd control-repo
 $ ls
Let's go ahead and replace the existing origin for this directory to our new Git repo:
$ git remote remove origin
$ git remote add origin GIT_REPO_URL
And push our data to GitHub:
$ git push origin production
Wrap Up We've prepared our control repo in this lesson, ensuring that we:
Created a Git repo
Cloned PuppetLabs' suggested control-repo
Set our new Git repo as the remote for our local clone

Code Manager

00:05:28

Lesson Description:

With our control repo set up, all that's left is to configure Code Manager itself. To finish up our setup and get Code Manager working alongside our repo, we want to switch to our Puppet Enterprise console. Log in, and then navigate to the Classification page. Expand the PE Infrastructure group, then select the PE Master group. Next, switch to the Configuration tab. We will now look for the puppet_enterprise::profile::master class and add some additional parameters:
code_manager_auto_configure: true
r10k_remote: the SSH address of our GitHub repo (it should begin with `git@github.com:`)
r10k_private_key: /etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa
Now, finalize the set values:
Commit the changes.
Navigate to Nodes and select your Puppet master.
Perform a Puppet run to enforce all of the changes.
Create a Deploy User We need to create a user for Code Manager to use:
Navigate to Access control > Users.
Create a new user, such as Code Manager with a login of codemanager. Click Add local user, and then select the user and generate a password.
Switch to the User roles page, and then select Code Deployers. Assign our new user to this role.
Finally, we need to finish out our user creation through the puppet-access command line tool. From the Puppet master, run:

$ puppet-access login --lifetime 365d
And log in as the codemanager user. We can then test that Code Manager is working by running:
$ puppet-code deploy --dry-run
Wrap Up In this lesson, we set up Code Manager by:
Updating our Puppet Master group; specifically, the puppet_enterprise::profile::master class
Adding a new Code Manager user
Testing our Code Manager setup

The Puppetfile

00:10:48

Lesson Description:

With Code Manager working, we can now begin defining our environments and classifying our nodes within these environments. Unlike in previous lessons, where we did this with the main manifest or through the PE console Classification page, we'll be working with what is known as a Puppetfile. If we look in our control-repo, we see that one of these has already been created in our default production branch. In this file, we define which modules we want to use by providing the name of each module and the repository where it can be found — which is why we pushed our apache module to GitHub earlier. But first, let's see how we can work with a new environment. With Code Manager, we can manage a new environment by simply adding a new branch to our repository. Recall the staging directory we created earlier in our environments folder. Let's go ahead and work with that:

$ git checkout -b staging
Now, open the Puppetfile:
$ vim Puppetfile
The top of the file initially defines the location of the Puppet Forge — this lets us easily add Forge modules to our nodes without needing to define the specific GitHub repo. In fact, let's do this with the ntp module. To add a Forge-based module, all we have to do is list the name of the module and desired version, like so:
mod 'puppetlabs-ntp', '8.0.0'
We're also provided with examples of how to add any non-Forge modules hosted on Git repositories. We'll use the same mod MODULENAME format, then define the Git location. If we're using a non-master branch, we'll also need to define the branch name. Let's add our apache module:
mod 'apache',
  :git => 'YOUR_GIT_INFORMATION'
Save and exit. We now want to push our changes to the overall control-repo:
$ git commit -am "Added NTP and Apache modules"
$ git push origin staging
Take note of how we push to the staging branch. We can now apply this to our nodes assigned to this environment — but the problem is that right now there are no nodes in this environment. Let's switch to our Puppet Enterprise Console and log in. Move to Classification, then add a new environment group, with the Environment set to staging. Add it to the All Environments parent group. Click on the new group once added. Assign either of the nodes, for example we'll be adding the CentOS host. Commit the changes. We can now return to our command line and deploy our environment. We use the --wait flag so we can view any additional feedback and ensure the process worked:
$ puppet-code deploy staging --wait
Success! But what happens to our existing environment setup when we do this? Well, let's go check that out:
$ cd /etc/puppetlabs/code
What's that? It won't let us cd into our code directory as cloud_user anymore. Since we enabled Code Manager, Puppet automatically changed the permissions on our environment directories, locking them down for regular users. Let's switch to root:
$ sudo -i
And view our staging environment:
$ cd /etc/puppetlabs/code/environments/staging
If we now look in our modules directory, we can see the two modules from our Puppetfile have been added:
$ ls modules
apache ntp
And what of our production environment? Well, since we haven't deployed any code to it, it's not even there; we now only have staging. Wrap Up We learned how to use Code Manager to actually set up our hosts through the use of a Puppetfile, which lets us:
Define the modules we want to use
Set specific information about these modules, such as a particular path or branch
Push changes to our modules through the use of puppet-code deploy

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:45:00

Orchestration and Tasks

Orchestrator

Orchestration Overview

00:06:44

Lesson Description:

One of the primary benefits of using the Puppet orchestrator is the ability to perform Puppet runs on defined nodes, node groups, or PQL queries. A second benefit is the ability to run Puppet tasks, which are one-off commands that we can execute on our target machines. These capabilities allow us to manage the rollout of changes to our hosts instead of relying on the scheduled configuration runs that kick off every 30 minutes. The Puppet orchestrator consists of two protocols: the Puppet Execution Protocol (PXP) and the Puppet Communications Protocol (PCP). PXP is the message format the pe-orchestration services use to request that a task or run be executed on a remote host, and it is also how any responses associated with that request are received. PCP, on the other hand, is the underlying protocol that PXP uses; it describes how PXP messages route to an agent and then back to the orchestrator. The overall process runs via the Java Virtual Machine (JVM) service that runs as part of pe-orchestration-services on our master-of-masters (MoM). We'll get into the details of how this works as we dive into the individual Puppet run and task lessons, but for now, let's consider what we'll need in order to complete tasks with the orchestrator. Puppet orchestrator is a Puppet Enterprise product, which means it's already configured with our PE install. With that said, we must enable it for our users, and we also have the option to adjust our orchestrator configuration for the entire setup, as well as for specific users. Allowing Orchestrator Access To provide access to orchestrator, we need to make sure our user has the appropriate permissions applied through our RBAC system. Since the user we created already has full administrator privileges, this isn't a problem, but other users will need the following role access (type: permission):
Puppet Agent: Run Puppet on the agent node
Job Orchestrator: Start, stop, and view jobs
Tasks: Run tasks
Nodes: View node data from PuppetDB
If we wish to use the puppet-job command from the CLI, then we need to enable our puppet-access token. It will be no different than when we performed the same task for Code Manager:

$ puppet-access login --lifetime 7d
Configuration Options Because PE itself manages the orchestrator service, we shouldn't change the global configuration by hand. Instead, we can create user-specific configurations by creating the ~/.puppetlabs/client-tools/orchestrator.conf file. The .puppetlabs directory was created when we generated our first access token, but we need to add the client-tools directory underneath it:
$ mkdir .puppetlabs/client-tools
Then create our configuration file for orchestrator:
$ vim .puppetlabs/client-tools/orchestrator.conf
We must format this file as JSON, and we have six settings that we can change: service-url, environment, cacert, token-file, color, and noop:
{
  "options" : {
    "service-url": "https://PUPPET MASTER HOSTNAME:8143",
    "environment": "production",
    "cacert": "/etc/puppetlabs/puppet/ssl/certs/ca.pem",
    "token-file": "~/.puppetlabs/token",
    "color": true,
    "noop": false
  }
}
Since we don't need to override most of these for our user, let's just keep the color parameter:
{
  "options" : {
    "color": true
  }
}
Save and exit. We're now prepared to use orchestrator! Wrap Up In this lesson, we've learned the following:
The function of orchestrator within our Puppet infrastructure
How to enable orchestrator for a user through RBAC and the puppet-access command
How to set up user-specific configuration options

On-Demand Puppet Runs

00:08:19

Lesson Description:

Previously, when we needed to trigger a Puppet run on any of our nodes, we used a puppet agent -t. However, Puppet Enterprise provides us with alternative options that allow us to use either the PE Console or the puppet-job command line tool to orchestrate on-demand Puppet runs on defined sets of nodes. Using the Console Let's begin by trying it out on our PE console. Log in as the user, and then navigate to the Puppet link, under Run on the left-hand side menu. This allows us to set up the circumstances of our Puppet run by defining the desired environment, the run options (such as noop for a test run), and the list of nodes we wish to run against. The list of nodes, specifically, is defined in the Inventory section, and we have the option of supplying a Node list, PQL query, or Node group. Using a Node list or Node group will be the easiest option to execute. We can search for our nodes or group, select it, and then apply the Puppet run. The PQL query option is more advanced, but we have a provided list of common queries that can aid us as we work through common orchestration use cases. Do we need to update every server that uses a specific package? Then select Nodes with a specific resource. Should we target everything through the use of a specific class? Select Nodes with a specific class, and so on. Let's go ahead and try this. Select Nodes assigned to a specific environment. By default, this is set up to perform a Puppet run on production hosts. Let's update this to staging:

nodes[certname] { catalog_environment = "staging" }
Click Submit query. Our two nodes are presented to us in the node list. Once we have verified that everything looks good, click Run job. We'll be redirected to a processing page for our job. As each node completes its Puppet run, it will report back with a status of succeeded, failed, or skipped. The Process So what exactly happened when we triggered each Puppet run through orchestrator? Let's go through this step-by-step:
The orchestrator requests classification information about the nodes on which the job is run.
The orchestrator requests the environment graph from the Puppet Server.
The orchestrator creates the job ID and begins to poll nodes.
The orchestrator queries PuppetDB for the agent version.
The orchestrator triggers the PCP broker, which starts to perform any actions concerning the targeted nodes.
The PXP agent returns its results to the PCP broker.
The orchestrator receives the results and requests run reports from PuppetDB.
The CLI Of course, we don't have to restrict ourselves to using the PE console alone. Let's switch to our command line — either our master or our workstation, whichever one we set up the token on in the previous lesson. We can perform all of the same actions we just did, this time with the puppet-job command. To trigger a run, we would specifically use puppet-job run --QUERY_TYPE QUERY. So, to run the same PQL query that we just utilized, we would run:
puppet-job run --query 'nodes[certname] { catalog_environment = "staging" }'
This will report the same output we saw on our console. We can also view this report from the console under the Jobs page. Wrap Up In this lesson, we looked at using orchestrator for on-demand Puppet runs, learning:
How to trigger a run through the CLI and console
How to use node names, node groups, and PQL queries to define the nodes we're working with
The underlying events that occur when we request an orchestrator run

Using Tasks

00:06:48

Lesson Description:

Puppet's orchestrator capabilities extend beyond the mere coordination of Puppet runs. One of the primary features of orchestrator is that it allows us to run ad-hoc commands across our nodes in a similar manner to how we perform Puppet runs. Let's go ahead and tackle this now with one of our default tasks. Using the Console Log in to the Puppet Enterprise console, and then move to the Task page from the left menu. We're first prompted to select a task. Here we have a few defaults shipped with PE: facter_task, package, puppet_conf, puppet_enterprise::dhparam, service, service::linux, and service::windows. Task names should be self-explanatory — facter_task lets us query for facts, whereas package manages packages, and so forth. Let's try to query for our os fact. Select facter_task from the menu. Note: Providing an additional description is optional. Next, we need to supply the task parameters. These are predefined options we can set, some of which are required for the task to run. Select fact for the Facter task, and then set the value of the task to os. Click Add Parameter. We do have the ability to schedule this task in the future, but leave it set to Now. Finally, we can select our nodes in the same manner as the previous lesson. Let's use the Node list option, selecting both of our non-master hosts. Select Run job when ready. As before, we're redirected to a page that outputs the results of our job. Notice how we receive the same output we would normally get when querying facts on the command line, only now it's in our console. The Process So, what happens when we run a task? The process deviates slightly from performing simple on-demand Puppet runs, consisting of the following steps:
The PE client sends a task command.
The orchestrator performs a check to see if the user is authenticated.
The orchestrator fetches the node target from PuppetDB and returns the nodes.
The orchestrator requests data from Puppet Server.
Puppet Server returns task metadata, file URIs, and file SHAs.
The orchestrator validates the task command and then sends the job ID to the client.
The orchestrator sends task parameters and file information to the PXP agent.
The PXP agent sends an initial response back to the orchestrator and checks the SHA against the local cache.
The PXP agent requests a task file from Puppet Server.
Puppet Server returns the task file to the PXP agent.
The task runs.
The PXP agent returns the results to the orchestrator.
The client requests events from the orchestrator.
The orchestrator returns the results to the client.
The CLI Additionally, we can also run tasks from the CLI. For this action, we will not use the puppet-job command — instead, it will be puppet task. The format of the command should resemble: puppet task run TASK_NAME PARAMETER=VALUE --TARGET_TYPE TARGETS. Let's go ahead and try to restart httpd on our CentOS node:

$ puppet task run service action=restart name=httpd --nodes USERNAME#c.mylabserver.com
Note: If the server has been restarted since the previous lesson, a puppet-access login must be run again. As before, the output on our CLI reflects what we'd see in the console, and we can view the job there as well. Wrap Up In this lesson, we learned how to run ad-hoc commands on our nodes via tasks, including:
How to run tasks on the CLI
How to run tasks on the console
The underlying process that occurs when we run tasks

Writing Tasks

00:11:46

Lesson Description:

Currently, our list of tasks is quite limited. While Puppet ships with some useful tasks, it is in no way an exhaustive list. So, how do we add new tasks? Well, tasks are added through modules, so when we add a module with task support, those tasks are automatically available for us to use. None of the Puppet Forge modules we're using currently ship tasks, and using them would be no different from using the pre-installed tasks, so instead let's look into writing tasks ourselves. We can write tasks in any programming language, and they should be stored inside the tasks directory within our module. How can we edit our module now that Code Manager has taken over the environment directory? Well, it's a good thing we've been using GitHub. Go ahead and pull down the apache module, either to the workstation or simply use the cloud_user's home directory on the master.

$ mkdir modules
$ cd modules
$ git clone https://github.com/USERNAME/MODULE.git apache
$ cd apache
Let's add our tasks directory:
$ mkdir tasks
While we can use any language to create our tasks, the default is set as Bash, so if we use the PDK to create our task file, it outputs a .sh file:
$ pdk new task ctl
pdk (INFO): Creating '/home/cloud_user/apache/tasks/ctl.sh' from template.
pdk (INFO): Creating '/home/cloud_user/apache/tasks/ctl.json' from template.
We're going to be creating a simple script that will run the apachectl command with defined parameters — the goal here isn't to learn advanced script writing, but to understand the context within Puppet, so we're going to keep the script itself simple. Take notice that when we created our task, a .json file was included along with our script — this is where we're going to store the metadata that defines how our script is used, as well as our parameters themselves. When we name our tasks, we should follow the usual Puppet naming structure: lowercase letters, digits, and underscores are allowed, and the task will be referred to as modulename::taskname, or apache::ctl in this case. Now let's open our script:
$ vim tasks/ctl.sh
Our hashbang has already been provided, and this is mandatory for all tasks, regardless of language. Now we can write our Bash script. No matter how simple or advanced our scripts are, there are additional standards that we can reference when writing our tasks. The primary one is that user-supplied parameters are exposed to the script as variables prefixed with PT_, so when we reference a parameter in the script, we need to include that prefix. For example, our goal here is to let users execute apachectl commands, so our script will be:
apachectl $PT_command
Now let's save and exit and look at our JSON file. Most of the work that goes into getting our script to work with Puppet can be found in this file, not the script itself. Right now, our JSON file contains placeholder metadata that we need to update:
$ vim tasks/ctl.json

{
  "puppet_task_version": 1,
  "supports_noop": false,
  "description": "A short description of this task",
  "parameters": {
  }
}
So, let's go ahead and update this:
{
  "puppet_task_version": 1,
  "supports_noop": false,
  "description": "Allows us to run apachectl commands through Puppet",
  "parameters": {
  }
}
Now we want to supply our parameters. We just have one, PT_command, but when we reference it in our parameters, we need to leave out the PT_ prefix:
{
  "puppet_task_version": 1,
  "supports_noop": false,
  "description": "Allows us to run apachectl commands through Puppet",
  "parameters": {
    "command": {
    }
  }
}
We now want to provide a description, and then a type that will work similar to how we assigned the type in our init.pp files. With that said, we need to be incredibly cautious about how we reference our type because security problems can be created that will essentially threaten the overall system's health. So, this means that instead of defining our type as a string, we must use Enum and provide a specific set of options:
{
  "puppet_task_version": 1,
  "supports_noop": false,
  "description": "Allows us to run apachectl commands through Puppet",
  "parameters": {
    "command": {
      "description": "apachectl command to run",
      "type": "Enum[start,stop,restart,status,graceful,graceful-stop]"
    }
  }
}
Save and exit, and then push these changes to GitHub:
$ git add .
$ git commit -m "Added apachectl task"
$ git push origin master
Finally, we want to apply this to our Puppet setup. However, where we add the module matters. Right now, we only have our apache module assigned to staging, but to add tasks, the parent module must be on the production environment. So, let's go ahead and move back to our control-repo:
$ cd ../control-repo/
And switch to the production branch:
$ git checkout production
Then add the module to the Puppetfile:
$ vim Puppetfile

mod 'apache',
  :git => 'https://github.com/USERNAME/MODULE'
Save and exit. We can now deploy our environment:
$ puppet-code deploy production --wait
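Once the deploy completes, we could also sanity-check the new task from the CLI before heading to the console, reusing the puppet task syntax from the earlier lesson (the status value here is just one of the Enum options we allowed):
$ puppet task run apache::ctl command=status --nodes USERNAME#c.mylabserver.com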
Test the Task To test our task, let's switch back to our PE console, and then move to the Tasks page. apache::ctl is now available in the dropdown. Select it, then set the command parameter to restart and run it against both hosts. Success! Our task is working, and we now have a greater sense of how to create and use our own Puppet tasks. Wrap Up In this lesson, we created a new Puppet task, learning:
How to add custom tasks to our Puppet setup
How to supply parameters to custom tasks
How the JSON metadata file is used in conjunction with the task script itself

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:15:00

Bolt

Bolt Overview

00:06:40

Lesson Description:

PuppetLabs' Bolt is an open-source orchestration tool that allows us to make changes on our nodes without needing to install any additional tools or services on the nodes we are managing. In this manner, Bolt is comparable to the configuration management system Ansible, which also lets users make configuration changes over SSH. We can install Bolt on Windows, macOS, or Linux, adding it directly to our workstation computer. We'll still be working from the Puppet master for this lesson as an example, but feel free to work from your preferred workstation instead. Installing Bolt is simple enough. Since we already added the Puppet 5 repository when we installed the PDK, all we need to run is:

$ sudo apt-get install puppet-bolt
The package name is the same on CentOS. If we're working on a Windows or Mac computer workstation, go to Bolt's homepage at https://puppet.com/products/bolt to download the package, and then double-click it to perform the installation. Once Bolt is installed, we can now run one-off commands and scripts, use Puppet tasks plus plans, and even perform a puppet apply of any specified manifests — which makes Bolt especially useful for quite a few tasks, such as bootstrapping our nodes to work under Puppet. As previously mentioned, Bolt uses SSH to access our hosts. So, we do need to perform a preliminary login to ensure our key exchange is working properly for a passwordless SSH login. For example, if we were to try to run a command on either of our agent nodes right now:
$ bolt command run 'hostname -f' --nodes USERNAME#c.mylabserver.com -p
We would receive an error:
Host key verification failed...
Let's log in via SSH into our remote machine for a moment, and then attempt to run the command again:
$ ssh cloud_user@USERNAME#c.mylabserver.com
$ exit
$ bolt command run 'hostname -f' --nodes USERNAME#c.mylabserver.com -p
We'll see that Bolt returns the expected value this time. Take notice, too, of the use of --nodes, which may look familiar from our orchestrator commands in the previous section. These similarities exist because it essentially works the same — although we can also replace --nodes with --targets. Additionally, we have the --query or -q option to perform PuppetDB queries so that we can define our targets. The --noop and --params options also work the same way as in our puppet-job or puppet task commands. We can also select what user we wish to work as with --user, or we can escalate our privileges with --run-as. We've also been using the -p flag so that we can log in as our cloud_user, but we can simplify this process by using SSH keys instead:
$ ssh-keygen
$ ssh-copy-id cloud_user@USERNAME#c.mylabserver.com
We can now run passwordless commands:
$ bolt command run 'hostname -f' --nodes USERNAME#c.mylabserver.com
Since we have Bolt up and running now, let's get ready to look at how we can use Bolt to perform more open-ended changes, especially comparing it to the default orchestrator that's present. Wrap Up In this lesson, we covered the basics of Bolt, learning:That Bolt is Puppet's open-source (and slightly a more advanced) orchestration toolThat Bolt works off SSHHow to install Bolt on our system and ensure it can connect to our hosts

Basic Commands

00:03:06

Lesson Description:

In the previous lesson, we already noticed a benefit of Bolt that our PE orchestrator lacks: through Bolt, we can run ad-hoc commands of any kind and are not limited to using tasks to perform actions on our hosts. While we should still create tasks (and, later, plans) for frequently used actions, we can also easily run any one-off command we need through Bolt without any additional prerequisites beyond having Bolt installed. Moreover, unlike the orchestrator, Bolt works over SSH, so our targets do not need to have Puppet installed, which makes it a popular tool for jobs like bootstrapping nodes into Puppet or performing other preliminary work. So, let's start with the basics. We already saw that we can run commands like:

$ bolt command run 'hostname -f' --nodes rabbitheart2c.mylabserver.com
However, we should note that when using any redirection in our Bolt commands, we need to wrap the command in an explicit shell invocation so that the redirection is evaluated on the target. So if we wanted to write some information into a file, we would run:
$ bolt command run "bash -c 'echo hello > /tmp/hello'" --nodes rabbitheart2c.mylabserver.com
We also have the option to run local scripts on our nodes. This works through a similar command, but instead of supplying a command string, we use the script subcommand and supply the location of a script:
$ bolt script run ../script.sh --nodes rabbitheart2c.mylabserver.com
Note: The script must contain the appropriate interpreter declaration at the top of the file. Finally, we can also use Bolt to manage files. Let's go ahead and pass in our .vimrc config file:
$ bolt file upload .vimrc ~/.vimrc --nodes rabbitheart2c.mylabserver.com
With the basics out of the way, we can now look at working with more robust Puppet tasks. Wrap Up In this lesson, we covered basic Bolt commands, such as:
Running ad-hoc commands of any kind
Using a script to make changes on our targets
Copying a file to our targets

Running Tasks

00:04:07

Lesson Description:

Tasks in Bolt are no different from the tasks we tackled using the orchestrator; in fact, we're running the same tasks. Everything we learned in our previous section on tasks therefore applies to Bolt as well, except that we'll use the bolt command instead of puppet task. To run a task in Bolt, we use the bolt task run command, and we can again pass parameters in the parameter=value format, similar to when using the orchestrator on its own. With that said, if we try to run our apache::ctl task, it'll fail:

$ bolt task run apache::ctl command=restart --run-as root --sudo-password --nodes rabbitheart2c.mylabserver.com
Could not find a task named "apache::ctl". For a list of available tasks, run "bolt task show"
This failure occurs due to an issue with Bolt's module path:
$ bolt task show
...
MODULEPATH:
/home/cloud_user/.puppetlabs/bolt/modules:/home/cloud_user/.puppetlabs/bolt/site-modules:/home/cloud_user/.puppetlabs/bolt/site
To solve this, we can update the modulepath, but first, let's create an overall modules directory within our home directory:
$ mkdir modules
$ mv apache modules/
Bolt's configuration can be altered by creating a bolt.yaml file in .puppetlabs/bolt/:
$ vim .puppetlabs/bolt/bolt.yaml
modulepath: "/home/cloud_user/.puppetlabs/bolt/modules:/home/cloud_user/.puppetlabs/bolt/site-modules:/home/cloud_user/.puppetlabs/bolt/site:/home/cloud_user/modules"
Let's go ahead and run our apache::ctl task now:
$ bolt task run apache::ctl command=restart --run-as root --sudo-password --nodes rabbitheart2c.mylabserver.com
Note: We must ensure that we're working as root for this task, supplying our escalation password with --sudo-password; by leaving the value blank, we're prompted for the password when the command runs. This time the task succeeds, so we can now add and run tasks in Bolt!

Wrap Up

In this short lesson, we learned how to:

- Run a task in Bolt
- Add new tasks in Bolt
- Update our bolt.yaml configuration file

Using Orchestrator with Bolt

00:10:13

Lesson Description:

While tasks are wonderful for common actions that are simple to run, there are times when we want to write something more robust than a simple script. Plans allow us to combine tasks with other logic so that we can complete more complex operations through Bolt. Right now, we have a few prepackaged plans at our disposal:

$ bolt plan show
aggregate::count
aggregate::nodes
canary
facts
facts::info
puppetdb_fact
reboot
To run a plan, we use the bolt plan run command, feeding in parameters where appropriate:
$ bolt plan run facts --nodes rabbitheart2c.mylabserver.com
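If we're not sure what parameters a plan accepts, we can also ask Bolt to describe an individual plan (an optional check, not part of the lesson's steps):
$ bolt plan show facts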
We can also create our own plans. To do so, we need to add a plans directory to our module:
$ cd modules/apache/
$ mkdir plans
We're going to create a plan to remove Apache from our hosts, using a combination of commands, tasks, and parameters. Plans follow the same naming structure as classes and tasks, but there is currently no PDK command to generate them. We'll have to create our plan the old-fashioned way: from scratch, in our text editor.
$ vim plans/remove.yaml
Plans consist of three structures: steps, parameters, and returns, with returns being completely optional. The steps are an array of actions to complete, in order, known as "step objects." We can use five different kinds of step objects: command, task, script, file (which uses source and destination keys), and plan. Each one performs the action for which it's named: the command step object runs a command, task runs a task, script runs a script, and so on. Let's start by creating a task step object that uses apachectl to perform a graceful shutdown of our Apache service:
steps:
  - task: apache::ctl
    target: rabbitheart2c.mylabserver.com
    description: "Shuts down Apache service"
    parameters:
      command: graceful-stop
We're now going to use command to create a copy of our log files before we remove our package:
  - command: "cp /etc/httpd/logs/access_log /tmp/apache_access"
    target: rabbitheart2c.mylabserver.com
    description: "Create snapshop of apache log files"
Finally, remove the package:
  - task: package
    target: rabbitheart2c.mylabserver.com
    description: "Uninstall Apache"
    parameters:
      action: uninstall
      name: httpd
Let's save and exit. Now we can test this with:
$ bolt plan run apache::remove --run-as root --sudo-password
Note: Notice the specific use of escalated privileges here. We can also write plans using the usual Puppet DSL. However, you'll find that while plan code uses many of the familiar Puppet constructs, it isn't quite the same as the manifest code we're used to. We can see this in practice by converting our current YAML plan:
$ bolt plan convert plans/remove.yaml
# WARNING: This is an autogenerated plan. It may not behave as expected.
plan apache::remove() {
  run_task('apache::ctl', 'rabbitheart2c.mylabserver.com', "Shuts down Apache service", {'command' => 'graceful-stop'})
  run_command("cp /etc/httpd/logs/access_log /tmp/apache_access", 'rabbitheart2c.mylabserver.com', "Create snapshop of apache log files")
  run_task('package', 'rabbitheart2c.mylabserver.com', "Uninstall Apache", {'action' => 'uninstall', 'name' => 'httpd'})
}
Take notice of how the various tasks, commands, and so on are run with functions that use the run_ prefix followed by the name of the step object. We then feed our attributes into the parentheses and provide our parameters much like we define parameters in resources and classes: encased in curly brackets and mapped via hash rockets. There are some benefits to using the Puppet language over YAML, however. Namely, we can include logic, such as if or case statements, just as we can in any other piece of Puppet code. If you're looking for a way to expand on this lesson, a good exercise would be to add logic to the above plan so we can remove Apache from both Debian-based hosts and our CentOS host; a hedged sketch of one approach appears after the wrap-up below.

Wrap Up

We worked with plans in this lesson, learning how to:

- Write plans in YAML and the Puppet DSL
- Convert plans
- Run plans on our hosts
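As a hedged sketch of that exercise (the plan name, parameter, and structure here are illustrative assumptions, not course code), a Puppet-language plan might branch on each target's os.family fact to pick the correct package name:

plan apache::remove_any(TargetSpec $nodes) {
  # Gather facts for the targets first so we can read os.family below
  run_plan('facts', nodes => $nodes)
  get_targets($nodes).each |$target| {
    # Red Hat-family hosts ship Apache as httpd; Debian-family hosts use apache2
    $package = $target.facts['os']['family'] ? {
      'RedHat' => 'httpd',
      default  => 'apache2',
    }
    run_task('package', $target, 'action' => 'uninstall', 'name' => $package)
  }
}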

Hands-on Labs are real, live environments that put you in a real scenario to practice what you've learned, without any extra charge or account to manage.

00:30:00

PuppetDB

PuppetDB Overview

PuppetDB Overview

00:11:13

Lesson Description:

PuppetDB is a PostgreSQL-backed database that collects data concerning our nodes. Included with Puppet Enterprise and available for open-source Puppet, PuppetDB also enables some advanced Puppet features, such as the ability to use exported resources. We already have PuppetDB installed, but if we wanted to add it manually, we could do so from the Puppet 5 repository that we've already enabled, through the puppetlabs-puppetdb module, or from source. We also have the option to split our installation so that PuppetDB runs on a separate server.

Configuration

Post-installation, PuppetDB's configuration files can be found in the /etc/puppetlabs/puppetdb/conf.d directory. The files inside are stored as .ini files and portioned out into different [sections], similar to many of the other Puppet configuration files we've worked with in this course. Take note, however, that if you installed PuppetDB through the Puppet Forge module, these changes should be made in Puppet itself, not directly in the configuration file. This is also true for us as Puppet Enterprise users: if we wanted to make changes to our PuppetDB, we would do so through the PE console, under the PE PuppetDB group. If we navigate over to this group now, we can expand the Parameter name menu to view the primary configuration options we should familiarize ourselves with. For the most part, these are all self-explanatory: we can provide specific database information, change our RBAC setup for PuppetDB, define the certname, change our port setup, and even update our java_args expression for startup.

CLI

If we were working from an open-source Puppet setup, our options for using PuppetDB on the command line would be somewhat limited. We would still be able to use exported resources, but we'll discuss those further in the next lesson. When it comes down to the CLI, we have a single option that lets us check our node status:

$ sudo puppet node status USERNAME#c.mylabserver.com

Currently active
Last catalog: 2019-06-25T16:21:37.061Z
Last facts: 2019-06-25T16:21:36.539Z
With that being said, since we're using Puppet Enterprise, we also have access to the puppet query and puppet db commands.

Puppet Querying

The puppet query command allows us to run Puppet queries on the CLI; we'll be using this extensively in the next lesson, so for now let's just get a sense of how it works by running a familiar query. Note: A renewal of your access token may be needed:
puppet query "nodes[certname] { catalog_environment = 'staging' }"
PuppetDB Configuration Options

The puppet db command provides us with two abilities: we can export our current database with puppet db export FILENAME.tgz, or import an existing database with puppet db import FILENAME.tgz. Additionally, we can use one of three anonymization profiles when creating exports:

- full: Anonymizes all data, including node names, resource types, resource titles, parameter names, values, any log messages, file names, and file lines. The structure of the data remains unaffected.
- moderate: The recommended profile. Node names, resource titles, parameter values, log messages, file names, report log messages, and time-based metric names are anonymized.
- low: Node names, parameter values, and log messages are anonymized.

Metrics

Finally, we can view the metrics for our PuppetDB server through an SSH tunnel. Since the performance of our database is integral to its function, we should know how to access its metrics page. To do this, we need to set up an SSH tunnel from our workstation:
$ ssh -L 8080:localhost:8080 cloud_user@PUPPETDBHOST
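With the tunnel in place, the PuppetDB performance dashboard should then be reachable from a local browser (assuming the default dashboard path):
http://localhost:8080/pdb/dashboard/index.html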
Note that for us, our PUPPETDBHOST is also our master.

Wrap Up

In this lesson, we learned the basics of PuppetDB, including:

- How to install PuppetDB
- PuppetDB configuration options
- The basics of the PuppetDB CLI
- How to view our PuppetDB metrics

Exported Resources

00:14:37

Lesson Description:

PuppetDB's capabilities extend beyond just allowing queries for data. It also provides us with the ability to store resource information that we can then use in other modules and manifests. The best way to get a sense of how this works is to write it ourselves, so let's create an exported host record that we can reference to build an /etc/hosts file from our web servers' IP addresses and FQDNs. Let's go ahead and move into our faux-workstation, where we stored our Apache module:

$ cd modules/apache
We're going to update our config class to include an exported host definition. Note that the host resource type is deprecated in Puppet 6.0 and above, but it is available in our version of Puppet, and it provides the clearest example of how exported resources work. The host resource creates an entry in /etc/hosts using the provided parameters. Open up the config.pp manifest:
$ vim manifests/config.pp
Let's add the host resource type as though we weren't exporting it first:
  host { "${hostname}":
    host_aliases => "$fqdn",
    ip           => "$ipaddress",
  }
Then, to signify we want this information stored in PuppetDB, we prepend the resource type with @@:
  @@host { "${hostname}":
    host_aliases => "$fqdn",
    ip           => "$ipaddress",
  }
Save and exit, then commit the changes to git:
$ git add .
$ git commit -m "Added host exported resource"
$ git push origin master
We then want to force a Puppet run on our two non-master hosts. We can do this through the command line via orchestrator if we wish:
$ puppet job run --nodes NODELIST
Finally, to confirm our resource has been exported, run:
$  puppet query "resources[certname, type, title, parameters] { exported = true }"
We now want to create a base class that will create host records based on the information pulled from this exported resource. Let's drop into our modules directory:
$ cd ..
And create a base module — fill out the PDK form as normal:
$ pdk new module base
$ cd base
Let's use the init.pp to import our hostname information:
$ pdk new class base
$ vim manifests/init.pp
We can now reference our exported resource with Host <<||>>:
class base {
  Host <<||>>
}
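As a side note (a hedged sketch, not part of the lesson's steps), the collector can also take a search expression if we only want a subset of the exported records. Since exported resources are automatically tagged with the names of the classes that declared them, we could collect only the records exported by our apache module:

class base {
  Host <<| tag == 'apache' |>>
}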
We'll want to add this to GitHub, as well. Create a new GitHub repo using the instructions from the beginning of the "Module Authoring" section, then add your base module to Git:
$ git init
$ git add .
$ git commit -m "Base host module"
$ git remote add origin https://github.com/USERNAME/REPO.git
$ git push origin master
We now want to add this to our staging environment:
$ cd
$ cd control-repo
Ensure that we are on the staging branch:
$ git branch
Add the new module to your Puppetfile:
$ vim Puppetfile

mod 'base',
  :git => 'https://github.com/USERNAME/REPO.git'
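If we ever wanted to pin the module to a specific branch or tag instead of the repository's default branch, the Puppetfile syntax also accepts a :ref key (shown here with a hypothetical branch name; the lesson itself doesn't require this):

mod 'base',
  :git => 'https://github.com/USERNAME/REPO.git',
  :ref => 'production'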
Save and exit, then commit the change:
$ git add .
$ git commit -m "Added base module to Puppetfile"
$ git push origin staging
Deploy the changes:
$ puppet code deploy staging --wait
We will want to access our Classification page on the PE console now and select our dev-staging group. Lastly, add the base class to the configuration, commit the changes, and then perform a Puppet run on each host.

Wrap Up

In this lesson, we learned about exported resources, including:

- How to export fact data in a resource
- Where that data is stored and how to view it
- How to import an exported resource

The PQL

PQL Basics

00:08:18

Lesson Description:

While we can connect to our PuppetDB database and run SQL-based queries on it as we would any database, PuppetDB also has its own query language that we should be aware of. Any query written in the Puppet Query Language (PQL) follows the same basic structure:

<entity> [<projection>] { <filter> <modifiers> }
Entities

Out of all these components, only the entity is required for the query to run. An entity is the context the query executes on; it helps define the results we will receive and essentially determines what we are querying for, such as node information, environments, facts, or resources (see the PuppetDB documentation for the full list of entities).

Projections

The projection allows us to define a subset of our overall query that we wish to return. For example, we might want to parse our node entity using our facts or environment. We also saw this when we searched for our exported resource:
resources[certname, type, title, parameters] { exported = true }
Instead of retrieving all of the resources entity's information, we projected only the name of the node, the resource type and title, and any set parameters. Had we left the projections out, we would have instead received the full set of information, including the environment, any tags, and more:
$ puppet query "resources { exported = true }"
Projections can also use functions instead of plain fields. Functions include count(), avg(<field>), sum(<field>), min(<field>), and max(<field>); some of these require an additional field value to work.

Filters and Modifiers

Next, we have the ability to further filter our results. This portion of the query is enclosed in curly brackets and can reference entity fields, facts, and environment information. Filters are often further limited by the provided modifiers. So if we wanted to filter for nodes that start with a particular certname (say, your Linux Academy username), the filter would be certname and the modifier would be your username. Here's an example:
$ puppet query 'nodes[certname] { certname ~ "^ellejaclyn" }'
Notice how we use an operator to map our modifier to the filter, in this case the regex match operator (~).

Additional Sorting

Finally, we can further manipulate our Puppet query results by either grouping or paging the results. When we group our results, we condense all rows with the defined value into a single row. For example, if we wanted to see a simple list of fact names, we can use group by:
$ puppet query "facts[name] { group by name }"
We can also use SQL-like paging clauses: limit, offset, and order by. So if we wanted to examine a list of reports in the order that they were received, we would run:
$ puppet query "reports {certname = MASTER order by receive_time }"
Wrap Up

In this lesson, we discovered the key components of a Puppet query, including the:

- Entity
- Projection
- Filter
- Modifier

Building Puppet Queries

00:07:25

Lesson Description:

For the most part, we've kept our queries simple for the sake of explaining their various components. Yet we can build our queries out to work in the most specific of circumstances. Let's first begin with a few basics, however. Say, for example, we just wanted to filter our PuppetDB data via a fact:

$ puppet query "inventory { facts.os.name = 'Ubuntu' }"
Note that inventory is an alternative endpoint for the facts entity that lets us better filter on our facts. But this isn't necessarily enough to narrow our results down to what we want. What if we wanted to query only for facts on Ubuntu hosts with the apache2 service? Well, we can use and to further build our query:
$ puppet query "inventory { facts.os.name = 'Ubuntu' and resouces { type = 'Service' and title = 'apache2' } }"
We can use and as much as we need, as well, chaining together all sorts of queries. Of course, we can also use or in place of and to manipulate our query. Let's look for Ubuntu or Red Hat-family hosts:
$ puppet query "inventory { facts.os.name = 'Ubuntu' or facts.s.family = 'RedHat' }"
Finally, let's close out our lesson by looking for Ubuntu or CentOS hosts on the staging environment containing either the httpd or apache2 service:
$ puppet query "inventory { facts.os.name = 'Ubuntu' or facts.s.name = 'CentOS' and resources { type = 'Service' and title = 'apache2' or title = 'httpd' } and environment = 'staging' }"
Wrap Up This lesson was all about using components from the previous lesson to build practical Puppet queries!

Reports and Troubleshooting

Reporting

The Node Graph

00:03:49

Lesson Description:

It feels as if we've spent countless hours working in Puppet now. Even so, the number of modules we've touched in this course is relatively small compared to the immensity of a full configuration management setup. As our infrastructure grows, we'll want to be able to easily pinpoint how our modules are contributing to changes. With Puppet Enterprise (PE), this is where the node graph swoops in. A node graph is a visual representation of the changes Puppet makes on a node, based upon its most recent catalog compilation. To look at a node graph, we simply log in to the Puppet Enterprise console, select a node from the Nodes page, and then click the Node graph link below the certname of the host. We are then presented with a flow graph of our overall setup for that single node. Each individual module class that was applied is shown, with the color of the dot indicating whether it was unchanged, changed, or encountered a failure. Underneath each module's overall class, we see the individual classes or resource types, breaking each configuration section down. If we click on any of these items, we can see a list of tags and dependencies for that individual item. We can also perform some simple filtering on the node graph itself: it allows us to search by resource, stage, class, tag, or status, and it centers on or highlights the matching area of the map as the filter is applied. With that being said, the node graph is not the only option we have for investigating our nodes and node changes. For a more comprehensive and detailed look, we'll want to consider our reports.

The Report Table

00:05:25

Lesson Description:

Each time a Puppet run is performed on one of our hosts, a report of what occurred is recorded (unless otherwise disabled in the configuration). To view these detailed reports, we need our Puppet Enterprise console; specifically, we need to move over to the Reports page. Here we are presented with a chronological list of all reports for our systems. We can filter these by the run's status or simply view them on their own; each report shows which node it came from, along with how many resources failed, changed, remained unchanged, were skipped, or received corrective changes. By selecting one of these reports (simply click on the report time), we can see a list of resource events that occurred on the host. Here we can break our Puppet runs down to individual resources, allowing us to track and identify where our modules are failing (or otherwise). For more specific details, especially around changes or failures, we can select the Log tab. This provides us with a formatted log of the Puppet run, with warnings color-coded in yellow and failures highlighted in red. If we recall the feedback received when using puppet agent -t runs on our hosts earlier in this course, it's the same information, only presented in our PE console this time. Finally, we can also view the Metrics of the run. This section informs us of basic facts about the run, such as the environment, the time taken, when the run began, the number of resources being managed, and much more. Overall, we may find ourselves turning to the Log tab quite often when troubleshooting issues. While the Events and Metrics pages provide important context and data for common troubleshooting procedures, we'll discover the most helpful information in our logs.

Filtering Reports

00:03:49

Lesson Description:

Let's now return to the overall Reports page. By default, it displays all reports in chronological order, or we can filter by run status. However, by using the Filter by fact value option, we can refine our reports further. This filtering capability can be especially useful when tracking certain trends in run status. Still working from the Reports page, select Filter by fact value. We have two options for our defined rules here: we can match all rules or any rules. Essentially, when we match all rules, we place an and in between our rules: osfamily = Debian AND pe_server_version ~ 2018.1. As such, the above expression would match only our master. Alternatively, if we switch to the any rules option, we would read this same set of rules as: osfamily = Debian OR pe_server_version ~ 2018.1. This time, results for both our Ubuntu node and our master are returned.

Exporting Data

If we want to track down an interesting trend or need a record of information, we can also download a .CSV file of our reports. Underneath the filter area, we have the option to Export data. Selecting this downloads a local copy of the report table as displayed, with any set filters applied.

Troubleshooting

Installation

00:06:09

Lesson Description:

We are going to finish out this course by spending some time on troubleshooting strategies for situations we may come across while using Puppet. Since it makes sense to start from the beginning, we'll cover ways to troubleshoot a Puppet Enterprise (PE) installation first. Most of the time, Puppet installation issues fall into one of two categories: communication issues and infrastructure support issues. How we begin troubleshooting depends primarily on the way the issue presents itself.

The PE Installer

When using the Puppet Enterprise installer (with any version), we most often notice a problem when it attempts to connect to its localhost using the FQDN we provided Puppet. This can usually be fixed either by updating the /etc/hosts file to ensure the FQDN is appropriately mapped or by checking the firewall for interference. If there is firewall interference, we may want to disable the firewall entirely and implement a firewall module via Puppet, or we can open the correct port numbers.

Open-Source Installation

If problems arise with connecting to an open-source installation, we probably won't notice the issue until we attempt to start the Puppet Server service itself. In this case, we should check the firewall and /etc/hosts file, but we should also look at the Puppet configuration itself: /etc/puppetlabs/puppet/puppet.conf. Here, we may need to manually add the certname parameter and set it to our FQDN. We should have this set in both the [main] and [master] sections of the file.

Communication Timeout Issues

If communication between the host and itself is not the issue, then it might be the size of our host causing the problem. If we are working outside of the recommended specifications for the Puppet master, we need to update the /etc/default/puppetserver file and reduce the memory settings to match our smaller setup:

    JAVA_ARGS="-Xms2g -Xmx2g -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"
Note: We can set the 2g portions of the above line to be as small as 512m if needed.

A Final Note

If our installation goes wrong, we can rerun the Puppet installer an unlimited number of times without having to worry about negative repercussions. If we have the time and haven't pulled in any nodes yet, it might be worth rerunning the install. If we have trouble identifying the source of a problem, we should pay close attention to the logs and examine them thoroughly.

Wrap Up

In this lesson, we learned how to troubleshoot a failed install by:

- Checking our DNS information
- Reviewing our firewall settings
- Changing the memory expectations upon startup

Communication

00:07:48

Lesson Description:

While we discussed the impact that DNS issues can have on the installation process, there are several other places where communication failures between components can cause issues in a Puppet setup, specifically the communication between the Puppet agent on our hosts and our master(s). To tackle these problems, there are some basic things we should confirm to ensure our components can talk to one another.

DNS and Firewall Issues

Going back to the Puppet master discussed in the previous lesson, the first thing to check is that there are no issues with DNS or a firewall between our Puppet master and any agents. If we are uncertain whether the connection works, we can run:

$ telnet MASTER_FQDN 8140
If this fails, perform the same checks from the previous lesson on both the master and the agent node. We should also make sure that the Puppet Server service has started on the master (pe-puppetserver or puppetserver, depending on the Puppet version). Finally, another quick check we can perform is to confirm that the time is in sync between the hosts. We can verify this manually or, for an easier fix, add NTP to all our hosts.

Certificate Settings

If our DNS and firewall settings look okay and the issue persists, there are a few other things we want to consider, such as the certificate settings for our agent nodes. The easiest check involves ensuring that the node's certificate has been accepted on the appropriate master. We can confirm this by running puppetserver ca list --all on the master and checking that the certificate for the problem node has been signed. If the problem persists, then the issue might be with the certificate itself: either the certificate has invalid dates, or our node shares a name with a previous node that has since been removed from the infrastructure, but not from Puppet. In both of these instances, we should reissue the node's certificate. On the master:
$ puppet cert clean NODE_NAME
On the problem node:
$ rm -r $(puppet agent --configprint ssldir)
$ puppet agent -t
On the master:
$ puppetserver ca sign --certname NODE_NAME
Wrap Up

In this lesson, we reviewed common causes of issues between our nodes and master, including:

- DNS and firewall issues
- Time synchronization problems
- Certificate troubleshooting

Code Manager

00:08:58

Lesson Description:

Code Manager is one of the more complicated components in our Puppet setup, and when it comes to troubleshooting issues with it, we have to take a less straightforward approach.

Logs, Statuses, and Communication

If Code Manager fails, the first and easiest task we can perform is to run the puppet code status command. This command provides a helpful overview of our Code Manager deployment and indicates the overall status of Code Manager plus the file service it uses. A working Code Manager deployment will display the state as running, while a failure will generally look like the command is not working at all. For example, a response of Connection timed out (os error 110) indicates that Code Manager is struggling to connect to our Puppet Server, so we would want to perform our usual DNS checks, as well as ensure that the Puppet Server service has been started. The puppet code status command may also return information about an invalid configuration. In this case, we want to either check our config at /etc/puppetlabs/puppetserver/conf.d/code-manager.conf (for manual setups) or view our class settings for Code Manager in the PE console. If we are still unsure where the issues are coming from, looking into our Code Manager logs would be the next option. These logs are found at /var/log/puppetlabs/puppetserver/puppetserver.log and contain not just Code Manager logs but information for our whole Puppet Server setup. Therefore, we should look for log entries for two specific endpoints: /v1/deploys or, if using a webhook along with Code Manager, /v1/webhook. Finally, we can test the communication between Code Manager and the control repo with the puppet code deploy --dry-run command. If there are issues with our Git repo, we should receive an Unable to determine current branches for Git source warning.

Puppetfile Troubleshooting

Our issue may not be with Code Manager itself, however. If Code Manager appears to be working but our end results aren't what we expect, we should follow another troubleshooting tactic. First, we'll want to check our Puppetfile: we can verify our syntax via the r10k puppetfile check command, but beyond that we should investigate the Puppetfile's sources. Specifically, we should be looking at any version changes for Forge modules, plus ensure that any referenced Git repos still exist. We can confirm this further by copying our control repo to a temporary directory and running:

$ sudo -H -u pe-puppet bash -c '/opt/puppetlabs/puppet/bin/r10k puppetfile install'
This will install all modules in the Puppetfile and note any errors that occur during the test deploy.

Wrap Up

In this lesson, we looked at both communication and Puppetfile-related issues that we can encounter when running Code Manager, including:

- How to test the connection between Code Manager and the Puppet Server
- Where to find our Code Manager logs
- How to check our Puppetfile and control repo for issues

PostgreSQL

00:04:54

Lesson Description:

Puppet uses PostgreSQL as its backend database. If we encounter issues with the database’s performance, it can often be traced back to it using either too much space or memory. Both can be solved either through an easy configuration change or by implementing a split configuration where the database gets housed in a separate host from the Puppet Server. If we find we're running out of space on our host due to our database, we can enable the autovacuum setting, which performs routine maintenance on our database, recovering disk space and updating any related statistics. If we are running out of RAM, we first want to confirm by reviewing the /var/log/pe-postgresql/pgstartup.log for the following error:

FATAL: could not create shared memory segment: No space left on device
DETAIL: Failed system call was shmget(key=5432001, size=34427584512,03600).
Once the issue is confirmed, we would want to update our kernel settings to use about 50% of the existing RAM for our shmmax setting. Our shmall setting should be shmmax divided by the page size, which we can determine by running getconf PAGE_SIZE.
$ sysctl -w kernel.shmmax=<50% of RAM, in bytes>
$ sysctl -w kernel.shmall=<shmmax divided by PAGE_SIZE>
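As a worked example, assume a host with 16 GB of RAM and getconf PAGE_SIZE returning 4096: 50% of 16 GB is 8589934592 bytes, and dividing that by the page size gives 2097152:
$ sysctl -w kernel.shmmax=8589934592
$ sysctl -w kernel.shmall=2097152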
Additionally, we may find issues due to conflicting port usage. In this case, we would want to change our database port via the puppet_enterprise::puppetdb_port setting.

Wrap Up

In this lesson, we reviewed some common database issues, including how to:

- Fix space issues
- Update our RAM settings
- Change our database port

High Availability

00:02:23

Lesson Description:

Although our demo setup includes only a single master, we also want to take the time to understand how to troubleshoot a multi-master environment. There are a few things we can check, should a master fail to communicate:

Latency

If our masters communicate over a high-latency connection, then we may need to run the command a second time.

Redundant Settings

When using a multi-master setup, we can set either the server for a split environment or the server_list for a round-robin setup. If both are set instead of one, then the environment setup fails.

Empty Groups

With a multi-master setup, each master needs to have at least one node assigned to it. If there is an empty node group, any jobs for that group fail.

Wrap Up

In this lesson, we covered the basics of high availability troubleshooting, including how to:

- Handle latency
- Look for duplicate settings
- Ensure that there are no empty node groups

Conclusion

Practice Exam

Exam Tips

00:06:10

Lesson Description:

You've almost finished the course – now it's time to take the practice exam! And the pending practice exam probably has you thinking a lot about the real exam, as well. In this video, we'll cover the basics of both the real-life and practice exam (90 minutes, 60 questions, multiple choice), learn what you need the day of the real test, and cover some basic exam tips to maximize your success.

Puppet Professional Certification - PPT206 Practice Exam

01:30:00

Congratulations!

Next Steps

00:02:50

Lesson Description:

You've finished this course, but hopefully you aren't finished with Linux Academy. Not sure where to go next? Well, if you want to keep honing your DevOps skills, check out the following courses:

- Docker Deep Dive
- Kubernetes the Hard Way
- Using Terraform to Manage Applications and Infrastructure
- Database Essentials
- Using Salt for Configuration Management and Orchestration
- Basic Chef Fluency Badge
- Learn Ansible by Doing
