SaltStack Certified Engineer




Elle Krout

Content Team Lead

Elle is a Course Author at Linux Academy and Cloud Assessments with a focus on DevOps and Linux. She's a SaltStack Certified Engineer, and particularly enjoys working with configuration management. Prior to working as a Course Author, she was Linux Academy's technical writer for two years, producing and editing written content; before that, she worked in cloud hosting and infrastructure. Outside of tech, she likes cats, video games, and writing fiction.





Course Details

In the SaltStack Certified Engineer prep course, students learn Salt using real-world scenarios and hands-on environments. We start by learning some core Salt components – such as execution modules and states – then begin to build out our infrastructure by leveraging various Salt features to not just use Salt for remote execution and configuration management, but also for event-driven infrastructure and orchestration.

This course covers all topics required to pass the SSCE, including:

- Common Salt execution modules
- Common Salt state modules
- Using Salt states
- Using Pillar
- Managing Salt via the CLI
- Salt configuration
- Salt key management
- Salt security
- Templating

This course also includes a practice exam to test your skills before sitting the SSCE.



Getting Started

Course Introduction


Lesson Description:

Welcome to the SaltStack Certified Engineer prep course from Linux Academy. In this course, we're going to work with real-world scenarios to learn the core concepts of Salt -- as well as all of the little details you need to pass the SSCE. The course covers all topics needed for the SSCE, plus some additional Salt features and functionality that will take your use of Salt to the next level, allowing you to use Salt for event-driven infrastructure, cloud platform management, and even servers that don't have Salt installed at all. Topics covered include:

- Installation, configuration, and security concerns
- The remote execution system and its inner workings
- The state system and its relationship with remote execution modules
- Events, runners, and using Salt for event-driven infrastructure
- Using Salt SSH to manage servers that do not (or cannot) have Salt installed
- Using Salt Cloud to provision servers on cloud and virtualization platforms

About the Training Architect


Lesson Description:

Hello there! I'm Elle, your friendly neighborhood training architect for this course. I'm a former technical writer whose focus was documenting Linux systems and the DevOps toolchain. I previously worked for a cloud hosting company before joining Linux Academy almost three years ago to produce our written content (like many of our lab guides and blogs) and ensure our spelling was on point. I've been using Salt for over four years and became certified at SaltConf '17 in Salt Lake City, Utah, a conference I highly recommend to any DevOps or systems professional using Salt. Outside of wrangling servers and sentences, I enjoy writing fiction, reading anything that has words, making things (Arduino projects and dresses are equal passions of mine), and playing video games. You can contact me @ellejaclyn on Twitter or on LinkedIn (/ellejaclyn).

End State Goals


Lesson Description:

In this course, we're going to explore various features of Salt through real-world scenarios. To do this, we're going to act as members of the systems/DevOps team at a fictitious software company: IdunnaSoft, a small company focused on building project-management tools. Recently, their kanban-like product, Eski, has had an uptick in popularity, and the systems team, of which you are a part, has been exploring additional configuration management options to better deal with Eski's infrastructure. After much deliberation and comparison, you chose Salt for its extensibility, robust feature set, and commitment to open source. An inventory of the current infrastructure has since been performed, and the various tasks needed to Salt the infrastructure have been divided among the department's DevOps engineers. This effort is referred to internally as Project Golden Apple.

Salt Concepts

Salt Overview


Lesson Description:

Salt, or SaltStack, is a remote execution and configuration management framework. Initially a remote execution framework only, Salt has been extended with configuration management, orchestration systems, and more. At its core, Salt utilizes a high-speed data bus, ZeroMQ, to establish an encrypted channel between master and minion servers; minions can receive and run commands simultaneously in a pub/sub model. Salt's commands work across multiple operating systems and distributions, including Linux, MacOS, and Windows. Linux tends to be the most popular use case, so that is what this course will focus on -- the concepts remain the same for all OSes.
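The pub/sub model behind the master/minion channel can be pictured with a toy in-memory broker. This is a conceptual sketch only -- real Salt uses ZeroMQ sockets with AES-encrypted payloads, and the minion names and command below are invented for illustration:

```python
# Toy publish/subscribe broker: the "master" publishes one command,
# and every subscribed "minion" receives and runs it.
class Broker:
    def __init__(self):
        self.subscribers = []  # callbacks registered by minions

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, command):
        # Every subscriber receives the same command.
        return [cb(command) for cb in self.subscribers]

broker = Broker()
broker.subscribe(lambda cmd: "minion1 ran " + cmd)
broker.subscribe(lambda cmd: "minion2 ran " + cmd)

results = broker.publish("test.ping")
print(results)  # ['minion1 ran test.ping', 'minion2 ran test.ping']
```

The key property this illustrates is that the master sends one message, not one message per minion -- the fan-out happens on the subscription side.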

Salt Components


Lesson Description:

Salt is highly extensible and contains over twenty pluggable subsystems. The remote execution and configuration management frameworks are two of these; others include an authentication system, the file server, and even the return formatter for normalizing data structures in Salt output. Although these all have default behaviors, there is no one way to use these systems. In this lesson, we're going to take a look at some of the more common subsystems, all of which we will use within the course. Think of this as a terminology rundown before we get into the meat of the course.

QUIZ: Salt Concepts and Components


Salt Installation and Configuration


Package Install


Lesson Description:

For many *nix-based systems, Salt offers a number of options for how we want our Salt updates and versioning to go. That said, no matter which Salt release you opt to use, the installation process tends to be similar:

1. Add the repository key and repository
2. Update the cache for the packaging tool
3. Install the Salt components you want: salt-master, salt-minion, salt-ssh, salt-syndic, salt-cloud, salt-api

As an example, let's walk through installing Salt on a Debian system.

Install the Latest Salt on Debian 9

1. Set the hostname to salt; update /etc/hosts to use the hostname salt:

   sudo hostnamectl set-hostname salt
   sudo $EDITOR /etc/hosts

   Note that, by default, Salt looks for a master named salt.

2. Import the repository key:

   wget -O - | sudo apt-key add -

3. Add the SaltStack repository to your repository list:

   sudo $EDITOR /etc/apt/sources.list.d/saltstack.list
   deb stretch main

4. Run an update:

   sudo apt-get update

5. Install the Salt components:

   sudo apt-get install salt-master salt-minion

Bootstrapping Salt


Lesson Description:

Bootstrapping Salt is a simple process that uses the same series of commands across all available distros. To bootstrap Salt, we first prepare our servers by setting our hostnames and ensuring any servers acting as minions can reach the master via the hostname salt. Next, we pull down the installation script using any file retrieval tool, such as curl or wget. Then we run the installation script -- and that's all it takes to bootstrap Salt!

Now, let's set up the systems we're going to use throughout the bulk of this course.

Bootstrapping the Demo Environment

We'll be using three Linux Academy Cloud Servers:

- Master: Ubuntu 16.04
- Minion1: Ubuntu 16.04
- Minion2: CentOS 7

Master Setup

We'll be using an Ubuntu 16.04 server as our Salt Master. Let's get started:

1. Update the hostname and the /etc/hosts file so that the server answers to the hostname salt:

   sudo hostnamectl set-hostname salt
   sudo $EDITOR /etc/hosts
   localhost salt

2. Download the installation script; this example uses curl:

   curl -L -o

3. Run the installer; remember, we want the Salt Master to be a minion of itself:

   sudo sh -P -M

Minion 1 Setup

Our first minion, which we'll name minion1, will be our other Ubuntu 16.04 server:

1. Update the hostname and /etc/hosts file so the hostname resolves to minion1; we also want to ensure that the hostname salt resolves to the local IP address of our Master:

   sudo hostnamectl set-hostname minion1
   sudo $EDITOR /etc/hosts
   salt

2. Download the installation script:

   curl -L -o

3. Run the installer; remember, this is a minion server:

   sudo sh -P

Minion 2 Setup

1. Update the hostname and /etc/hosts file so the hostname resolves to minion2; again, ensure that the hostname salt resolves to the local IP address of our Master:

   sudo hostnamectl set-hostname minion2
   sudo $EDITOR /etc/hosts
   salt

2. Download the installation script:

   curl -L -o

3. Run the installer; remember, this is a minion server:

   sudo sh -P

With this done, we now want to finish the "handshake" process between our Master and minions, so that the connection between the two is established and secure. This will be covered in the next section!


Key Management


Lesson Description:

Now that we have Salt installed in our environment, we need to finish the key exchange so that our Salt Master can send commands to our minions. Communication between the master and minions is encrypted with AES; the minions initiate this connection by searching the network for a system named salt, as we previously discussed. The minion then initiates a handshake, wherein it sends its public key to the master server. After this, an administrator must accept the keys on the master (although this process can be automated).

Now, let's check the legitimacy of our keys, and then accept them so we can really get started with Salt.

Verify and Accept Our Salt Keys

On the Master Server

1. Let's first take a look at what Salt keys we have available:

   sudo salt-key -L

   Here we can see that we have unaccepted keys for salt, minion1, and minion2. It might seem obvious to go ahead and run sudo salt-key -A to accept all of them, but we want to take some extra steps to be secure. The first thing we want to do is validate our key identity.

2. View the master fingerprint for the server:

   sudo salt-key -F master

   This outputs fingerprints for our .pem and .pub files, which contain information about our master keys. We want to copy the master fingerprint and add it to our minions' configuration files.

On Minion 1

1. With the fingerprint still copied, open up /etc/salt/minion on the Minion 1 server:

   sudo $EDITOR /etc/salt/minion

2. Add the fingerprint to the master_finger line; this ensures that our Salt minion will only connect to the appropriate master, preventing any man-in-the-middle-type attacks:

   master_finger: '93:c1:0f:d2:55:76:0d:4b:8a:37:26:69:58:44:bd:04:b5:de:89:20:db:b9:a1:0e:be:95:62:06:cb:60:e0:9e'

   Be sure to uncomment the line. Save and exit the file.

3. Restart the salt-minion service:

   sudo systemctl restart salt-minion

   Now that our Salt minions know which master they should be using, we need to ensure that the correct minions are the ones receiving instruction. Notice that on the master, our unaccepted keys' fingerprints are also viewable in the fingerprint output from earlier. Let's confirm these match.

4. View the fingerprint for the Minion 1 minion:

   sudo salt-call --local key.finger

5. Confirm that this fingerprint matches the one on the master.

6. Switch to Minion 2.

On Minion 2

We now want to repeat the process from above on our second minion:

1. Uncomment and add the fingerprint to the master_finger line in /etc/salt/minion:

   sudo $EDITOR /etc/salt/minion
   master_finger: '93:c1:0f:d2:55:76:0d:4b:8a:37:26:69:58:44:bd:04:b5:de:89:20:db:b9:a1:0e:be:95:62:06:cb:60:e0:9e'

   Save and exit the file.

2. Restart the salt-minion service:

   sudo systemctl restart salt-minion

3. View the fingerprint for the Minion 2 minion:

   sudo salt-call --local key.finger

4. Confirm that this fingerprint matches the one on the master.

5. Return to the master.

Back on the Master

Finally, now that we've verified our minions, we want to perform this same process on our master before accepting our keys.

1. Copy the fingerprint and add it to the master_finger line in /etc/salt/minion:

   sudo $EDITOR /etc/salt/minion
   master_finger: '93:c1:0f:d2:55:76:0d:4b:8a:37:26:69:58:44:bd:04:b5:de:89:20:db:b9:a1:0e:be:95:62:06:cb:60:e0:9e'

   Save and exit.

2. Restart the salt-minion service:

   sudo systemctl restart salt-minion

3. Confirm that the minion fingerprint matches:

   sudo salt-call --local key.finger

4. Accept all keys:

   sudo salt-key -A

Our working environment is now ready, and we can start using Salt; before we do, though, we're going to address some different configurations and setups we can use in our Salt environment.
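The colon-separated fingerprints we compare in this lesson are simply hash digests of the key files, rendered as hex pairs. Here's a rough sketch of how such a fingerprint can be produced; note that sha256 here is an assumption (the hash Salt actually uses is governed by its hash_type setting), and the key bytes are a placeholder:

```python
import hashlib

def finger(key_bytes, hash_type="sha256"):
    """Return a colon-separated hex digest, in the style of salt-key -F output."""
    digest = hashlib.new(hash_type, key_bytes).hexdigest()
    # Split the hex string into two-character pairs joined by colons.
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

fp = finger(b"-----BEGIN PUBLIC KEY-----")
print(fp)
```

Because the digest is deterministic, computing it independently on the minion (key.finger) and the master (salt-key -F) and comparing the two is what proves both sides are talking about the same key.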

Master Configuration


Lesson Description:

Knowing the various options available in both our Master and Minion configuration files is an integral part of the SSCE exam. In this lesson, we're going to open up our /etc/salt/master file, review the configuration options available for our master server, and perform some common setup tasks for our new Salt environment.

One of the nice things about Salt is that we can configure almost every component of it. In these past few lessons, we've learned that Salt uses ports 4505 and 4506, that the configuration file for the master is located at /etc/salt/master, and that Salt looks for hosts using the name salt. Except, depending on the Salt setup, none of this might actually be true. Why? Because we can change all of it within our configuration.

Since Salt allows the use of its documentation during the exam, instead of trying to memorize the hundreds of lines and options within the configuration file, I suggest familiarizing yourself with the general options it covers, then referencing the file itself or the documentation when you receive a related question; a quick search will get the exact configuration attribute correct.

Now, let's go ahead and make some changes.

Set the File Roots

1. Navigate into the /etc/salt directory:

   cd /etc/salt

2. Open the master configuration file:

   $EDITOR master

3. Search for the file_roots setting in your editor, and copy the existing configuration. The file_roots setting lets us define the location in which we store our states and formulas; we can also define all of our environments within this setting:

   file_roots:
     base:
       - /srv/salt

4. Exit the file.

5. Move into the master.d directory, then create and open file-roots.conf:

   cd master.d
   sudo $EDITOR file-roots.conf

6. Add the configuration property:

   file_roots:
     base:
       - /srv/salt

7. Save and exit the file.

8. Restart the salt-master service:

   sudo systemctl restart salt-master
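Conceptually, the master reads its main configuration file and then overlays any .conf drop-ins from master.d on top of it. The sketch below is a simplification under stated assumptions -- the default values and drop-in file are invented for illustration, and real Salt's merging has more nuance than a flat dictionary update:

```python
# Built-in defaults (values invented for this sketch).
defaults = {
    "interface": "0.0.0.0",
    "file_roots": {"base": ["/srv/salt"]},
}

# Each .conf file in master.d contributes a small dictionary of settings;
# here, a hypothetical file-roots.conf adds a dev environment.
dropins = [
    {"file_roots": {"base": ["/srv/salt"], "dev": ["/srv/salt/dev"]}},
]

config = dict(defaults)
for dropin in dropins:
    config.update(dropin)  # later files win for the keys they define

print(config["file_roots"])
```

The practical takeaway is the same one the lesson uses: rather than editing the large /etc/salt/master file, a small drop-in file can override just the setting you care about.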

Minion Configuration


Lesson Description:

Much like our master configuration file, our minion configuration file allows us to configure various options related to our minion and its connection to our master. Located at /etc/salt/minion, the minion config is also a fairly long file whose default settings tend to work out of the box, with the option to alter almost every part of the minion's default behavior. Minion configuration files can also be stored in minion.d using the .conf extension.

The single most important line in the minion configuration file is the master setting; this is where we denote the hostname, IP address, or URL of our master server. By default, the minion looks for a master using the hostname salt, which is what we used when setting up our environment.

In this lesson, we are going to make one change on each of our minions. Since our minion configuration can also contain grain data -- remember, grains are data about the server -- we're going to add a grain to each server denoting its role or roles.

Add Grain Information

On the Master

1. Move into /etc/salt/minion.d:

   cd /etc/salt/minion.d

2. Create and open a configuration file called grains.conf:

   sudo $EDITOR grains.conf

3. Again, we want to define roles for our server. In this instance, we're working on the master, so master is the role we're going to assign:

   grains:
     roles:
       - master

4. Save and exit the file.

5. Restart the salt-minion service:

   sudo systemctl restart salt-minion

On Minion 1

1. Move into /etc/salt/minion.d:

   cd /etc/salt/minion.d

2. Create and open grains.conf:

   sudo $EDITOR grains.conf

3. Since we know this minion is used in a test -- or dev -- environment, we want to give it the role dev. We also want to assign it the roles webserver and database, since it will be undertaking both:

   grains:
     roles:
       - dev
       - webserver
       - database

   Notice how a grain can have more than one value.

4. Restart the salt-minion service:

   sudo systemctl restart salt-minion

On Minion 2

1. Move into /etc/salt/minion.d:

   cd /etc/salt/minion.d

2. Create and open grains.conf:

   sudo $EDITOR grains.conf

3. As with our previous minion, we want to give it the dev role, along with the roles webserver and database, since it will also be undertaking both:

   grains:
     roles:
       - dev
       - webserver
       - database

4. Restart the salt-minion service:

   sudo systemctl restart salt-minion
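Because a grain like roles can hold a list of values, targeting by grain is effectively a membership test over each minion's grain data. A toy sketch of that idea (the grain data mirrors our demo environment, but the matching logic is a simplification, not Salt's actual implementation):

```python
# Grain data per minion, mirroring the grains.conf files above.
minions = {
    "salt":    {"roles": ["master"]},
    "minion1": {"roles": ["dev", "webserver", "database"]},
    "minion2": {"roles": ["dev", "webserver", "database"]},
}

def match_grain(grain, value):
    """Return the sorted minion IDs whose grain list contains the value."""
    return sorted(mid for mid, grains in minions.items()
                  if value in grains.get(grain, []))

print(match_grain("roles", "webserver"))  # ['minion1', 'minion2']
print(match_grain("roles", "master"))     # ['salt']
```

This is why assigning multiple roles per minion is so useful: one server can answer to several different grain targets at once.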

Salt Mine


Lesson Description:

Finally, the last piece of configuration we're going to set up is our Salt Mine. The Salt Mine is a completely optional component of Salt that, much like grains, queries for and stores arbitrary data on the master. Mine data is updated more frequently (every 60 minutes, although this can be changed), and we need to define what data is pulled into the Mine. Mine data can also be saved within Pillar; the setup is exactly the same, except the file is saved as an .sls in your Pillar roots.

Add Networking Data to the Mine

On the Master, Minion 1, and Minion 2

1. Navigate into /etc/salt/minion.d:

   cd /etc/salt/minion.d

2. Create and open mine.conf:

   sudo $EDITOR mine.conf

3. Set up our mine_functions so that the Mine queries data related to our available network IP addresses. We haven't covered execution modules yet, but the function name is fairly self-explanatory:

   mine_functions:
     network.ip_addrs: []

   We could add extra parameters within the square brackets to narrow our responses, such as adding interface: eth0 to only gather addresses on the eth0 interface. That would look like:

   mine_functions:
     network.ip_addrs:
       interface: eth0

4. Save and exit the file.

5. Restart the salt-minion service:

   sudo systemctl restart salt-minion

To force an update of our Salt Mine data, we can use:

   salt '*' mine.update

Security Suggestions


Lesson Description:

As we wind down the configuration section of this course, the last thing we want to look at is potential security concerns and the actions we can take when using Salt. Much of the relevant information about security will be addressed as we work through the course, where applicable -- things like how we should always store sensitive data in Pillar, and how we need to be aware that Salt has full root access to all of our minions. Still, there are some odds and ends to consider while using Salt.

The single most important thing to remember regarding Salt is that it has a root view of, and root access to, any server it manages -- even if we run Salt as non-root, which is possible, Salt will be able to make vast changes to all our servers. This means we should limit who can use the master, as well as limit the general attack surface of the system. For example, while we're logging in to our Salt master through our public IP address in this course, in actual practice, utilizing a bastion host and locking down the IP range of SSH connections can prevent outsiders from trying to access our master. Additionally, we want to ensure we have the basics covered: no root SSH login, users with secure SSH keys and strong passwords (rotated within the confines of company policy), and a stringent firewall.

We also want to consider how our formulas, states, and Pillar data are pulled into Salt. We're going to be writing directly on the master in this course because it makes sense for us -- we're learning, and no one loses company data if we make a horrible mistake and overwrite a configuration file we shouldn't, for example. That said, in actual practice, we want to keep our states, formulas, and Pillar in a secure (and preferably private) version-control system, such as GitHub -- we're going to do this to some extent, just, again, on the master itself.
Going hand-in-hand with this, we also want to make sure our states and formulas go through code review.

On the minion side, we want to limit any extraneous logins -- that is, we shouldn't ever really have to log in to these systems themselves. The whole purpose of Salt is that we don't have to do the tedious parts of managing a server. As such, consider the times and circumstances wherein you normally would have to log in to these servers -- and create a beacon for each. That way, when something that may require your attention comes up, it's sent straight to our Salt event bus for us to deal with. Even better: add a reactor to automatically fix the problem (or simply react to the expected, non-erroneous action).

Additionally, we can make Salt restrict some of its own actions. Remember the dangers of running arbitrary commands? We need not allow the use of cmd at all. Within our minion configuration, we have the disable_modules option. This works exactly like any other configuration setting: we simply uncomment it, then supply a list of modules. For example, here we disable cmd and network:

disable_modules:
  - cmdmod
  - network

The sys module is the only module that cannot be disabled; it is required for the minion to work. We can also disable returners with the disable_returners option.

Finally, Salt suggests anyone using Salt subscribe to their mailing lists to keep track of release information. The more up to date Salt is on your system, the more secure it will be:

- salt-users Mailing List
- salt-announce Mailing List

Infrastructure Variations

Multi-Master Setup


Lesson Description:

In its simplest setup, a multi-master environment's masters all run "hot" -- that is, they can all send commands to all minions. An alternative setup can also be configured wherein the minion is provided a list of masters, connects to one, then only uses another should the first fail. More information can be found on SaltStack's documentation page, Multi-Master-PKI Tutorial with Failover.

Some things to consider before setting up a multi-master environment:

- The masters need to share the same private key
- The minions' configuration files need to be updated to use both masters
  - This can be done through Salt via the existing master
- The masters need to be able to share files between themselves
  - Shared files: minion keys, file_roots, pillar_roots
  - Do not sync minion key files, as this is a security risk; use salt-key to accept the keys of all minions on all masters instead
  - Use a fileserver backend like gitfs to store states, formulas, and pillar data
- The configurations of both masters should be the same

We're going to set up a multi-master configuration with a single minion. This is outside of our end-state environment, and these servers will not need to be kept for the rest of the course, should you decide to follow along.

Set Up Multi-Master

Before we begin, bring up a regular Salt Master and a single minion. We're going to be using CentOS 7 for all servers. Refer to either the Package Install or Bootstrapping Salt lesson if you need help. For ease, I suggest not installing a minion on the masters.
On the New Master

1. Set the hostname; we're going to use salt2:

   sudo hostnamectl set-hostname salt2
   sudo $EDITOR /etc/hosts
   localhost salt2

2. Install Salt:

   curl -L -o
   sudo sh -P -M -N

3. Stop the salt-master daemon:

   sudo systemctl stop salt-master

Before we continue, in another window or tab, open up the original Salt Master.

On the Old Master

We want to copy the master.pem and keys (located in /etc/salt/pki/master) to our second master; this can only be done as the root user.

1. Switch to root:

   sudo su

2. Move into /etc/salt/pki/master:

   cd /etc/salt/pki/master

3. Copy the master.pem and keys to the new master. In this example, I use scp:

   scp master.p* user@

   We use the user user, and not root, when accessing the new master because root login is disabled on cloud servers.

4. Switch back to the new master.

Back on the New Master

1. Switch to root:

   sudo su

2. Move into the /etc/salt/pki/master directory and remove the current keys:

   cd /etc/salt/pki/master
   rm master.p*

3. Move the new keys into this directory:

   mv ~user/master.p* .

4. Set the permissions:

   sudo chmod 0400 master.pem
   sudo chmod 0644

5. Start the salt-master service:

   sudo systemctl start salt-master

To confirm that the keys are the same, run sudo salt-key -F master on both masters. You may notice that our new master still has no minions, however. This is because we have one last thing to change.

On the Minion

1. Add the hostname information for the new master to the /etc/hosts file:

   sudo $EDITOR /etc/hosts
   salt2

2. Open the /etc/salt/minion file:

   cd /etc/salt
   sudo $EDITOR minion

3. Find the #master: value and uncomment it. We want to add both master hostnames here:

   master:
     - salt
     - salt2

4. Save and exit the file.

5. Restart the salt-minion service:

   sudo systemctl restart salt-minion

Back on the Master

View and accept the minion's key:

   sudo salt-key -l unacc
   sudo salt-key -f minion1
   sudo salt-key -a minion1

We would now generally go ahead and set up our fileserver backend, but that's a task for another lesson.

Masterless Setup


Lesson Description:

The salt-minion setup is powerful enough on its own that it can use most of Salt's functionality without a master. This is ideal for testing various bits of your Salt infrastructure, from states to custom modules. In this lesson, we're going to learn how to set up a masterless minion and ensure we can easily run commands against it.

Set Up a Standalone Minion

1. Install Salt:

   curl -L -o
   sudo sh -P

2. Stop and disable the salt-minion daemon:

   sudo systemctl stop salt-minion
   sudo systemctl disable salt-minion

3. Update the /etc/salt/minion configuration, setting file_client to local:

   sudo $EDITOR /etc/salt/minion
   file_client: local

4. Save and exit the file.

We can now test our masterless minion to see that commands work. We've used the network.ip_addrs function before, so let's use that:

   sudo salt-call --local network.ip_addrs

QUIZ: Install and Configure Salt


Remote Execution

Execution Modules

Remote Execution


Lesson Description:

Salt's remote execution framework allows us to run commands on multiple machines simultaneously (we can also stagger jobs, but that is a topic for another lesson). It solves the core IT issue of having to run a command on multiple servers by hand, or having to write scripts to perform that same task. This is done through the use of functions known as execution modules. Execution modules are largely distro-agnostic and resemble the following when in use: module.function. For example, let's consider the pkg, or "package," module. This module allows us to manage the packages on our server using a number of provided functions that perform specific tasks. One such function is install, which, as you can imagine, allows us to install packages. When we go to use this module function, we call it as pkg.install. So, how would we use this on the command line? Let's find out!
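The module.function convention boils down to a lookup into a registry of named functions. Here's a toy dispatcher showing the idea; the function bodies are stand-ins invented for this sketch, not Salt's real implementations:

```python
# Stand-in implementations for two familiar-looking functions.
def pkg_install(name):
    return {"installed": name}

def test_ping():
    return True

# Registry mapping 'module.function' names to callables.
REGISTRY = {
    "pkg.install": pkg_install,
    "test.ping": test_ping,
}

def run(dotted_name, *args):
    """Resolve a 'module.function' string and invoke it with the given args."""
    return REGISTRY[dotted_name](*args)

print(run("pkg.install", "nginx"))  # {'installed': 'nginx'}
print(run("test.ping"))             # True
```

This is why the CLI syntax stays uniform no matter which module you call: the first dotted token selects a function, and everything after it becomes that function's arguments.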



Targeting Minions


Lesson Description:

Now that we know how to use execution modules to direct commands across multiple servers, we need to consider how we determine which servers to run our commands on. Previously, we utilized the * wildcard to work with all our minions. Salt offers various ways to target our minions; we can call them via:

- Minion name
- Grains
- Node group
- Pillar data
- Subnet or IP address
- SECO range (for those utilizing SECO tools)
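Minion-name targeting supports shell-style globs, which behave like fnmatch patterns. A quick sketch over an invented set of minion IDs:

```python
from fnmatch import fnmatch

# Hypothetical minion IDs for illustration.
minion_ids = ["salt", "minion1", "minion2", "web-prod-1"]

def target(pattern):
    """Return minion IDs matching a shell-style glob, as 'salt <pattern> ...' would select."""
    return [mid for mid in minion_ids if fnmatch(mid, pattern)]

print(target("minion*"))  # ['minion1', 'minion2']
print(target("*"))        # every minion ID
```

Glob targeting is the default; the other methods above (grains, pillar, subnets, and so on) are selected with command-line flags but follow the same "pattern selects a set of minions" idea.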



salt-call


Lesson Description:

Everything we've covered regarding execution modules so far relates to a Salt Master dictating commands to its minions. But what about instances where the Master isn't in play, or where we're logged on to a minion itself? For these cases, we can use salt-call to run execution modules on the minion directly. The salt-call command is especially useful for testing. If you're working on a new Salt execution module, you can run that module (or state, as we'll address later) directly on the minion to gain access to more information about the changes that are happening, which is valuable when troubleshooting issues.

Common Modules

The sys Module


Lesson Description:

The sys module brings us information about the various functions available to a minion. Imagine sys as a manual page or documentation repository for our available execution (and state) modules, but with a few more features.

Let's use our pkg module as an example. We know that install is a function of this module. But what other functions does this module have available? To check, we can use the list_functions function:

$ sudo salt 'salt' sys.list_functions pkg
salt:
    - pkg.add_repo_key
    - pkg.autoremove
    - pkg.available_version
    - pkg.del_repo
    - pkg.del_repo_key
    - pkg.expand_repo_def
    - pkg.file_dict
    - pkg.file_list
    - pkg.get_repo
    - pkg.get_repo_keys
    - pkg.get_selections
    - pkg.hold
    - pkg.info_installed
    - pkg.install
    - pkg.latest_version
    - pkg.list_pkgs
    - pkg.list_repo_pkgs
    - pkg.list_repos
    - pkg.list_upgrades
    - pkg.mod_repo
    - pkg.owner
    - pkg.purge
    - pkg.refresh_db
    - pkg.remove
    - pkg.set_selections
    - pkg.unhold
    - pkg.upgrade
    - pkg.upgrade_available
    - pkg.version
    - pkg.version_cmp

We can even do this with sys itself:

$ sudo salt 'salt' sys.list_functions sys
salt:
    - sys.argspec
    - sys.doc
    - sys.list_functions
    - sys.list_modules
    - sys.list_renderers
    - sys.list_returner_functions
    - sys.list_returners
    - sys.list_runner_functions
    - sys.list_runners
    - sys.list_state_functions
    - sys.list_state_modules
    - sys.reload_modules
    - sys.renderer_doc
    - sys.returner_argspec
    - sys.returner_doc
    - sys.runner_argspec
    - sys.runner_doc
    - sys.state_argspec
    - sys.state_doc
    - sys.state_schema

Which brings us to some other features, such as argspec, which gives us information about the arguments we can use with a function. Let's go back to the pkg.install function for this one:

$ sudo salt 'salt' sys.argspec pkg.install
salt:
    ----------
    pkg.install:
        ----------
        args:
            - name
            - refresh
            - fromrepo
            - skip_verify
            - debconf
            - pkgs
            - sources
            - reinstall
            - ignore_epoch
        defaults:
            - None
            - False
            - None
            - False
            - None
            - None
            - None
            - False
            - False
        kwargs:
            True
        varargs:
            None

Here we're provided with a list of available arguments, as well as the defaults for each. The defaults are always in the same order as the args list, so name has the default None, refresh has the default False, et cetera.

We're also able to use wildcards here. So if we want to see the argument specifications for all functions in the pkg module, we can use:

$ sudo salt 'salt' sys.argspec 'pkg.*'

Now let's look at another sys function that will be invaluable in your Salt journey: the doc function. This function, like most Salt functions, does just what it sounds like: it returns documentation about an execution module or a specific function. Let's check out the doc for doc itself:

$ sudo salt 'salt' sys.doc sys.doc
sys.doc:
    Return the docstrings for all modules. Optionally, specify a module or a
    function to narrow the selection.
    The strings are aggregated into a single document on the master for easy
    reading.
    Multiple modules/functions can be specified.
    CLI Example:
        salt '*' sys.doc
        salt '*' sys.doc sys
        salt '*' sys.doc sys.doc
        salt '*' sys.doc network.traceroute
    Modules can be specified as globs.
    New in version 2015.5.0
        salt '*' sys.doc 'sys.*'
        salt '*' sys.doc 'sys.list_*'

As we can see, this also contains examples of how the function is used. We don't have to supply a function when using sys.doc; we can simply input a module, or a partial name with a wildcard.

Finally, remember how earlier in the course we linked to a list of available execution modules? We can also view what's available to us on the command line with:

sudo salt 'salt' sys.list_modules

Note that these are just the available modules; Linux servers won't list Windows-specific modules, and servers without Apache installed won't have the apache module listed.

sys is one of, if not the, most important modules to know, because if we're ever stuck with any other execution module, we can use sys to give us some direction. sys can also provide information about state modules, which we'll cover later; those functions are predominantly the same as the ones above, but use the prefix state_.
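What sys.list_functions does is, at heart, runtime introspection over loaded modules. Python's own introspection tools give a feel for it; this is an analogy only (using the standard math module as a stand-in), not how Salt implements sys:

```python
import math

def list_functions(module):
    """List the public callables of a module -- similar in spirit to sys.list_functions."""
    return sorted(name for name in dir(module)
                  if callable(getattr(module, name)) and not name.startswith("_"))

funcs = list_functions(math)
print(funcs[:5])
```

Just as this only lists what the math module actually exposes, sys.list_modules only lists what a given minion has loaded, which is why an Apache-less server shows no apache module.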

The test Module


Lesson Description:

The test module provides us with a number of options for testing our Salt infrastructure. Let's take a look at some specific ones.

The test.ping function is especially important. If working on any temporary servers, especially ones that are frequently stopped, checking that our master can connect to our minions can be a quick, up-front way to make sure nothing at the surface is wrong with our infrastructure:

$ sudo salt '*' test.ping

We can also do things like send information to Salt's outputter:

$ sudo salt '*' test.outputter "Hello, World!"

View the current stack trace:

$ sudo salt 'salt' test.stack

Or just output the versions of Salt, its dependencies, and the system itself using:

$ sudo salt 'salt' test.versions_information

Many of these commands are particularly useful in troubleshooting and diagnosing issues, so just being aware of the test module can be important.

The pkg Module


Lesson Description:

The pkg module allows us to work with the packages we have on our server. The pkg module is a virtual module, meaning it combines the functionality of a number of distro-specific modules so that it works across multiple operating systems. The pkg module uses one of the following modules when working with a minion: aptpkg, brew, ebuild, freebsdpkg, openbsdpkg, pacman, pkgin, pkgng, pkgutil, solarispkg, solarisips, win_pkg, yumpkg, or zypper. Depending on which operating systems you're familiar with, it's probably easy to see which operating system each of these goes with.

The user and group Modules


Lesson Description:

It is beneficial to have a grasp of user and group management through the use of execution modules. This lets us quickly add and remove users, add and remove groups, and add and remove users from groups. We can also manage things like a user's home directory and default shell -- anything that you would normally think to manage regarding a user.

Like our pkg module, these are virtual modules. user combines the *nix useradd, userdel, and usermod features with BSD's pw and Mac and Windows' user management functionalities (which are traditionally done in a GUI). The group module similarly combines *nix, BSD, macOS, and Windows group functionalities.

Add salt Group

This is as easy as running a single command:

$ sudo salt 'salt' group.add salt

This might be a little hard to parse, given all the instances of the word salt, so let's take a moment to remember how we break down commands: sudo salt '' group.add . Note there are additional arguments we could use -- gid, system, and root -- but none we need to use in this instance. We're also only running this on our Salt master server because we won't be working directly with Salt on our minions.

Now, let's go ahead and add our user user:

$ sudo salt 'salt' group.adduser salt user

Notice that the name of the group we're adding the user to comes before the username.

Add New User

We also want to go ahead and add a user for ourselves -- after all, we are an employee at IdunnaSoft in this course. Unlike most of the commands we've used thus far, however, the user.add function has a number of arguments we generally want to include. Let's take a look:

$ salt 'salt' sys.argspec user.add
salt:
    ----------
    user.add:
        ----------
        args:
            - name
            - uid
            - gid
            - groups
            - home
            - shell
            - unique
            - system
            - fullname
            - roomnumber
            - workphone
            - homephone
            - createhome
            - loginclass
            - root
            - nologinit

Let's add a user with our information.
Here's mine:

$ sudo salt 'salt' user.add elle groups=salt,sudo home=/home/elle shell=/bin/bash fullname="Elle K"

I included arguments for our user's home directory, default shell, and full name, and also included our new user in our newly created salt group, not to mention granted them superuser privileges via the sudo group. Whether you use your new user to continue within this course, or continue using the user user, is up to you! If you want to use your own user, you'll have to set a password.

Set Password

Hash the password you wish to use:

$ sudo salt 'salt' shadow.gen_password 'mypassword'

Use the hash to set a password for your new user:

$ sudo salt 'salt' shadow.set_password elle '$6$8stAtY6O$9o8prUuAeg94heacqhN2SsOTWQ/Kz7YzZOuByAKSQ9yREYn35Xax1xSXs0iBWFKZSRt3tO6o.bZSWkIun65Qe0'

The grains Module


Lesson Description:

While we've already added grain information through our minion configuration, the grains module is an alternative way to set, view, and manage grains for our minions. Let's first take a look at the grains we have available on the minion so far:

$ sudo salt 'minion1' grains.ls

This list is too long to replicate here, but we can see it features a number of items related to our system, such as system architecture, kernel, CPU model, and more. We can also see that it contains our roles grain from earlier. Let's say we wanted to double-check what values that grain contains:

$ sudo salt 'minion1' grains.fetch roles

We could have also used the grains.get or grains.item function. And what if we wanted to view the grains and values for all our items at once? We would use:

$ sudo salt 'minion1' grains.items

Of course, we can also add grains. Let's add one for the websites we're "hosting" on this webserver:

$ sudo salt 'minion1' grains.set 'sites' eski

We can also append items to this list:

$ sudo salt 'minion1' grains.append sites internal convert=True

The convert value converts our grain to a list. And if we have to remove a value that was added by mistake, we can use:

$ sudo salt 'minion1' grains.remove sites internal

Note that this is intended to remove a value from a grain with a list of values. To remove a grain entirely, we can use:

$ sudo salt 'minion1' grains.delkey sites
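For comparison, grains added through the minion configuration live under a grains key in /etc/salt/minion (or a drop-in file under /etc/salt/minion.d/). A minimal sketch of that equivalent config -- the webserver role value is assumed for illustration; sites holds the values we set above:

```yaml
# Hypothetical /etc/salt/minion snippet: static grains defined in the
# minion config, equivalent to the grains.set / grains.append calls above.
grains:
  roles:
    - webserver      # assumed example value for the roles grain set earlier
  sites:
    - eski
    - internal
```

Grains defined this way are read when the minion starts, so a minion restart (or a grains refresh) is needed before they appear in grains.items output.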

The cmd Module


Lesson Description:

The cmd module allows us to run arbitrary commands across our servers. Note that we have to be the ones to ensure the appropriate command is used on the appropriate server. If there's a difference between a CentOS command and an Ubuntu command, we'll have to take that into account. But otherwise, cmd opens up a lot of options for things that Salt may not currently support. Also, note that the cmd module essentially gives whoever has access to the Salt master root access to all of your servers. Use cmd wisely.

Let's try this out ourselves. Say we're experiencing some errors with our hostnames and want to make sure the /etc/hosts files for our minions didn't change:

$ sudo salt '*' cmd.run 'cat /etc/hosts'

Or maybe we want to add an API key to one particular person's .bashrc file across all servers. To do this, we'd have to use the runas argument alongside the command:

$ sudo salt '*' cmd.run 'echo "export API_KEY=KEYDATA" >> .bashrc' runas=user

The cmd module also has some practical, data-related features. Writing a script and want to make sure you're using the right path for an executable? Use cmd.which:

$ sudo salt '*' cmd.which echo

Essentially, the cmd module allows us to work around any limitations in Salt by letting us work as we normally would as system administrators, only on a larger scale. We get all the benefits of being able to simultaneously run commands across multiple servers, even if it means running a "normal" command within the module instead of a Salt execution module itself.

The git Module


Lesson Description:

When working with configuration management, we're often tasked with placing our work in version control, such as a Git application. Salt, of course, has a module for this, allowing us to use version control as well as add upstream and downstream repositories and manage our code on those.

Set Up Git on the Salt Master

Before we begin, create a GitHub (or GitLab) repository for the mysql formula; since this will be under your own account, you can give the repository any name you desire, but I suggest something descriptive. We'll be using la-salt-mysql below.

Git is already installed on our Linux Academy cloud servers, so we won't need to do any prep work there. Instead, we want to go ahead and use Salt to set our default global configuration for our user user:

$ sudo salt 'salt' git.config_set Elle global=True user=user
$ sudo salt 'salt' git.config_set global=True user=user

Note that we have to define the user we want to set the configuration for, lest it default to the minion's user, which is root. Additionally, if we do not include the global=True argument, we need to use the cwd parameter to define which directory path the configuration is being set under.

To check that our configuration values are properly set, let's use the git.config_get function. Much like the function above, we'll need to define our user and the directory we're working with (or set global=True):

$ sudo salt 'salt' git.config_get user=user global=True
$ sudo salt 'salt' git.config_get user=user global=True

You'll notice that if we leave out any of the required arguments, we receive an error:

$ sudo salt 'salt' git.config_get global=True
salt:
    ERROR: Minions returned with non-zero exit code

Create our file_roots directory, where we'll be storing all our Salt states and formulas:

$ sudo mkdir /srv/salt

At this point, we want to make sure that we can easily add, remove, and edit files in this directory. Remember how we added a salt group earlier?
Grant users in the salt group read/write/execute actions for the file_roots:

$ sudo chgrp salt /srv/salt/
$ sudo chmod 775 /srv/salt/
$ sudo chmod g+s /srv/salt/

For the permission changes to take effect, log out of the server, then log back in. You can also run bash to start a new session.

Create a directory to store our future MySQL formula:

$ mkdir /srv/salt/mysql

Initialize the directory. Instead of doing this in the traditional way, we want to do this via execution module:

$ sudo salt 'salt' git.init cwd=/srv/salt/mysql

Add the remote origin:

$ sudo salt 'salt' git.remote_set /srv/salt/mysql

From here, we can add, commit, pull, and push changes, either through Salt or Git itself.

QUIZ: Execution Modules


Salt States and Formulas

States and Formulas

Anatomy of a Salt State


Lesson Description:

Way back when we first started this course, we noted that most of Salt is built upon the remote execution framework we just discussed. One of the features built off of this is the state system, which allows us to create "infrastructure as code"-style blueprints for our minions. These let us describe our desired end state for a server; then, when run against a minion, they bring the minion into compliance with that state.

Create a MySQL Server Installation State

Although we have two servers in our environment working as databases, we're going to start with our CentOS server as we begin to build out our MySQL states. If you're familiar with Debian or Ubuntu MySQL installs, you'll know that there's some extra configuration involved during the installation process, which we'll bring in as we learn about more advanced state creation.

If you downloaded the network state, then you're already in the /srv/salt folder. This folder is known as the file_roots and is the default location for storing states and formulas. This can be changed in the master config. Move into the mysql directory that we created and initialized as our Git repository:

$ cd mysql

This is where we'll store all our MySQL configuration files. Note that a collection of related states is called a formula.

Create and open a file, server.sls, where we can write our state:

$ $EDITOR server.sls

We're going to use this file to create a state to install the MySQL server. Let's start by adding a descriptive state declaration:

mysql_server_install:

We now need to determine which state module to use. Most state modules share names with execution modules -- it's the functions that change. This means the module for installing packages is the pkg module, only instead of using pkg.install, we use pkg.installed:

mysql_server_install:
  pkg.installed:

Finally, we need to set our parameters for the pkg.installed function; keep in mind that we can use the sys.state_list_functions function to view a list of functions.
For pkg.installed we have to define the name of the package, but we also have the option to define a particular version (version), set whether or not to reinstall the package if it is already installed (reinstall), refresh the repository database (refresh), and more. Let's just set the name of the package:

mysql_server_install:
  pkg.installed:
    - name: mariadb-server

CentOS servers use a community-maintained version of MySQL, called MariaDB. Save and exit.

We can now go ahead and test this state:

$ sudo salt 'minion2' state.sls mysql.server test=true

Note that, because the state is found under a directory, we called the directory name, then the state, in a format that looks a lot like calling a module.

Finally, go ahead and add and commit this first state to Git:

$ git add .
$ git commit -am "MySQL server installation state added (server.sls)"
$ git push origin master

Note that, depending on your GitHub account security, you may need to generate a personal access token and use that to log in, instead of your usual password.
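Put together, the finished state file from this lesson is only a few lines:

```yaml
# /srv/salt/mysql/server.sls -- as assembled in this lesson
mysql_server_install:
  pkg.installed:
    - name: mariadb-server   # MariaDB is the MySQL build shipped with CentOS
```

The state ID (mysql_server_install) is arbitrary but should stay descriptive, since it's what appears in state output and what other states reference in requisites.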

The Salt File Server


Lesson Description:

To work with files, Salt includes a stateless ZeroMQ file server that lets us send files to the minions from the master. Files should be saved in our file_roots (/srv/salt/), generally within the directory of the formula they are being used for. However, files not associated with any formulas or states can still be saved and shared.

Manage the MariaDB Conf

For CentOS servers, the MariaDB configuration files are located within the /etc/my.cnf.d directory, with a primary configuration file located at /etc/my.cnf. Since we're working on setting up MariaDB as a server, not a client, we'll be working to make sure our /etc/my.cnf.d/server.cnf file is managed by Salt.

Within our mysql directory, create a folder to store our configuration files:

$ cd /srv/salt/mysql
$ mkdir files
$ cd files

Pull in the configuration file:

$ curl -O

Open the file. When managing a file in Salt, it is best practice to place a comment at the top of the file informing anyone using the server that the file is managed by Salt and should not be edited outside of configuration management:

$ $EDITOR server.cnf

# This file is managed by Salt.

If needed, you can get more specific in this comment. You can make a note not to edit the file at all if you have any eager people working on the database, or otherwise need to provide more information. Save and exit the file.

We can now create another state, this time for adding this file to our CentOS 7 database minion. Let's drop back down into our MySQL formula folder, then create a config.sls file:

$ cd ..
$ $EDITOR config.sls

From here, we want to set our state's name declaration. Remember, we want this to be specific so that anyone running these states later can read the output:

mysql_server_config:

Next, to work with files, we need to use the file state module.
Specifically, we want to use the file.managed function, which allows us to define a file, its source, its location on the minion, and more:

mysql_server_config:
  file.managed:

We now need to set our parameters for our managed file. The two parameters we need to include are the name parameter, which defines the location of the file on the minion, and the source parameter, which defines where on the master the file is located. Additional arguments, such as setting the user or group for the file and setting a source hash, are also available. All arguments can be found via the sys.state_argspec function. Let's add our name and source:

mysql_server_config:
  file.managed:
    - name: /etc/my.cnf.d/server.cnf
    - source: salt://mysql/files/server.cnf

Save and exit. Finally, let's ensure our state doesn't contain any errors by running a test state run:

$ sudo salt 'minion2' state.sls mysql.config test=true
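The finished config state from this lesson, in one piece:

```yaml
# /srv/salt/mysql/config.sls -- as built in this lesson
mysql_server_config:
  file.managed:
    - name: /etc/my.cnf.d/server.cnf          # destination path on the minion
    - source: salt://mysql/files/server.cnf   # served from the master's file_roots
```

The salt:// scheme is what tells file.managed to fetch the file from the Salt file server rather than from the minion's own filesystem.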



Lesson Description:

One thing we want to consider is the order in which our states run. Generally, we want to ensure that any packages are installed first, then our configurations are set, and then any additional work that needs to be done, such as creating a user or database, is completed. This is where Salt's requisite system comes in. Salt's requisite system allows us to define the relationships between our states and include any requirements -- such as necessary packages -- that we need for our state to work.

Create a Restart State

Move into the MySQL formula directory, then create and open restart.sls:

$ cd /srv/salt/mysql
$ $EDITOR restart.sls

Set the name declaration and function; to get this state to work, we'll use two functions: service.restart and module.wait. service.restart is what restarts our defined service, while module.wait prevents this state from running every highstate; instead, it waits for the provided requisite to occur.

mysql_restart:
  module.wait:
    - name: service.restart
    - m_name: mariadb

Here we can see that the module.wait function takes precedence -- it's the function we call after our name declaration. Then, the name value we need to provide is the actual function we want to work with. After this, we provide the service name, as we would were we just using service.restart -- only instead of setting it as name, we use the m_name option.

Next, we want to set our requisite. We only want this state to run when there are changes to our configuration file, so let's use the watch option:

mysql_restart:
  module.wait:
    - name: service.restart
    - m_name: mariadb
    - watch:
      - mysql_server_config

We use the name declaration of our configuration state to denote what state we're waiting on to change. Save and exit.

Finally, as before, we can go ahead and test this:

$ sudo salt 'minion2' state.sls mysql.server,mysql.config,mysql.restart test=true

Note that now, as we test, we have to include all of our required states for the test (or highstate itself) to work.
Unfortunately, because we're only testing these states, not running them, we don't get to see our requisite in action. So, let's go ahead and remove the test parameter:

$ sudo salt 'minion2' state.sls mysql.server,mysql.config,mysql.restart

Instead of a comment denoting that there were no changes, our mysql_restart state now just reports back as True -- this means it ran successfully. And what happens if we run it again? Since our configuration file has already been updated, there will be no changes, and we get a comment about how the state was not run.
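The finished restart state, as written in this lesson:

```yaml
# /srv/salt/mysql/restart.sls -- restart MariaDB only when the config changes
mysql_restart:
  module.wait:                  # wait on a requisite instead of running every highstate
    - name: service.restart     # the execution function to run when triggered
    - m_name: mariadb           # passed through as service.restart's own name argument
    - watch:
      - mysql_server_config     # the ID of our configuration state
```

The m_name prefix exists because name is already taken by module.wait itself; any argument of the wrapped function can be passed with an m_ prefix to avoid that collision.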

The init.sls File


Lesson Description:

The init.sls is a file that should be included with every formula. Which states we include in it depends heavily on how we're working with our formula, but generally, it pulls in things like installation states and mandatory users or configurations. For example, for our current formula, it would make sense to include our server.sls installation state and our config.sls state. Let's do so now:

$ cd /srv/salt/mysql
$ $EDITOR init.sls

We want to use an include to call our states:

include:
  - mysql.server
  - mysql.config

Notice that we call our states the same way as we do when running state.sls on the command line -- through the formula name (taken from the directory) and the file name of the state. Save and exit the file.

Now let's see what happens when we try to run the mysql formula on its own, without defining a state:

$ sudo salt 'minion2' state.sls mysql test=true

Our server installation and config states ran! This is because, when we run a formula by just using the formula name, we're running that formula's init.sls.

Notice that we didn't include our restart state; because our restart state is so closely related to our configuration state, it makes more sense to include it there. Includes aren't exclusive to the init.sls file and can be used in any state:

$ $EDITOR config.sls

include:
  - mysql.restart
...

Save and exit. Now, let's rerun our mysql formula:

$ sudo salt 'minion2' state.sls mysql test=true

Notice how this time all our states are included, and we've only run a single "state" -- mysql? The use of includes -- as well as the requisites we discussed in the last lesson -- can make for very clean, easy-to-use formulas.
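The whole init.sls for this formula is nothing more than the include:

```yaml
# /srv/salt/mysql/init.sls -- runs automatically when the formula is
# called by name, e.g. state.sls mysql
include:
  - mysql.server
  - mysql.config
```

Because config.sls in turn includes mysql.restart, running the bare mysql formula pulls in all three states.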

The top.sls File


Lesson Description:

The top.sls file sits in our file_roots and lets us map our states and formulas to our minions. We can do this in many of the same ways we target our minions when using execution modules: through minion names, regular expressions, grains, pillar data, subnets, datastore matches, and nodegroups.

Let's go ahead and create the top file for our own environment:

$ cd /srv/salt/
$ $EDITOR top.sls

We first want to set our environment. Since Salt is most commonly run in a single environment, called base (and that is how we are currently running Salt), we'll define this now:

base:

Next, in a hierarchy underneath, we can begin to name our targets. Our formula is currently only set to work on our minion2 minion, so we can set that as our first target:

base:
  'minion2':

And then, below this, we want to create a list of our desired states or formulas. Let's go ahead and add our mysql formula:

base:
  'minion2':
    - mysql

But what about instances where we only want to include a single state? To demonstrate, let's create that MySQL client state we discussed in the previous lesson; specifically, let's set it up so we can install the MySQL client on our Ubuntu minion:

$ cd mysql
$ $EDITOR client.sls

mysql_client_install:
  pkg.installed:
    - name: mysql-client

Now, we can go ahead and map this to our minion1 minion in our top file:

$ cd ..
$ $EDITOR top.sls

base:
  'minion1':
    - mysql.client
  'minion2':
    - mysql

Notice how this time we call the state the same way as we would when running it via the state.sls execution module: by the formula name (taken from the directory), then by the state name (taken from the file name). Save and exit the file.
Let's manually run a highstate, which is what we call the action of ensuring all our servers have their states enforced:

$ sudo salt '*' state.highstate test=true

Both our minion1 and minion2 servers return information about their states, with minion1 installing only the MySQL client, and minion2 installing the server and setting its configuration. Our salt minion returns an error because we have nothing mapped to it. This is expected behavior, and we can fix it by adding our hostname state:

base:
  'salt':
    - hostname
  'minion1':
    - mysql.client
  'minion2':
    - mysql

If we wanted to be more restrictive, given the limitations of our current formula, we could even rewrite this to ensure that only minions with the proper distribution may attempt to run our formula. This uses the same technique as compound targeting (because it is compound targeting); note that, in a top file, any non-glob matcher needs to be flagged, here with match: compound:

$ $EDITOR top.sls

base:
  'salt':
    - hostname
  'minion* or G@os:Ubuntu':
    - match: compound
    - mysql.client
  'minion* or G@os:CentOS':
    - match: compound
    - mysql

We can confirm this by running another highstate:

$ sudo salt '*' state.highstate test=true

Our results return the same as our previous highstate, despite the different targeting method.
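The finished top file, then, looks like this (the match: compound flag is required in a top file for any target that isn't a plain glob):

```yaml
# /srv/salt/top.sls -- maps formulas to minions in the base environment
base:
  'salt':
    - hostname
  'minion* or G@os:Ubuntu':
    - match: compound     # tells the top file this target is a compound matcher
    - mysql.client
  'minion* or G@os:CentOS':
    - match: compound
    - mysql
```

The G@ prefix in a compound target matches against grains, exactly as it does with salt -C on the command line.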




Lesson Description:

While we have a basic MySQL formula mocked up, we also know that there are a lot of limitations that come with it. Notably, we can only use it to install and configure a MySQL server on CentOS and install a client on Ubuntu. However, state creation does not end with plain YAML. Salt allows us to use a templating language to expand our states and ensure they work across whatever distributions or operating systems we need. By default, this templating language is Jinja. This is also what we'll be using in this course.

Jinja is a Python-based templating language -- remember that Salt is written in Python -- and works with Python 2.4 and above. When we use Jinja in a state, it is evaluated before the YAML itself, prior to the state running. Perhaps the easiest way to identify Jinja is by its opening and closing brackets, called delimiters:

{% ... %}: Statements
{{ ... }}: Expressions; in Salt, this is how we call variables
{# ... #}: Comments

Most commonly, we'll use Jinja to parameterize our variables, create if statements, and generate loops. In fact, let's get started by working on an if statement. If you're familiar with installing MySQL across distributions, then you might have guessed that we started with CentOS 7 while creating our formulas because Debian-based distributions, such as Ubuntu, require user input when installing the MySQL server, requiring the use of debconf in scripts and, yes, Salt. Because of this, if we want to be able to expand our formula to work across multiple distributions, we need to add some optional states in our server.sls that will only run when used on machines whose operating system is in the Debian family.

Add debconf to the MySQL Server State

Open the server.sls file for our MySQL formula:

$ cd /srv/salt/mysql
$ $EDITOR server.sls

We want to create an if statement in which we can include a state that we only want run if the parameters we set are matched.
In this instance, we want the state we're going to write to only run on systems with the operating system family of Debian:

{% if grains['os_family'] == 'Debian' %}
{% endif %}

Note the use of the {% delimiters. Since we need to match an operating system family, we call a grain with the grains function. We also make sure to close our statement. We next have to write a state that will install the debconf utility. This is the same as if we were to write it outside of our if statement:

{% if grains['os_family'] == 'Debian' %}
mysql_debconf_install:
  pkg.installed:
    - name: debconf
{% endif %}

Finally, we want to add our debconf settings. For MySQL, this means setting the root password in two different settings. We can do this through the use of the debconf state module:

mysql_debconf_settings:
  debconf.set:
    - name: mysql-server
    - data:
        'mysql-server/root_password': {'type': 'password', 'value': 'temppass'}
        'mysql-server/root_password_again': {'type': 'password', 'value': 'temppass'}

As our formula grows, we also want to consider our requisites as we write, not as something to be added later. For this state in particular, we need the debconf package to be installed. However, this state is also necessary for our mysql_server_install to work on Debian-based distros. As such, this is an excellent time to use a require_in -- that way, the state will only be required for Debian-family minions.
mysql_debconf_settings:
  debconf.set:
    - name: mysql-server
    - data:
        'mysql-server/root_password': {'type': 'password', 'value': 'temppass'}
        'mysql-server/root_password_again': {'type': 'password', 'value': 'temppass'}
    - require:
      - pkg: debconf
    - require_in:
      - mysql_server_install

So we can test what we have so far, let's change the package name in our regular server installation state from mariadb-server to mysql-server, so we can test our state on our Ubuntu minion:

mysql_server_install:
  pkg.installed:
    - name: mysql-server

Finally, let's test, then run, our server state against our minion1 minion -- we just want to run the single mysql.server state, not the entire formula:

$ sudo salt 'minion1' state.sls mysql.server test=true
$ sudo salt 'minion1' state.sls mysql.server

Note that we do not want to commit any of this to GitHub, for this lesson as well as the rest of this section. We now have password information in our states, which we do not want to share, even if it is a temporary password on a demo environment.
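Assembled, the whole templated server.sls from this lesson reads:

```yaml
# /srv/salt/mysql/server.sls -- with the Debian-only debconf states
{% if grains['os_family'] == 'Debian' %}
mysql_debconf_install:
  pkg.installed:
    - name: debconf

mysql_debconf_settings:
  debconf.set:
    - name: mysql-server
    - data:
        'mysql-server/root_password': {'type': 'password', 'value': 'temppass'}
        'mysql-server/root_password_again': {'type': 'password', 'value': 'temppass'}
    - require:
      - pkg: debconf        # debconf must be installed before we can set values
    - require_in:
      - mysql_server_install # and the settings must exist before the server installs
{% endif %}

mysql_server_install:
  pkg.installed:
    - name: mysql-server    # temporarily hard-coded for the Ubuntu test
{% endif %}
```

Wait -- note there is only one {% endif %}; the block above shows the file as written in this lesson, with the Debian-only states wrapped in a single if statement:

```yaml
{% if grains['os_family'] == 'Debian' %}
mysql_debconf_install:
  pkg.installed:
    - name: debconf
{% endif %}
```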

The map.jinja File


Lesson Description:

Now, with our formula in its half-CentOS and half-Ubuntu state, we need to template out our work further by making use of Jinja to parameterize things such as our package names and file locations on a per-distribution basis. This is done through what is commonly called a map.jinja file, although the name does not have to be map.jinja. The map.jinja file allows us to create a mapping of variables that vary between distributions or distro families. We can then reference these variables in our states, and the appropriate value will be used depending on the distribution of the minion. This works because Jinja is rendered before the states themselves are run.

Let's start by working on that same server.sls file we added an if statement to earlier:

{% if grains['os_family'] == 'Debian' %}
mysql_debconf_install:
  pkg.installed:
    - name: debconf

mysql_debconf_settings:
  debconf.set:
    - name: mysql-server
    - data:
        'mysql-server/root_password': {'type': 'password', 'value': 'temppass'}
        'mysql-server/root_password_again': {'type': 'password', 'value': 'temppass'}
    - require:
      - pkg: debconf
    - require_in:
      - mysql_server_install
{% endif %}

mysql_server_install:
  pkg.installed:
    - name: mariadb-server

Although we could parameterize the debconf install itself, since that part is already specific to Debian-based distros, we instead want to consider the mysql_server_install state itself. Specifically, we want to make sure the name parameter for the installation module uses the appropriate package on the appropriate distro.

Let's move into our mysql formula folder and begin to extend our formula by creating a map.jinja file:

$ cd /srv/salt/mysql
$ $EDITOR map.jinja

From here, we begin by using the set template tag to inform Salt that we're creating a set of variables we're going to use. It's in this line that we also define that we're filtering our variables via grain:

{% set mysql = salt['grains.filter_by']({

The name mysql here is arbitrary -- it could be anything. I tend to simply use the formula name. Another thing you may have noticed is that we didn't define which grain we're filtering with. This is because the os_family grain is used by default, and this is the grain we're going to be using. We also want to take this time to close our set's brackets:

})%}

Now, within our two sets of brackets, we want to define our first subset. We've been working with CentOS predominantly, so let's start there. CentOS is in the Red Hat family, so RedHat is the operating system family we're going to reference:

{% set mysql = salt['grains.filter_by']({
  'RedHat': {
    'server': 'mariadb-server',
  },
})%}

In this, we define our subset (again, RedHat), then create the server variable and give it the value mariadb-server, which, as we know, is the name of the package we use in our server.sls state. One thing we want to note here, as well, is that after we define our key-value pair, we end the line in a comma. We also do this when we end our subset. Never forget your commas! We also want to go ahead and add this same variable for Debian-based distros, like our Ubuntu server.
Although it's not mandatory or even considered best practice to keep our subsets in alphabetical order, I'll be adding our Debian set above RedHat just for that purpose:

{% set mysql = salt['grains.filter_by']({
  'Debian': {
    'server': 'mysql-server',
  },
  'RedHat': {
    'server': 'mariadb-server',
  },
})%}

Let's now save and exit our file, then consider how we're going to take the set we just created and use it within our states. Once more, open up the server.sls file:

$ $EDITOR server.sls

To call the variable we just added, we need to import our map file. We do this with a single from statement:

{% from "mysql/map.jinja" import mysql with context %}

The mysql/map.jinja value references the location of our map file from our file_roots, while the mysql value comes from our set name (not the formula name). with context references a Jinja behavior -- imported templates only retain their context when told to do so.

Finally, we also want to reference the variable we added in our mapping. To call a variable, we use double curly brackets ({{ ... }}), then reference the set name, followed by the variable name:

- name: {{ mysql.server }}

Save and exit the file. Let's see what happens when we test this:

$ sudo salt 'minion*' state.sls mysql.server test=true

Notice how the Name values in our output reference a different package depending on the distro, just as we intended. We can also see that our debconf states only work on our Ubuntu server.

Let's go ahead and update the rest of our formula to use our map.jinja file.

Update MySQL Formula

The following values need to be added to the map.jinja file:

client.sls: the name package
config.sls: add an Ubuntu server config (found here); the name value, the source value, and the required package value
restart.sls: the m_name value

Those feeling confident should try to update the map.jinja file on their own first!

Add the Debian-based configuration file to the files directory:

$ curl -o files/deb_mysqld.cnf

The deb prefix was added to keep our files organized.
I'm also going to rename the original file so we know it's intended for Red Hat-based servers: $ mv files/server.cnf files/rh_server.cnf With this prep work done, we can now focus on updating our states. Reopen the map.jinja file. You may find it beneficial to have two terminal tabs or panes open -- one for our map file and one for the state we're updating. We'll be starting with the client.sls state. For this state, there's only one value that we want to add to our mapping: the name value, currently set to mysql-client, which is the value we want to input for our Debian set. The name of the client package for Red Hat-based systems, including CentOS, is mariadb: 'Debian': { 'server': 'mysql-server', 'client': 'mysql-client', }, 'RedHat': { 'server': 'mariadb-server', 'client': 'mariadb', }, We now want to update the state itself: {% from "mysql/map.jinja" import mysql with context %} mysql_client_install: pkg.installed: - name: {{ mysql.client }} We now want to do this same thing to our config.sls. We're going to name the variable for our name parameter server_conf and the source server_conf_source. The required package should use our server variable, created earlier: ... 'server_conf': '/etc/mysql/mysql.conf.d/mysqld.cnf', 'server_conf_source': 'salt://mysql/files/deb_mysqld.cnf', ... 'server_conf': '/etc/my.cnf.d/server.cnf', 'server_conf_source': 'salt://mysql/files/rh_server.cnf', ... Then update the state: {% from "mysql/map.jinja" import mysql with context %} include: - mysql.restart mysql_server_config: file.managed: - name: {{ mysql.server_conf }} - source: {{ mysql.server_conf_source }} - require: - pkg: {{ mysql.server }} The last state we need to update is the restart state. As before, first add the needed variable to the map.jinja file (service): ... 'service': 'mysql', ... 'service': 'mariadb', ... 
Then update the state: {% from "mysql/map.jinja" import mysql with context %} mysql_restart: module.wait: - name: service.restart - m_name: {{ mysql.service }} - onchanges: - mysql_server_config Test the formula: $ sudo salt 'minion*' state.sls mysql test=true $ sudo salt 'minion*' state.sls mysql.client test=true
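The grain-based selection that our map.jinja performs can be sketched in plain Python. This is an illustrative stand-in, not Salt's actual grains.filter_by implementation (the real function also supports merging defaults and filtering on other grains):

```python
# Illustrative sketch of the selection grains.filter_by performs for our
# map -- NOT Salt's actual implementation (the real function also supports
# merging and choosing grains other than os_family).

def filter_by(lookup, grain_value, default=None):
    """Return the sub-dictionary keyed by the minion's grain value."""
    return lookup.get(grain_value, default)

mysql_map = {
    'Debian': {'server': 'mysql-server', 'client': 'mysql-client'},
    'RedHat': {'server': 'mariadb-server', 'client': 'mariadb'},
}

# A CentOS minion reports its os_family grain as 'RedHat':
print(filter_by(mysql_map, 'RedHat')['server'])  # mariadb-server
```

This is why {{ mysql.server }} renders to mysql-server on Ubuntu and mariadb-server on CentOS without any per-distro logic in the state itself.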




Lesson Description:

Now that our MySQL formula can work across multiple distros, we have one last concept to include to really complete our work: Pillar. Pillar, if you recall from the "Concepts" section of this course, is a data store wherein we can house user-provided key-value pairs. These pairs can then be referenced in our MySQL states. But first, let's consider some benefits and use-cases for Pillar. Pillar can be used to store variables, to house our minion configurations (think back to our configuration section -- we often noted additional configurations that could be stored in Pillar), to contain any arbitrary data that would benefit from being stored in a dictionary, and for highly-sensitive data, such as passwords. Pillar is stored outside of our file_roots, in the aptly-named pillar_roots, which, by default, is found at /srv/pillar. To set this up, we would have to follow much the same steps as when we set up our file_roots: Create /srv/pillar: $ sudo mkdir /srv/pillar Set the pillar_roots so anyone in the salt group can add and edit files: $ sudo chgrp salt /srv/pillar/ $ sudo chmod 775 /srv/pillar/ $ sudo chmod g+s /srv/pillar/ Log out, then log back in; or run bash to refresh. Now, Pillar itself is already set up on all of our servers. Although we had to add the default directory, no additional configurations need to be made for Pillar to function. Instead, we can jump right in and begin to add pillar to our formula. Let's consider our end goal again: We want to create a MySQL formula that will add two databases to our database servers. To do this, we need to provide Salt with information regarding the root user, so it can use that information to log in to our MySQL instance. Additionally, we need to provide Salt with the MySQL Python connector. Let's first create that state, then include it in our init.sls file: $ vim /srv/salt/mysql/map.jinja ... 'Debian': { ... 'python': 'python-mysqldb', ... 'RedHat': { ... 'python': 'MySQL-python', ... 
$ vim /srv/salt/mysql/python.sls {% from "mysql/map.jinja" import mysql with context %} mysql_python_install: pkg.installed: - name: {{ mysql.python }} $ vim /srv/salt/mysql/init.sls include: - mysql.server - mysql.config - mysql.python $ sudo salt 'minion*' state.sls mysql test=true $ sudo salt 'minion*' state.sls mysql With that done, we can now create the state to ensure our root user can be used within our formula: Add the root Connection User Let's first create the state for our MySQL root user: $ vim /srv/salt/mysql/root.sls mysql_root_user: mysql_user.present: - name: root - password: temppass - host: localhost Notice that we use the mysql_user.present state module, which is a state module we gain access to when MySQL is included on a minion. As you may have guessed, this ensures a MySQL user is present on the server, given the defined parameters. We now want to start building out our Pillar data. Pillar works in a way that is remarkably similar to our states -- we can either include a mysql.sls file in the pillar_roots itself, or we can add a mysql directory and store specific MySQL-related pillar files below it, referencing it the same way we would otherwise -- mysql.<pillarfile>. We'll just be using a simple mysql.sls in our pillar_roots: $ cd /srv/pillar $ vim mysql.sls The first thing we want to do is set the name of this dictionary -- since, at its core, that's what we're creating: a dictionary. This should somehow reference what this Pillar data is used for. In our instance, it's mysql: mysql: It is entirely possible to use a plain key-value pair, such as password: passpass, in this file instead of creating a dictionary; however, this is a rare, albeit simple, use case. We now want to define our subdict. 
It might be tempting to name this something like users, but since this is specifically setting up our connection user, we're going to keep this simple and separate, using the name root: mysql: root: Let's now add some key-value pairs to this dictionary; specifically, let's define the root user and password: mysql: root: name: root password: temppass The temppass value is the same as the root password we set in the earlier lesson for debconf; we'll cover how to encrypt this in the next lesson. Save and exit the pillar file. We now need to map our pillar data to the minions that need it. This is done with a top.sls file, much like when we map states to minions in our file_roots: $ vim top.sls base: 'minion*': - mysql Note that when mapping pillar files to minions, they are applied in order; if there are conflicting keys, the last value for that key will be the one used. Save and exit when done. Let's now refresh our pillar data and check to see that Salt can read our pillar items: $ sudo salt '*' saltutil.refresh_pillar $ sudo salt '*' pillar.items Pillar data is stored in memory to keep costs down -- compiling data can get expensive! Natively, pillar is refreshed at minion start and during state runs. Now that we have our data in pillar, we can reference it within our states. Let's open our root.sls state: $ cd /srv/salt/mysql $ vim root.sls To call a value from pillar, we use the {{ pillar['key'] }} format. Notice the double curly brackets -- just like calling a Jinja variable. Since we have nested keys, we would use: {{ pillar['key']['nested-key-1']['nested-key-2'] }}. 
Let's go ahead and use this to update our state: mysql_root_user: mysql_user.present: - name: {{ pillar['mysql']['root']['name'] }} - password: {{ pillar['mysql']['root']['password'] }} - host: localhost Save and exit, then add the root state to our init file: $ vim init.sls include: - mysql.server - mysql.config - mysql.python - mysql.root Test and run the formula: $ sudo salt 'minion*' state.sls mysql test=true Notice the CentOS 7 error -- this is because although we have our packages installed, we never added a state to start and enable our service. Let's quickly add a service state, then try again: $ vim service.sls {% from "mysql/map.jinja" import mysql with context %} mysql_service_enable: service.running: - name: {{ mysql.service }} Note that we already have the appropriate variable available from our restart state. $ vim init.sls include: - mysql.server - mysql.service - mysql.config - mysql.python - mysql.root Retest: $ sudo salt 'minion*' state.sls mysql test=true Run the formula: $ sudo salt 'minion*' state.sls mysql
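Alongside the {{ pillar['key'] }} syntax, Salt also offers salt['pillar.get']('mysql:root:password', 'fallback') with a colon-delimited path and a default value, which we'll use later in this course. A minimal Python sketch of that traversal (illustrative only, not Salt's actual implementation):

```python
# Illustrative sketch of a colon-delimited pillar lookup such as
# salt['pillar.get']('mysql:root:password', 'fallback') -- not Salt's
# actual implementation, just the traversal idea.

pillar = {
    'mysql': {
        'root': {'name': 'root', 'password': 'temppass'},
    },
}

def pillar_get(data, path, default=None):
    """Walk nested dictionaries following a 'key:subkey' path."""
    current = data
    for key in path.split(':'):
        if not isinstance(current, dict) or key not in current:
            return default
        current = current[key]
    return current

print(pillar_get(pillar, 'mysql:root:password'))   # temppass
print(pillar_get(pillar, 'mysql:missing', 'n/a'))  # n/a
```

The default value is what makes pillar.get safer than chained bracket lookups: a missing key returns the fallback instead of raising a rendering error.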

The GPG Renderer


Lesson Description:

Although getting our passwords out of our Salt states and into Pillar is a step in the right direction with regards to securing Salt, we can take this one step further by encrypting our data using Salt's GPG renderer. This allows Salt to take our encrypted data, read it, and use it within our states. Before we begin encrypting passwords, however, we need to prepare our master server to use this renderer. Prepare GPG Renderer: Create a directory in which we can store our GPG keys: $ sudo mkdir /etc/salt/gpgkeys Set the permissions: $ sudo chmod 0700 /etc/salt/gpgkeys Chances are that our Salt master does not have enough entropy to create our master key. So let's first generate some extra entropy: $ sudo apt-get install rng-tools $ sudo rngd -r /dev/urandom Let's now create our master key: $ sudo gpg --gen-key --homedir /etc/salt/gpgkeys Leave the kind of key, keysize, and expiration date as the default settings. Generally, when creating this key, you'll want to follow your company's existing policies regarding GPG keys and expiration. Set the full name to Saltstack, and leave the email and comment blank. Do not input a password -- Salt will not be able to use this key if a password is used. To use this key to encrypt our files, we need to import it to our workstation. In this case, this is the master itself, although generally, it will be whatever environment you've set up to create your states. First, let's export the key: $ sudo gpg --homedir /etc/salt/gpgkeys --armor --export Saltstack > exported_pubkey.gpg Then import it: $ gpg --import exported_pubkey.gpg We're now set to create "secrets" -- or encrypted words or phrases -- to use in our pillar. Let's work on encrypting that temppass password we use for our root MySQL user. 
Encrypt Pillar Data We first want to encrypt our password: $ echo -n "temppass" | gpg --armor --trust-model always --encrypt -r Saltstack -----BEGIN PGP MESSAGE----- Version: GnuPG v1 hQEMA+IrJEeFbLG0AQf/dahR/1jvoO4+OV9yIenHkFKHerTNwWiBXgEq3nVCI1w/ 9KgNLpi5bvD/kv3NN0BhPNHCaqR2Y71t5hjHFqR8Fy8X99HXfXuyY/hvvKkr9vN6 76MKUPS4MCyldvcR1IXY+AZdvjM0eM3YkV5YnJpGry9m2ZXJXEEpIQFUq3hjFBk8 v4StCGaWnSgXzHCyMWjJ7L3iQi3GfOfUFeQlkRr380j2/PaaH4qJYKJ5FUMZ+D5D csIOQmyCQQT/gQSkBODQecIlD+sScZKdP+vMyMZmYcBDVupW/W6uF/xsll1dPLGG CTdbcPg5UMl/uR6v8YWtKovwD1VuTJ1X+ZuKIlFEodJEAW7q6N4tTPeVhyaZblNW ElE7zBSjB1anAtxJ7sv7+fir+lhFYjwtpBn2O6aXZ4irGmOBv8kqcMOIB0s9rId4 Pk3cyG0= =Brqj -----END PGP MESSAGE----- Make note of this output. This is what we'll be adding to our pillar file. Now we can open our mysql.sls pillar file: $ vim /srv/pillar/mysql.sls From here, we want to tell Salt that our file contains a cipher that Salt will need to decrypt. To do this, we add a hashbang at the top of our file, informing Salt which renderers it will need to use: #!yaml|gpg We can now add that PGP block to our password value. To do this, we use a pipe (|) to tell Salt to look one line down at the block of text provided: mysql: root: name: root password: | -----BEGIN PGP MESSAGE----- Version: GnuPG v1 hQEMA+IrJEeFbLG0AQf/dahR/1jvoO4+OV9yIenHkFKHerTNwWiBXgEq3nVCI1w/ 9KgNLpi5bvD/kv3NN0BhPNHCaqR2Y71t5hjHFqR8Fy8X99HXfXuyY/hvvKkr9vN6 76MKUPS4MCyldvcR1IXY+AZdvjM0eM3YkV5YnJpGry9m2ZXJXEEpIQFUq3hjFBk8 v4StCGaWnSgXzHCyMWjJ7L3iQi3GfOfUFeQlkRr380j2/PaaH4qJYKJ5FUMZ+D5D csIOQmyCQQT/gQSkBODQecIlD+sScZKdP+vMyMZmYcBDVupW/W6uF/xsll1dPLGG CTdbcPg5UMl/uR6v8YWtKovwD1VuTJ1X+ZuKIlFEodJEAW7q6N4tTPeVhyaZblNW ElE7zBSjB1anAtxJ7sv7+fir+lhFYjwtpBn2O6aXZ4irGmOBv8kqcMOIB0s9rId4 Pk3cyG0= =Brqj -----END PGP MESSAGE----- Also notice that the message is spaced in, underneath the `password` key, not inline with it. 
Save and exit, then refresh the pillar data: $ sudo salt '*' saltutil.refresh_pillar Let's see what happens when we run our MySQL state now: $ sudo salt 'minion*' state.sls mysql Nothing! It looks exactly as we left it before, because we didn't actually change anything, just supplied the password in an encrypted form. We now want to take that stored password key and use it in our debconf states: $ vim /srv/salt/mysql/server.sls mysql_debconf_settings: debconf.set: - name: mysql-server - data: 'mysql-server/root_password': {'type': 'password', 'value': '{{ pillar['mysql']['root']['password'] }}'} 'mysql-server/root_password_again': {'type': 'password', 'value': '{{ pillar['mysql']['root']['password'] }}'} - require: - pkg: debconf - require_in: - mysql_server_install Save and exit. Let's reapply the formula to ensure no changes are made: $ sudo salt 'minion*' state.sls mysql We can also check that our password is rendered properly by viewing our pillar items: $ sudo salt '*' pillar.items Notice that the password is unencrypted when the results are output. Finally, with our passwords removed from our state, we can now push to GitHub: $ cd /srv/salt/mysql/ $ git add . $ git commit -am "Added service, root, and python states; stripped passwords" $ git push origin master

Pillar and Jinja


Lesson Description:

Pillar, when used alongside Jinja, can allow for advanced state creation, letting us expand our formulas so we may quickly and easily perform a number of tasks with only a few states. Create Database State The first thing we want to do is create that MySQL database state. Within the mysql directory in our file_roots, create and open database.sls: $ $EDITOR database.sls From here, we're going to write our basic state. The state name we're giving it is going to be temporary, but for now we'll use mysql_db_create, along with the mentioned mysql_database.present module: mysql_db_create: mysql_database.present: We now need to set our parameters, which include the name of our database, its host, and the connection information. In this case, our connection information is taken from our mysql:root pillar data: mysql_db_create: mysql_database.present: - name: eski - host: localhost - connection_user: {{ pillar['mysql']['root']['name'] }} - connection_pass: {{ pillar['mysql']['root']['password'] }} - connection_charset: utf8 So now we know, when adding database information, we need to include the database name and the host information. Let's take this new knowledge and add it to our pillar file. We know we need two databases: eski and internal, both on the localhost. But we first want to add the name of our subdictionary, database: $ $EDITOR /srv/pillar/mysql.sls mysql: root: name: root password: | database: We can now add information for each database: database: eski: host: localhost internal: host: localhost You may have noticed that we don't specifically define the name value; this is because when we create our loop, we'll pull the name from the sub-dictionary name itself. Save and exit. Return to the database.sls state file. We want to create a loop that will cycle through our dictionaries. This is done using Jinja and will function like any other programmatic loop. In fact, it works exactly the same as creating a loop in Python, with a few added Salt and Jinja features. 
$ $EDITOR /srv/salt/mysql/database.sls So let's create our for loop. I'm going to first provide the code line itself, and then we'll break it down: {% for database, arg in salt['pillar.get']('mysql:database', {}).items() %} database, arg: These are arbitrary variable names we're using to reference parts of our pillar. The database value references our database name, and the arg value references any keys below it. salt['pillar.get'](): Ignoring the values provided in the function, salt['pillar.get']() is simply a function that retrieves our pillar data. It takes two values: the location of the pillar data, and a default value to fall back on. ('mysql:database', {}): These are the values for this specific instance of our function. We call mysql:database for the first value because that is the dictionary that contains our database information. The empty dictionary after it is the default, used if that pillar key does not exist. .items: This informs Salt to iterate through each subdictionary under database. In Python 2, this is .iteritems. Both versions currently work in Salt. Close the for loop at the end of the file: {% endfor %} Finally, we can call our values in our state. To reference the database name itself, we only need to use the database variable defined in the first bit of our loop. To reference an argument, we use arg.<argument name>. To reference our host, for example, we would use {{ }}: {% for database, arg in salt['pillar.get']('mysql:database', {}).items() %} mysql_db_create: mysql_database.present: - name: {{ database }} - host: {{ }} - connection_user: {{ pillar['mysql']['root']['name'] }} - connection_pass: {{ pillar['mysql']['root']['password'] }} - connection_charset: utf8 {% endfor %} However, we aren't done quite yet. Although we're looping through twice -- once for each database -- both states created would have the same name declaration, which would result in an error. 
As such, we also want to reference our database name in the name declaration itself: {% for database, arg in salt['pillar.get']('mysql:database', {}).items() %} mysql_{{ database }}_db_create: mysql_database.present: - name: {{ database }} - host: {{ }} - connection_user: {{ pillar['mysql']['root']['name'] }} - connection_pass: {{ pillar['mysql']['root']['password'] }} - connection_charset: utf8 {% endfor %} Save and exit the file. Finally, let's test and run our database state to see what happens: $ sudo salt 'minion*' state.sls mysql.database test=true Notice in the output that each minion has two resulting states, not one: One for each database we need to create! We now run the state for real: $ sudo salt 'minion*' state.sls mysql.database
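To see why interpolating the database name into the state ID matters, here's a small Python sketch of the IDs the loop produces from our pillar data:

```python
# Sketch of the state IDs our Jinja loop emits -- one per entry under
# mysql:database in pillar. Without the database name in the ID, both
# iterations would produce the duplicate ID 'mysql_db_create' and error.

databases = {
    'eski': {'host': 'localhost'},
    'internal': {'host': 'localhost'},
}

state_ids = ['mysql_{0}_db_create'.format(name) for name in databases]
print(sorted(state_ids))  # ['mysql_eski_db_create', 'mysql_internal_db_create']
```

Two distinct IDs, so the rendered SLS is valid YAML with no duplicate keys.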

QUIZ: States and Formulas



Event System

Event System Overview


Lesson Description:

Salt's event system, or event bus, is used for inter-process communication and network transport and is comprised of two components: the event socket, which publishes events; and the event library, which listens to events and sends events to the Salt system. The Salt master and each Salt minion has its own event bus. Let's go ahead and take a look at the event bus in action. Open a second window, tab, or pane, and SSH into another instance of your Salt master. On the CLI for the instance of your master, input the following to connect to the event bus: $ sudo salt-run state.event pretty=True Since our test environment is pretty uneventful compared to most production systems, we're probably not going to see anything here. So, on the second instance of our master, try running a command: $ sudo salt 'minion*' salt/job/20180605133538313578/ret/minion2 { "_stamp": "2018-06-05T13:35:38.361872", "cmd": "_return", "fun": "", "fun_args": [], "id": "minion2", "jid": "20180605133538313578", "retcode": 0, "return": true, "success": true } salt/job/20180605133538313578/ret/minion1 { "_stamp": "2018-06-05T13:35:38.374056", "cmd": "_return", "fun": "", "fun_args": [], "id": "minion1", "jid": "20180605133538313578", "retcode": 0, "return": true, "success": true } Now watch the event bus. As each individual job is done, we receive information about the job ID, the command run, the function used, any arguments provided, and any return information, including if the job succeeded. Let's intentionally run a state to fail to see the differences. 
Since we know our mysql.config state can't run without its requisites, we can use this to test: $ sudo salt 'minion1' state.sls mysql.config salt/job/20180605133622335971/ret/minion1 { "_stamp": "2018-06-05T13:36:22.669850", "cmd": "_return", "fun": "state.sls", "fun_args": [ "mysql.config" ], "id": "minion1", "jid": "20180605133622335971", "out": "highstate", "retcode": 2, "return": { "file_|-mysql_server_config_|-/etc/mysql/mysql.conf.d/mysqld.cnf_|-managed": { "__run_num__": 0, "__sls__": "mysql.config", "changes": {}, "comment": "The following requisites were not found:\n require:\n pkg: mysql-server\n", "result": false }, "module_|-mysql_restart_|-service.restart_|-wait": { "__run_num__": 1, "__sls__": "mysql.restart", "changes": {}, "comment": "One or more requisite failed: mysql.config.mysql_server_config", "result": false } }, "success": true } file.managed { "__run_num__": 0, "__sls__": "mysql.config", "_stamp": "2018-06-05T13:36:22.670215", "changes": {}, "comment": "The following requisites were not found:\n require:\n pkg: mysql-server\n", "result": false, "retcode": 2 } Notice that the output in our event bus contains the same comments and results as the information returned to us when we ran the state itself, only in a more machine-readable format. Now let's look at the output itself. The event tag is the designating line of data that starts each job. It looks like this: salt/job/20180605133622335971/ret/minion1 and always begins with the word salt. The slash (/) is used for namespacing, with the following segments referencing what is happening (a job is running), the ID of the job (something like 20180605133622335971), as well as information regarding which minion this job was run on. To see how this varies between different kinds of jobs, log in to one of your Salt minions and restart the salt-minion service: $ sudo systemctl restart salt-minion Now watch the event bus. 
This time we get data about the entire minion connection process: First the salt/auth service authenticating the minion, then Salt refreshing the minion's data cache (minion/refresh/minion1). The minion service is then triggered and started (minion_start and salt/minion/minion1/start). The rest of the returned data is just that -- event data. This always contains a timestamp, but otherwise, the data returned is unique to the specific event. Notice how when we ran our mysql.config state it contained information about each state module used, and how when we restarted the salt-minion service some events contained only a timestamp.
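To make the slash-namespaced tag structure concrete, here's a hypothetical Python helper (not part of Salt) that pulls the job ID and minion out of a job return tag like the ones above:

```python
# Hypothetical helper showing how a slash-namespaced job return tag
# breaks down into its parts; the tag layout follows the lesson's example.

def parse_job_tag(tag):
    """Split 'salt/job/<jid>/ret/<minion>' into its components."""
    parts = tag.split('/')
    if len(parts) == 5 and parts[:2] == ['salt', 'job'] and parts[3] == 'ret':
        return {'jid': parts[2], 'minion': parts[4]}
    return None  # not a job return tag (e.g. 'minion_start')

tag = 'salt/job/20180605133622335971/ret/minion1'
print(parse_job_tag(tag))  # {'jid': '20180605133622335971', 'minion': 'minion1'}
```

This same namespacing is what lets the reactor system, covered later, match whole families of events with a single wildcard pattern.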

Event Types


Lesson Description:

Salt contains a number of built-in events that the event bus reports on. Beyond this, we also have the option to enable optional events, and even write custom ones. Much like the "Components" lesson earlier in the course, this is going to be more of a conceptual rundown of event types.

Beacons and Reactors



Lesson Description:

We can use Salt to monitor non-Salt processes by using beacons. Beacons work by allowing minions to monitor various processes and send reports to the event bus when certain activities occur, such as when disk usage reaches a certain percentage or when a file is changed on the system. Beacons only monitor processes and send events to the event bus. To trigger any actions from these events, we have to use the Reactor system, which is covered in the next section. Create a Beacon to Monitor File Changes To get hands-on with using beacons, we're going to set up the Beacon system so that it triggers an alert whenever there are any changes to the MySQL configuration file we have managed by Salt. As with most minion configurations, to do this, we can either update our minion configuration, add a configuration file to the /etc/salt/minion.d directory, or save the information in our pillar_roots. Since we've added configuration changes to our minion configs more than once, let's add our beacon information to pillar. Before we begin, we need to install the Python inotify connector on our minions: $ sudo salt 'minion1' pkg.install python-pyinotify $ sudo salt 'minion2' pkg.install python-inotify Move into the pillar_roots and create a beacons.sls file to store our beacon configurations: $ cd /srv/pillar $ $EDITOR beacons.sls To track a file in pillar, we want to use the inotify beacon. This beacon watches files for changes, then translates this into a Salt event. beacons: inotify: From here, we want to define the files we want to watch. We can use Jinja if statements to ensure the correct file is monitored on the correct operating system family: beacons: inotify: - files: {% if grains['os_family']=="RedHat" %} /etc/my.cnf.d/server.cnf: {% endif %} {% if grains['os_family']=="Debian" %} /etc/mysql/mysql.conf.d/mysqld.cnf: {% endif %} mask: - modify The modify mask tells inotify to trigger only when the file is modified. 
We can also change this to trigger any time the file is viewed, deleted, moved, or otherwise. Finally, since we're going to be using this beacon to notify a reactor that it needs to reset the file whenever it's edited, we need to add a line preventing this beacon from triggering whenever a state that updates these files is run: beacons: inotify: - files: {% if grains['os_family']=="RedHat" %} /etc/my.cnf.d/server.cnf: {% endif %} {% if grains['os_family']=="Debian" %} /etc/mysql/mysql.conf.d/mysqld.cnf: {% endif %} mask: - modify - disable_during_state_run: True Save and exit. Map the beacon configuration to our minions: $ $EDITOR top.sls base: 'minion*': - mysql - apache - beacons Save and exit, then force a pillar refresh: $ sudo salt '*' saltutil.refresh_pillar Now let's check to make sure this works. On the master, open up the event bus: $ sudo salt-run state.event pretty=true In a different tab, log in to the minion1 minion, then open the /etc/mysql/mysql.conf.d/mysqld.cnf file and make an arbitrary change. View the output in the event bus: salt/beacon/minion1/inotify//etc/mysql/mysql.conf.d/mysqld.cnf { "_stamp": "2018-06-11T12:51:34.552158", "change": "IN_IGNORED", "id": "minion1", "path": "/etc/mysql/mysql.conf.d/mysqld.cnf" } As expected, we get an event about the changes made to that file, which notes the file that was changed, the minion where the change took place, and a timestamp. If we repeat this with our other minion, we'll get a similar result: salt/beacon/minion2/inotify//etc/my.cnf.d/server.cnf { "_stamp": "2018-06-11T12:51:08.192400", "change": "IN_IGNORED", "id": "minion2", "path": "/etc/my.cnf.d/server.cnf" } With our beacon set up, we can now take this trigger and use it with a reactor to enforce our states and prevent changes.
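For clarity, here's a simplified, hypothetical Python sketch of what those Jinja conditionals render to: each minion ends up watching only the config path that matches its os_family grain.

```python
# Simplified, hypothetical sketch of the rendered beacon data per minion.
# Real rendering happens when pillar is compiled for each minion; the
# exact nesting of mask under the file path is abbreviated here.

watched_paths = {
    'Debian': '/etc/mysql/mysql.conf.d/mysqld.cnf',
    'RedHat': '/etc/my.cnf.d/server.cnf',
}

def rendered_beacon(os_family):
    """Return simplified inotify beacon data for one os_family."""
    return {'inotify': [
        {'files': {watched_paths[os_family]: {'mask': ['modify']}}},
        {'disable_during_state_run': True},
    ]}

print(rendered_beacon('RedHat')['inotify'][0]['files'])
# {'/etc/my.cnf.d/server.cnf': {'mask': ['modify']}}
```

Because the conditionals are evaluated per minion, one pillar file serves both distros without either minion ever seeing the other's path.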



Lesson Description:

We can use Salt's reactor system alongside beacons to perform actions based on events. The reactor system monitors the event bus, looking for certain event tags based on what we define. These tags are then associated with certain reactor sls files, which define what actions we want Salt to take when certain events occur. Currently, there are four types of reactions we can use: Local or remote execution, runner, wheel, and caller. Create Reactor Directory Create /srv/reactor: $ sudo mkdir /srv/reactor Set the directory so all users in the salt group can add and edit files: $ sudo chgrp salt /srv/reactor/ $ sudo chmod 775 /srv/reactor/ $ sudo chmod g+s /srv/reactor/ Log out, then log back in; or run bash to refresh the terminal. Create Reactor to Reset Config Files Creating reactors is a two-fold process: We need to map the reactor sls file to the event tag that triggers it in our master config, and we need to create the reactor sls within our reactor directory. Note that although reactor files are similar to state files, they are not the same. Move into the reactor directory: $ cd /srv/reactor We can either add a file directly to this directory or store our files under organized subdirectories. Since we're specifically working with restoring our MySQL config file, let's create a mysql directory, then move into it: $ mkdir mysql $ cd mysql Create and open deb_config.sls: $ $EDITOR deb_config.sls Although we could theoretically create a single reactor file covering both operating systems, it would involve some advanced regex in the reactor configuration; to keep things easy, we'll be separating things. We can now set up a reactor sls to reinforce our MySQL server configuration file on our Debian-based servers. 
Using the local reactor type, let's first define our targeted minions: restore_deb_mysql_server_config: local.state.single: - tgt: 'E@minion* and G@os_family:Debian' - tgt_type: compound Notice that we're using the state.single function, which lets us run a single state function against our target. Add the arguments for the file.managed function: restore_deb_mysql_server_config: local.state.single: - tgt: 'E@minion* and G@os_family:Debian' - tgt_type: compound - args: - fun: file.managed - name: /etc/mysql/mysql.conf.d/mysqld.cnf - source: salt://mysql/files/deb_mysqld.cnf Save and exit. Repeat this process for rh_config.sls -- a RedHat version of the above sls: restore_rh_mysql_server_config: local.state.single: - tgt: 'E@minion* and G@os_family:RedHat' - tgt_type: compound - args: - fun: file.managed - name: /etc/my.cnf.d/server.cnf - source: salt://mysql/files/rh_server.cnf We now want to set up our master config for our reactors. We're going to store this at /etc/salt/master.d/reactor.conf: $ $EDITOR /etc/salt/master.d/reactor.conf This is where we map out which event tag causes which reaction. In our case, we want our Debian sls to run when the Debian file is updated, and the Red Hat sls to run when the Red Hat MySQL server config changes: reactor: - 'salt/beacon/*/inotify//etc/mysql/mysql.conf.d/mysqld.cnf': - /srv/reactor/mysql/deb_config.sls - 'salt/beacon/*/inotify//etc/my.cnf.d/server.cnf': - /srv/reactor/mysql/rh_config.sls Notice that we replace the minion names with wildcards in the event tag. Save and exit, then restart the salt-master daemon: $ sudo systemctl restart salt-master Now, we want to test that our reactors work. On the master, connect to the event bus: $ sudo salt-run state.event pretty=true In a new terminal, SSH into the minion1 minion and edit the MySQL server config file with any kind of arbitrary change: $ sudo $EDITOR /etc/mysql/mysql.conf.d/mysqld.cnf Watch the event bus. 
After the initial beacon event, we get a series of reactions: Our reactor reaction, then our file.managed job running. Reopen the configuration file on the minion1 server. Our changes are now gone. Close the minion1 server, then repeat the process on minion2: $ sudo $EDITOR /etc/my.cnf.d/server.cnf View the event bus. Again, our reactor has responded. Finally, reopen the server.cnf configuration file. Our changes have reset.
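The wildcard in our reactor config works because incoming event tags are matched against shell-style glob patterns. A minimal Python sketch of that matching using fnmatch (a simplification of what the reactor does, not Salt's code):

```python
# Sketch of glob-based reactor tag matching -- a simplification, not
# Salt's actual reactor code. Patterns and SLS paths are from our
# reactor.conf above.

from fnmatch import fnmatch

reactor_map = {
    'salt/beacon/*/inotify//etc/mysql/mysql.conf.d/mysqld.cnf':
        ['/srv/reactor/mysql/deb_config.sls'],
    'salt/beacon/*/inotify//etc/my.cnf.d/server.cnf':
        ['/srv/reactor/mysql/rh_config.sls'],
}

def reactions_for(tag):
    """Return every reactor SLS whose tag pattern matches this event tag."""
    return [sls for pattern, sls_list in reactor_map.items()
            if fnmatch(tag, pattern) for sls in sls_list]

tag = 'salt/beacon/minion1/inotify//etc/mysql/mysql.conf.d/mysqld.cnf'
print(reactions_for(tag))  # ['/srv/reactor/mysql/deb_config.sls']
```

The * consumes the minion ID, so one pattern covers every minion that fires the beacon for that path.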

QUIZ: Salt Events


Runners and Orchestration

Salt Runners

The salt-run Command


Lesson Description:

Salt runners are convenience applications; that is, they provide helpful utilities related to managing Salt, such as viewing running or recent jobs, or orchestrating our Salt runs so that jobs are executed in the desired order. We can tell when we're using a Salt runner because they are used alongside the salt-run command.



Lesson Description:

By using the jobs runner, we can further view and manipulate our jobs; in other words, most actions that take place on our master and minions. Additionally, we can take a step closer towards proper orchestration by using the jobs scheduler to assign highstates, state runs, runner runs, and more at specific times or using specific credentials. Schedule a Highstate To see this in action, let's schedule a highstate. This will highstate our servers once every minute (if that seems like a lot, don't worry as we'll remove it shortly): $ sudo salt 'minion*' schedule.add highstate-1 function='state.highstate' seconds=60 In this instance, highstate-1 is a name we gave to the job. This can be anything we want. Now we can connect to our event bus and wait: $ sudo salt-run state.event pretty=true After a minute, we'll start receiving information about the jobs run during our highstate. Unfortunately, enforcing our states once every minute is a little excessive. Close the event bus with a CTRL+C and remove the scheduled job: $ sudo salt 'minion*' schedule.delete highstate-1 This is just one way of setting a schedule. If we wanted to do this via pillar data or minion config we would use: schedule: highstate: function: state.highstate minutes: 30 In this instance, it schedules the highstate every thirty minutes. If adding to pillar, also remember to map it to the appropriate minions: base: 'minion*': - mysql - beacons - schedule



Lesson Description:

When working across large systems and provisioning large numbers of servers, we often need to make sure certain components are configured, or certain steps are complete, before we configure other components. For example, if the configuration of a web application on our web servers depends on the presence of a MySQL database, that database needs to be configured first. However, beyond running states in batches, we cannot always guarantee that our databases will finish configuring before our web servers. This is where Salt's orchestration runner comes in.

Unlike working with a highstate or state.sls, when we use the orchestration runner, our states do not run concurrently and independently on each minion. Instead, the master does the work, giving it a high-level view of what is happening across all minions so that any requirements and requisites are met across systems. If a highstate works on the minion level, then state.orchestrate works on the infrastructure level -- one encompassing a group of, or even all of, your minions.

Create an Orchestration SLS

Create and open a file to store our orchestration work; we'll be calling ours setup.sls:

$ $EDITOR setup.sls

We're going to go through various options we can use for setting up our infrastructure via Salt orchestration. To start, let's consider how we can set the hostname for our minion1 minion. Since we don't have our hostname.sls file set up for minion1, we can write a new one using the salt.function module:

set_hostname:
  salt.function:
    - name: network.mod_hostname
    - tgt: 'minion1'
    - arg:
      - minion1

When we do this, we're essentially using an execution module in the orchestration file itself, the same as we did when working with our reactors. Notice that we provide a function, a target, and the arguments, just as in our previous sections. Next, we want to run a highstate on our minion2 minion.
To do this via our orchestration file, we use the salt.state module and, instead of providing state information, we use the highstate parameter:

configure_db_minion:
  salt.state:
    - tgt: 'minion2'
    - highstate: True

Finally, we can use that same salt.state function to list out the SLS files we want, like so:

configure_web_minion:
  salt.state:
    - tgt: 'minion1'
    - sls:
      - apache
      - mysql.client

Again, notice how we always need to provide a target. Save and exit.

We can now use our setup SLS alongside the state.orchestrate (or state.orch) runner to run our setup orchestration. We call the file as we would any other state in our file_roots, in this case orch.setup:

$ sudo salt-run state.orch orch.setup

Note that this is just one way of accomplishing our goals. Remember, this scenario was intentionally set up to showcase a number of possible ways to use orchestration.
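Because orchestration runs on the master, we can also enforce ordering between minions with ordinary requisites. As a sketch using the same state IDs as above (the require relationship itself is an addition of ours, not part of the lesson's file), we could guarantee the database minion finishes before the web minion configures:

```yaml
configure_db_minion:
  salt.state:
    - tgt: 'minion2'
    - highstate: True

configure_web_minion:
  salt.state:
    - tgt: 'minion1'
    - sls:
      - apache
      - mysql.client
    - require:
      - salt: configure_db_minion   # web setup waits for the db highstate
```

This is exactly the cross-minion guarantee a plain highstate cannot provide.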

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.


QUIZ: Runners


Additional Components

Salt SSH

Salt SSH Setup


Lesson Description:

Salt lets us run commands on servers that do not have the salt-minion installed. We do this over SSH, which essentially allows us to work with unsalted systems through the use of the salt-ssh command. Salt SSH is not installed alongside the salt-master, so if we want to use it, we need to install and configure it separately. Salt SSH is also not included in the bootstrap script; however, since the bootstrap script added SaltStack's repositories to our repolist, we can install it via apt:

$ sudo apt-get install salt-ssh

Prep the Saltless Server

Despite not having to install Salt on the minion itself, we do have to do a small amount of prep work. That is, we need to log in to our cloud server and set the password when prompted upon initial login, then update our sudoers file so that Salt can run commands via sudo without issue. Since we can't log in as root, Salt will be using our user user:

$ sudo su -
$ visudo

user ALL=(ALL) NOPASSWD:ALL

If you can log in via SSH as root, you do not need to make these changes. Our cloud servers have this disabled; turning off SSH root login is also generally considered a best practice.

Set Up the Salt SSH Roster

So how does Salt know which servers to SSH into? We need to use the /etc/salt/roster file, which was added when we installed Salt SSH. As with most of our configurations, the file is written in simple YAML. To add a server to the roster, we first need to give the server an ID. Since our new minion was created specifically to work with Salt SSH, I just used a designation of ssh-minion; generally, however, this would be something more identifiable:

$ sudo $EDITOR /etc/salt/roster

ssh-minion:

After this, we need to supply some parameters.
At the very least, we always need to supply the host location (be sure to use your server's own private IP):

ssh-minion:
  host:

However, there are a number of other options we may want to consider, including user, passwd, port, sudo (which always runs Salt with sudo privileges when using a non-root user), sudo_user (to set a specific user for escalated privileges), tty (for systems where sudo is set to True and requiretty is set), priv (to set the private key for login), timeout (for establishing the SSH connection), minion_opts, thin_dir (for Salt-related temporary file storage on the SSH minion), and cmd_umask (for instances when using salt-call).

In this case, we want to use the user option, since we can't log in via root, and the sudo option, so we can run commands that require elevated privileges, just as we can regularly:

ssh-minion:
  host:
  user: user
  sudo: True

At this point, you may be wondering why we're not supplying a password or private key information. This is because, by default, Salt generates and supplies its own SSH key for use on our SSH minions. However, for this key to be generated, we need to attempt to run a salt-ssh command. Go ahead and save and exit the roster file, then try out a simple command using salt-ssh:

$ sudo salt-ssh '*'

Of course, we received an expected error, because our SSH public key is not yet on our minion. We can fix this with ssh-copy-id. By default, Salt SSH's key is located at /etc/salt/pki/master/ssh/ To use something else, use the priv argument in your roster.

$ sudo ssh-copy-id -i /etc/salt/pki/master/ssh/ user@

With our key added, we can try testing our SSH minion again:

$ sudo salt-ssh '*'
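To tie the roster options above together, here is a hedged sketch of a fuller roster entry; the IP address, port, key path, and timeout are placeholders of ours, not values from this environment:

```yaml
ssh-minion:
  host: 10.0.0.15              # placeholder private IP
  user: user
  sudo: True                   # run Salt with sudo privileges for the non-root user
  port: 22                     # default SSH port; change if yours differs
  priv: /path/to/custom_key    # only needed if not using Salt SSH's generated key
  timeout: 30                  # seconds to wait when establishing the SSH connection
```

Only host is strictly required; everything else falls back to a sensible default.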

Using Salt SSH


Lesson Description:

With Salt SSH installed, we can now go ahead and begin to use it; however, we need to be aware of some key differences that affect our SSH minions: namely, we cannot use targeting the same way, and all minion configurations are temporary. With that said, let's get hands-on again and make some changes to our ssh-minion.

Work with Salt SSH

When creating our SSH minion, you may have noticed that we did almost nothing to it during the configuration process, including the usual things like setting a hostname. Let's take this time to get our minion into a more appropriate state. As stated, we can use execution modules alongside Salt SSH, much as we would with any other minion. Let's set our minion's hostname:

$ sudo salt-ssh 'ssh-minion' network.mod_hostname ssh-minion

But what about using our formulas? Well, we first need to update our pillar top file, just as we would for any other minion. Let's add the mysql pillar data:

$ $EDITOR /srv/pillar/top.sls

    'ssh-minion':
      - mysql

Similarly, we can also include our ssh-minion in our regular top file located within our file_roots:

$ $EDITOR /srv/salt/top.sls

    'ssh-minion':
      - mysql

Now, let's go ahead and run our MySQL formula:

$ sudo salt-ssh 'ssh-minion' state.sls mysql

As we can see, this works just as it did when we ran it against our minion* minions, without any additional work past the initial server setup. Salt SSH also has no issue working with the file server (remember, our MySQL formula makes use of the salt:// path in our configuration state). Additionally, we can run raw Bash commands against our SSH minions. For example, instead of locating our MySQL package through cmd.which, we can just run:

$ sudo salt-ssh -r 'ssh-minion' which mysql

The -r or --raw flag denotes that we are feeding in a raw command.
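Since SSH minion configuration is temporary, per-minion settings live in the roster rather than in a minion config file. As a hedged sketch using the minion_opts roster parameter mentioned in the previous lesson (the grain name and value here are invented for illustration), we could hand the minion ad hoc configuration for the duration of each salt-ssh run:

```yaml
ssh-minion:
  host:
  user: user
  sudo: True
  minion_opts:
    grains:
      env: staging   # hypothetical grain, applied only while the salt-ssh job runs
```

Anything that would normally go in /etc/salt/minion can be passed this way.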



Salt Cloud

Salt Cloud Setup


Lesson Description:

Salt doesn't have to be restricted to managing existing servers. If we plan on working with any number of cloud platforms, we can use Salt to provision servers straight from our master, bring them under Salt's control, and then use them as we would any minion we spun up by hand and installed a salt-minion on. To use Salt Cloud, we first have to install it. Since the Salt bootstrapping process added the SaltStack repos to our repolist for us, we only have to run:

$ sudo apt-get install salt-cloud

For this lesson, we'll also be using AWS as our cloud provider. If you do not already have an AWS account, create one now. Everything we do in this section uses the AWS free tier.

Prepare AWS

We need the following configured in AWS:

A default VPC (created by default; if you've removed your default VPC, you can re-add it in the VPC dashboard, under Your VPCs > Actions > Create Default VPC)
A default security group accepting inbound SSH connections (under the VPC dashboard, check Security Groups, click on the default group, then check the Inbound Rules tab)
An access ID and key (go to the IAM dashboard, click on Users, select your user, then click the Security Credentials tab; create your access key from this page)
A key pair (I named mine salt-cloud.pem)

Once that's set up, return to your Salt master.

Provision an EC2 Instance

We first want to make sure Salt has access to the key pair needed to connect to any of our Salt servers. Since I'm on a Unix-based machine, I used scp to copy it into my home directory; Windows users should use a file client. Once the key is on your Salt master, move it somewhere safe. There is no default directory in which these keys need to be stored, so I add mine to the pki directory with the rest of our Salt-related keys:

$ sudo su -
$ cd /etc/salt/pki/master
$ mv /home/user/salt-cloud.pem .

We also need to ensure it has the correct permissions:

$ chmod 400 salt-cloud.pem
$ exit

We can now get to work defining our provider file.
This file contains information related to using EC2 -- things like our preferred VPC, our access keys, and the key file we added. Provider files must be saved to /etc/salt/cloud.providers.d/ and end with .conf. We'll call ours ec2.conf:

$ sudo $EDITOR /etc/salt/cloud.providers.d/ec2.conf

And as with anything else, we first give it a name:

ec2-web:

Then we provide the related settings:

ec2-web:
  driver: ec2
  id: ''
  key: ''
  private_key: /etc/salt/pki/master/salt-cloud.pem
  keyname: salt-cloud
  securitygroup: default

Note that the driver, private_key, keyname, id, and key information must be provided (unless you provide an SSH password instead of the key information). Any other settings automatically use their defaults. Save and exit when finished.

Next, we need to add profile information. Profiles are descriptions of EC2 servers, with information such as the AMI ID, provider information, SSH username, and server size. It's also where we store our minion config information, such as the location of our master. Multiple profiles can be saved to one file. Similar to our provider files, profile files must be saved as .conf files in a particular location; this time, /etc/salt/cloud.profiles.d/. As with our last one, we're going to call this ec2.conf:

$ sudo $EDITOR /etc/salt/cloud.profiles.d/ec2.conf

We're going to call our profile tiny-server, because it should bring up a tiny server for us to use:

tiny-server:

Next, we want to associate the profile with a provider file, then define the image, SSH information, and size:

tiny-server:
  provider: ec2-web
  image: ami-a4dc46db
  ssh_username: ubuntu
  size: t1.micro

The information provided above provisions an Ubuntu server using a free-tier AMI; ubuntu is the default username provided in this image. Finally, we want to add our minion configuration information. Namely, we want to define our Salt master. We do have a small problem, however. Since we're using a Linux Academy Cloud Server (presumably), we cannot use our private IP address or salt as the master name.
Instead, we need to use either our public IP address (which resets on reboot) or, more logically, one of our provided hostnames. This can be found on the cloud server page:

tiny-server:
  provider: ec2-web
  image: ami-a4dc46db
  ssh_username: ubuntu
  size: t1.micro
  minion:
    master:

Save and exit the file. We're now ready to provision a server. To do so, all we have to do is run sudo salt-cloud -p with a profile name and a name for the new server. Let's provision just one server right now:

$ sudo salt-cloud -p tiny-server web-test

We can now watch our client as it goes through the provisioning process. Note that this may take several minutes. When finished, we can confirm the process worked by checking our Salt keys:

$ sudo salt-key -L

Accepted Keys:
salt
web-test

Notice that our new server was automatically pulled under our Salt master. And, of course, we can also delete our servers with:

$ sudo salt-cloud -d web-test
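Beyond provisioning and deleting, salt-cloud can also report on what exists. A short sketch using standard salt-cloud flags (-Q for a quick query of known instances, --list-sizes and --list-images for what a provider offers), run against the ec2-web provider we defined above:

```shell
# Quick query of instances Salt Cloud knows about
$ sudo salt-cloud -Q

# List the sizes and images our ec2-web provider offers
$ sudo salt-cloud --list-sizes ec2-web
$ sudo salt-cloud --list-images ec2-web
```

These queries are useful for finding a valid image or size before writing a new profile.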

Using Salt Cloud with Orchestration


Lesson Description:

As we wind down, let's take a moment to combine some of our newly earned skills and create an orchestration state that will generate a LAMP stack for us. However, before we begin, let's add a PHP formula to our Salt master so we have all the tools we need:

$ git clone -b php /srv/salt/php

With that done, we can go ahead and get started. Open up our new orchestration file, which I'm calling lamp.sls:

$ $EDITOR /srv/salt/orch/lamp.sls

From here, we can use a Salt runner to deploy servers based on the cloud profiles we created in the last lesson. The runner command is cloud.profile:

deploy_vms:
  salt.runner:
    - name: cloud.profile
    - prof: tiny-server
    - instances:
      - web1
      - db1

cloud.profile creates minions based on the profile provided, so all we have to do is provide that profile and a list of desired minions. Next, we want to run our web states; this is the same as in our last lesson:

configure_web_minion:
  salt.state:
    - tgt: 'web1'
    - sls:
      - apache
      - php
      - mysql.client

And then our database formula:

configure_db_minion:
  salt.state:
    - tgt: 'db1'
    - sls:
      - mysql

Save and exit the file. Finally, we can add our web and db minions to our pillar top file:

$ $EDITOR /srv/pillar/top.sls

    'web*':
      - apache
    'db*':
      - mysql

Let's now go ahead and run the orchestration state. Remember that this process may take several minutes:

$ sudo salt-run state.orch orch.lamp

When the deployment process is finished, we have a fully created LAMP stack.

Manage VMs

One additional thing to note in this lesson is that Salt can do more than create and delete virtual machines; we can further manage our minions through action commands.
Action commands vary by cloud provider, but for AWS we have a number of options, including attach_volume, create_attach_volumes, del_tags, delvol_on_destroy, detach_volume, disable_term_protect, enable_term_protect, get_tags, keepvol_on_destroy, list_keypairs, rename, set_tags, show_delvol_on_destroy, show_instance, show_term_protect, start, stop, and take_action. A full list of actions for all providers can be found in the Salt Cloud documentation. Let's go ahead and add some tags to our servers to see one of these in action:

$ sudo salt-cloud -a set_tags web1 db1 project='Internal Website'

We can then see our newly set tag with:

$ sudo salt-cloud -a get_tags web1 db1

QUIZ: Salt SSH and Salt Cloud


Cloud Server Troubleshooting


Help! My Salt Master is Running Out of Memory


Lesson Description:

If you find that your Salt master is running out of memory while taking this course, use the steps in this video to create a swap file and get things moving again. Run the following commands:

$ sudo fallocate -l 1G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile

Then add the following to your file system table (/etc/fstab):

/swapfile none swap sw 0 0
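To confirm the swap file is active after running the commands above, a few standard utilities can be used (these only inspect state; they change nothing):

```shell
$ swapon --show               # lists active swap devices; /swapfile should appear
$ free -h                     # the Swap row should now show 1.0G total
$ grep swapfile /etc/fstab    # confirms the entry that makes swap persistent
```

If /swapfile is missing from swapon --show after a reboot, recheck the fstab entry.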


Practice Exam

Final Exam Review


Lesson Description:

With our course all but done, all that's left is the practice exam. Way back when we started this journey, I noted that the exam is eighty multiple-choice questions -- our practice exam is the same. While sitting for the SaltStack Certified Engineer exam, you can use books and the internet; I particularly encourage you to keep the lists of execution and state modules open, as well as the versions of the master and minion configurations. I'll also note that, despite the exam being open book and open internet, you cannot and should not talk during the exam or ask people around you questions if you're taking it in a group -- as SaltStack says, it's not "open friend." The exam takes place over the course of an hour, and you can review your questions after answering them. As such, I encourage you to go through and answer the questions you're feeling confident about first, give your best answers to the rest, then take the time to review the harder questions or the ones you need to spend a little more time on. Good luck!

SSCE Practice Exam



Next Steps


Lesson Description:

Congratulations! We've reached the end of the course! It's been a long journey, but since we started, you've learned so much. From that first Salt install to our orchestration state that provisions a full LAMP stack, we've written formulas, run execution modules, learned how to use the event system for self-healing, responsive infrastructure, and gone deep into just how Salt does the things it does. You have the skills to pass the SSCE and to manage Salt in the real world. If you're interested in further expanding your Salt skills, I suggest taking a look at our Python 3 course, Python 3 For System Administrators. Salt, as we know, is written in Python, and if you want to venture into writing custom execution modules or other features, learning Python is the right place to start. Happy learning!