Learning Vagrant

Course

Intro Video

Photo of Elle Krout

Elle Krout

Content Team Lead in Content

Elle is a Course Author at Linux Academy and Cloud Assessments with a focus on DevOps and Linux. She's a SaltStack Certified Engineer, and particularly enjoys working with configuration management. Prior to working as a Course Author, she was Linux Academy's technical writer for two years, producing and editing written content; before that, she worked in cloud hosting and infrastructure. Outside of tech, she likes cats, video games, and writing fiction.

Length

07:11:05

Difficulty

Beginner

Videos

44

Hands-on Labs

8

Course Details

Learning Vagrant introduces students to Vagrant, a virtual machine environment management tool. This course aims to give even beginners basic virtualization skills and a working knowledge of Vagrant and its default underlying provider, VirtualBox. From our very first vagrant up, we'll create reusable, customizable, and consistent development and test environments, learning how to sync files between host and guest, network between virtual machines, provision machines via configuration management tools, and more. We'll also explore using Vagrant as a container management platform through a series of related hands-on labs.

When finished, the learner will be able to create Vagrant environments using a number of providers, provisions, and Vagrant features, and will have the ability to apply core Vagrant skills to any additional plugins, provisioners, or providers they may need to use outside of this course.

https://interactive.linuxacademy.com/diagrams/LearningVagrant.html

Syllabus

Introduction

Getting Started

Course Introduction

00:01:53

Lesson Description:

Welcome to the Learning Vagrant course! This course is designed to turn virtualization beginners into Vagrant masters, able to create reusable development environments with Vagrant quickly and efficiently. We'll start by reviewing what Vagrant is, how it works, and the role it plays in the virtualization process. Then we'll start working with our own Vagrantfile by downloading a base box and spinning up an Ubuntu 18.04 guest using VirtualBox as our underlying virtualization tool. Once we understand the basics, we'll look into the various ways we can expand our Vagrantfile, enabling features to ensure our environments provision the same way every time. This includes setting up file syncs between the host and guest, creating multi-machine networking, and using provisioners like Chef or Puppet to configure our guest machine. Finally, we'll wrap up the course with some suggestions for what to do next — either with Vagrant or your next course!

About the Training Architect

00:01:31

Lesson Description:

Hello! I'm Elle, the training architect for this course. I've been working at Linux Academy for four years — first as a technical writer, and now as a training architect. I'm a years-long Vagrant enthusiast, most recently using Vagrant to mock up environments for my Linux Academy courses. I'm excited to start teaching you!

Foundational Knowledge

Concepts

What Is Virtualization?

00:04:02

Lesson Description:

HashiCorp describes Vagrant as "a tool for building and managing virtual machine environments in a single workflow" ("Introduction to Vagrant"). This raises the question, however: what is a virtual machine? And, by extension, what is virtualization? In the most general sense, virtualization is the process of creating a "virtual" version of a tangible thing. Chances are you are viewing this course on a computer of some kind — a laptop, a desktop, or even a phone. This is a tangible, real-life machine you can touch, hold, and otherwise physically manipulate. A virtual machine is simply another machine within or on top of this physical hardware that leverages the existing hardware to create an entirely separate, virtualized environment with its own operating system despite the shared resources.

To create a virtual machine on existing hardware, we need to use a control program, more commonly called a hypervisor. A hypervisor does all the hard work of doling out CPU, RAM, and disk space to the guest operating system. The guest is the virtualized computer, and the host is the physical machine that contains the guest or guests. Common hypervisors include KVM, Xen, VMware, VirtualBox, and Parallels, each of which has its own use cases.

Why Virtualize? In general, there are two types of virtualization. First, there is datacenter virtualization, which is how major companies like AWS and Google take massive physical hosts and dole them out as smaller EC2 or Google Compute instances. Then there is the more commonly used local virtualization that lets engineers, developers, administrators, and even designers leverage virtualization on their workstation computers so they can test code on example environments that mimic their company's production environment. This is where Vagrant comes in.

Summary Virtualization is the process of creating a virtual version of something that is normally physical. In this course, we'll be discussing virtual machines, which are simply guest computers that run on top of an existing machine and leverage the host's hardware.

What Is a Hypervisor?

00:03:38

Lesson Description:

As we discussed in the last lesson, a hypervisor (or control program) is what takes our existing host machine and uses its resources to run guest servers, performing all the actual virtualization for us. However, not all hypervisors work the same way, and most can be divided into two groups: Type I or Type II.

A Type I hypervisor is used on bare-metal infrastructure. Remember those data centers we mentioned in the previous lesson? Those use a Type I hypervisor. Type I hypervisors run directly on a system without an operating system to create and manage the guest machines. In contrast, Type II hypervisors run as a process on top of an existing operating system. This further abstracts the guest systems from the host and the host OS. These are the ones generally used to create single-person development environments (although plenty of development environments leverage Type I hypervisors as well). There are also some hypervisors that straddle the line between Type I and Type II control programs. KVM, for example, works on top of a Linux distribution but is a kernel module. This effectively makes it a Type I hypervisor that is accessed through an OS, which means it also qualifies as Type II.

Vagrant has no preference for which type of hypervisor is used, and it is packaged with both VirtualBox (Type II) and Hyper-V (Type I, Windows only). It is also packaged with Docker, which isn't a hypervisor at all but, instead, a container platform. This makes Vagrant a little more well-rounded than it may seem at first. Additionally, other hypervisors, container services, and even cloud platforms — called providers in Vagrant — can be added by installing the appropriate plugin.

Summary A hypervisor is the process or control program that takes an underlying host and creates and manages guests on the host. There are two types of hypervisors: Type I and Type II. Type I hypervisors run on bare-metal infrastructure, and Type II hypervisors run on hosts with an existing operating system. Vagrant supports both hypervisor types.

What Is Vagrant?

00:04:31

Lesson Description:

Vagrant itself is not a hypervisor. Instead, Vagrant works as a wrapper around an existing hypervisor, allowing users to create full development environments that can be pre-provisioned and easily reused — just share your Vagrantfile, and any other Vagrant user can provision the same environment. This means we can ensure that all our engineers are working from the same testing base. It also means that should we brick our environment while trying something out, we can replace it with only a few short vagrant commands (or simply fall back to an older version).

Components Vagrant can be broken down into a few basic components. First, there's the Vagrantfile, which is a file written in Ruby that describes the desired development environment. When writing this file, we'll have to reference which Vagrant box (or boxes) we want to base our guests on. These boxes determine the operating system and default packages and configurations — essentially, they work as Vagrant-specific "golden images." Boxes can be created from scratch or found on Vagrant's website. Once we have a Vagrantfile, we can spin up, tear down, and otherwise manage our environment with the vagrant command line tool. Vagrant commands remain the same regardless of the host OS (even Windows!), so we won't have to worry about OS-specific discrepancies when working across platforms. Finally, we have the option to add functionality to Vagrant through the use of plugins. We already mentioned that additional providers can be added this way, but beyond that, plugins allow us to do anything from installing guest additions on our guests to using a configuration management system to provision the end state of our hosts. There are even plugins that let us set the hostname and DNS configuration for cloud platforms, like AWS's Route 53!

Summary By providing a single tool to spin up, provision, and otherwise manage our virtual machine environments, Vagrant lets users create full development environments that can be shared, torn down, and replicated as needed. These environments are easily configurable and very portable, so we never have to be without our desired testing setup!
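Putting those components together, a minimal Vagrantfile might look like the following sketch; the box name and inline provisioning command are illustrative, not this course's exact code:

```ruby
# A minimal Vagrantfile sketch tying the components together: a box to
# build from and a provisioner to configure the guest.
# Box name and provisioning command are illustrative.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"                      # the "golden image"
  config.vm.provision "shell", inline: "apt-get update"  # configure the guest
end
```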

The Course Environments

00:03:59

Lesson Description:

Now that we have a basic understanding of Vagrant, let's consider how we're going to use Vagrant in this course. Unlike other Linux Academy courses, we will not be working with the Cloud Playground. Instead, since Vagrant is expected to work from the local user's workstation, that is what you should use — your own laptop or computer. For our virtualization program, we'll be using VirtualBox, since it is in many ways the default Vagrant provider. It's also free.

For our development environment, we're going to work as though we're deploying an application and some application monitoring. So our end goal is to have two hosts: one that deploys an application and one that deploys Prometheus. We'll primarily focus on the application guest, creating an Ubuntu 18.04 server that either deploys a Node.js app directly or does so through Docker. We'll be able to access this application from our localhost as well, and any changes will be synced between the host and guest. Essentially, we'll be walking through all of Vagrant's options in a single full example.

Our hands-on labs are going to work a little differently, however. Since lab servers are themselves virtualized, we're going to turn to our container providers — Docker and LXC — to reinforce various Vagrant and Vagrantfile concepts. Our focus won't be on the specifics of these providers but rather on how these providers can best display command and code options that should work across many providers.
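As a preview, the two-guest layout described above could be sketched in a Vagrantfile like this; the machine names, box, and forwarded ports are assumptions for illustration, not the course's exact code:

```ruby
# Hedged sketch of the course environment: an application guest and a
# Prometheus monitoring guest. Names and ports are illustrative.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.define "app" do |app|          # Node.js application guest
    app.vm.network "forwarded_port", guest: 8080, host: 8080
  end

  config.vm.define "monitor" do |monitor|  # Prometheus guest
    monitor.vm.network "forwarded_port", guest: 9090, host: 9090
  end
end
```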

Setup

Installation

Linux

00:09:33

Lesson Description:

Since Vagrant is intended to be used from a workstation computer, we won't be using Linux Academy's Cloud Playground — which is already virtualized. We will instead focus on using our actual workstations. Additionally, Vagrant is cross-platform; therefore, it doesn't matter what you choose to work from. You can view the setup video for the appropriate operating system.

Vagrant While it might be tempting to see if you can download Vagrant on your favorite Linux distro through the package manager, for the best results we should download the most recent version for our system from the Vagrant Downloads page. Specifically, we'll be working with Vagrant 2.2.5; if Vagrant has been updated since, you can download 2.2.5 at this URL. I'll be using CentOS 7 for this particular lesson, but the basic steps we follow will be the same for all CentOS and Debian-based hosts. Let's begin by grabbing the URL for the 64-bit CentOS download:

cd /tmp/
wget https://releases.hashicorp.com/vagrant/2.2.5/vagrant_2.2.5_x86_64.rpm
To ensure the file has not been tampered with, we also want to grab the checksum and verify it against HashiCorp's public key:
wget https://releases.hashicorp.com/vagrant/2.2.5/vagrant_2.2.5_SHA256SUMS.sig
wget https://releases.hashicorp.com/vagrant/2.2.5/vagrant_2.2.5_SHA256SUMS
vim hashicorp.asc
Copy HashiCorp's public key:
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQENBFMORM0BCADBRyKO1MhCirazOSVwcfTr1xUxjPvfxD3hjUwHtjsOy/bT6p9f
W2mRPfwnq2JB5As+paL3UGDsSRDnK9KAxQb0NNF4+eVhr/EJ18s3wwXXDMjpIifq
fIm2WyH3G+aRLTLPIpscUNKDyxFOUbsmgXAmJ46Re1fn8uKxKRHbfa39aeuEYWFA
3drdL1WoUngvED7f+RnKBK2G6ZEpO+LDovQk19xGjiMTtPJrjMjZJ3QXqPvx5wca
KSZLr4lMTuoTI/ZXyZy5bD4tShiZz6KcyX27cD70q2iRcEZ0poLKHyEIDAi3TM5k
SwbbWBFd5RNPOR0qzrb/0p9ksKK48IIfH2FvABEBAAG0K0hhc2hpQ29ycCBTZWN1
cml0eSA8c2VjdXJpdHlAaGFzaGljb3JwLmNvbT6JATgEEwECACIFAlMORM0CGwMG
CwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEFGFLYc0j/xMyWIIAIPhcVqiQ59n
Jc07gjUX0SWBJAxEG1lKxfzS4Xp+57h2xxTpdotGQ1fZwsihaIqow337YHQI3q0i
SqV534Ms+j/tU7X8sq11xFJIeEVG8PASRCwmryUwghFKPlHETQ8jJ+Y8+1asRydi
psP3B/5Mjhqv/uOK+Vy3zAyIpyDOMtIpOVfjSpCplVRdtSTFWBu9Em7j5I2HMn1w
sJZnJgXKpybpibGiiTtmnFLOwibmprSu04rsnP4ncdC2XRD4wIjoyA+4PKgX3sCO
klEzKryWYBmLkJOMDdo52LttP3279s7XrkLEE7ia0fXa2c12EQ0f0DQ1tGUvyVEW
WmJVccm5bq25AQ0EUw5EzQEIANaPUY04/g7AmYkOMjaCZ6iTp9hB5Rsj/4ee/ln9
wArzRO9+3eejLWh53FoN1rO+su7tiXJA5YAzVy6tuolrqjM8DBztPxdLBbEi4V+j
2tK0dATdBQBHEh3OJApO2UBtcjaZBT31zrG9K55D+CrcgIVEHAKY8Cb4kLBkb5wM
skn+DrASKU0BNIV1qRsxfiUdQHZfSqtp004nrql1lbFMLFEuiY8FZrkkQ9qduixo
mTT6f34/oiY+Jam3zCK7RDN/OjuWheIPGj/Qbx9JuNiwgX6yRj7OE1tjUx6d8g9y
0H1fmLJbb3WZZbuuGFnK6qrE3bGeY8+AWaJAZ37wpWh1p0cAEQEAAYkBHwQYAQIA
CQUCUw5EzQIbDAAKCRBRhS2HNI/8TJntCAClU7TOO/X053eKF1jqNW4A1qpxctVc
z8eTcY8Om5O4f6a/rfxfNFKn9Qyja/OG1xWNobETy7MiMXYjaa8uUx5iFy6kMVaP
0BXJ59NLZjMARGw6lVTYDTIvzqqqwLxgliSDfSnqUhubGwvykANPO+93BBx89MRG
unNoYGXtPlhNFrAsB1VR8+EyKLv2HQtGCPSFBhrjuzH3gxGibNDDdFQLxxuJWepJ
EK1UbTS4ms0NgZ2Uknqn1WRU1Ki7rE4sTy68iZtWpKQXZEJa0IGnuI2sSINGcXCJ
oEIgXTMyCILo34Fa/C6VCm2WBgz9zZO8/rHIiQm1J5zqz0DrDwKBUM9C
=LYpS
-----END PGP PUBLIC KEY BLOCK-----
Save and exit. Import the key:
gpg --import hashicorp.asc
Validate the signature:
gpg --verify vagrant_2.2.5_SHA256SUMS.sig vagrant_2.2.5_SHA256SUMS
Then validate the download:
rpm --checksig vagrant_2.2.5_x86_64.rpm
Once verified, we can install Vagrant with:
sudo yum install vagrant_2.2.5_x86_64.rpm
Confirm the install was successful by running:
vagrant --version
VirtualBox Installing Vagrant is not enough to get up and running. We also need to ensure Vagrant has a provider to work with. For this, we'll be using VirtualBox. As with Vagrant, we'll want to use the most recent version: version 6.0.10 at the time of writing. For Linux, we can grab this for our respective distros at the Linux Downloads page. If you're on CentOS 7, as I am, you can grab the rpm with:
wget https://download.virtualbox.org/virtualbox/6.0.10/VirtualBox-6.0-6.0.10_132072_el7-1.x86_64.rpm
We also want to download and import Oracle's public key:
wget https://www.virtualbox.org/download/oracle_vbox.asc
sudo rpm --import oracle_vbox.asc
Then confirm the package signature:
rpm --checksig VirtualBox-6.0-6.0.10_132072_el7-1.x86_64.rpm
And install VirtualBox:
sudo yum install VirtualBox-6.0-6.0.10_132072_el7-1.x86_64.rpm
Finally, we can confirm that the install worked by running:
VBoxManage --version

Mac

00:04:21

Lesson Description:

Since Vagrant is intended to be used from a workstation computer, we won't be using Linux Academy's Cloud Playground — which is already virtualized. We will instead focus on using our actual workstations. Additionally, Vagrant is cross-platform; therefore, it doesn't matter what you choose to work from. You can view the setup video for the appropriate operating system. Vagrant Installing Vagrant on a Mac is a relatively quick process. We have the option of using something like brew to add Vagrant to our workstation. However, to ensure we have the most recent package, we're going to download it directly from the Vagrant Downloads page. Vagrant 2.2.5 is the version used in this course; if Vagrant has been updated since this time, you can download older versions of Vagrant here. Select the 64-bit Mac image, and download the .dmg file to your workstation. Once downloaded, open the file and double-click on the vagrant.pkg icon. Review the installer steps, and then click Install for Vagrant. We can ensure Vagrant has been successfully installed by running vagrant version on our command line. VirtualBox Installing Vagrant isn't enough for us to start defining our environments. We need to supply Vagrant with a provider — VirtualBox, in this case. As with the Vagrant package above, we have to grab the appropriate download for OS X hosts from VirtualBox's download page. We are using version 6.0.10 in this course; links to older versions can be found on the same page. Once the .dmg file has been downloaded, open the file and then select the VirtualBox.pkg icon and follow the installer prompts until you can click Install.

Windows

00:05:47

Lesson Description:

Since Vagrant is intended to be used from a workstation computer, we won't be using Linux Academy's Cloud Playground — which is already virtualized. We will instead focus on using our actual workstations. Additionally, Vagrant is cross-platform; therefore, it doesn't matter what you choose to work from. You can view the setup video for the appropriate operating system.

Git Vagrant is used entirely on the command line. While we could work from the built-in Windows command prompt, installing Git for Windows gives us a Unix-style terminal (MinTTY) so the Vagrant CLI works the same way it does on other platforms. To start, we first want to download Git from the Git download page. Once the download is complete, click on the .exe file to start the installer. We will then follow the prompts that are presented. While many of the options can be left at the default settings, we do need to pay close attention and select four options: 1) the Git from the command line and also from 3rd-party software option, 2) the Use the OpenSSL library option, 3) the Checkout as-is, commit Unix-style line endings option, and 4) the Use MinTTY option. Once these options are set, you can complete the install.

Vagrant Next, we need to install Vagrant itself. As with Git, we first need to download the appropriate package from the Vagrant Downloads page, then open the provided .msi file. Follow the installation prompts, leaving everything with the default settings. Once the installation is complete, Vagrant will prompt you to restart your computer; however, we are going to wait and install VirtualBox prior to the reboot.

VirtualBox Remember that if you are already using Hyper-V, it must be disabled before you can use VirtualBox. Now we need to download our final component. Grab the Windows download from the Download VirtualBox page, and start the .exe file to open the installer.
As with Vagrant, we can simply follow the prompts and use the default settings. Note that during the installation process, VirtualBox will briefly interrupt any network connections. Once finished, we can then restart our Windows machine. When the reboot is complete, if we want to ensure Vagrant is working, we can then open our command prompt (cmd.exe) and run:

vagrant version

Plugins

Plugin Overview

00:07:46

Lesson Description:

While Vagrant ships with native support for VirtualBox, Hyper-V, and Docker, many additional features — including many helpful provisioning options, such as Berkshelf and Ohai for Chef — are available only through the addition of plugins. Most Vagrant plugins are third-party — that is, maintained by people or companies other than HashiCorp. While this means there is a vast array of plugin options, it also comes with its own set of risks. Third-party plugins are not always maintained and can even be used maliciously by the maintainers. Use common sense when adding plugins. Always read the README file and any information provided, check the supported Vagrant versions, and verify that it's a maintained and active project. Vagrant has also been around long enough that some common plugins have been implemented as features in Vagrant. So before you try to download the vagrant-salt provisioner, for example, check that Salt support hasn't been added to Vagrant itself (hint: it has). That said, there are still plenty of plugins we will need as our Vagrant skills grow. Plugins can be added with a single command:

vagrant plugin install PLUGIN
For example, if we wanted to download the vbinfo plugin, which tells us which ports we have mapped to a Vagrant environment, we can use:
vagrant plugin install vbinfo
We can then use vagrant vbinfo in any directory with a Vagrantfile. While plugins are usually a set-it-and-forget-it kind of add-on, we do need to be aware of a couple of other commands for plugin management. When a new version of a plugin is released, we will need to run the vagrant plugin update PLUGIN command against the desired plugin. We may also find a plugin that fails to work as expected — in that case, vagrant plugin repair will search for failing plugins and attempt to repair them. Proprietary plugins, such as the VMware provider, also require some additional work right out of the box. Since VMware requires a license to use, we will have to feed Vagrant the licensing information with vagrant plugin license LICENSE-FILE. Finally, there will be times when a plugin or plugins are of no use to us, and we may want to remove those unused plugins. Removing a plugin takes only one command: vagrant plugin uninstall PLUGIN. However, there may be times when we must remove all plugins, such as when we're updating Vagrant. In this case, vagrant plugin expunge removes all plugins, dependencies, and metadata from our host. If we want to reinstall the expunged plugins after any changes are made, we can use vagrant plugin expunge --reinstall.

Summary While Vagrant is ready to use out of the box, plugins offer expanded features, including additional providers, provisioners, and quality-of-life features that the core Vagrant offering lacks. Adding and managing plugins is done through the vagrant plugin command line tool, and most plugin setups are a single-command task.

Provider Plugins

00:05:04

Lesson Description:

Every additional Vagrant feature is added through a plugin — this includes additional provider support. We have mentioned that there are several third-party providers available for Vagrant, but let's talk about just how many options we have. At the time of writing, there are 34 different providers available, not including the three that Vagrant ships with. These providers are not limited to hypervisors but also cover cloud platforms, container services, and even other virtualization management tools like Proxmox. As with any third-party tool, these providers come with the same warnings that we addressed in the previous lesson. An additional issue is that sometimes there are multiple providers for the same platform. Take OpenStack, for example, where there are two different providers maintained by two different users, and both are independently updated with similar frequencies. However, one is visibly more popular. Chances are, as you progress with Vagrant, you will want to go beyond simply using VirtualBox. You will reach a point where you will want to add support for your usual cloud platform and become capable of working more than just locally. Let's go ahead and do this now. To add the AWS provider, we can run:

vagrant plugin install vagrant-aws
We also have options for Google (vagrant-google), Azure (vagrant-azure), HP (vagrant-hp), DigitalOcean (vagrant-digitalocean), Linode (vagrant-linode), RackSpace (vagrant-rackspace), and more. A full list can be found here. Go ahead and add the one(s) you need! Summary One of the primary features of Vagrant plugins is the ability to add additional providers for Vagrant, extending its use beyond managing hypervisor-based environments. With 34 third-party providers available, we can manage our Vagrant workflows so that they work locally and beyond.

Additional Plugins

00:05:14

Lesson Description:

Finally, before we close out this section on plugins, there are some quality-of-life plugins that we can add. These plugins will make the process of using Vagrant with VirtualBox go a little more smoothly. Since we're using VirtualBox, we will want to add vagrant-vbguest, which adds VirtualBox Guest Additions to our guest systems:

vagrant plugin install vagrant-vbguest
Additionally, if you are on a Windows host and need NFS support, you must add vagrant-winnfsd:
vagrant plugin install vagrant-winnfsd
While not necessary for this course, vagrant-hostmanager lets us manipulate our /etc/hosts file before provisioning, which may be helpful for your workflow:
vagrant plugin install vagrant-hostmanager
vagrant-netinfo shows us the ports that are mapped to our host:
vagrant plugin install vagrant-netinfo
And vagrant-git lets you deploy and clone code from Git to your guests:
vagrant plugin install vagrant-git
Review the Vagrant Wiki page of plugins to find support for your toolchain.
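As an example of how such a plugin surfaces in the Vagrantfile, vagrant-hostmanager is configured under its own config namespace once installed; a typical (not exhaustive) sketch, with settings drawn from the plugin's documentation:

```ruby
# Sketch: typical vagrant-hostmanager settings (plugin must be installed).
Vagrant.configure("2") do |config|
  config.hostmanager.enabled = true      # update guests' /etc/hosts on up/destroy
  config.hostmanager.manage_host = true  # also manage the host machine's hosts file
end
```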

Creating a Vagrant Environment

The Basics

Boxes

00:05:46

Lesson Description:

Before we define in code how we want to deploy our virtualized environment, we must determine what we want to deploy. After all, with any virtualization, container, or cloud platform, some kind of image needs to be supplied so the deployment can take place. In Vagrant, these golden images are called boxes. A Vagrant box can be as simple as a bare image of a distro, such as Ubuntu 18.04. It can also be as elaborate as a ready-made Puppet master or full LAMP stack. Boxes are provider-specific: a box must be built for the provider it will be used with, although a single box can support multiple providers. For now, we will be working with boxes from the official Vagrant box repository, although we also have the option to make our own box images. This is something we will cover in a later section of this course. When browsing boxes in the repository, we want to filter by the desired provider so we only see options we can use. In this case, we are working with VirtualBox and would want to filter our results for that platform. Notice how the boxes themselves are displayed in the username/box format — and be warned that this can be misleading, because the username namespaces are not specially reserved for specific companies. That said, you will find most popular distros, such as Ubuntu and CentOS, provide their official images. Now let's go ahead and grab a box for us to get started. Move to your command line and run:

vagrant box add ubuntu/bionic64
This pulls down the latest LTS version of Ubuntu, ready for us to use in the Vagrantfile we will be making in the next lesson.

Wrap Up Vagrant boxes act as golden images for how we want our virtual machines to deploy. While we can add code, alter the networking, and otherwise manipulate the virtual machine once it is provisioned by Vagrant, a box is the base image from which our work is done. Boxes can be user-created or downloaded for free from the Vagrant Cloud box repository.
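Boxes can also be referenced and version-pinned in the Vagrantfile itself; the version constraint below is a hypothetical example:

```ruby
# Sketch: referencing a box and constraining its version in a Vagrantfile.
# The version string is illustrative, not a real release number.
Vagrant.configure("2") do |config|
  config.vm.box              = "ubuntu/bionic64"
  config.vm.box_version      = ">= 20190801.0.0"  # illustrative constraint
  config.vm.box_check_update = true               # warn when a newer box exists
end
```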

The Vagrantfile

00:06:46

Lesson Description:

Most of the work you will do with Vagrant for a project or environment will be in a single file known as a Vagrantfile. The Vagrantfile describes the overall configuration and provisioning that needs to happen for the desired environment to deploy — so this will include everything from the desired box to the location of your various provisioner files to the networking between multiple virtual machines. That being said, we don't need to do all of that to start. Let's go ahead and create a project directory now. We are working in an overall envs directory, and then we will create a new folder called webapp:

mkdir webapp
cd webapp
From here, we can generate a Vagrantfile using:
vagrant init
Open the Vagrantfile in your preferred text editor; I'll be using vim:
vim Vagrantfile
The Vagrantfile is written in Ruby. However, don't worry if you are not a Ruby expert — or even a beginner. The Vagrantfile is written primarily in variable assignments and simple statements, avoiding the need for any advanced — or even intermediate — Ruby skills.

The Simplest Possible Vagrantfile Now let's break down the provided Vagrantfile. Much of the file documents the various options we have when creating a Vagrantfile. Since we want to get things up and running, we can focus on the uncommented lines, which make up the simplest possible Vagrantfile we could write:
1  Vagrant.configure("2") do |config|
2    config.vm.box = "base"
3  end
Breaking this down line by line, we start with line 1, which contains our Vagrant.configure("2") function. This tells Vagrant which configuration version we want to use, and all configurations will be contained within this block. do |config| lets us name the argument for the block; you'll notice in line 2 that the variable we assign begins with config — this is where that comes in. config is the name that is always used here. Also in line 2, we have vm.box. vm is an overall namespace used for virtual machine settings; box is just one of many options we have for configuring our virtual machines. box, however, is mandatory when using a hypervisor-based provider, such as VirtualBox. Let's update it to use the box we added in the previous lesson, leaving us with:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
end
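If the do |config| ... end pattern is unfamiliar, it is plain Ruby: a method yields an object to the block, and the block assigns settings on it. Here is a toy illustration outside Vagrant, using a hypothetical Configurator class (not part of Vagrant):

```ruby
# A toy stand-in (not Vagrant itself) for the block pattern a Vagrantfile
# uses: configure yields a config object to the block, and the block
# assigns settings on it before the finished object is returned.
class Configurator
  attr_accessor :box

  def self.configure(_version)
    config = new
    yield config          # the block receives |config| here
    config
  end
end

result = Configurator.configure("2") do |config|
  config.box = "ubuntu/bionic64"
end

puts result.box           # prints the box name assigned inside the block
```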
Wrap Up The Vagrantfile works as the overall description of our desired environment and defines any additional configurations we need to run after our box is initially deployed. A Vagrantfile can contain everything from a single box declaration to complicated networking between multiple machines and even extensive configuration management with your platform of choice.

Vagrant Up!

00:05:57

Lesson Description:

To deploy an environment, all we have to do is run vagrant up in our project's directory. Wait for the provisioning to finish, and then we have a virtual machine! But how is our Vagrantfile actually being loaded? When we run that initial vagrant up, Vagrant first looks in the current working directory for an existing Vagrantfile. If it does not find one there, it continues up the directory tree until it reaches the root directory. For example, if we were to run vagrant up in /Users/vagrant_user/academy/vagrant/envs/lamp, it would run through this series of directories before concluding we have no provided Vagrantfile:

/Users/vagrant_user/academy/vagrant/envs/lamp
/Users/vagrant_user/academy/vagrant/envs
/Users/vagrant_user/academy/vagrant
/Users/vagrant_user/academy
/Users/vagrant_user
/Users
/
But what about when we do have a Vagrantfile, such as in our webapp directory? In those instances, Vagrant still does not rely on that single Vagrantfile alone to determine the final outcome of our environment, and the Vagrantfile itself is processed in a particular order. First, the box or boxes being deployed are determined; then, Vagrant looks for any Vagrantfiles that are packaged with the boxes themselves. After this, the Vagrantfile in the user's home directory is considered. For Linux and Mac, this is ~/.vagrant.d/Vagrantfile. This file is not automatically generated, so unless you have previously added one in your own work, it won't exist. Generally, the user-level Vagrantfile is ideal for storing universal data, such as API keys. Next, the project-level Vagrantfile is loaded, then any multi-machine settings from the resulting Vagrantfile, and finally any provider-specific settings. During each step of the load process, the new file is merged with the result of the previous step, creating a single working Vagrantfile, with duplicate settings overwritten as each Vagrantfile is loaded — so project-level settings override user-level and box-level settings, but not multi-machine or provider settings. The only data that is not overridden is networking data — that is simply appended to the existing network configuration.

Wrap Up When we run vagrant up, we are doing more than just telling Vagrant to deploy the environment in a single directory. Vagrant has a specific lookup path and load/merge order it follows when we deploy an environment, which allows us to do things like store API keys in a single overall Vagrantfile.
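For instance, a user-level ~/.vagrant.d/Vagrantfile might hold credentials shared across every project; this sketch assumes the vagrant-aws provider plugin is installed, and the environment variable names are illustrative:

```ruby
# ~/.vagrant.d/Vagrantfile — user-level settings merged into every project.
# Assumes the vagrant-aws plugin; variable names are illustrative.
Vagrant.configure("2") do |config|
  config.vm.provider :aws do |aws|
    aws.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
  end
end
```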

Accessing the VM

00:03:37

Lesson Description:

With our Vagrant environment deployed, most of the work on the Vagrant side of things is complete, and we can access our virtual machine through SSH. When Vagrant deploys a box — assuming the box is not a private box created for your specific use case — it provides an overall vagrant user. This user has superuser privileges and comes with a private key that can be used to SSH into the guest. This key is located in a .vagrant directory, found in the same directory as the Vagrantfile itself. Because the directory name begins with a dot, it is hidden from a plain ls. That said, we can retrieve the private key's location — and the SSH information as a whole — for our vagrant user on our virtual host by running:

vagrant ssh-config
This outputs a host definition for our virtual machine:
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /Users/elle/academy/vagrant/envs/ubuntu/.vagrant/machines/default/virtualbox/private_key
  IdentitiesOnly yes
  LogLevel FATAL
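That output can also be consumed by scripts. As a minimal sketch, here is a hypothetical helper (our own invention, not part of Vagrant) that pulls a single field out of ssh-config-style output:

```shell
# Hypothetical helper: extract one field (e.g. Port or IdentityFile)
# from `vagrant ssh-config` output read on stdin
ssh_config_field() {
  awk -v key="$1" '$1 == key { print $2; exit }'
}

# Usage (assumes a running Vagrant environment):
#   vagrant ssh-config | ssh_config_field Port
#   vagrant ssh-config | ssh_config_field IdentityFile
```

Another common trick is vagrant ssh-config > ssh.conf followed by ssh -F ssh.conf default, which lets standard OpenSSH tooling use Vagrant's generated settings.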
However, to log in via SSH to a host managed by Vagrant on our local workstation, all we have to do is run:
vagrant ssh
Or:
vagrant ssh <hostname>
where the hostname is the name of the virtual machine as defined in the Vagrantfile. Generally, this is used for multi-machine setups, but we can provide the hostname for single-machine environments as well:
vagrant ssh default
Once we're accessing the machine, we can use it as we would any other Ubuntu host. Now let's go ahead and log out, dropping back into our ubuntu directory:
exit
Wrap Up

When using any public box, all we have to do to access our virtual machine is log in as the vagrant user. This can be done from the same directory as the Vagrantfile with the vagrant ssh command, or from anywhere using the connection details printed by vagrant ssh-config.

VM Management

00:07:11

Lesson Description:

One of the benefits of working locally on virtual machines or in the cloud is that when you are not using your development environment, it can be torn down or turned off and then restored at a later time. This means we are not persistently using system resources (and, if on the cloud, not paying for those resources). To view the status of any given Vagrant environment, we can run the vagrant status command. As with our ssh commands, we can also add the hostname of the machine to view the status of that particular VM for multi-machine environments:

vagrant status <hostname>
We can also view the status of all VMs on our system using the global-status command:
vagrant global-status
Depending on how long you've used Vagrant, the output here might be a little overwhelming, and some of the status reports may even be inaccurate. For example, if you shut down a virtual machine managed by Vagrant via the VirtualBox console and never ran a vagrant command against that environment afterward, then any virtual machines from that environment will still be shown as running. We can ensure our global-status is more accurate by pruning old entries:
vagrant global-status --prune
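The global-status output also lends itself to scripting. Here's a hypothetical helper (our own sketch, which assumes the usual five-column id/name/provider/state/directory layout of the output):

```shell
# Hypothetical helper: print the IDs of machines reported as running.
# Reads `vagrant global-status` output on stdin; assumes the standard
# five-column layout (id, name, provider, state, directory).
running_machine_ids() {
  awk 'NR > 2 && NF == 5 && $4 == "running" { print $1 }'
}

# Usage: vagrant global-status | running_machine_ids
```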
Now, our ubuntu environment is currently listed as running. If we wanted to turn off our environment, we have two options: a halt or a suspend. The halt command performs a graceful shutdown (through the shutdown command on *nix hosts):
vagrant halt
We can then start our environment again with a vagrant up:
vagrant up
Alternatively, we can perform a suspend, which saves our virtual environment at the current point-in-time, then stops the machine:
vagrant suspend
This means when we resume the machine, it won't perform any boot procedures; it just restarts where we left off:
vagrant resume
Note that when we suspend our virtual machine, however, we are using more disk space than if we were to perform a full shutdown. We might also find it necessary to reboot the machine at times:
vagrant reload
When we run this command, Vagrant first performs a graceful shutdown (vagrant halt), then brings the machine back up (vagrant up). Finally, if we are done with an environment for the foreseeable future, we can remove it entirely with the destroy command:
vagrant destroy
Once we destroy an environment, we cannot get back any of the work we performed on it, although we can reprovision it using our Vagrantfile. There are ways to make our work more persistent; we will cover those in future lessons.

Wrap Up

When working with Vagrant environments, a few commands will be daily drivers — our frequent-use commands for general virtual machine management. These commands let us start, stop, and otherwise manage our project machines efficiently.

Snapshots

00:08:13

Lesson Description:

We noted in the previous lesson that when we run a vagrant destroy, the work we completed on our virtual machines is not saved; yet the purpose of a development environment is to let us try new things, and sometimes those things leave our environment either irreparably broken, or just broken enough that it would take less time to reprovision. So how can we recover without losing our work?

One way is by using snapshots, which are point-in-time images of our virtual machines. We can take a snapshot of an entire environment or of just a single server, and depending on our snapshot method, we also have the option to store multiple snapshots. Do note that snapshots are not supported by all providers; should a provider lack snapshot support, the vagrant snapshot command will output an error. We can use snapshots in one of two ways — and the two ways cannot be mixed. We can either save and restore our snapshots, or push and pop them.

save and restore

From the ubuntu environment we have been working in, let's go ahead and make some changes to our server:

vagrant up
vagrant ssh
sudo apt-get update && sudo apt-get upgrade -y
exit
Let's now save this as a snapshot, so we won't have to update our environment again:
vagrant snapshot save updated
Note that if we wanted to specify a virtual machine, we could define it in the command:
vagrant snapshot save default updated
To see a list of snapshots we've taken, run:
vagrant snapshot list
Now, let's go ahead and make some changes to our virtual machine by installing the apache2 package — maybe we're hosting a web environment:
vagrant ssh
sudo apt-get install apache2 -y
exit
Except let's say we then find out we are using nginx. While it would be easy for us to just uninstall Apache, imagine we took the time to actually configure Apache further, and we are concerned about leaving unnecessary artifacts in our environment, so we want to fall back to our updated state. To do this, we can run:
vagrant snapshot restore updated
We can also do this for a specific VM:
vagrant snapshot restore default updated
Now, let's see if Apache remains installed on our environment:
vagrant ssh
sudo systemctl status apache2
It's gone! With our Vagrant environment restored, we can go ahead and log out of the machine, then remove the original snapshot:
exit
vagrant halt
vagrant snapshot delete updated
Notice how we shut down the machine before removing the snapshot. While this is not mandatory, it does substantially cut down the time it takes to delete the snapshot.

push and pop

While the above method of using snapshots is plenty effective, if we only plan to work with a single snapshot at any given time, we can instead use the push and pop method. With push and pop, we cannot specify individual VMs, nor can we name our snapshot. Instead, a single snapshot is stored, then removed once "popped." We cannot use the push and pop method on a guest where we've already saved and restored, so let's first copy our environment:
cd ..
cp -r webapp webapp2
cd webapp2
Now, let's go ahead and take a snapshot of our host:
vagrant up
vagrant snapshot push
We can see that a name gets automatically assigned to it by running vagrant snapshot list:
vagrant snapshot list
To restore our virtual machine to that snapshot, we have to run:
vagrant snapshot pop
Notice how the snapshot no longer exists should we search for it:
vagrant snapshot list
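Because push and pop pair naturally with one-off experiments, they lend themselves to a small wrapper script. Here's a hypothetical sketch (safe_try is our own invention, not a Vagrant command): snapshot the machine, attempt a risky command, and roll back automatically if it fails:

```shell
# Hypothetical wrapper: push a snapshot, attempt a command, and
# pop (restore and discard the snapshot) if the command fails.
safe_try() {
  vagrant snapshot push || return 1
  if "$@"; then
    echo "command succeeded; snapshot left on the stack"
  else
    echo "command failed; rolling back"
    vagrant snapshot pop
  fi
}

# Usage: safe_try vagrant ssh -c "sudo apt-get install -y apache2"
```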
Wrap Up

Snapshots allow us to create point-in-time images of our virtual environments, letting us fall back to a stable state should something go wrong. Whether we use the on-the-fly push and pop method or save multiple fallback points with save and restore is ultimately up to us; either way, we will not have to entirely destroy our environment when an experiment goes sideways.

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00


00:15:00

Expanding Vagrant

Syncing Files

Basic Syncing

00:11:53

Lesson Description:

Now, if all Vagrant did was provision servers, we wouldn't have much use for it. Since the primary goal of Vagrant is to let us provision full development environments, we most likely need to supply that environment with a number of files. We could do this manually with the vagrant upload command:

vagrant upload <source> <destination> [hostname]
But to perform this process automatically, we can instead use the config.vm.synced_folder option. This lets us sync folders on our host to destinations on our guest. We can have as many synced folders as we need, and by default Vagrant uses VirtualBox's native folder syncing system, although we'll review the additional options in subsequent videos. Let's return to our lamp directory, if not there already. When we reference a directory on the host to sync with the guest machine, we reference it relative to the Vagrantfile's location. As such, the directories we sync from the host are often placed in the same directory as the Vagrantfile itself. Let's create an app directory:
mkdir app
Then clone an example application into it:
git clone https://github.com/linuxacademy/content-vagrant-demo-app.git
We can now open up our Vagrantfile:
vim Vagrantfile
The config.vm.synced_folder option might look similar to config.vm.box at first, but we do not assign a single value to it. Instead, our configuration directive is closer to a list. For example, to sync our app directory to /var/www/app, we can use:
config.vm.synced_folder "app/", "/var/www/app"
Wherein we first declare the configuration directive (config.vm.synced_folder), then the source folder (app/), add a comma, then list the destination (/var/www/app). The configuration ends with no comma. That said, should we want to expand our configuration to include any of the available options, we can simply add a comma and continue.

Options

Vagrant provides a few basic options for working with synced folders. First, we can have Vagrant create the source folder itself if it does not already exist:
config.vm.synced_folder "app/", "/var/www/app",
  create: true
Notice how we added a comma after the initial configuration line. We could have continued with the create option on the same line, but generally additional options are placed on a new line, indented once below the directive. We can also set the group and owner of the folder; these default to vagrant, but once we get our provisioners working, we'll want to substitute an appropriate user, so let's add them now so we don't forget:
config.vm.synced_folder "app/", "/var/www/app",
  create: true, owner: "vagrant", group: "vagrant"
We also have the option of setting an overall ID for these synced folders — this is used when we want to manage the mount using the mount command on the guest. Let's just set ours to app:
config.vm.synced_folder "app/", "/var/www/app",
  create: true, group: "vagrant", owner: "vagrant", id: "app"
Additionally, we can add any other options that are available via mount using mount_options; the mount_options option always overrides any other settings. So if we set the user ID via mount_options, the user with ID 111 would be the owner, not vagrant:
config.vm.synced_folder "app/", "/var/www/app",
  create: true, group: "vagrant", owner: "vagrant",
  id: "app", mount_options: "uid=111"
We also can disable the entire block with disabled. This is generally placed at the end of the first line:
config.vm.synced_folder "app/", "/var/www/app", disabled: true,
  create: true, group: "vagrant", owner: "vagrant", id: "app"
Let's go ahead and test our sync by saving and exiting the Vagrantfile, then reloading Vagrant:
vagrant reload
Should we now SSH in to our host, we can see that our application lives at /var/www/app:
vagrant ssh
ls /var/www/app
Exit the guest:
exit
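Checks like this can be scripted, too. Here's a hypothetical smoke test (our own sketch, not a Vagrant feature) that drops a marker file on the host side and confirms it shows up inside the guest:

```shell
# Hypothetical smoke test for a synced folder: create a marker file in
# the host-side directory and verify it is visible inside the guest.
check_sync() {
  host_dir="$1"    # e.g. app
  guest_dir="$2"   # e.g. /var/www/app
  marker="sync-test-$$"
  touch "$host_dir/$marker"
  if vagrant ssh -c "test -f $guest_dir/$marker"; then
    echo "sync OK"
  else
    echo "sync FAILED"
  fi
  rm -f "$host_dir/$marker"
}

# Usage: check_sync app /var/www/app
```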
Git

From this point on in the course, we're going to be altering our Vagrantfile extensively, but we also want the option to fall back to certain points in time in our Vagrantfile. For this, we're going to use Git. Windows users should already have Git installed, while Mac users can download it from the website. Linux users will most likely find it in their distro's default repos under the name git. Let's initialize a Git repo in our webapp directory:
git init
And add any hidden files to our .gitignore — including the .vagrant directory our private key lives in:
vim .gitignore

.*
Make an initial commit:
git add .
git commit -m "Basic Ubuntu Vagrantfile with single synced dir"
Wrap Up

A development environment wouldn't be complete without the ability to get the appropriate files onto the guest. Oftentimes, we want these same files accessible on our workstation so we can work on our infrastructure or application code and deploy it as needed. This is config.vm.synced_folder's strength.

NFS

00:07:40

Lesson Description:

Before we begin, create a new nfs branch in your webapp repo and move into that branch:

git checkout -b nfs
Beyond the initial parameters we set in the previous lesson, we can also change what we use to sync our files. Right now, Vagrant is defaulting to VirtualBox's file syncing service, but we can change this to something like NFS. To change our syncing implementation, we can use the type option. That said, it's not enough to just add it to our Vagrantfile:
...
  config.vm.synced_folder "app/", "/var/www/app", type: "nfs",
    create: true, group: "vagrant", owner: "vagrant",
    id: "app"
...
NFS does not support all of the same options as the default file sharing mechanism. We need to remove both the group and owner options; in exchange, we have the nfs_export, nfs_udp, and nfs_version options:
...
  config.vm.synced_folder "app/", "/var/www/app", type: "nfs",
    create: true, id: "app",
    nfs_export: true, nfs_udp: true, nfs_version: 3
...
nfs_export allows Vagrant to update the /etc/exports file on the host — you may need to type in your superuser password; if it is not set to true, we would have to update the file manually. nfs_udp makes UDP the transport, allowing for faster transfer speeds; if you cannot use UDP, set this to false and NFS will use TCP instead. nfs_version lets us set the version of NFS we wish to use; this defaults to 3. Finally, NFS requires a private network. For this, we can use the config.vm.network configuration to create a DHCP private network:
...
  config.vm.network "private_network", type: "dhcp"
end
Save and exit the Vagrantfile. Before we proceed with reloading our environment, however, we need to make sure our host can act as an NFS server. Windows users will need the vagrant-winnfsd plugin, and NFS is already installed on macOS. On Linux, just install the needed package (nfs-kernel-server on Debian-based systems, nfs-utils on RedHat-based ones). While not necessary with the Ubuntu 18.04 box, other images may also require us to install the NFS client on the guest before we reload our Vagrantfile. That said, assuming our host has NFS installed, we can now test out our new Vagrantfile:
vagrant reload
Commit the changes to this Git branch, then return to the master:
git add .
git commit -m "NFS synced with default options"
git checkout master
Wrap Up

As with many other Vagrant features, we are not limited to the default file syncing solution. The Network File System protocol can be enabled with the type option on our existing synced_folder configuration with minimal changes — but be wary of the different options available.

Rsync

00:08:09

Lesson Description:

Before we discuss the details of using rsync to sync directories between host and guest, let's switch to a new rsync branch and open up the Vagrantfile:

git checkout -b rsync
vim Vagrantfile
Unlike with NFS, we can switch to using rsync by only adding the type option to our initial code (as long as our default vagrant user has access to the destination directory):
...
  config.vm.synced_folder "app/", "/var/www/app", type: "rsync",
    create: true, group: "vagrant", owner: "vagrant",
    id: "app"
...
However, before we vagrant up, there are some differences in Vagrant's behavior when working with rsync that we need to account for. Specifically, the rsync method is not a constant sync: rsync runs only when the vagrant up or vagrant reload commands are used, or when we manually sync with vagrant rsync. That said, we are not completely without an auto-sync option, although it's recommended that we disable it prior to suspending or halting our hosts. We can enable it by first adding the rsync__auto option to our Vagrantfile:
  config.vm.synced_folder "app/", "/var/www/app", type: "rsync",
    create: true, group: "vagrant", owner: "vagrant",
    rsync__auto: true,
    id: "app"
Then, after restarting our guest, we can use the vagrant rsync-auto command to watch the source directory for changes. Additionally, by enabling rsync we open up a number of other rsync-specific settings. We just saw one — rsync__auto. Notice the double underscore in the name; all of our rsync options follow this format. rsync__chown disables the owner and group settings we are currently using. rsync__exclude works like our .gitignore file — any file or directory listed here will not be synced between the host and guest. rsync__rsync_ownership doesn't set the ownership of our synced directories, but rather determines how the ownership is set: by default, should Vagrant need to set the ownership of the destination directory, it happens separately from the sync itself, but if you are using rsync version 3.1.0 or later on both host and guest, rsync can be made to do this instead. rsync__rsync_path sets the location of rsync if the sudo rsync command is not in the path. rsync__verbose enables verbose output. And finally, we can pass any existing rsync flag or parameter through the rsync__args option. Now let's go ahead, save and exit our Vagrantfile, then reload our environment:
vagrant reload
Commit the changes when finished:
git add .
git commit -m "Rsync syncing"
Then return to the master branch:
git checkout master
Wrap Up

When a more sophisticated offering — such as NFS or SMB — is not available, we can also just use rsync to keep our guest files up to date with our host. Although we should keep in mind, rsync is not without its drawbacks — specifically with regard to autosyncing.

SMB

00:06:36

Lesson Description:

Our last syncing option uses the Server Message Block (SMB) protocol to sync our source and destination. This option is only available on Windows and Mac hosts — Linux users can skip this lesson without affecting the rest of this course. Mac users will also want to ensure SMB sharing is enabled on their workstations. To do this, go to the System Preferences and select Sharing. Check File Sharing, then go to Options and ensure your user is checked off. Save changes. Windows users need PowerShell version 3 or later installed. Now, let's start by creating a new smb Git branch:

git checkout -b smb
And opening our Vagrantfile:
vim Vagrantfile
Like rsync, we can switch to the smb protocol by simply adding the type option:
...
  config.vm.synced_folder "app/", "/var/www/app", type: "smb",
    create: true, group: "vagrant", owner: "vagrant",
    id: "app"
...
In most instances, this is all we'll want to do — unless we have an outside SMB host we'd rather direct Vagrant to. For that, we can use the smb_host option. There are also two additional options, smb_password and smb_username, but generally we do not want to include these in our project Vagrantfile — whether to store them in a user-level ~/.vagrant.d/Vagrantfile is up to you, but ultimately I am not comfortable including a password in plain text anywhere on my system. Save and exit the file. Let's try out our SMB share:
vagrant reload
Note that it may take several minutes for the share to mount. Finish off the lesson by committing the SMB share to Git and returning to the master branch.
git add .
git commit -m "SMB sync"
git checkout master
To clear the SMB mount, restart the guest using the default Vagrantfile. You may also want to turn file sharing back off on your workstation:
vagrant reload
Wrap Up

Windows and macOS hosts offer one final syncing option in SMB. From the Vagrant side of things, there is minimal setup to get SMB working as long as your host system is already prepared; that said, be sure to have your login information handy!


00:15:00

Networking

Port Management

00:08:25

Lesson Description:

Thus far in the course, we have been working with the assumption that anything we'd be doing with our guest servers is over SSH, but realistically, if working on an app or website, or testing any kind of platform with a front-end element, we're going to need to access our guest through a web browser, as well. For this — as well as more advanced features like multi-machine setups — we have the config.vm.network configuration directive and its myriad options, including the forwarded_port setting for mapping guest ports to the host. Let's begin by opening our Vagrantfile. We can work directly from the master branch here, since any settings here will carry over as part of our core Vagrantfile:

vim Vagrantfile
And adding our basic forwarded_port configuration. Here we're going to map port 8080 on the guest to 8081 on the host:
config.vm.network "forwarded_port", guest: 8080, host: 8081
The guest and host options let us set which port is forwarding where. While there are no limits on which port on the guest you forward to your host, the host parameter only accepts ports above 1024 — unless you run Vagrant as root, which is not suggested. When port forwarding is configured, the ports bind to all interfaces, meaning other devices on your network can reach the forwarded port. In most instances where we're working from a local workstation, this is acceptable. But if we need to limit our scope to only specific IPs, we can leverage the guest_ip and host_ip options — we won't need to add these in our case, but the option for more restrictive access is there. By default, only TCP traffic is forwarded through the port. We can change the protocol to UDP with the protocol option, but should we need to accept both, we would need to create two separate rules. And what if the host port we assign is already in use? Right now, Vagrant would output an error should that happen. However, we can also have Vagrant reassign the host port with the auto_correct option:
config.vm.network "forwarded_port", guest: 8080, host: 8081,
  auto_correct: true
Finally, we can also assign a name for this rule, which can be viewed in VirtualBox:
config.vm.network "forwarded_port", guest: 8080, host: 8081,
  auto_correct: true, id: "wanderer-app"
Let's now test this out! Save and exit the file, then reload Vagrant (or vagrant up):
vagrant reload
Before we check in our browser, however, we still have a little housekeeping to do. Our environment isn't quite ready yet, so we're going to have to manually install a couple packages and deploy the app ourselves:
vagrant ssh
sudo apt-get install nodejs npm
cd /var/www/app
node index.js
Now, navigate to localhost:8081 in your browser. Our app is hosted on the guest but accessible via the localhost, just like we required! Return to the terminal and CTRL+C to kill the application, then log out of the guest:
exit
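If you prefer the terminal to a browser, the same check can be scripted. Here's a hypothetical helper (our own sketch) that probes the forwarded port with curl:

```shell
# Hypothetical helper: print the HTTP status code returned by the
# service behind a forwarded host port (curl prints 000 if nothing answers).
check_forwarded_port() {
  curl -s -o /dev/null -w '%{http_code}\n' --max-time 5 "http://localhost:$1"
}

# Usage: check_forwarded_port 8081   # expect 200 while the app is running
```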
Commit the changes to the Vagrantfile to git:
git add .
git commit -m "Added forwarded port"
Wrap Up

SSH isn't the only way engineers, developers, and administrators need to access their working environment, and Vagrant lets us leverage the config.vm.network configuration directive to create various networking rules that make accessing our environment a little easier.

Working with Multiple Machines

00:11:19

Lesson Description:

Most often, networking begins to come into play when we begin to expand our environment. Production is never just a single server, and most often our applications or infrastructures are set up on a variety of servers — web servers, database servers, monitoring servers, and more. Before we start, ensure your machine is shut down (vagrant halt). Return to your Vagrantfile — we're still working on the master branch. Right now, we are set up as though we are working with only a single application server. But let's say we know dev's next sprint will be dedicated to adding application monitoring, so let's provide a Prometheus host in our configuration as well. To do this, we're going to have to do a bit of refactoring. Add some space between the config.vm.box setting and the config.vm.synced_folder block. To work with multiple machines in a single Vagrantfile, we need to essentially create a "Vagrantfile within a Vagrantfile" for each one, with each "Vagrantfile" being a block of code in our existing Vagrantfile. Our config.vm.synced_folder and config.vm.network settings are going to go in an app block, whereas our second, prom block, will use global settings for now. To define a machine block in our Vagrantfile, we use the config.vm.define method — a method is a repeatable set of code, similar to a function. In the method, we define a name for our machine:

config.vm.define "app" do |app|
Notice how this mimics the Vagrant.configure("2") do |config| at the start of our Vagrantfile. And as with our Vagrantfile, any setting we use in this method block will use the name in the pipes (|app|) as the namespace for the configurations. This means we need to update our config.vm.synced_folder and config.vm.network settings to use the appropriate namespace. We also need to tab them in so they are under the app method block, not part of the greater Vagrantfile. We also need to end the method:
config.vm.define "app" do |app|
  app.vm.synced_folder "app/", "/var/www/app",
    create: true, group: "vagrant", owner: "vagrant",
    id: "app"
  app.vm.network "forwarded_port", guest: 8080, host: 8081,
    auto_correct: true, id: "wanderer-app"
end
Let's do the same thing for the prom machine, also ensuring port 9090 on the guest is forwarded to 9090 on the host:
config.vm.define "prom" do |prom|
  prom.vm.network "forwarded_port", guest: 9090, host: 9090,
    auto_correct: true, id: "prometheus"
end
But what of the config.vm.box outside of the individual machine blocks? Remember the earlier lesson on how the Vagrantfile is loaded — multi-machine configurations are among the last things merged into the overall cached Vagrantfile used to provision our guests, which means everything from the overall Vagrantfile is also included in the provisioning. In other words, anything outside of the machine blocks is applied to all machines, and it applies before any configurations inside those machine blocks. Additionally, when we provision our guests with a vagrant up, all guests will be provisioned. If there is a particular machine we do not want to bring up with the greater infrastructure, we can add autostart: false to our method call:
config.vm.define "app", autostart: false do |app|
We can later spin up that machine using the vagrant up command by supplying the name of the machine, as noted in earlier lessons. Should we want to define a default machine to use when no individual machine is provided in the vagrant command, we can as well — but note that this means vagrant up, vagrant reload, and others will no longer manage the greater environment as a whole. This can be done with primary: true:
config.vm.define "app", primary: true do |app|
We don't want to use either of these for our use case, however, so ensure your overall Vagrantfile looks like:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.define "app" do |app|
    app.vm.synced_folder "app/", "/var/www/app",
      create: true, group: "vagrant", owner: "vagrant",
      id: "app"
    app.vm.network "forwarded_port", guest: 8080, host: 8081,
      auto_correct: true, id: "wanderer-app"
  end

  config.vm.define "prom" do |prom|
    prom.vm.network "forwarded_port", guest: 9090, host: 9090,
      auto_correct: true, id: "prometheus"
  end
end
Save and exit the file, then spin up our new environment:
vagrant up
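With more than one machine, a small loop saves typing. Here's a hypothetical helper (our own sketch, not a Vagrant command) that runs one command on every machine in the environment:

```shell
# Hypothetical helper: run the same command on each named machine
# via `vagrant ssh -c`.
run_on_all() {
  cmd="$1"; shift
  for machine in "$@"; do
    echo "== $machine =="
    vagrant ssh "$machine" -c "$cmd"
  done
}

# Usage: run_on_all "uptime" app prom
```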
To manage each individual node, use either the name app or prom, depending on which server you wish to work with.

Wrap Up

Networking in Vagrant isn't just about networking between the host and guest machine. Few environments are limited to only one server, so naturally the same can be said of the Vagrantfile. By using the config.vm.define method, we can create "Vagrantfiles within Vagrantfiles" and manage multiple machines at once.

Private Networking

00:07:22

Lesson Description:

Currently, we have been gaining access to our machine via vagrant ssh and also through a forwarded port that provides access to the front end of our application. But we also have the option to generate a private IP for the server, allowing anything within the network access to the guest. This includes other guests generated via the Vagrantfile, so our entire environment will work as a private network, most likely similar to production. The simplest way to do this is through that same DHCP networking line we used in the NFS lesson, but adapted now to our individual machines:

...
    app.vm.network "private_network", type: "dhcp"
...
    prom.vm.network "private_network", type: "dhcp"
Now, should we reload our environment and log in to our individual machines via SSH, we can see they each have an IP in the same network space:
vagrant reload
vagrant ssh app
ip addr list
Your IP information may vary, but on my system, three network interfaces appear in this command's output: one with the 10.0.2.15 IP, used by VirtualBox; one in the 172.16.0.0/12 private range (specifically, 172.28.128.6), which is the DHCP private network we just added in our Vagrantfile; and our loopback address, 127.0.0.1. Make note of the address assigned to your app guest. Let's switch to our prom guest:
exit
vagrant ssh prom
ip addr list
Like our app guest, our VirtualBox network and loopback address are 10.0.2.15 and 127.0.0.1, respectively — these will be on all VirtualBox-created guests. But our DHCP private network provides an individual IP to the prom host. Mine is 172.28.128.7, although yours may differ. We can also specify which IP we want our guests to use. While Vagrant itself doesn't restrict what IP we assign our guests, we do want to ensure it's not an IP already in use, and it should come from one of the reserved IP blocks: 10.0.0.0 - 10.255.255.255, 172.16.0.0 - 172.31.255.255, and 192.168.0.0 - 192.168.255.255. Let's try this out. Log out of the prom guest:
exit
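As an aside, before assigning a static address in the next step, it can help to confirm a candidate falls inside one of the private blocks listed above. A quick sketch in Ruby (the Vagrantfile's own language) using the standard ipaddr library:

```ruby
require "ipaddr"

# The three private (RFC 1918) address blocks listed above
PRIVATE_BLOCKS = [
  IPAddr.new("10.0.0.0/8"),
  IPAddr.new("172.16.0.0/12"),
  IPAddr.new("192.168.0.0/16"),
].freeze

# True when the address is safe to hand to a guest's private network
def private_ip?(address)
  PRIVATE_BLOCKS.any? { |block| block.include?(IPAddr.new(address)) }
end

puts private_ip?("192.168.200.10")  # true
puts private_ip?("172.32.0.1")      # false (just outside 172.16.0.0/12)
```

Note that the DHCP addresses Vagrant handed out above (172.28.128.6 and 172.28.128.7) fall inside the 172.16.0.0/12 block.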
And reopen the Vagrantfile:
vim Vagrantfile
To set an IP, we simply have to switch the type parameter to ip and provide the desired IP address. The address can be either IPv4 or IPv6, though note that when using IPv6, the entire /64 subnet will be reserved — this can be changed with the netmask setting, however. I'm going to use IPv4 and switch to the 192.168.0.0 block, leaving me with the following full Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.define "app" do |app|
    app.vm.synced_folder "app/", "/var/www/app",
      create: true, group: "vagrant", owner: "vagrant",
      id: "app"
    app.vm.network "forwarded_port", guest: 8080, host: 8081,
      auto_correct: true, id: "wanderer-app"
    app.vm.network "private_network", ip: "192.168.200.10"
  end

  config.vm.define "prom" do |prom|
    prom.vm.network "forwarded_port", guest: 9090, host: 9090,
      auto_correct: true, id: "prometheus"
    prom.vm.network "private_network", ip: "192.168.200.11"
  end
end
Save and exit, then reload Vagrant:
vagrant reload
Should we access either of our machines and use ip addr list, we can see they are now using the IP addresses we assigned:
vagrant ssh app
ip addr list
With our network set up, we can now finish off this lesson by committing our changes to git:
exit
git add .
git commit -m "Added private network"
Wrap Up Private networks do more than just allow our host to talk to our guests — they let our guests talk to each other and any other server on the network. This allows us to mimic production environments, including whether we're using IPv4 or IPv6.

Public Networking

00:06:33

Lesson Description:

Unlike previous lessons, I encourage you to watch and not follow along in this short public networking overview. Since I do not know what networks you're connected to, I don't want to accidentally grant someone access to your host! Right now we're using the default vagrant user, which means we're using keys that everyone who has ever used Vagrant has access to. Be smart! Only create a public network if you know who you're granting access to your host. Public networking with Vagrant is very provider-dependent, but with VirtualBox, you are creating a bridged network that allows users on the greater network to which you are bridged to access your guest server. In my case, I'll be granting my wireless network access to my app guest. Creating a bridged network is a simple enough process, assuming we can use DHCP. To create my public network, all I have to do is add this configuration declaration to my app block:

app.vm.network "public_network"
I have the option to assign it an IP with ip, but I'm going to have it auto-assign one for me. I'm now going to reload my Vagrant environment:
vagrant reload
Notice how it asks which network to bridge to:
==> app: Fixed port collision for 22 => 2222. Now on port 2201.
==> app: Clearing any previously set network interfaces...
==> app: Available bridged network interfaces:
1) en0: Wi-Fi (AirPort)
2) en1: Thunderbolt 1
3) en2: Thunderbolt 2
4) bridge0
5) p2p0
6) awdl0
==> app: When choosing an interface, it is usually the one that is
==> app: being used to connect to the internet.
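As a side note, the interactive prompt can be skipped entirely by naming the interface with the bridge option in the Vagrantfile. A sketch, where the interface name is only an example taken from the listing above and will differ on your machine:

```ruby
app.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)"
```

The name should match one of the devices the prompt would otherwise list.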
I'm going to select my WiFi, option 1. If you're familiar with the name of the interface you wish to use, you can also set it directly in the Vagrantfile with the bridge option. With my environment deployed, I can now access my app guest:
vagrant ssh
And discover the IP of my guest on the public network:
ip addr list
I'm now going to create a user for myself:
sudo -i
useradd elle
passwd elle
And temporarily enable password login by finding the PasswordAuthentication line in the SSH daemon configuration (shown commented out here as # PasswordAuthentication no) and setting it to yes:
vim /etc/ssh/sshd_config
PasswordAuthentication yes

systemctl restart sshd
I can now switch to another computer on my WiFi network and log in as the elle user! Wrap Up The config.vm.network "public_network" configuration declaration lets us assign an IP on a greater, public network to a VirtualBox guest. Be careful how your box is configured when doing so, however! Vagrant's vagrant user keys are public.

Hands-on Labs are real, live environments that put you in a real scenario to practice what you have learned, without any extra charge or account to manage.

00:30:00

Provisioning

Provisioning Overview

00:04:51

Lesson Description:

In our lesson on "Port Management," when we went to test our forwarded ports, we had to manually install a number of packages — but that's not the ideal behavior we want from our environment. Instead, wouldn't it be better if those packages were already installed? This is where provisioning comes in: We can install packages, configure settings, upload files, and more at the time we deploy our environment so we're not using the unaltered base box but instead one more suited to our needs. Provisioning can be done through a simple shell script, or by using one of the many default provisioners that leverage existing configuration management solutions to manage your guests. We're going to explore the basics of each provisioner by creating separate git branches for each scenario, similar to how we managed it in the "Syncing" section. If you already know which configuration management solution you wish to use, I would finish out this video to see how provisioning works, then feel free to skip to the relevant video. Note that while we test our provisioning, we cannot perform a simple vagrant reload as we have in prior lessons. Provisioning is set up so it only brings the guests under enforcement during the initial vagrant up. A machine that has already been provisioned and was only suspended or shut down must use the --provision flag during reload or up to actually run the provisioner. Finally, before we finish, we want to ensure our provisioners are able to use our nodes' hostnames, so let's open up our Vagrantfile and add the hostname value to each guest. The finished file should look like:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  config.vm.define "app" do |app|
    app.vm.hostname = "app"
    app.vm.synced_folder "app/", "/var/www/app",
      create: true, group: "vagrant", owner: "vagrant",
      id: "app"
    app.vm.network "forwarded_port", guest: 8080, host: 8081,
      auto_correct: true, id: "wanderer-app"
    app.vm.network "private_network", ip: "192.168.200.10"
  end

  config.vm.define "prom" do |prom|
    prom.vm.hostname = "prom"
    prom.vm.network "forwarded_port", guest: 9090, host: 9090,
      auto_correct: true, id: "prometheus"
    prom.vm.network "private_network", ip: "192.168.200.11"
  end
end
Commit the changes:
git add .
git commit -m "Added hostnames"
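For reference, the triggering behavior described above boils down to a handful of commands:

```shell
vagrant up                  # provisioners run automatically on first creation
vagrant up --provision      # force provisioners when starting an existing guest
vagrant reload --provision  # restart the guest and run provisioners
vagrant provision           # run only the provisioners, without a restart
```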
Wrap Up The first time we deploy our Vagrant environment, it should give us everything we need to start developing. To make sure this happens, we need to use provisioners to install and configure our box so it's in the appropriate state.

Shell

00:11:07

Lesson Description:

The most basic way to provision our servers is with old-fashioned shell scripts. These can either be written directly into our Vagrantfile, or we can store them in our environment directory and have Vagrant run them as needed. Scripts in Vagrant are run as the vagrant user with elevated privileges (i.e., with sudo), so we can leave out the sudo. As with our file syncing lesson, we're going to work from different branches as we explore our different provisioner options. Let's create our shell branch:

git checkout -b shell
And open our Vagrantfile. We're going to focus only on the app machine for this lesson. Let's start by making sure Node.js is installed on our guest. We can do this by using the shell provisioner and supplying an inline script:
config.vm.define "app" do |app|
  ...
  app.vm.provision "shell", inline: "apt-get install -y nodejs npm"
end
But, as you can see, this can be fairly limiting. We don't want to feed in massive scripts in the middle of our Vagrantfile. We have two options to circumvent this: We can embed the script elsewhere in the file, such as the top, or we can save it in a separate file entirely. To embed a script, we can set a variable, then simply feed the script into that variable. We can then call that variable as a value for the inline setting. For example, let's take our existing script and instead embed it at the top of the file:
$script = <<-SCRIPT
apt-get install nodejs npm -y
SCRIPT
We can then update our app.vm.provision line to call the variable instead of run a single command:
app.vm.provision "shell", inline: $script
But this still involves us dumping a lot of Bash into our Vagrantfile, and it will quickly become messy if we need to work with more than one script, so let's break it out instead. Save and exit the Vagrantfile for now, then create a scripts directory:
mkdir scripts
Let's now use a script that doesn't just install Node.js and NPM but also sets up Prometheus's Node Exporter. I'm going to save our new script as pre.sh — with the understanding that eventually I'll need to write scripts that run later in the process, such as after I add Node Exporter's service file.
vim scripts/pre.sh
Add the script:
#! /bin/bash

echo "Installing Node..."
apt-get install nodejs npm -y
echo "Creating Node Exporter user..."
useradd --no-create-home --shell /bin/false node_exporter
echo "Downloading and installing Node Exporter..."
wget https://github.com/prometheus/node_exporter/releases/download/v0.17.0/node_exporter-0.17.0.linux-amd64.tar.gz
tar -xvf node_exporter-0.17.0.linux-amd64.tar.gz
mv node_exporter-0.17.0.linux-amd64/node_exporter /usr/local/bin/
chown node_exporter:node_exporter /usr/local/bin/node_exporter
rm -rf node_exporter-0.17.0.linux-amd64/
rm node_exporter-0.17.0.linux-amd64.tar.gz
Save and exit. We can now update our Vagrantfile:
vim Vagrantfile
Remove the embedded script. When calling a script in another file, we have to switch from the inline option to the path option, and supply the path of the script in relation to the Vagrantfile:
app.vm.provision "shell", path: "scripts/pre.sh"
Save and exit when finished. Remember since our machines have already been through that initial provision, we're going to have to use the --provision flag as we reboot our hosts for our changes to take effect:
vagrant reload --provision
Once our guests are running again, commit the changes:
git add .
git commit -m "Begun node exporter provisioning"
Wrap Up The shell provisioner lets us supply Bash scripts for Vagrant to use during provisioning. Of course, these come with all the drawbacks of using a simple Bash script for provisioning — namely, a lack of idempotency. But for just getting things up and running, the shell remains a quick and easy choice for provisioning.

File

00:09:13

Lesson Description:

An issue that inevitably arises when using the shell provisioner — and in other instances — is how to get the appropriate configuration files onto the guest. Vagrant offers a simple "file" provisioner that performs exactly this task, generally used to add configuration files. It takes only two options — source and destination — and works differently from our synced_folder option in that it only uploads during provisioning and does not maintain an active sync of our files. Still working from our shell branch — since these two provisioners really go hand in hand — let's create a new directory for configurations, including a folder to store our service files:

mkdir -p configs/services
Add the service file for the Node Exporter:
vim configs/services/node_exporter.service

[Unit]
Description=Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
Save and exit the file. Let's now open up our Vagrantfile:
vim Vagrantfile
We want our configuration files added after our initial script runs, so add the app.vm.provision option after the initial Shell provisioning line. We can either add the file directly to its destination, or sync the folder. However, unlike the Shell provisioner, the file provisioner does not work with elevated privileges. This means we have to move our configuration to a temporary location, then use the Shell provisioner again to finish up the process. Let's first add our files to /tmp/services:
app.vm.provision "file", source: "configs/services",
  destination: "/tmp/services"
Save and exit the Vagrantfile. We're now going to create a post.sh script to finish up our move and start the Node Exporter:
vim scripts/post.sh
#!/bin/bash

mv /tmp/services/* /etc/systemd/system/
systemctl daemon-reload
systemctl start node_exporter
Save and exit, then return to the Vagrantfile and add the new provisioning script:
app.vm.provision "shell", path: "scripts/post.sh"
Reload the Vagrant environment and reprovision:
vagrant reload --provision
When finished, we can check that our scripts succeeded by seeing if the node_exporter service is running on the app machine:
vagrant ssh app
sudo systemctl status node_exporter
exit
Let's go ahead and commit our changes:
git add .
git commit -m "Finished provisioning scripts"
Return to the master branch and destroy the guest:
git checkout master
vagrant destroy app
Wrap Up Although not powerful on its own, when paired with the Shell provisioner, the File provisioner lets us spin up fully configured environments without having to know anything about configuration management.

Ansible

00:10:26

Lesson Description:

Ansible is a tool that allows us to automate tasks across our infrastructure. These tasks are written in YAML-structured files, called playbooks, and are run against groups of servers defined in an inventory file. While you do not need Ansible knowledge to proceed with this lesson, it will be most beneficial to existing Ansible users who want to learn the ropes of the Ansible provisioner. Before we begin, create a new branch to work from:

git checkout -b ansible-local
And create a location to store our Ansible playbooks, including a simple playbook that will install Node.js and NPM on our app guest:
mkdir playbooks
vim playbooks/node.yaml

--- # Install Node.js and NPM
- hosts: all
  tasks:
    - name: Install Node.js
      apt:
        name: nodejs
        state: latest
    - name: Install NPM
      apt:
        name: npm
        state: latest
Vagrant currently has two provisioner options: Ansible and Ansible Local. The plain Ansible provisioner should be used when you're already using Ansible from your workstation to control your infrastructure. If you'd rather keep Ansible off your workstation, then Ansible Local is most likely the best option, which will install Ansible on the guest and use it to provision itself. And while both have their own set of general configuration options, the Ansible provisioners share a common set of options we can use to set up our guest. For this lesson, we'll be focusing on Ansible Local and some useful common options that can carry across both provisioners. Open up the Vagrantfile. We're going to be working specifically in our app method block, this time creating another method block for our Ansible Local provisioner, set to use the ansible namespace:
app.vm.provision "ansible_local" do |ansible|
end
Any options we need to set for our Ansible configuration then go in this block. Formatting should follow the ansible.key = "value" format. At the very least, we'll need to specify a playbook:
app.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "playbooks/node.yaml"
end
However, those familiar with Ansible might be wondering how Ansible will know to work with our guest machine, since Ansible normally works from an inventory. And while we do have the option to specify an inventory file (or create inventory groups inline), Vagrant automatically generates an inventory file based on our Vagrantfile should we fail to provide one. And since our node.yaml file is set to work with "all" hosts and our Ansible configuration is set to work against the app node specifically, we're going to use the auto-generated option. That said, there are still a few additional options we need to provide. By default, Ansible is going to work as an ansible user, not our vagrant user, and this user does not have the escalated privileges it needs to install packages. Ansible offers the ability to become another user (such as root) to address this, and Vagrant lets us tap into this — or any ansible-playbook command setting — through its shared Ansible options. These options let us set inventory files, define Ansible Galaxy settings, and more, including setting the "become" user:
app.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "playbooks/node.yaml"
  ansible.become = true
  ansible.become_user = "root"
end
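Beyond the become settings, other shared Ansible options follow the same pattern. A hedged sketch showing a few of them layered onto our block; the commented inventory path is an assumption and only needed if we opt out of the auto-generated inventory:

```ruby
app.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "playbooks/node.yaml"
  ansible.become = true
  ansible.become_user = "root"
  ansible.verbose = true                  # pass -v through to ansible-playbook
  ansible.limit = "all"                   # which inventory hosts to target
  # ansible.inventory_path = "inventory"  # assumed path; only if supplying our own
end
```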
Save and exit the Vagrantfile. Let's see if our provisioning works:
vagrant up app
Once finished, log in to the guest via SSH and ensure the appropriate packages have been installed:
vagrant ssh app
which node
which npm
exit
With our Ansible provisioner working, we can now commit our changes, and destroy our guest before the next lesson:
git add .
git commit -m "Added Ansible provisioning"
vagrant destroy app
Wrap Up With two ways to work, the Ansible provisioners let Vagrant automate not just the provisioning of our VMs but all the packages, configurations, and services we need. And if we're already using Ansible, we can run the same playbooks as in production, to ensure we're testing on as close to production as possible.

Chef

00:12:19

Lesson Description:

Chef provides infrastructure automation wherein users create end-state "recipes" that configure fleets of servers. Servers can be assigned roles, and recipes can be applied on a by-role or by-server basis. Chef also offers a robust development kit, ChefDK, which can be installed in place of Chef itself. Vagrant offers four different Chef provisioners: Chef Solo, Chef Zero, Chef Client, and Chef Apply. Chef Solo is a lightweight version of Chef, wherein the chef-client is used to execute recipes without a Chef server. This is the suggested provisioner for most users — especially new ones. Chef Zero installs a lightweight version of the Chef Infra Server temporarily on the guest to execute recipes during provisioning. This is the next step up from the Chef Solo option, providing slightly more advanced functionality. The Chef Client provisioner should be used when you have an existing Chef Infra Server you want your Vagrant environment to connect to; the majority of its configuration is around creating this connection. Finally, Chef Apply applies single recipes to a host. The Chef Apply provisioner is suggested only for advanced Chef users. As with the Ansible provisioners, the Chef provisioners all share a common set of options. These options focus on the uploading and managing of recipes, roles, and environments, as well as general Chef settings, such as logging and reporting. For this lesson, we'll familiarize ourselves with some general Chef-related settings using the suggested Chef Solo provisioner. Let's create a new branch:

git checkout -b chef
And a cookbooks directory:
mkdir -p cookbooks
Then clone in the simple nodejs cookbook we're going to use to demonstrate:
git clone https://github.com/linuxacademy/content-vagrant-provisioners-demo
All this cookbook does is install Node.js and NPM on our guest. If we needed to port more of our Chef configurations over — such as roles or data bags — we would follow a similar process, creating roles and data_bags directories in our project folder, then adding our related files in the same structure they would normally have when uploaded to the Chef Infra Server itself. Let's open up our Vagrantfile:
vim Vagrantfile
We need to create a method block for our Chef options; we'll be working in our existing app block:
app.vm.provision "chef_solo" do |chef|
end
We can adjust how the provisioner works with both Vagrant and Chef by reviewing the provisioner's default options, but as long as we do not deviate from the default directory structures, minimal configuration is necessary. Let's go ahead and add a recipe:
app.vm.provision "chef_solo" do |chef|
  chef.add_recipe "nodejs"
end
If we were to attempt to provision our host with this Vagrantfile, we would receive an error:
Chef Infra Client cannot execute without accepting the license
To combat this, we can use the arguments global option to accept the license:
app.vm.provision "chef_solo" do |chef|
  chef.add_recipe "nodejs"
  chef.arguments = "--chef-license accept"
end
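If we had ported roles or data bags over as described earlier, the shared Chef options could point the provisioner at them. A sketch; the directory names and the role are hypothetical:

```ruby
app.vm.provision "chef_solo" do |chef|
  chef.add_recipe "nodejs"
  chef.roles_path = "roles"            # hypothetical local roles directory
  chef.data_bags_path = "data_bags"    # hypothetical local data bags directory
  chef.add_role "app"                  # hypothetical role to apply
  chef.arguments = "--chef-license accept"
end
```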
We can now save and exit, then reprovision our guest:
vagrant up app
To check that provisioning was successful, access the guest and check that Node.js and NPM have been installed:
vagrant ssh app
which node
which npm
exit
Commit changes to git, and destroy the host:
git add .
git commit -m "Added Chef provisioning"
vagrant destroy app
Return to the master branch:
git checkout master
Wrap Up Between Chef and the full ChefDK, Chef provides a robust, full-featured option for configuration management and infrastructure automation. Paired with Vagrant, it can ensure your development environments are consistent across reloads and reprovisions.

Puppet

00:09:29

Lesson Description:

Puppet is a configuration management solution built around the idea of a single "master" Puppet Server that enforces end-state descriptions, called manifests, on a fleet of hosts. Manifests can be stored in modules to configure an entire application or service. Vagrant offers two different Puppet provisioners: Puppet Apply and Puppet Agent. Puppet Apply runs Puppet modules and manifests against the guest without a master Puppet Server, whereas Puppet Agent sets up a connection to a Puppet Server, which then applies modules and manifests based on its existing setup. As with previous lessons, we'll be using the one that does not request a master host: Puppet Apply. Create a new git branch:

git checkout -b puppet
And add directories for a manifest and module:
mkdir manifests
mkdir modules
Once we configure our Vagrantfile to work with Puppet, Vagrant will look for a default manifest at manifests/default.pp and use this to make changes to our guests. We also want to be able to use modules stored locally in our modules directory. Let's first pull down our desired module:
git clone https://github.com/linuxacademy/content-vagrant-provisioners-demo
And then create a default.pp that maps our nodejs module to the app guest:
vim manifests/default.pp

node app {
  include nodejs
}
Save and exit. We can now update our Vagrantfile. Unlike previous provisioners, the Puppet Apply and Puppet Agent provisioners do not share a common options library, so our setup here will be Puppet Apply-specific. Additionally, Vagrant does not auto-install Puppet on our guest, so our best option for now is to use the shell provisioner to install the puppet package, then use the Puppet Apply provisioner to do the rest. Let's open up the Vagrantfile and add a line that will install the puppet package on all our guests:
config.vm.provision "shell", inline: "sudo apt-get install puppet -y"
Next, we want to create a method block calling the puppet provisioner. The provisioner knows to use our default.pp file, so the only additional configuration we need is to specify our modules directory — if we were using Hiera or other related options, we would simply have to do the same, defining the appropriate file or directory in the configuration:
config.vm.provision "puppet" do |puppet|
  puppet.module_path = "modules"
end
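The defaults the provisioner relies on can also be stated explicitly, and Hiera can be wired in the same way. A sketch; the Hiera line is an assumption and only applies if a hiera.yaml exists in the project:

```ruby
config.vm.provision "puppet" do |puppet|
  puppet.manifests_path = "manifests"        # the default, shown explicitly
  puppet.manifest_file = "default.pp"        # the default, shown explicitly
  puppet.module_path = "modules"
  # puppet.hiera_config_path = "hiera.yaml"  # assumed file; only when using Hiera
end
```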
Save and exit the file. Let's now reprovision our app guest:
vagrant up app
Once finished, ensure the appropriate packages are installed on the guest, then commit the changes, return to the master, and destroy the guest:
vagrant ssh app
which node
which npm
exit
git add .
git commit -m "Added Puppet provisioning"
git checkout master
vagrant destroy app
Wrap Up Puppet is a configuration management and automation platform that leverages end-state configurations known as manifests — or collections of manifests called modules — to provision our hosts. Paired with Vagrant, it ensures our development environments are using the same code that prod does.

Salt

00:08:50

Lesson Description:

Salt is a configuration management, orchestration, and automation solution wherein users create end-state "formulas" that describe their desired system. These formulas can then be used in orchestration states, or with Salt's "beacon" and "reactor" event-driven automation tools. Vagrant offers only a single Salt provisioner, but it is capable of both running masterless Salt and using a Salt master. As before, let's begin by creating a new branch to work from:

git checkout -b salt
We also need to create a directory to store our formulas, pillar data, and configurations. Vagrant expects this to be a primary salt directory, with subdirectories for each system. Specifically, we want to add a root subdirectory that will store the formulas we normally keep in /srv/salt:
mkdir -p salt/roots
Since the Salt provisioner can run with or without a master, if we choose to run it masterless, we also need to provide an updated minion configuration, setting the minion to run masterless. The Salt provisioner will expect this to be in salt/minion, so let's place this here:
vim salt/minion
Specifically, we need to set the file_client to local, ensuring the minion will not attempt to connect to a master:
file_client: local
Save and exit. Let's now clone in our sample Salt formula, which will install Node.js and NPM:
git clone -b salt https://github.com/linuxacademy/content-vagrant-provisioners-demo.git
And add a top.sls file to assign our formula to our app guest:
vim salt/roots/top.sls

base:
  'app*':
    - nodejs
Now, let's update our Vagrantfile:
vim Vagrantfile
Unlike previous provisioners, we need to sync our salt/roots directory to /srv/salt on our guest. Let's add a synced_folder configuration to our file:
config.vm.synced_folder "salt/roots/", "/srv/salt/"
Now we can create our salt method block:
config.vm.provision "salt" do |salt|
end
To ensure Salt runs masterless, we need to set the masterless option. We also need to set run_highstate to true so our highstate is run upon provisioning:
config.vm.provision "salt" do |salt|
  salt.masterless = true
  salt.run_highstate = true
end
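A few more provisioner options can make the masterless setup explicit and the runs easier to follow. A sketch layered onto the block above:

```ruby
config.vm.provision "salt" do |salt|
  salt.masterless = true
  salt.minion_config = "salt/minion"  # state the minion config location explicitly
  salt.run_highstate = true
  salt.verbose = true                 # show full state output during provisioning
  salt.colorize = true                # keep Salt's colored output
end
```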
Otherwise, most of the configuration should be done by syncing the appropriate files and letting Salt do the work. Let's now save and exit, and launch our guest:
vagrant up app
Check that Node.js and NPM have been installed:
vagrant ssh app
which node
which npm
exit
Then commit the changes and destroy the host:
git add .
git commit -m "Added Salt provisioning"
vagrant destroy app
git checkout master
Wrap Up Salt is a robust configuration management and automation solution, and, paired with Vagrant, it can ensure our development environment is prepared from the first vagrant up.

Docker

00:08:59

Lesson Description:

Unlike the Docker provider, the Docker provisioner does not deploy containers on our host but rather configures Docker and manages containers on our guest — so if we need to develop not just our application but the larger server that supports it, we can. To demonstrate the Docker provisioner, we're going to work directly with our application, deploying it in a container and mapping the container port to our VM, still allowing us front-end access to the app from our localhost. Let's begin by creating the git branch for our work:

git checkout -b docker
And opening our Vagrantfile — we have no extra files to add:
vim Vagrantfile
At its most basic, the Docker provisioner can be used to simply install Docker on the guest, so if we added app.vm.provision "docker" to our app method block, Docker would automatically install on our guest. We want to work with some more advanced options, however, so let's create a method block:
app.vm.provision "docker" do |docker|
end
We then have the option to go on to specify an image or images — if we already know the one we want to use from the Docker repository — but since we want to build an image, we're instead going to look not at our images option but build_image, which will build an image based on a provided Dockerfile. Since our application has an existing Dockerfile in /var/www/app, let's add this to our configuration:
app.vm.provision "docker" do |docker|
  docker.build_image "/var/www/app"
end
We still aren't quite done yet, however, because while this builds our image, it does not deploy the container itself. To deploy the container, we need a name to reference the image by, so we use the args option alongside build_image to tag it:
app.vm.provision "docker" do |docker|
  docker.build_image "/var/www/app", args: "-t app"
end
Then use the app tag within the run parameter to launch the container. Since we want to ensure our container port is mapped to our guest, let's include additional args here, too:
app.vm.provision "docker" do |docker|
  docker.build_image "/var/www/app", args: "-t app"
  docker.run "app", args: "-p 8080:8080"
end
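For intuition, the two provisioner lines above correspond roughly to running the following on the guest. An illustrative sketch, not the provisioner's exact invocation:

```shell
docker build -t app /var/www/app            # build_image with args "-t app"
docker run -d -p 8080:8080 --name app app   # docker.run "app" with its args
```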
Save and exit the changes. Let's now test it out:
vagrant up app
To ensure everything is working, visit localhost:8081 in your browser — our application should load! When finished, commit the changes and destroy the environment, returning to the master branch:
git add .
git commit -m "Added Docker provisioning"
vagrant destroy app
git checkout master
Wrap Up When used with a typical virtual machine guest, the Docker provisioner allows us to deploy containers, build images, and otherwise manage the Docker setup on our guest server. This gives us the ability to take an infrastructure-level look at our applications and fleet, compared to the Docker provider, which only gives us access to the containers themselves, deployed on our workstation.

Hands-on Labs are real, live environments that put you in a real scenario to practice what you have learned, without any extra charge or account to manage.

00:30:00

Boxes

Box Overview

00:05:56

Lesson Description:

At the very start of this course, we touched on Vagrant boxes, learning that boxes are the package format for Vagrant environments, that we can find base boxes at app.vagrantup.com, and to pull in a box, we only have to use the vagrant box add command and provide the name of the box. However, as we work with our environments over time, you'll find boxes become outdated, or, perhaps, there are no boxes for your desired use case. In these cases, we'll need to take a deep look at our box management options. To list the boxes currently downloaded on the system, use:

vagrant box list
Depending on how much you've used Vagrant, you may find only a few different boxes here or you might discover multiples of the same box. For example, here is every version of the Ubuntu Bionic Beaver 18.04 box I have on my system:
ubuntu/bionic64                 (virtualbox, 20181206.0.0)
ubuntu/bionic64                 (virtualbox, 20190814.0.0)
ubuntu/bionic64                 (virtualbox, 20190828.0.0)
ubuntu/bionic64                 (virtualbox, 20190905.0.0)
ubuntu/xenial64                 (virtualbox, 20180215.0.0)
ubuntu/xenial64                 (virtualbox, 20180323.0.0)
ubuntu/xenial64                 (virtualbox, 20180509.0.0)
ubuntu/xenial64                 (virtualbox, 20180518.0.0)
ubuntu/xenial64                 (virtualbox, 20180525.0.0)
Depending on how long it's taken you to go through this course, we may also find our Ubuntu box also needs an update. We can check with:
vagrant box outdated
Then, should we need to update our box, we can use:
vagrant box update
But note we'll need to destroy and reprovision any guests we have using the updated box for the changes to take effect on our environments. Additionally, we can remove any unneeded, older versions of a box with:
vagrant box prune
We can also remove a box entirely with:
vagrant box remove <box>
The Box File

That said, there will be times when using boxes from the cloud is not enough. In these instances, we have two options: We can repackage an existing environment as a box — useful in instances wherein we need to prepackage some prerequisites, like the Puppet agent — or we can create a new base box from scratch. We'll cover the processes around both of these in future lessons, but for now let's consider what actually makes up a Vagrant box. A Vagrant box file — which is packaged as packagename.box — generally comprises three components: the box file, the metadata, and the box information. The box file is a compressed (tar, gzip, zip, etc.) file containing the base image specific to the desired provider. Vagrant does not handle this file directly once packaged — instead it is passed to the provider itself. The metadata is a JSON-formatted document that contains the name of the box, its description, the version, available providers, and any box URL(s). Finally, the optional box information contains output displayed during a vagrant box list -i, though you'll notice it is rarely used. Wrap Up As we work with Vagrant, we'll undoubtedly work with more and more boxes — even if it is just versions of the same box. Because of this, learning how to manage our boxes — or even how to repackage or create our own — can allow us greater control over our environments.

Packaging an Existing Machine

00:06:40

Lesson Description:

Most often, you'll be fine using an image from the Vagrant box repository, but sometimes you may need to install additional packages to make effective use of your Vagrant environment. In those instances, taking an existing Vagrant box and repackaging it after making some quick changes can be a time-saving alternative to creating a new base box from scratch. For this lesson, we're going to move away from our webapp environment and start fresh, but we're going to confront an issue from a former lesson. If you watched the Puppet lesson, then you know Puppet needs to be installed for the provisioner to work. While we got around this in the lesson by using the shell provisioner to install Puppet from the default repositories, maybe we want to use Puppet's repository instead. One option is to create a script to perform the install; another is prepackaging it in the box itself. Let's create a new environment directory to work from and move into it:

mkdir puppet-box
cd puppet-box
Then initialize an environment using the same ubuntu/bionic64 image we've been working with. A shortcut to generating this particular Vagrantfile is to specify the box in the vagrant init command itself:
vagrant init ubuntu/bionic64
Then we can just run a vagrant up:
vagrant up
And log in to the host via SSH:
vagrant ssh
Now, all we need to do to finish off our image is update the guest so the Puppet agent is installed. To do this, we first need to add the PuppetLabs Puppet 6 repository:
wget https://apt.puppetlabs.com/puppet6-release-bionic.deb
sudo dpkg -i puppet6-release-bionic.deb
sudo apt-get update
Then install the puppet-agent (note that the puppet package will use the default Ubuntu repositories):
sudo apt-get install puppet-agent
Log out of the guest:
exit
We can now package our existing VirtualBox virtual machine automatically:
vagrant package --output puppet.box
Should we desire to package this with a Vagrantfile or metadata information, we could use the --vagrantfile and --include options, respectively. We can now add our box:
vagrant box add puppet.box --name puppet
To test that it works, destroy the existing box, update the Vagrantfile, then redeploy the environment:
vagrant destroy
vim Vagrantfile

Vagrant.configure("2") do |config|
  config.vm.box = "puppet"
end

vagrant up
Check that Puppet is installed on the new guest:
vagrant ssh
puppet --version
Destroy the environment when finished:
exit
vagrant destroy
Wrap Up

When a base box is only missing a few tools or features, instead of creating a new one from scratch, it can be more efficient to simply alter and repackage an existing one via the vagrant package command.

Creating a Base Box

00:20:03

Lesson Description:

In instances where the base boxes provided in the Vagrant box repository are not acceptable, we can always create our own custom base boxes. This process will be different for every provider, but there are some general commonalities across all options. For this lesson, we'll continue to focus on the VirtualBox provider as we create a Debian base box.

To begin, download the amd64 option for the Debian netinstall. We now want to access VirtualBox directly. First, open the console, then click the New button. Give your virtual machine a name; for this, I chose DebianBox. Set the Type to Linux and the Version to Debian (64-bit), if not automatically selected. Continue.

Most providers, including VirtualBox, allow users to set the memory size in the Vagrantfile. Because of this, it's best to start small so users experience minimal impact when they use the default box. 512 MB is the recommended setting, but for Debian we'll need to set this to 1024 MB. Continue. We also want to Create a virtual hard disk now. Click Create, then select VDI (VirtualBox Disk Image) and Continue. We want to ensure we're using a dynamically-allocated virtual hard disk file that will grow in size as needed. Be sure the maximum size is enough for the user to really work with the host; the default of 8 GB is a good start. Create.

We now want to add our Debian image to a storage mount so we can actually boot and install Debian. Select the Settings button, then click Storage. Select the empty blue CD and mount the Debian image to the optical drive. While we're here, we also want to go to Network and ensure our first adapter is set to use NAT. Additionally, we'll want to make note of our MAC address for later (found under Advanced). Click OK to close the settings, then start the VM.

We'll now walk through the Debian installation process. For most options, we'll use the defaults, but start by selecting the (non-graphical) Install. Select your chosen language, location, and keyboard settings, then wait for the initial loading process to complete. We will eventually be prompted to set a hostname; I left mine as the debian default. I also left the domain name blank.

Next, we need to set the root password. vagrant is the suggested option, since it's easy for Vagrant users to guess. Set the password, then continue. We are then prompted to create a new user. Set the user's name and username to vagrant, and the password to the same. Select your chosen time zone, then follow along with the default settings for partitioning the disk using the Guided - use entire disk option. After a time, we will be asked if we want to add another disk. Select No, then choose your package manager mirror, leaving the HTTP proxy blank. We then need to choose whether we want to participate in the distribution survey. Since this install is intended as a Vagrant box, I personally select No.

Finally, we have to select our software. Unselect any desktop environments and the print server, and select the SSH server and standard system utilities. Wait for the installation to complete, and install GRUB on the only available disk when prompted. Once finished, log in as root. Let's first install some needed packages:

apt-get install linux-headers-$(uname -r) build-essential dkms
Then, add the vagrant user to the sudo group:
usermod -aG sudo vagrant
And enable passwordless sudo via visudo:
%sudo ALL=(ALL:ALL) NOPASSWD: ALL
We now want to switch to the vagrant user and add the needed public key:
su - vagrant
mkdir .ssh
wget https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub
mv vagrant.pub .ssh/authorized_keys
sudo chmod 700 .ssh
sudo chmod 600 .ssh/authorized_keys
With this done, we can now switch back to our host's command line and create our box metadata. For our general metadata.json file, all we need to do is define the provider:
{
  "provider": "virtualbox"
}
Finally, since we're using VirtualBox, we need to create a Vagrantfile to package with the box that contains the MAC address for the VM and syncs the environment directory to /vagrant on the guest:
vim Vagrantfile

Vagrant.configure("2") do |config|
  config.vm.base_mac = "080027A75737"
  config.vm.synced_folder ".", "/vagrant"
end
Save and exit. We can then package the VM, metadata, and Vagrantfile with the vagrant package command. Note that we need to specify the name of the VM in VirtualBox when we use the --base option to generate a base box:
vagrant package --output debian.box --base DebianBox --include metadata.json --vagrantfile Vagrantfile
A package named debian.box has now been added to our working directory. To add it to Vagrant, all we have to do is run:
vagrant box add debian.box --name Debian-Test-Box
Now let's test out our new box by creating a new Vagrant environment with it:
mkdir debian-box-test
cd debian-box-test
vagrant init Debian-Test-Box
vagrant up
And logging in to our new guest via SSH:
vagrant ssh
Go ahead and run some other arbitrary tasks to see that it's running as expected. We should be able to install packages, manage services, and otherwise use it as though it were no different from any other virtual machine.

Wrap Up

When we can't find an existing base box that fits our needs, there's no choice but to make our own! Although the creation of Vagrant boxes is provider-specific, there are a number of basic requirements every box needs to work out of the box with Vagrant.

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned, without any extra charge or account to manage.

00:15:00

Share

Vagrant Share Overview

00:07:03

Lesson Description:

In the networking lesson, we discussed using the public networking option to allow users on a greater, wireless network to access our guests; Vagrant Share goes further, allowing users to share their environment with the greater world. Vagrant Share leverages ngrok, which exposes private networks to the public internet through the use of secure tunnels. ngrok is an enterprise solution, but free users can have one active tunnel at a time, which should be more than enough for personal use. We do need an ngrok account, so create one now, then install ngrok by unzipping the downloaded file and moving the binary to the appropriate location:

mv ngrok /usr/local/bin/
Add the authorization token provided by ngrok on their website:
ngrok authtoken <token>
We also need to install the vagrant-share plugin:
vagrant plugin install vagrant-share
And remove vbguest, which currently conflicts with Vagrant Share:
vagrant plugin uninstall vagrant-vbguest
But before we use Vagrant Share, we should consider our use case: Vagrant Share provides HTTP, SSH, and "general" sharing, each with its own set of benefits and drawbacks.

HTTP sharing allows users to access your host via a URL generated by ngrok, and anyone using this URL can access the HTTP endpoint(s) of your guest so long as there is a designated forwarded port. Vagrant is not required by anyone accessing your environment via this URL. SSH sharing allows SSH access to the actual guest through the vagrant connect --ssh command, using a public key generated during the share process. If we want to provide access to an exposed port that is neither HTTP(S) nor SSH, we can use "general" sharing via the vagrant connect command itself; this allows users to access your guest as though it were another computer on a local area network.

Vagrant Share comes with its own set of concerns, however. While ngrok provides end-to-end TLS for non-HTTP connections and uses encrypted SSH keys, you are still providing global access to a guest that is most likely using the default Vagrant keys. As such, when using Vagrant Share, security through obscurity is considered the best practice; in other words, only provide the links or connection information to those who absolutely need it.

Finally, this section requires a bit of a caveat: as of the most recent release of ngrok, Vagrant Share is partially broken. We will still be able to access our HTTP share, while the SSH and "general" sharing lessons provide a general overview of how Vagrant Share should work in an ideal (i.e., fixed) setting. These lessons will be updated and a related hands-on lab added once these issues are resolved. The GitHub issue for this bug can be found here.

Wrap Up

While sharing over our greater public network can be effective when working among local teams, there may be times when you need to share with the greater public. For this, Vagrant Share leverages ngrok to create a public URL from which end users can connect.

References

Vagrant Share Issue #10799

HTTP Sharing

00:03:28

Lesson Description:

Since HTTP sharing implies we're sharing a website endpoint, let's go ahead and merge our "Docker" provisioner branch into our working master:

git merge docker
Now let's reprovision our app guest:
vagrant reload app --provision
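As a reminder from the networking lessons, HTTP sharing can only detect an endpoint when the Vagrantfile forwards a port from the guest. A minimal sketch of such a configuration follows; the box and port values are assumptions for illustration, not the course's exact webapp setup:

```ruby
Vagrant.configure("2") do |config|
  config.vm.define "app" do |app|
    app.vm.box = "ubuntu/bionic64"
    # vagrant share looks for a forwarded port like this one
    # to locate the guest's HTTP endpoint
    app.vm.network "forwarded_port", guest: 80, host: 8080
  end
end
```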
From here, sharing our HTTP endpoint is simple — we only have to run:
vagrant share app
However, as mentioned in the previous lesson, you'll notice the output here hangs. To access the URL for our host, we'll have to grab it from the ngrok console itself. Once we have the URL, however, we can navigate to it and see our application! Note that we have access only to the front end of the application, so while this could be a console of some kind, it won't be the actual underlying host.

References

Vagrant Share Issue #10799

SSH Sharing

00:02:08

Lesson Description:

Less secure than HTTP sharing, SSH sharing lets public users access our Vagrant guest via an automatically added public key whose private key is shared with users we wish to work with. This key is also password protected with a password set by the environment's creator. To share an environment with SSH, all we have to do is run:

vagrant share --ssh
However, note that the output is currently broken. Were this working, we would be granted a list of keywords we could then use to connect via the vagrant connect command:
vagrant connect --ssh word1_word2:word3_word4
We would then be able to log in using the private key and password provided when we generated the share.

References

Vagrant Share Issue #10799

Vagrant Connect

00:02:29

Lesson Description:

Should we need to share a port outside of the SSH or HTTP protocols, we can use the vagrant share --full option. This exposes all ports to users who connect via the vagrant connect command. Unlike when connecting with SSH, however, vagrant connect won't drop you directly into the guest. Instead, an IP address is provided that we can then use to connect to the host on any port, using whatever method we require. Note that since vagrant share is not currently outputting the connect information, this functionality of Vagrant Share is broken at this time.

References

Vagrant Share Issue #10799

Conclusion

Advanced Vagrant

Windows Considerations

00:03:12

Lesson Description:

Windows users, whether on the host or the guest side, may find Vagrant slightly more difficult to wrangle than *nix users do. If you plan on using Windows, beyond the considerations mentioned in the video, you may also wish to familiarize yourself with the following Vagrantfile and Vagrant CLI options. Users of Windows guests may need to adjust connection information as needed. For this, we have both the config.winrm and config.winssh settings, which configure access to the guest using either WinRM or the native Windows OpenSSH client, respectively. If using a Windows guest, there are two additional CLI commands you may want to consider: vagrant powershell, which opens a PowerShell prompt in which we can run PowerShell commands; and vagrant rdp, which starts an RDP client so we can connect to the guest via a remote desktop session. Windows users will also want to pay close attention to any providers and plugins they use, in case any configuration settings vary for the operating system.
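As a minimal sketch of the WinRM settings mentioned above, a Vagrantfile for a Windows guest might look like the following; the box name and credentials are assumptions for illustration, not values from this course:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "example/windows-server-2019"  # hypothetical Windows box
  config.vm.communicator = "winrm"               # use WinRM instead of SSH

  # Credentials Vagrant uses when connecting over WinRM
  config.winrm.username = "vagrant"
  config.winrm.password = "vagrant"
end
```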

Tools and Scripting

00:03:59

Lesson Description:

In this course, we covered the vast majority of Vagrant's default functionality, but that doesn't mean there isn't still more of Vagrant to explore. If after taking this course you're still itching to uncover more, there are a few places you can start. Vagrant Push lets users upload their application to a remote location; FTP and Heroku are among the default options. We also have the option to add our own machine triggers: additional functionality to be run before or after certain vagrant commands. For example, if we want to ensure we preserve needed logs and results before a vagrant destroy, we can add a trigger that backs our files up to the localhost before the destroy command runs, without having to actually run anything but vagrant destroy. Still not enough? If you know Ruby, or are aching to learn more, you can write your own Vagrant plugins, written in Ruby and packaged with RubyGems. HashiCorp provides a pretty good development guide to get you started, covering a number of use cases wherein you'll want to write additional plugins. Be sure to view the additional topics in the left menu dropdown!
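A destroy trigger like the one described above might be sketched in a Vagrantfile as follows; the box name and log paths are assumptions for illustration, and triggers require Vagrant 2.1.0 or later:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  # Runs before `vagrant destroy` tears the guest down, copying
  # logs into the synced /vagrant folder so they survive on the host
  config.trigger.before :destroy do |trigger|
    trigger.warn = "Backing up guest logs to the host..."
    trigger.run_remote = { inline: "cp -r /var/log/myapp /vagrant/log-backup" }
  end
end
```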

Final Steps

What's Next?

00:02:46

Lesson Description:

Finished with Vagrant but not with learning? You may want to check out some of our courses on either containers or virtualization. While Vagrant is great, many people now use container platforms, such as Docker, to create development environments. Alternatively, now that you've gotten a taste of virtualization, why not explore it at a deeper level? Either way, here are some course suggestions to get you started:

Docker Deep Dive
Learn Docker By Doing
LXC/LXD Deep Dive
Red Hat Certified Specialist in Virtualization Prep Course
LPIC-3 Exam 304 - Virtualization and High Availability