Scenario Based Docker Security
About the Author
This video introduces the course author, who will be your guide through all the lectures.
About the Course
This video is important because it goes over the knowledge and skills you can gain by completing the course. This Docker course focuses on the security aspects of Docker, and everything we do will be within that context. Do not skip this video; use it to be sure this is the right course for you.
How to Get Help
This video shows how to get help from Linux Academy through its **Community** section. The **Interact with Students**, **Slack**, and **Instructor/Site Support** pages enable interaction with the instructor and other users who can help with any problems or questions.
Course Prerequisites
This lecture covers the prerequisites for getting the most out of this course. Students should have basic knowledge of Python and Bash scripting, and know how to use the Vim text editor, the Linux file system, and the Linux command line. Basic knowledge of computer networks is also needed.
Text Editor Vim Basics (Optional)
This video covers a few basics of the Vim text editor. Vim stands for Vi IMproved; it is an improved version of the older Vi editor and is used for all kinds of Linux work through the shell. Vim appears in several videos throughout this course, so it is worth knowing how to navigate it. We go over the necessary basic commands: first we install Vim on our machines, then see how to open or create a file, enter insert mode, write to a file, save it, and quit the editor.
Job Market (Optional)
In this video, we provide an overview of the kind of jobs to look out for with the knowledge gained from this course. We talk about the salary ranges for these jobs and review some websites to use to look for these positions.
Installing and Configuring Docker
In this video, we perform an initial Docker setup. First we install Docker, start it, and check its status. To verify that it runs properly, we run the Docker hello-world image; this makes Docker perform a series of steps that confirm everything is working as it should (e.g. contacting the Docker daemon, pulling the image, etc.). If the installation is working correctly, we should see a “Hello from Docker!” message on our screen. After that, we configure Docker so that containers can be started by a regular user rather than with `root` or `sudo`. We open the `/etc/sysconfig/docker` file, where we can make modifications that change the way the Docker daemon runs, and add the `dockerroot` group there. When everything is done, we need to restart Docker and log back in.
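On a RHEL/CentOS-style system, the daemon option change described above might look like the following sketch; the exact option line varies by distribution, so treat the values as assumptions to verify locally:

```ini
# /etc/sysconfig/docker -- options passed to the Docker daemon at startup.
# -G sets the group allowed to talk to the Docker socket, so members of
# dockerroot can manage containers without sudo (illustrative values).
OPTIONS='--selinux-enabled -G dockerroot'
```

After saving, restart the daemon with `sudo systemctl restart docker` and log out and back in so the new group membership takes effect.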
Resource Restrictions Part 1 - General Talk
The topic of this video is how we can better control the allocation of resources to containers. The usage of system resources such as CPU, GPU, disk, and RAM often needs to be limited for the particular workloads that consume them (e.g. web servers, shared storage, and databases). The Linux kernel provides cgroups, which help impose these resource limitations. When a limit is put on a cgroup, all the processes in that cgroup can together use only the total resources allocated to the group. The best practice is to set limits that cover the expected use plus roughly 30 percent of headroom.
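As a quick illustration of the rule of thumb above (expected use plus roughly 30 percent of headroom), the arithmetic looks like this; the function name and example values are hypothetical:

```python
# Sketch: derive a cgroup limit from an observed peak plus ~30% headroom.
def limit_with_headroom(expected_peak, headroom=0.30):
    """Return a resource limit covering expected use plus headroom."""
    return round(expected_peak * (1 + headroom))

# e.g. a web server peaking at 400 MB of RAM would get a 520 MB limit
print(limit_with_headroom(400))  # → 520
```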
Resource Restrictions Part 2 - Resource Usage Controls
In this video, we see how our limits within Docker actually work. We first run a Docker container that has Python in it, because Python makes an easy stress test. In the Python shell, we write code that maxes out CPU usage with threads and an infinite loop. We limit the CPU usage to 20 percent and the memory to 500 MB. We also install `htop`, which lets us easily monitor processes and resources on the system, and use it to watch a single process and its CPU usage. When we run the test, we see that the process's threads together use at most 20 percent of the overall CPU, not 20 percent each.
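The stress test in the video uses an infinite loop; here is a bounded sketch of the same idea that terminates on its own (thread count and iteration count are arbitrary examples). Run inside a container started with something along the lines of `docker run -it --cpus="0.2" --memory="500m" python:3` (flag names may differ by Docker version), `htop` on the host would show these threads collectively capped at 20 percent of a CPU:

```python
# Bounded variant of the CPU stress test: each thread spins a fixed
# number of iterations instead of looping forever.
import threading

def burn(iterations=1_000_000):
    x = 0
    for _ in range(iterations):
        x += 1  # busy work to load one core
    return x

threads = [threading.Thread(target=burn) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all stress threads finished")
```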
Resource Restrictions Part 3 - systemd and cgroups
In this video, we use cgroups to impose resource limitations. We can impose one set of limits for any containers of our choosing, and every container that is part of that group will adhere to those limits. The total resource consumption of all the containers administered by the group cannot exceed the limits imposed on the group itself. We open up a file with a name ending in `.slice` and insert our custom configuration, which sets a memory limit and a CPU quota. After that, we reload systemd and run our Docker container, specifying the cgroup for the container. Then we run the same container and tests from the previous video.
### LINK: http://man7.org/linux/man-pages/man5/systemd.resource-control.5.html
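A slice unit along these lines matches the limits used in the demos; the file name and values here are assumptions, and the directives come from the `systemd.resource-control` man page linked in this lecture:

```ini
# /etc/systemd/system/docker-limits.slice -- hypothetical name and values.
[Unit]
Description=Resource limits shared by a group of Docker containers
Before=slices.target

[Slice]
# Every process placed in this slice shares these caps in total.
MemoryLimit=500M
CPUQuota=20%
```

After `sudo systemctl daemon-reload`, a container can be placed in the slice with something like `docker run --cgroup-parent=docker-limits.slice ...`.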
Docker and seccomp Part 1 - seccomp Default Profile, syscalls
In these next few videos, we take a look at how one of Docker's security mechanisms, `seccomp`, works. `seccomp` is a kernel mechanism Docker can use to achieve additional security by filtering system calls. The first thing that needs to be done is to establish whether the kernel where we are running Docker supports `seccomp`. Next, we analyze the `docker.service` file, where we will find the `seccomp` profile; this file can be edited if needed. We talk about its contents, such as the default action, the architecture map, and the syscall rules. Lastly, we take a look at the man pages for `seccomp`.
Docker and seccomp Part 2 - seccomp custom profile, syscalls, strace
In this video, we write our own custom `seccomp` profile and see how it works, what problems we can encounter when writing it, and how to overcome them when we face them. The first thing we do is open a file inside a folder that has read and write permissions for the user running the Docker container. We call the file `my_seccomp_profile.json` and define the default action, the architectures, and the system calls. Once that is defined, we check whether everything works properly or something needs to change. We also talk in more depth about system calls, including which system calls are made when and why. We will need to figure out which system call is actually used on Debian, and we will do this using `strace`.
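A minimal custom profile in the spirit of this lecture might look like the sketch below: allow everything by default, but return an error for the `chmod` system call. The choice of blocked call is illustrative, and note that older Docker releases use a `name` field per rule while newer ones use a `names` list; the next videos show why blocking `chmod` alone may not be enough:

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"],
  "syscalls": [
    { "name": "chmod", "action": "SCMP_ACT_ERRNO", "args": [] }
  ]
}
```

The profile is applied at container start with `docker run --security-opt seccomp=my_seccomp_profile.json ...`.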
Docker and seccomp Part 3 - seccomp custom profile, syscalls, strace
In the last video, we saw that the `chmod` command on Debian does not actually issue the `chmod` system call, but something else. In this video, our goal is to figure out which system call it is making so we can block it. This time when we run Docker, we run it with `seccomp` unconfined, since we will be using `strace` to trace all the calls.
Securing Software Sources Part 1 - SSL Certificate Certbot Letsencrypt
In this video, we deal with one important aspect of running containers: securing the software and image sources they are built from. We have to be sure these sources are secure and that no tampering has occurred. We talk about private repositories and why we should opt for a private repo rather than just pulling an image from the Docker repository. We create and configure a private repository so that communication with it does not travel over an unsecured channel, and we go through the process of creating an SSL certificate so that the traffic between us and the software source is encrypted. To do all this, we use Certbot and Let's Encrypt.
Securing Software Sources Part 2 - SSL Certificate Certbot Letsencrypt Docker Registry
Continuing where we left off, we finish creating our certificate. We create our own registry, called `docker-registry`, create a user and password for it, and then pull an image from Docker Hub into our local registry. The registry stores all the images that we either make from scratch or pull from a repository, so that we can then run them in containers. When everything is set, all we need to do is run Docker. However, before we do, the command itself needs some elaboration.
Securing Software Sources Part 3 - Registry and Letsencrypt
Here we explain the complex Docker `run` command presented in the last video before we actually run the container. It has a lot of elements we can set and we will go through it step-by-step to see what each part of it represents. When everything is explained, we will run the command. After we run it, it's important to additionally check if it's actually running. This is because the more elements there are, the higher chance some error can occur. We will use `curl` to access the container and test things out.
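Commands of the kind being unpacked here generally follow the shape below. Every path, port, and name in this sketch is an illustrative assumption rather than the exact command from the video, and it only runs on a host with a working Docker daemon:

```shell
# Run a private registry with basic auth and TLS (values are examples).
docker run -d --name docker-registry --restart=always \
  -p 443:443 \
  -v /opt/registry/auth:/auth \
  -v /etc/letsencrypt:/certs \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/live/example.com/fullchain.pem \
  -e REGISTRY_HTTP_TLS_KEY=/certs/live/example.com/privkey.pem \
  registry:2
```

A check such as `curl -u user:password https://example.com/v2/_catalog` then confirms the registry answers over TLS.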
Securing Software Sources Part 4 - Registry and Letsencrypt
Here we create our own private image from scratch and push it to our registry. The image is intended to be a web server, so we create a file with which we can later test whether the server serves content. We build our image from a base image onto which we add our own configuration. When all is set, we build the image and check that everything works properly.
### LINK: https://github.com/CentOS/sig-cloud-instance-images/tree/d9560a37644faaeac50162582f82a5ee920dc37e/docker
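A Dockerfile for such a web-server image could be sketched as follows; the base image tag, package choice, and file names are assumptions, not the exact files from the video:

```dockerfile
# Hypothetical web-server image: CentOS base plus Apache plus a test file.
FROM centos:7
RUN yum -y install httpd && yum clean all
# The test file created earlier, served to verify the container works.
COPY index.html /var/www/html/index.html
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
```

Built with `docker build -t my-web-server .`, the image can then be tagged for the private registry and pushed there.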
Securing Software Sources Part 5 - Custom Docker Image from Scratch
Now that the image is created, we can start a container from it or push it to our registry and pull it from there. In this video, we test this out: we tag the image, log in to our registry, and push the image to it. To summarize these last several videos: we created a private place to store whatever we build, set up basic authentication so that only we can pull from it, and added an extra layer of security with SSL to prevent tampering with data in transit.
Securing Software Sources Part 6 - MySQL Database Custom Image
Here we begin creating a MySQL image using MariaDB. First, we once again walk through a more thorough explanation of a Docker container `run` command and then double-check that the container is running properly. Next, we install and configure MariaDB and decide where we want our database files to be stored.
Securing Software Sources Part 7 - MySQL Database Custom Image
Continuing from where we left off in the last video, what's left is to create our image, push it to the repository, pull it, make some changes, push it back, and pull it again to verify our changes have been preserved. This all needs to be done in a secure way to minimize the risk of tampering.
Securing Software Sources Part 8 - MySQL Database Custom Image
Continuing on the topic of databases, we go over some common configuration routines in this video. We configure certain permissions and SELinux security features of our container. Docker works very well with SELinux and the security mechanisms existing on the underlying operating system.
Docker Bench Part 1 - Initial Scan and Initial Results
In this video and the ones that follow, we take a closer look at Docker Bench, a security tool that is also used for optimization. Running a scan produces suggestions about what should or shouldn't be fixed. In this video, we show the standard way of running Docker Bench: we create a Docker Bench file containing code that lets Docker automatically run a series of checks and scans. When we run this, we get a series of different messages; some are passes and some are warnings. We go over these outputs and analyze what they mean, what is important, and what isn't.
Docker Bench Part 2 - Adjust Logging
Since the scans from the previous video generated a lot of information and warnings, we use this video to address those messages and fix what needs fixing. We need to ensure that only trusted users are allowed to control the Docker daemon, because Docker Bench does not know who our trusted users are. We also need to ensure auditing is configured for a series of specific Docker files and directories, such as `docker.service` and `docker.socket`.
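The auditing warnings are typically resolved with audit rules for the flagged files; a sketch such as the following could live in `/etc/audit/rules.d/docker.rules` (exact paths depend on your distribution and on which warnings the scan produced):

```text
# Watch Docker binaries, data, and unit files for writes and attribute changes.
-w /usr/bin/docker -p rwxa -k docker
-w /var/lib/docker -p rwxa -k docker
-w /etc/docker -p rwxa -k docker
-w /usr/lib/systemd/system/docker.service -p rwxa -k docker
-w /usr/lib/systemd/system/docker.socket -p rwxa -k docker
```

Reload the rules (e.g. with `sudo augenrules --load` or by restarting `auditd`) and re-run the scan to confirm the warnings clear.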
Docker Bench Part 3 - Docker Daemon
In this video, we perform tasks tied to the Docker daemon configuration. We need to ensure traffic is restricted between containers on the default bridge, that the logging level is set to `info`, and that Docker is allowed to make changes to iptables. We also need to ensure that insecure registries and the aufs storage driver are not used, that TLS authentication for the Docker daemon is configured, and that the default `ulimit` is configured properly. Finally, we need to enable user namespace support and ensure the following:
* The default cgroup usage has been confirmed.
* The base device size is not changed until needed.
* Authorization for Docker client commands is enabled and centralized.
* Remote logging is configured.
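Several of these daemon-level checks map onto settings in `/etc/docker/daemon.json`; a sketch covering the ones named above (values are illustrative):

```json
{
  "icc": false,
  "log-level": "info",
  "iptables": true,
  "default-ulimits": {
    "nofile": { "Name": "nofile", "Hard": 64000, "Soft": 64000 }
  }
}
```

Restart the Docker daemon after editing so the settings take effect.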
Docker Bench Part 4 - User Remap
Here we first enable Docker Content Trust: the ability of Docker to verify the integrity and the publisher of images, adding another layer of protection against tampering. Our main goal in this video is to perform a user remap.
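Both features mentioned here are standard Docker switches. Content trust is enabled per shell with `export DOCKER_CONTENT_TRUST=1`, while the user remap is configured in `/etc/docker/daemon.json`; with the value `default`, Docker creates and uses a `dockremap` user:

```json
{
  "userns-remap": "default"
}
```

The daemon must be restarted afterwards; note that existing images and containers become invisible to the remapped daemon, which is worth knowing before flipping the switch.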
Docker Bench Part 5 - Docker Bench Second Pass
We have seen how to run a container as a regular user, but it still sometimes runs as `root`. This can pose a security issue in some situations, so we use this video to see how to fix it. We run Docker Bench again, this time with some different parameters, and then analyze the output it provides.
Deploy and Secure Application Back End
Configure a Docker Container to Run an Apache Web Server to Serve as an Application Back End - API Part 1 - Initial Application Test
In this video and the rest of the videos in this section, we configure a Docker container to run an Apache web server as an application back end. We build a custom image to run a Python application (more specifically, a Flask application). This application exposes a fully functional API and connects to a database, for which we also construct a container, so we will have two Docker containers communicating with each other. We also set up an Apache web server as a gateway to the outside world, so to speak: it acts as a reverse proxy that allows connections from outside to reach our containers. In this video, we install the prerequisites, go over the code of our Flask application, and run the application to see that it is functional.
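The course application uses Flask, which may not be installed everywhere; as a dependency-free sketch of the same idea, here is a tiny JSON API built only on the Python standard library (the route and payload are hypothetical, not taken from the course code):

```python
# Stdlib-only sketch of a small JSON API of the kind the Flask app exposes.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ApiHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/api/status"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)
print(payload)  # → {'status': 'ok'}
server.shutdown()
```

In the course setup, an application like this runs inside its own container, with Apache on the host proxying outside requests to it.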
Configure a Docker Container to Run an Apache Web Server to Serve as an Application Back End - API Part 2 - Database Setup
Now that we know our application works, we use this video to set up the database. We first create a Docker container for it, then create the database itself and add a table to it. When everything is set, it is important to check that our database and application can communicate; before that, though, we need to build a container for the application too, since it is the two containers that will be communicating with each other.
Configure a Docker Container to Run an Apache Web Server to Serve as an Application Back End - API Part 3 - Python App Custom Image
In this video, we proceed to create a container for our application. First, we download a CentOS 7 image from a GitHub repository. We then go to our Dockerfile and define all the elements our application image needs to have, and build the image. The build process will take a while because it performs many installation and update steps.
Configure a Docker Container to Run an Apache Web Server to Serve as an Application Back End - API Part 4 - Connecting the Application to the Database
Now that we have created our application image, we need to run it and confirm it works. Then we need to run the database image in the background. The goal of this video is to successfully connect the application to the database and confirm it all works properly. We will encounter some problems along the way, so we will also need to do some debugging.
Configure a Docker Container to Run an Apache Web Server to Serve as an Application Back End - API Part 5 - Apache Web Server Reverse Proxy
Now that everything is connected and the back end is successfully communicating with the database, we need a firewall and a web server, installed on the host rather than individually in each container. The first step is to install firewalld. Then we set up two gateways: one is firewalld and the other Apache. Through these two gateways, we dictate which services are accessible from the outside through Apache and which ports are allowed to accept outside connections. We then write a configuration file that instructs Apache to serve our content.
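The reverse-proxy configuration described above could be sketched like this; the file location, domain, and back-end port are assumptions, and `mod_proxy` must be enabled:

```apache
# Hypothetical /etc/httpd/conf.d/api.conf -- Apache as a reverse proxy
# in front of the application container (example values throughout).
<VirtualHost *:80>
    ServerName example.com

    # Forward incoming requests to the Flask container listening on 5000.
    ProxyPreserveHost On
    ProxyPass        "/" "http://127.0.0.1:5000/"
    ProxyPassReverse "/" "http://127.0.0.1:5000/"
</VirtualHost>
```

On the firewalld side, only the web ports are opened, e.g. `sudo firewall-cmd --permanent --add-service=http --add-service=https` followed by `sudo firewall-cmd --reload`.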
Configure a Docker Container to Run an Apache Web Server to Serve as an Application Back End - API Part 6 - SSL, Reverse Proxy, Summary
In this video, we use Certbot to create an SSL certificate for our domain. Certbot automatically does all that's needed, saving us time and writing. First, though, we make some slight modifications to our `httpd.conf` file. Because of the firewall block we implemented, none of the services can be accessed directly from outside; they are reachable only through the Apache virtual host.
Conclusion
Here we go over everything we've done in the course!