Docker Certified Associate (DCA)

Course

Intro Video

Will Boyd

DevOps Team Lead in Content

Length

19:19:57

Difficulty

Intermediate

Videos

57

Hands-on Labs

12

Quizzes/Exams

1

Course Details

Docker is an extremely powerful tool for running and managing containers. Docker is currently the industry-leading container runtime platform and offers a vast set of features for container management and orchestration.

This course serves as preparation for learners who seek to pass the Docker Certified Associate certification exam offered by Docker. The certification exam aims to validate one's skills as a Docker practitioner. In this course, we will dissect both the concepts and objectives needed to use Docker effectively. Learners will also discover how to leverage the core features of Docker Community Edition (CE), as well as the additional features offered through Docker Enterprise Edition (EE).

Syllabus

Course Introduction

Introduction

Course Introduction

00:05:27

Lesson Description:

The Docker Certified Associate exam covers a wide range of Docker-related topics. In this video, I will introduce you to the structure of this course and how it will help prepare you to earn your Docker Certified Associate certification.

About the Training Architect

00:00:27

Lesson Description:

This video introduces William Boyd, the author of this course!

Course Features and Tools

00:03:06

Lesson Description:

Linux Academy offers more than just videos. In this video, I will briefly introduce you to a few of the tools and features offered by Linux Academy that can help you in your journey as you proceed through the course so that you can take advantage of everything this course has to offer.

Core Docker Concepts

Docker Community Edition Installation and Configuration

Introduction to Docker Community Edition

00:02:11

Lesson Description:

In this section, we will be discussing installing and configuring Docker Community Edition (CE). This lesson covers the basics of what Docker CE is and gives a general idea of some of its features. It will also give some background knowledge to help prepare for the following lessons, where we will have the opportunity to work with Docker CE.

Relevant Documentation:
https://docs.docker.com/install

Installing Docker on CentOS

00:07:05

Lesson Description:

The process of installing Docker CE is fairly straightforward, but it can differ slightly depending on the environment. In this lesson, we'll explore how to install and configure Docker CE in a CentOS environment. We'll also go through the process of granting a user permission to execute docker commands. This lesson will provide an understanding of the steps necessary to install Docker on a CentOS machine.

Relevant Documentation:
https://docs.docker.com/install/linux/docker-ce/centos/

Lesson Reference:

Install required packages:

sudo yum install -y device-mapper-persistent-data lvm2
Add the Docker CE repo:
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
Install the Docker CE packages and containerd.io:
sudo yum install -y docker-ce-18.09.5 docker-ce-cli-18.09.5 containerd.io
Start and enable the Docker service:
sudo systemctl start docker
sudo systemctl enable docker
Add cloud_user to the docker group, giving the user permission to run docker commands:
sudo usermod -a -G docker cloud_user
Log out and back in. Test the installation by running a simple container:
docker run hello-world

Installing Docker on Ubuntu

00:05:55

Lesson Description:

While the process of installing Docker CE on Ubuntu is similar to the installation process for CentOS, there are some differences. In addition, a Docker Certified Associate is required to be able to install Docker in a variety of environments. In this lesson, we will install Docker CE in an Ubuntu environment.

Relevant Documentation:
https://docs.docker.com/install/linux/docker-ce/ubuntu/

Lesson Reference:

Install required packages:

sudo apt-get update

sudo apt-get -y install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common
Add the Docker GPG key and repo:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
Install Docker CE packages:
sudo apt-get update

sudo apt-get install -y docker-ce=5:18.09.5~3-0~ubuntu-bionic docker-ce-cli=5:18.09.5~3-0~ubuntu-bionic containerd.io
Give cloud_user permission to run docker commands:
sudo usermod -a -G docker cloud_user
Log out and back in. Test the installation by running a simple container:
docker run hello-world

Selecting a Storage Driver

00:07:29

Lesson Description:

The variety of environments and use cases in which Docker can be used means that there are a variety of storage needs. Container storage can be implemented in multiple ways through the use of various storage drivers, which provide a pluggable framework for using different kinds of container storage. In this lesson, we will discuss what storage drivers are and identify the most widely used ones. We will also demonstrate how to explicitly set the storage driver for a system.

Relevant Documentation:
https://docs.docker.com/storage/storagedriver/select-storage-driver/

Lesson Reference:

This lesson was performed on a CentOS 7 server running Docker CE.

Get the current storage driver:

docker info
Set the storage driver explicitly by providing a flag to the Docker daemon:
sudo vi /usr/lib/systemd/system/docker.service
Edit the ExecStart line, adding the --storage-driver devicemapper flag:
ExecStart=/usr/bin/dockerd --storage-driver devicemapper ...
After any edits to the unit file, reload Systemd and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker
We can also set the storage driver explicitly using the daemon configuration file. This is the method that Docker recommends. Note that we cannot do this and pass the --storage-driver flag to the daemon at the same time:
sudo vi /etc/docker/daemon.json
Set the storage driver in the daemon configuration file:
{
  "storage-driver": "devicemapper"
}
Restart Docker after editing the file. It is also a good idea to make sure Docker is running properly after changing the configuration file:
sudo systemctl restart docker
sudo systemctl status docker
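A malformed /etc/docker/daemon.json will prevent the daemon from starting, so it can be worth validating the JSON before restarting Docker. The sketch below is not part of the lesson reference; it checks a sample file with python3, used here purely as a convenient JSON validator (jq would work just as well), and the /tmp path is only for illustration:

```shell
# Write a sample storage-driver configuration to a scratch file (illustrative path).
echo '{ "storage-driver": "devicemapper" }' > /tmp/daemon-check.json

# Validate the JSON; a parse error here would also break the Docker daemon.
python3 -m json.tool < /tmp/daemon-check.json > /dev/null && echo "valid JSON"
```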

Running a Container

00:17:16

Lesson Description:

Executing containers is the core feature of Docker. In this lesson, we will dive into the process of executing containers using docker run. We will demonstrate how to use this command and learn some of the important options and flags that can be used with it. We will also discuss some additional commands that allow us to manage containers on a host. After completing this lesson, we'll know how to run and manage containers with Docker.

Relevant Documentation:
https://docs.docker.com/engine/reference/run/

Lesson Reference:

Run a simple container using the hello-world image:

docker run hello-world
Run a container using a specific image tag:
docker run nginx:1.15.11
Run a container with a command and arguments:
docker run busybox echo hello world!
Run an Nginx container customized with a variety of flags:
docker run -d --name nginx --restart unless-stopped -p 8080:80 --memory 500M --memory-reservation 256M nginx
List any currently running containers:
docker ps
List all containers, both running and stopped:
docker ps -a
Stop the Nginx container:
docker container stop nginx
Start a stopped container:
docker container start nginx
Delete a container (but it must be stopped first):
docker container rm nginx

Upgrading the Docker Engine

00:04:12

Lesson Description:

When using Docker to manage containers, it is important to be able to keep the Docker engine up to date. In this lesson, we will discuss the process of both downgrading and upgrading the Docker engine.

Relevant Documentation:
https://docs.docker.com/install/linux/docker-ce/ubuntu/#upgrade-docker-ce

Lesson Reference:

Downgrade to a previous version:

sudo systemctl stop docker
sudo apt-get remove -y docker-ce docker-ce-cli
sudo apt-get update
sudo apt-get install -y docker-ce=5:18.09.4~3-0~ubuntu-bionic docker-ce-cli=5:18.09.4~3-0~ubuntu-bionic
docker version
Upgrade to a new version:
sudo apt-get install -y docker-ce=5:18.09.5~3-0~ubuntu-bionic docker-ce-cli=5:18.09.5~3-0~ubuntu-bionic
docker version

Configuring Logging Drivers (Splunk, Journald, etc.)

00:06:49

Lesson Description:

Storing and accessing container logs is an essential part of managing containers. Docker logging drivers allow us to choose our own logging implementation to fit our particular needs. In this lesson, we will discuss logging drivers. We will also see how to customize the system default logging driver configuration, as well as how to override the defaults for individual containers.

Relevant Documentation:
https://docs.docker.com/config/containers/logging/configure/

Lesson Reference:

Check the current default logging driver:

docker info | grep Logging
Edit daemon.json to set a new default logging driver configuration:
sudo vi /etc/docker/daemon.json
Add the configuration to daemon.json:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "15m"
  }
}
Restart docker after editing daemon.json:
sudo systemctl restart docker
Run a docker container, overriding the system default logging driver settings:
docker run --log-driver json-file --log-opt max-size=50m nginx
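Since this lesson's title mentions Splunk, here is a hedged sketch of what a system-wide Splunk logging configuration in daemon.json might look like, using the splunk driver's splunk-token and splunk-url options. The token and URL values are placeholders, not values from the lesson:

```json
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-token": "<splunk HTTP event collector token>",
    "splunk-url": "https://splunk.example.com:8088"
  }
}
```

As with the json-file example above, Docker must be restarted after editing daemon.json for the new default to take effect.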

Introduction to Docker Swarm

00:01:39

Lesson Description:

Docker Swarm is a great way to get even more value out of using containers. It allows for easily building a distributed cluster where containers can be run across multiple available servers. In this lesson, we will briefly introduce the basics of what Docker Swarm can do. We will also discuss the specifications for the servers we'll be using as we install and configure a swarm cluster in the following lessons.

Relevant Documentation:
https://docs.docker.com/engine/swarm/key-concepts/

Configuring a Swarm Manager

00:04:29

Lesson Description:

The swarm manager is responsible for controlling and orchestrating the Docker swarm. It delegates workloads to the worker nodes in the cluster. In this lesson, we will discuss and demonstrate the process of initializing a new swarm and setting up the swarm manager.

Relevant Documentation:
https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/

Lesson Reference:

Install Docker Engine on the swarm manager:

sudo apt-get update

sudo apt-get -y install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

sudo apt-get install -y docker-ce=5:18.09.5~3-0~ubuntu-bionic docker-ce-cli=5:18.09.5~3-0~ubuntu-bionic containerd.io

sudo usermod -a -G docker cloud_user
Now initialize the swarm! Note: Be sure to use the private IP (NOT the public IP) for the --advertise-addr:
docker swarm init --advertise-addr <swarm manager private IP>
This shows some basic information about the current status of the swarm:
docker info
List the current nodes in the swarm and their status:
docker node ls

Configuring Swarm Nodes

00:04:31

Lesson Description:

Docker swarm worker nodes handle the processing of workloads in the swarm cluster. We have already set up a swarm manager, so in this lesson we will discuss the process of adding worker nodes to the swarm. We will demonstrate this by adding our two worker nodes to the cluster that was initialized in the previous lesson.

Relevant Documentation:
https://docs.docker.com/engine/swarm/swarm-tutorial/add-nodes/

Lesson Reference:

Install Docker Engine on both worker nodes:

sudo apt-get update

sudo apt-get -y install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg-agent \
  software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

sudo apt-get update

sudo apt-get install -y docker-ce=5:18.09.5~3-0~ubuntu-bionic docker-ce-cli=5:18.09.5~3-0~ubuntu-bionic containerd.io

sudo usermod -a -G docker cloud_user
Get a join token from the manager. Run this command on the swarm manager:
docker swarm join-token worker
Now copy the docker swarm join command provided in the output and run it on both workers:
docker swarm join --token <token> <swarm manager private IP>:2377
On the swarm manager, verify that the two worker nodes have successfully joined:
docker node ls

Docker Swarm Backup and Restore

00:04:50

Lesson Description:

If you are managing a swarm cluster, it is important to be able to back up current swarm data and restore a previous backup. In this lesson, we will demonstrate the process of performing a simple backup and restore in a swarm cluster.

Relevant Documentation:
https://docs.docker.com/engine/swarm/admin_guide/#back-up-the-swarm

Lesson Reference:

Create the backup. On the manager:

sudo systemctl stop docker
sudo tar -zvcf backup.tar.gz -C /var/lib/docker swarm
sudo systemctl start docker
Restore from the backup. On the manager:
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/swarm
sudo tar -zxvf backup.tar.gz -C /var/lib/docker/
sudo systemctl start docker
docker node ls
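Before restoring, it can be worth confirming that the archive actually contains the swarm's raft data. The sketch below is illustrative only: it imitates the backup layout with a throwaway directory (the paths stand in for /var/lib/docker/swarm) and lists the archive's contents with tar -tzf:

```shell
# Build a throwaway directory tree shaped like /var/lib/docker (illustrative only).
tmp=$(mktemp -d)
mkdir -p "$tmp/docker/swarm/raft"
echo "state" > "$tmp/docker/swarm/raft/snap"

# Archive the swarm directory from its parent, as in the lesson (-C parent dir).
tar -zcf "$tmp/backup.tar.gz" -C "$tmp/docker" swarm

# List the archive contents; the swarm/raft entries should be present.
tar -tzf "$tmp/backup.tar.gz"
```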

Namespaces and Cgroups

00:06:39

Lesson Description:

Namespaces and cgroups are Linux kernel features which Docker uses to provide the basic functionality of running containers. It is important to have a basic understanding of these concepts in order to have a general idea of how Docker works. In this lesson, we will discuss namespaces and cgroups. We will talk about what they are and how Docker uses them to provide container functionality.

Relevant Documentation:
https://docs.docker.com/engine/docker-overview/#the-underlying-technology
https://docs.docker.com/engine/security/userns-remap/
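The lesson describes these kernel features conceptually; on any Linux host, you can see them directly under /proc. This is a small illustrative sketch, not part of the lesson reference:

```shell
# Each process's namespace memberships appear as symlinks under /proc/<pid>/ns
# (pid, net, mnt, and so on) - the same mechanism Docker uses to isolate containers.
ls -l /proc/self/ns

# The cgroup hierarchy the current shell belongs to, which is what Docker uses
# to enforce limits like --memory on containers.
cat /proc/self/cgroup
```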

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

01:00:00


Image Creation, Management, and Registry

Introduction to Docker Images

00:09:36

Lesson Description:

Images are a key component when using Docker. They provide the components and software necessary to run containers. In this lesson, we will discuss Docker images. We will talk about how they are built using a layered file system, and the advantages of the layered file system model. We will also demonstrate how to examine the layers of an existing image.

Relevant Documentation:
https://docs.docker.com/v17.09/engine/userguide/storagedriver/imagesandcontainers/

Lesson Reference:

Run a container. The image will be automatically downloaded if it does not exist on the system:

docker run nginx:1.15.8
Download an image:
docker image pull nginx
View file system layers in an image:
docker image history nginx

The Components of a Dockerfile

00:11:36

Lesson Description:

Docker Hub provides a wide variety of useful public images. However, anyone running their own software using Docker needs to know how to create images themselves. In this lesson, we will discuss the Dockerfile, the primary mechanism for defining an image that we want to create. We will demonstrate some of the most common Dockerfile directives by building a simple custom Nginx image. We will also demonstrate how to build a real image using the Dockerfile and how to run that image as a container.

Relevant Documentation:
https://docs.docker.com/engine/reference/builder/

Lesson Reference:

Set up a project directory:

mkdir ~/custom-nginx
cd ~/custom-nginx
vi index.html
Add a simple message to index.html:
Hello, World!
Create a Dockerfile:
vi Dockerfile
Add some directives to the Dockerfile:
# Simple nginx image
FROM ubuntu:bionic

ENV NGINX_VERSION 1.14.0-0ubuntu1.6

RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y nginx=$NGINX_VERSION

CMD ["nginx", "-g", "daemon off;"]
Build and test the image:
docker build -t custom-nginx .
docker run --name custom-nginx -d -p 8080:80 custom-nginx
curl localhost:8080

More Dockerfile Directives

00:10:22

Lesson Description:

Dockerfiles offer many different directives which we can use to build and customize our images. In this lesson, we will discuss some additional directives and demonstrate how to use them by further customizing our Nginx image.

Relevant Documentation:
https://docs.docker.com/engine/reference/builder/

Lesson Reference:

Edit your Dockerfile:

cd ~/custom-nginx
vi Dockerfile
Add more customization to the Dockerfile:
# Simple nginx image
FROM ubuntu:bionic

ENV NGINX_VERSION 1.14.0-0ubuntu1.6

RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y nginx=$NGINX_VERSION

WORKDIR /var/www/html/
ADD index.html ./

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

STOPSIGNAL SIGTERM
HEALTHCHECK CMD curl localhost:80
Rebuild the image and test it:
docker build -t custom-nginx .
docker run -d -p 8080:80 custom-nginx
curl localhost:8080
Locate our running container with docker ps, then remove it to clean up the environment:
docker ps
docker container rm -f <container id>
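The HEALTHCHECK directive used above also accepts timing options that control how the check is run. As a hedged sketch (the interval, timeout, and retry values below are chosen arbitrarily for illustration, not taken from the lesson):

```dockerfile
# Probe every 30s, fail a single probe after 3s, and mark the container
# unhealthy after 3 consecutive failures.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 CMD curl -f localhost:80 || exit 1
```

The -f flag makes curl exit non-zero on HTTP error responses, so a 5xx from Nginx counts as a failed probe rather than a success.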

Building Efficient Images

00:14:24

Lesson Description:

Dockerfiles allow us to build our own images containing any software we need. However, it is important to ensure that our Dockerfiles are built to produce small, efficient images that do not contain unnecessary data. In this lesson, we will briefly discuss some general tips for creating efficient images. We will also demonstrate how to use multi-stage builds to significantly decrease image size in certain situations.

Relevant Documentation:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
https://docs.docker.com/develop/develop-images/multistage-build/

Lesson Reference:

Create some project directories:

cd ~/
mkdir efficient
mkdir inefficient
cd inefficient
Create the source code file:
vi helloworld.go
package main
import "fmt"
func main() {
    fmt.Println("hello world")
}
Create the Dockerfile:
vi Dockerfile
FROM golang:1.12.4
WORKDIR /helloworld
COPY helloworld.go .
RUN GOOS=linux go build -a -installsuffix cgo -o helloworld .
CMD ["./helloworld"]
Build and test the inefficient image:
docker build -t inefficient .
docker run inefficient
docker image ls
Switch to the efficient project directory and copy the files from the inefficient project:
cd ~/efficient
cp ../inefficient/helloworld.go ./
cp ../inefficient/Dockerfile ./
Change the Dockerfile to use a multi-stage build:
vi Dockerfile
FROM golang:1.12.4 AS compiler
WORKDIR /helloworld
COPY helloworld.go .
RUN GOOS=linux go build -a -installsuffix cgo -o helloworld .

FROM alpine:3.9.3
WORKDIR /root
COPY --from=compiler /helloworld/helloworld .
CMD ["./helloworld"]
Build and test the efficient image:
docker build -t efficient .
docker run efficient
docker image ls

Managing Images

00:08:05

Lesson Description:

We have already learned how to create Docker images. In this lesson, however, we will learn how to manage the images located on a machine. We will demonstrate a few commands that are useful for downloading, inspecting, and removing images from the system.

Relevant Documentation:
https://docs.docker.com/engine/reference/commandline/image/

Lesson Reference:

Download an image:

docker image pull nginx:1.14.0
List images on the system:
docker image ls
docker image ls -a
Inspect image metadata:
docker image inspect nginx:1.14.0
docker image inspect nginx:1.14.0 --format "{{.Architecture}}"
docker image inspect nginx:1.14.0 --format "{{.Architecture}} {{.Os}}"
Delete an image:
docker image rm nginx:1.14.0
Force deletion of an image that is in use by a container:
docker run -d --name nginx nginx:1.14.0
docker image rm -f nginx:1.14.0
Locate a dangling image and clean it up:
docker image ls -a
docker container ls
docker container rm -f nginx
docker image ls -a
docker image prune

Flattening a Docker Image to a Single Layer

00:06:25

Lesson Description:

In some rare cases, we may want to flatten the file system of a multi-layer image into a single layer. While Docker does not have a simple command to do this, we can accomplish it by exporting a container's file system and importing it as an image. In this lesson, we will see how to flatten an image file system into a single layer.

Lesson Reference:

Set up a new project directory to create a basic image:

cd ~/
mkdir alpine-hello
cd alpine-hello
vi Dockerfile
Create a Dockerfile that will result in a multi-layered image:
FROM alpine:3.9.3
RUN echo "Hello, World!" > message.txt
CMD cat message.txt
Build the image and check how many layers it has:
docker build -t nonflat .
docker image history nonflat
Run a container from the image and export its file system to an archive:
docker run -d --name flat_container nonflat
docker export flat_container > flat.tar
Import the archive to a new image and check how many layers the new image has:
cat flat.tar | docker import - flat:latest
docker image history flat

Introduction to Docker Registries

00:14:31

Lesson Description:

Docker registries provide a central location to store and distribute images. In this lesson, we will discuss what registries are and some of the available options for using them. We will demonstrate how to run a private registry, and discuss how to enable authentication and TLS to secure a private registry.

Relevant Documentation:
https://docs.docker.com/registry/deploying/
https://docs.docker.com/registry/configuration/
https://docs.docker.com/registry/insecure/

Lesson Reference:

Run a simple registry:

docker run -d -p 5000:5000 --restart=always --name registry registry:2
docker logs registry
docker container stop registry && docker container rm -v registry
Override the log level using an environment variable:
docker run -d -p 5000:5000 --restart=always --name registry -e REGISTRY_LOG_LEVEL=debug registry:2
docker logs registry
docker container stop registry && docker container rm -v registry
Secure the registry by generating an htpasswd file to be used for authentication:
mkdir ~/registry
cd ~/registry
mkdir auth
docker run --entrypoint htpasswd registry:2 -Bbn testuser password > auth/htpasswd
Generate a self-signed certificate. When generating the cert, leave the prompts blank except for Common Name. For Common Name, put the public hostname of the registry server. The public hostname is in the playground interface:
mkdir certs
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt
Run the registry with authentication and TLS enabled:
docker run -d -p 443:443 --restart=always --name registry \
  -v /home/cloud_user/registry/certs:/certs \
  -v /home/cloud_user/registry/auth:/auth \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2

Using Docker Registries

00:12:39

Lesson Description:

Once we have created a private registry, we need to be able to access and interact with it. In this lesson, we will discuss how to use registries from the command line. We will then demonstrate how to authenticate against a private registry with a self-signed certificate, as well as how to push to and pull from a private registry.

Relevant Documentation:
https://docs.docker.com/engine/reference/commandline/push/
https://docs.docker.com/engine/reference/commandline/pull/
https://docs.docker.com/engine/reference/commandline/login/
https://docs.docker.com/registry/insecure/
https://docs.docker.com/engine/reference/commandline/search/

Lesson Reference:

Pull and search images on Docker Hub:

docker pull ubuntu
docker search ubuntu
Attempt to authenticate against the private registry:
docker login <registry public hostname>
Log in with the credentials we created earlier (testuser and password). A "certificate signed by unknown authority" message should appear, because we are using a self-signed certificate. Configure Docker to skip certificate verification when accessing the private registry:
sudo vi /etc/docker/daemon.json
{
  "insecure-registries" : ["<registry public hostname>"]
}
Restart docker:
sudo systemctl restart docker
Try docker login again:
docker login <registry public hostname>
This time it should work! However, this method of accessing the registry is very insecure. It turns off certificate verification entirely, exposing us to man-in-the-middle attacks. So, let's do this the right way. First, log out of the private registry:
docker logout <registry public hostname>
Next, remove the insecure-registries key and value from /etc/docker/daemon.json. Restart Docker:
sudo systemctl restart docker
Download the cert public key from the registry and configure the local docker engine to use it:
sudo mkdir -p /etc/docker/certs.d/<registry public hostname>
sudo scp cloud_user@<registry public hostname>:/home/cloud_user/registry/certs/domain.crt /etc/docker/certs.d/<registry public hostname>
Try docker login:
docker login <registry public hostname>
Push to and pull from your private registry:
docker pull ubuntu
docker tag ubuntu <registry public hostname>/ubuntu
docker push <registry public hostname>/ubuntu
docker image rm <registry public hostname>/ubuntu
docker image rm ubuntu
docker pull <registry public hostname>/ubuntu


Orchestration

Locking and Unlocking a Swarm Cluster

00:07:10

Lesson Description:

Docker supports the ability to securely store certain encryption keys. These encryption keys are used to encrypt sensitive cluster data, but they are stored unencrypted on the disks of swarm managers by default. Autolock allows for greater security for these keys, but it requires each manager to be unlocked whenever Docker restarts. In this lesson, we will discuss autolock. We will demonstrate how to enable and disable autolock, as well as how to use it when it is enabled.

Relevant Documentation:
https://docs.docker.com/engine/swarm/swarm_manager_locking/

Lesson Reference:

Enable autolock.

 docker swarm update --autolock=true
Make note of the unlock key! Run a command to interact with the swarm, then restart docker and try the command again to verify that the swarm is locked.
 docker node ls
 sudo systemctl restart docker
 docker node ls
Unlock the swarm using the unlock key and verify that it is unlocked.
 docker swarm unlock
 docker node ls
Obtain the existing unlock key.
 docker swarm unlock-key
Rotate the unlock key.
 docker swarm unlock-key --rotate
Disable autolock.
 docker swarm update --autolock=false
 sudo systemctl restart docker
 docker node ls

High Availability in a Swarm Cluster

00:08:47

Lesson Description:

In a production scenario, it is a good idea to consider fault tolerance when designing a swarm cluster. In this lesson, we will discuss some of the considerations for high availability when it comes to Docker Swarm clusters, such as how multiple swarm managers maintain a consistent swarm state and how to distribute manager nodes across multiple availability zones.

Relevant Documentation:
https://docs.docker.com/engine/swarm/admin_guide/
https://docs.docker.com/engine/swarm/raft/
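The admin guide's fault-tolerance rule can be made concrete with a little shell arithmetic: a swarm with N managers maintains a raft quorum of floor(N/2) + 1, and can therefore tolerate the loss of floor((N-1)/2) managers. A quick illustrative sketch:

```shell
# For odd manager counts, print the quorum size and how many manager
# failures the cluster can tolerate while still reaching consensus.
for n in 1 3 5 7; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( (n - 1) / 2 ))
  echo "$n managers: quorum=$quorum, tolerates $tolerated failure(s)"
done
```

This is why odd manager counts are preferred: going from 3 to 4 managers raises the quorum without tolerating any additional failures.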

Introduction to Docker Services

00:10:34

Lesson Description:

Services are the simplest way to make use of a Docker Swarm cluster. In this lesson, we will be discussing Docker services. We will demonstrate how to create and manage services and talk about several important concepts related to services.

Relevant Documentation:
https://docs.docker.com/engine/swarm/services/
https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/
https://docs.docker.com/engine/reference/commandline/service_create/

Lesson Reference:

Create a simple service running the nginx image.

 docker service create nginx
Create an nginx service with a specified name, multiple replicas, and a published port.
 docker service create --name nginx --replicas 3 -p 8080:80 nginx
Use a template to pass the node hostname to each container as an environment variable.
 docker service create --name node-hostname --replicas 3 --env NODE_HOSTNAME="{{.Node.Hostname}}" nginx
Get the container running on the current machine, and print its environment variables to verify that the NODE_HOSTNAME variable is set properly.
 docker ps
 docker exec <CONTAINER_ID> printenv
List the services in the cluster.
 docker service ls
List the tasks for a service.
 docker service ps nginx
Inspect a service.
 docker service inspect nginx
 docker service inspect --pretty nginx
Change a service.
 docker service update --replicas 2 nginx
Delete a service.
 docker service rm nginx
Create a global service.
docker service create --name nginx --mode global nginx
Two different ways to scale a service:
docker service update --replicas 3 nginx
docker service scale nginx=4

Using docker inspect

00:04:56

Lesson Description:

docker inspect is a great tool for managing and troubleshooting your Docker objects. It is the simplest way to find additional information about existing objects, such as containers, images, and services. In this lesson, we will talk about some of the different forms of docker inspect commands and some of the options associated with those commands.

Relevant Documentation:
https://docs.docker.com/engine/swarm/services/
https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/
https://docs.docker.com/engine/reference/commandline/service_create/

Lesson Reference:

Run a container and inspect it.

 docker run -d --name nginx nginx
 docker inspect <CONTAINER_ID>
List the containers and images to get their IDs, then inspect an image.
 docker container ls
 docker image ls
 docker inspect <IMAGE_ID>
Create and inspect a service.
 docker service create --name nginx-svc nginx
 docker service ls
 docker inspect <SERVICE_ID>
 docker inspect nginx-svc
Use the type-specific commands to inspect objects.
 docker container inspect nginx
 docker service inspect nginx-svc
Use the --format flag to retrieve a subset of the data in a specific format.
 docker service inspect --format='{{.ID}}' nginx-svc

Docker Compose

00:07:39

Lesson Description:

Docker is a powerful tool for running containers, but many applications consist of a collection of multiple containers running different software and interacting with one another. These complex applications can become difficult to maintain when all of their components are managed separately. Docker Compose offers a solution to this problem by allowing you to declaratively describe a set of multiple resources and manage them as a unit. In this lesson, we will discuss Docker Compose and demonstrate creating and managing a simple, multi-container application.

Relevant Documentation:
https://docs.docker.com/compose/

Lesson Reference:

Install Docker Compose.

 sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
 sudo chmod +x /usr/local/bin/docker-compose
 docker-compose version
Set up a Docker Compose project.
 mkdir nginx-compose
 cd nginx-compose
 vi docker-compose.yml
Create your docker-compose.yml.
 version: '3'
 services:
   web:
     image: nginx
     ports:
     - "8080:80"
   redis:
     image: redis:alpine
Start your Compose app.
 docker-compose up -d
List the Docker Compose containers, then stop the app.
 docker-compose ps
 docker-compose down
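Compose files can also declare start-up ordering and persistent storage. A minimal sketch extending the lesson's file (the depends_on entry and the redis-data volume name are illustrative additions, not part of the original lesson):

```yaml
version: '3'
services:
  web:
    image: nginx
    ports:
    - "8080:80"
    depends_on:        # start redis before web (ordering only, not readiness)
    - redis
  redis:
    image: redis:alpine
    volumes:
    - redis-data:/data   # persist Redis data in a named volume
volumes:
  redis-data:
```

Running docker-compose up -d with this file creates the named volume automatically, and docker-compose down leaves it in place unless you add the -v flag.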

Introduction to Docker Stacks

00:12:31

Lesson Description:

Docker Swarm's orchestration functionality really shines when using stacks. Stacks allow you to easily manage complex, multi-container applications and orchestrate them within your swarm cluster. In this lesson, we will discuss Docker stacks. We will demonstrate how to create and manage Docker stacks, as well as a few of the available options for designing stacks. Relevant Documentation https://docs.docker.com/get-started/part5/ https://docs.docker.com/engine/reference/commandline/stack/ Lesson Reference Create a compose file for the stack.

 vi simple-stack.yml
 version: '3'
 services:
   web:
     image: nginx
   busybox:
     image: radial/busyboxplus:curl
     command: /bin/sh -c "while true; do echo Hello!; sleep 10; done"
Deploy the stack and examine it using various commands.
 docker stack deploy -c simple-stack.yml simple
 docker stack ls
 docker stack ps simple
 docker stack services simple
 docker service logs simple_busybox
Modify the stack to use an environment variable.
 vi simple-stack.yml
 version: '3'
 services:
   web:
     image: nginx
   busybox:
     image: radial/busyboxplus:curl
     command: /bin/sh -c "while true; do echo $$MESSAGE; sleep 10; done"
     environment:
     - MESSAGE=Hello!
 docker stack deploy -c simple-stack.yml simple
 docker service logs simple_busybox
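The doubled dollar sign in the busybox command is Compose-file escaping, not shell syntax: the stack file interpolates single $VAR entries at deploy time, so $$MESSAGE passes a literal $MESSAGE through to the container's shell, which expands it at runtime. A quick sketch of the runtime half of that behavior with plain sh:

```shell
# The container's shell expands $MESSAGE from its environment at
# runtime, exactly like this local example (single quotes keep the
# outer shell from expanding it first):
MESSAGE='Hello!' sh -c 'echo $MESSAGE'   # prints: Hello!
```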
Modify the stack to expose a port.
 vi simple-stack.yml
 version: '3'
 services:
   web:
     image: nginx
     ports:
     - "8080:80"
   busybox:
     image: radial/busyboxplus:curl
     command: /bin/sh -c "while true; do echo $$MESSAGE; sleep 10; done"
     environment:
     - MESSAGE=Hello!
 docker stack deploy -c simple-stack.yml simple
 curl localhost:8080
Modify the stack to use the BusyBox service to communicate with the web service.
 vi simple-stack.yml
 version: '3'
 services:
   web:
     image: nginx
     ports:
     - "8080:80"
   busybox:
     image: radial/busyboxplus:curl
     command: /bin/sh -c "while true; do echo $$MESSAGE; curl web:80; sleep 10; done"
     environment:
     - MESSAGE=Hello!
 docker stack deploy -c simple-stack.yml simple
Delete the stack.
 docker stack rm simple
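Stack files also accept a deploy key for swarm-specific settings such as replica counts and update behavior; plain docker-compose ignores this key, but docker stack deploy honors it. A sketch (the replica count and parallelism values are arbitrary examples):

```yaml
version: '3'
services:
  web:
    image: nginx
    deploy:
      replicas: 3          # run three tasks of this service in the swarm
      update_config:
        parallelism: 1     # update one task at a time during rolling updates
```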

Node Labels

00:10:21

Lesson Description:

Sometimes, you may wish to have more control over which nodes will be used to execute particular tasks in your swarm. In this lesson, we will talk about how you can use node labels to influence and even control which nodes will and will not be used to execute a given service's tasks. Relevant Documentation https://docs.docker.com/engine/reference/commandline/node_update/#add-label-metadata-to-a-node https://docs.docker.com/engine/swarm/services/#placement-constraints Lesson Reference List your current nodes.

 docker node ls
Add a label to a node.
 docker node update --label-add availability_zone=east <NODE_NAME>
 docker node update --label-add availability_zone=west <NODE_NAME>
View existing labels with:
 docker node inspect --pretty <NODE_NAME>
You can use --constraint when creating a service to restrict which nodes will be used to execute a service's tasks.
 docker service create --name nginx-east --constraint node.labels.availability_zone==east --replicas 3 nginx
 docker service ps nginx-east
 docker service create --name nginx-west --constraint node.labels.availability_zone!=east --replicas 3 nginx
 docker service ps nginx-west
Use --placement-pref to spread tasks evenly based on the value of a specific label.
 docker service create --name nginx-spread --placement-pref spread=node.labels.availability_zone --replicas 3 nginx
 docker service ps nginx-spread
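The same placement rules can be expressed declaratively in a stack file under deploy.placement, which is useful when managing services with docker stack deploy. A sketch using the availability_zone label from this lesson:

```yaml
version: '3'
services:
  web:
    image: nginx
    deploy:
      replicas: 3
      placement:
        constraints:
        - node.labels.availability_zone == east   # only schedule on east nodes
```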

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

01:00:00


Storage and Volumes

Docker Storage in Depth

00:08:31

Lesson Description:

Storage is essential to almost any system, and containers are no exception. In this lesson, we will go into a little more detail about some of the storage driver concepts that we discussed earlier. We will go over the three most common storage drivers and discuss which operating systems they are typically used with. We will also discuss storage methods and how to locate the underlying data for containers and images on the host file system. Relevant Documentation https://docs.docker.com/storage/storagedriver/select-storage-driver/ https://rancher.com/block-object-file-storage-containers/ Lesson Reference Run a basic container:

 docker run --name storage_nginx nginx
Use docker inspect to find the location of the container's data on the host:
 docker container inspect storage_nginx
 ls /var/lib/docker/overlay2/<STORAGE_HASH>/
Use docker inspect to find the location of an image's data:
 docker image inspect nginx

Configuring DeviceMapper

00:06:57

Lesson Description:

We have already discussed some of the storage drivers that are available with Docker, but in this lesson we will focus specifically on the DeviceMapper storage driver. We will configure DeviceMapper to use direct-lvm mode to manage storage on a block storage device, which is recommended when using DeviceMapper in production. Relevant Documentation https://docs.docker.com/storage/storagedriver/device-mapper-driver/ Lesson Reference For this lesson, use a CentOS 7 server with a size of Small. Before starting the lesson, you'll first need to install Docker. Add a new storage device to your server. In Playground, select Actions, then select Add /dev/xvdb and wait for it to finish adding the device. Note that on the server itself the new device may appear under a different name, such as /dev/nvme1n1; run lsblk to confirm, and use that name in daemon.json. Stop and disable Docker.

 sudo systemctl disable docker
 sudo systemctl stop docker
Delete any existing Docker data.
 sudo rm -rf /var/lib/docker
Configure DeviceMapper in daemon.json.
 sudo vi /etc/docker/daemon.json
 {
   "storage-driver": "devicemapper",
   "storage-opts": [
     "dm.directlvm_device=/dev/nvme1n1",
     "dm.thinp_percent=95",
     "dm.thinp_metapercent=1",
     "dm.thinp_autoextend_threshold=80",
     "dm.thinp_autoextend_percent=20",
     "dm.directlvm_device_force=true"
   ]
 }
Start and enable Docker.
 sudo systemctl enable docker
 sudo systemctl start docker
Check the storage driver information provided by docker info.
 docker info
Run a container to verify that everything is working.
 docker run hello-world

Docker Volumes

00:14:24

Lesson Description:

Docker containers are designed to be ephemeral, and their internal storage is destroyed along with them. However, sometimes you need data that outlives any individual container. Docker volumes and bind mounts allow you to attach external storage to containers. In this lesson, we will discuss bind mounts and volumes. We will also demonstrate how to create them using both the --mount and -v flags. We will talk about sharing volumes between multiple containers and go over some commands you can use to manage volumes. Relevant Documentation https://docs.docker.com/storage/ https://docs.docker.com/storage/bind-mounts/ https://docs.docker.com/storage/volumes/ Lesson Reference Create a directory on the host with some test data.

 cd ~/
 mkdir message
 echo Hello, world! > message/message.txt
Mount the directory to a container with a bind mount.
 docker run --mount type=bind,source=/home/cloud_user/message,destination=/root,readonly busybox cat /root/message.txt
Run a container with a mounted volume.
 docker run --mount type=volume,source=my-volume,destination=/root busybox sh -c 'echo hello > /root/message.txt && cat /root/message.txt'
Use the -v syntax to create a bind mount and a volume.
 docker run -v /home/cloud_user/message:/root:ro busybox cat /root/message.txt
 docker run -v my-volume:/root busybox sh -c 'cat /root/message.txt'
Use a volume to share data between containers.
 docker run --mount type=volume,source=shared-volume,destination=/root busybox sh -c 'echo I wrote this! > /root/message.txt'
 docker run --mount type=volume,source=shared-volume,destination=/anotherplace busybox cat /anotherplace/message.txt
Create and manage volumes using docker volume commands.
 docker volume create test-volume
 docker volume ls
 docker volume inspect test-volume
 docker volume rm test-volume

Image Cleanup

00:04:10

Lesson Description:

Storage management is an important component of any system. In this lesson, we will talk about some of the tools you can use to manage Docker's disk usage, particularly when it comes to storing Docker images. We will discuss how to examine disk usage on a system, as well as how to easily clean up images that are no longer being used. Relevant Documentation https://docs.docker.com/engine/reference/commandline/system_df/ Lesson Reference Display the storage space being used by Docker.

 docker system df
Display disk usage by individual objects.
 docker system df -v
Delete dangling images (images with no tags or containers).
 docker image prune
Pull an image not being used by any containers, and use docker image prune -a to clean up all images with no containers.
 docker image pull nginx:1.14.0
 docker image prune -a

Storage in a Cluster

00:12:19

Lesson Description:

Docker volumes provide persistent storage to containers, and they can be shared between containers to allow them to interact with the same data. However, sharing volumes creates additional challenges in the context of a Swarm cluster, where containers may be running on different nodes. In this lesson, we will discuss how you can create shared volumes that work across multiple swarm nodes using the vieux/sshfs volume driver. Relevant Documentation https://docs.docker.com/storage/volumes/#share-data-among-machines Lesson Reference Install the vieux/sshfs plugin on all nodes in the swarm.

 docker plugin install --grant-all-permissions vieux/sshfs
Set up an additional server to use for storage. You can use the Ubuntu 18.04 Bionic Beaver LTS image with a size of Small. On this new storage server, create a directory with a file that can be used for testing.
 mkdir /home/cloud_user/external
 echo External storage! > /home/cloud_user/external/message.txt
On the swarm manager, manually create a Docker volume that uses the external storage server for storage. Be sure to replace the text <STORAGE_SERVER_PRIVATE_IP> and <PASSWORD> with actual values.
 docker volume create --driver vieux/sshfs \
   -o sshcmd=cloud_user@<STORAGE_SERVER_PRIVATE_IP>:/home/cloud_user/external \
   -o password=<PASSWORD> \
   sshvolume

 docker volume ls
Create a service that automatically manages the shared volume, creating the volume on swarm nodes as needed. Be sure to replace the text <STORAGE_SERVER_PRIVATE_IP> and <PASSWORD> with actual values.
 docker service create \
   --replicas=3 \
   --name storage-service \
   --mount volume-driver=vieux/sshfs,source=cluster-volume,destination=/app,volume-opt=sshcmd=cloud_user@<STORAGE_SERVER_PRIVATE_IP>:/home/cloud_user/external,volume-opt=password=<PASSWORD> \
   busybox cat /app/message.txt
Check the service logs to verify that the service is reading the test data from the external storage server.
 docker service logs storage-service
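The same shared volume can also be declared in a stack file, so docker stack deploy creates it on each node where a task is scheduled. A sketch assuming the same storage server and vieux/sshfs plugin as above (replace the placeholders as before):

```yaml
version: '3'
services:
  app:
    image: busybox
    command: cat /app/message.txt
    volumes:
    - cluster-volume:/app
volumes:
  cluster-volume:
    driver: vieux/sshfs
    driver_opts:
      sshcmd: "cloud_user@<STORAGE_SERVER_PRIVATE_IP>:/home/cloud_user/external"
      password: "<PASSWORD>"
```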

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:45:00


Networking

Docker Networking

00:05:14

Lesson Description:

Containers present unique challenges when it comes to networking. Luckily, Docker includes multiple built-in solutions to these networking challenges. Docker implements container networking using a framework called the Container Networking Model. In this lesson, we will discuss the Container Networking Model and offer high-level examples of some of the model's implementations. Relevant Documentation https://success.docker.com/article/networking/

Built-In Network Drivers

00:16:17

Lesson Description:

Docker provides multiple implementations of the Container Networking Model in the form of network drivers. In this lesson, we will explore the native network drivers that come packaged with Docker by default. We will provide a high-level overview of how these drivers work and discuss the use cases for each. We will also demonstrate how to run containers that use each driver. Relevant Documentation https://success.docker.com/article/networking/ https://blog.docker.com/2016/12/understanding-docker-networking-drivers-use-cases/ Lesson Reference Host Create two containers and demonstrate communication between them using the host network driver. The ip add commands also demonstrate that the container is using the host's eth0 network interface directly.

docker run -d --net host --name host_busybox radial/busyboxplus:curl sleep 3600
docker run -d --net host --name host_nginx nginx
ip add | grep eth0
docker exec host_busybox ip add | grep eth0
docker exec host_busybox curl localhost:80
curl localhost:80
Bridge Create two containers and demonstrate that they can communicate using a custom bridge network. The ip link command can be used to explore the Linux bridge interfaces created by the bridge network driver.
ip link
docker network create --driver bridge my-bridge-net
ip link
docker run -d --name bridge_nginx --network my-bridge-net nginx
docker run --rm --name bridge_busybox --network my-bridge-net radial/busyboxplus:curl curl bridge_nginx:80
Overlay Create two services in the Docker Swarm cluster and demonstrate that they are able to communicate using a custom overlay network.
docker network create --driver overlay my-overlay-net
docker service create --name overlay_nginx --network my-overlay-net nginx
docker service create --name overlay_busybox --network my-overlay-net radial/busyboxplus:curl sh -c 'curl overlay_nginx:80 && sleep 3600'
docker service logs overlay_busybox
MACVLAN Create a MACVLAN network. Then run two containers that are able to communicate using the MACVLAN network.
docker network create -d macvlan --subnet 192.168.0.0/24 --gateway 192.168.0.1 -o parent=eth0 my-macvlan-net
docker run -d --name macvlan_nginx --net my-macvlan-net nginx
docker run --rm --name macvlan_busybox --net my-macvlan-net radial/busyboxplus:curl curl 192.168.0.2:80
None Create a container that uses the none network and demonstrate that a normal container cannot reach it.
docker run --net none -d --name none_nginx nginx
docker run --rm radial/busyboxplus:curl curl none_nginx:80

Creating a Docker Bridge Network

00:09:01

Lesson Description:

Bridge networks facilitate communication between Docker containers on the same host. In this lesson, we will explore bridge networks in a little more detail. We will discuss the commands needed to create and use bridge networks. We will also examine Docker's embedded DNS, and how to use it to communicate with containers using container names and network aliases. We will also talk about some commands that can be used to manage existing networks on a Docker host. Relevant Documentation https://docs.docker.com/network/bridge/ Lesson Reference Create a bridge network and demonstrate that two containers can communicate using the network.

 docker network create my-net
 docker run -d --name my-net-busybox --network my-net radial/busyboxplus:curl sleep 3600
 docker run -d --name my-net-nginx nginx
 docker network connect my-net my-net-nginx
 docker exec my-net-busybox curl my-net-nginx:80
Create a container with a network alias and communicate with it from another container using both the name and the alias.
 docker run -d --name my-net-nginx2 --network my-net --network-alias my-nginx-alias nginx
 docker exec my-net-busybox curl my-net-nginx2:80
 docker exec my-net-busybox curl my-nginx-alias:80
Create a container and provide a network alias with the docker network connect command.
 docker run -d --name my-net-nginx3 nginx
 docker network connect --alias another-alias my-net my-net-nginx3
 docker exec my-net-busybox curl another-alias:80
Manage existing networks on a Docker host.
 docker network ls
 docker network inspect my-net
 docker network disconnect my-net my-net-nginx
 docker network rm my-net

Deploying a Service on a Docker Overlay Network

00:04:03

Lesson Description:

Overlay networks provide transparent network connectivity to containers running in a Swarm, regardless of whether they are running on the same or different nodes. In this lesson, we will discuss overlay networks and demonstrate how to use them to connect containers in a Docker Swarm. Relevant Documentation https://docs.docker.com/network/overlay/ Lesson Reference List the networks on the host. You should be able to see the default ingress overlay network.

 docker network ls
Create an attachable overlay network.
 docker network create --driver overlay --attachable my-overlay
Create a service that uses that network, then test the network by attaching a container to it and using the container to communicate with the service.
 docker service create --name overlay-service --network my-overlay --replicas 3 nginx
 docker run --rm --network my-overlay radial/busyboxplus:curl curl overlay-service:80

Exposing Containers Externally

00:09:03

Lesson Description:

Port publishing gives you the ability to expose your containers so that outside entities can communicate with them. In this lesson, we will discuss port publishing in the context of both individual containers and services. We will demonstrate some commands that you can use to examine published ports on existing containers, and we will also discuss the ingress and host publishing modes that you can use when publishing ports for services in Docker Swarm. Relevant Documentation https://docs.docker.com/engine/reference/commandline/run/ https://docs.docker.com/engine/reference/commandline/service_create/ https://docs.docker.com/engine/reference/commandline/port/ Lesson Reference Run a container with a published port, then test and examine the published ports for the container.

 docker run -d -p 8080:80 --name nginx_pub nginx
 curl localhost:8080
 docker port nginx_pub
 docker ps
Create a service with a published port using the ingress publishing mode. Test the service and check which Swarm node the service's task is running on.
 docker service create -p 8081:80 --name nginx_ingress_pub nginx
 curl localhost:8081
 docker service ps nginx_ingress_pub
Create a service published using host mode. Check which node the task is running on and attempt to access it from the node where the task is running, as well as a node where it is not running.
 docker service create -p mode=host,published=8082,target=80 --name nginx_host_pub nginx
 docker service ps nginx_host_pub
Check which node the task is running on, and access the service from that node.
 curl localhost:8082
Try accessing the service from another node. It should fail.
 curl localhost:8082

Network Troubleshooting

00:09:50

Lesson Description:

Docker has many powerful features that you can use to implement a container networking setup that meets your needs, but sometimes you will need to be able to locate and diagnose issues. In this lesson, we will discuss some basic troubleshooting techniques. We will talk about some ways of accessing useful log data, and we will discuss using the netshoot image to test from within various parts of your Docker networks. Relevant Documentation https://success.docker.com/article/troubleshooting-container-networking Lesson Reference Run a container and access the container logs.

 docker run --name log-container busybox echo Here is my container log!
 docker logs log-container
Run a service and access the service logs.
 docker service create --name log-svc --replicas 3 -p 8080:80 nginx
 curl localhost:8080
 docker service logs log-svc
Check the Docker daemon logs.
 sudo journalctl -u docker
Create a custom network with a container, and attach a netshoot container to the network for troubleshooting.
 docker network create custom-net
 docker run -d --name custom-net-nginx --network custom-net nginx
Inject a netshoot container into another container's network sandbox.
 docker run --rm --network custom-net nicolaka/netshoot curl custom-net-nginx:80
 docker run --rm --network container:custom-net-nginx nicolaka/netshoot curl localhost:80
Inject a netshoot container into the sandbox of a container that uses the none network.
 docker run -d --name none-net-nginx --network none nginx
 docker run --rm --network container:none-net-nginx nicolaka/netshoot curl localhost:80

Configuring Docker to Use External DNS

00:03:30

Lesson Description:

Docker containers use a DNS server to locate external servers when using hostnames. In some scenarios, you may wish to customize which external DNS server is used by your containers. In this lesson, we will discuss how to customize your external DNS, both for the Docker daemon as a whole and for individual containers. Relevant Documentation https://docs.docker.com/v17.09/engine/userguide/networking/default_network/configure-dns/ https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-dns-options Lesson Reference Edit the Docker daemon config file to set a custom DNS for the host.

 sudo vi /etc/docker/daemon.json
 {
   "dns": ["8.8.8.8"]
 }
Restart Docker.
 sudo systemctl restart docker
Test your DNS by looking up an external domain.
 docker run nicolaka/netshoot nslookup google.com
Run a container with a custom DNS and test it by doing an nslookup.
 docker run --dns 8.8.4.4 nicolaka/netshoot nslookup google.com
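daemon.json accepts a few related keys beyond dns. A sketch combining them (the search domain is a placeholder; dns-search appends the listed domains when resolving unqualified hostnames):

```json
{
  "dns": ["8.8.8.8", "8.8.4.4"],
  "dns-search": ["example.com"]
}
```

As with any daemon.json change, restart Docker afterward for the settings to take effect.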

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00


Security

Signing Images and Enabling Docker Content Trust

00:10:12

Lesson Description:

Software signing is an important aspect of security. It provides us with greater confidence that the software we're running on our systems has not been tampered with after being released by the software publisher. Docker Content Trust establishes signing for Docker images. In this lesson, we will briefly discuss what Docker Content Trust is. Additionally, we will cover how to sign images, run signed images, and how to enable and disable Docker Content Trust. Relevant Documentation https://docs.docker.com/engine/security/trust/content_trust/#push-trusted-content Lesson Reference In order to follow along with this lesson, a Docker Hub account is required. An account can be created for free at https://hub.docker.com. First, log in to Docker Hub. Enter your Docker Hub credentials when prompted.

docker login
Generate a delegation key pair. We can enter a passphrase of our choosing, but make note of it as we will need it later on in the lesson.
cd ~/
docker trust key generate <your docker hub username>
Then we'll add ourselves as a signer to an image repository. Once again, be sure to make note of the passphrases used.
docker trust signer add --key <your docker hub username>.pub <your docker hub username> <your docker hub username>/dct-test
Create and build a simple Docker image with an unsigned tag, and then push it to Docker Hub:
mkdir ~/dct-test
cd dct-test
vi Dockerfile
FROM busybox:latest

CMD echo It worked!
docker build -t <your docker hub username>/dct-test:unsigned .
docker push <your docker hub username>/dct-test:unsigned
Run the image to verify whether it can run successfully:
docker run <your docker hub username>/dct-test:unsigned
Next, enable Docker Content Trust and attempt to run the unsigned image again. Note: We should see it fail.
export DOCKER_CONTENT_TRUST=1
docker run <your docker hub username>/dct-test:unsigned
Build and push a signed tag to the repo. Enter the passphrase — this will be the one that was chosen earlier when running the docker trust key generate command:
docker build -t <your docker hub username>/dct-test:signed .
docker trust sign <your docker hub username>/dct-test:signed
Run it to verify that the signed image can run properly with Docker Content Trust enabled:
docker image rm <your docker hub username>/dct-test:signed
docker run <your docker hub username>/dct-test:signed
Turn off Docker Content Trust and attempt to run the unsigned image again: Note: It should work this time.
export DOCKER_CONTENT_TRUST=0
docker run <your docker hub username>/dct-test:unsigned

Default Docker Engine Security

00:06:23

Lesson Description:

Docker is built with security in mind and comes with a wide variety of security features. In this lesson, we will discuss some important aspects of Docker's internal security that you should keep in mind when using Docker, as well as when taking the Docker Certified Associate exam. Relevant Documentation https://docs.docker.com/engine/security/security/

Docker MTLS

00:07:57

Lesson Description:

In a distributed model, such as the one used by Docker Swarm, it is important to encrypt communication between nodes to prevent potential attackers from obtaining sensitive data from network communications. In this lesson, we'll discuss two ways Docker Swarm secures cluster communication. We'll cover how to encrypt overlay network communication to secure the traffic between containers within the cluster, and we'll discuss how Docker Swarm uses Mutually Authenticated Transport Layer Security (MTLS) to encrypt and authenticate cluster-level communication. Relevant Documentation https://docs.docker.com/engine/swarm/how-swarm-mode-works/pki/ https://docs.docker.com/v17.09/engine/userguide/networking/overlay-security-model/ Lesson Reference Create an encrypted overlay network:

docker network create --opt encrypted --driver overlay my-encrypted-net
Create two services on the encrypted overlay network and demonstrate that one service can communicate with the other:
docker service create --name encrypted-overlay-nginx --network my-encrypted-net --replicas 3 nginx
docker service create --name encrypted-overlay-busybox --network my-encrypted-net radial/busyboxplus:curl sh -c 'curl encrypted-overlay-nginx:80 && sleep 3600'
Check the logs for the busybox service, and then verify that it shows the Nginx welcome page:
docker service logs encrypted-overlay-busybox

Securing the Docker Daemon HTTP Socket

00:10:52

Lesson Description:

By default, the Docker daemon is only accessible from the machine where it is running. However, we can expose the Docker socket securely, enabling us to interact with Docker remotely. In this lesson, we'll cover how to securely set up mutual client/server certificate authentication so that we can connect to a remote Docker daemon. Relevant Documentation https://docs.docker.com/engine/security/https/ Lesson Reference Follow along with this lesson using two playground servers: Image - Ubuntu 18.04 Bionic Beaver LTS, Size - Micro. Generate a certificate authority and server certificates for your Docker server. Make sure you replace <server private IP> with the actual private IP of your server. The structure of this should resemble the following:

openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/C=US/ST=Texas/L=Keller/O=Linux Academy/OU=Content/CN=$HOSTNAME"
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOSTNAME" -sha256 -new -key server-key.pem -out server.csr
echo subjectAltName = DNS:$HOSTNAME,IP:<server private IP>,IP:127.0.0.1 >> extfile.cnf
echo extendedKeyUsage = serverAuth >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -extfile extfile.cnf
Then generate the client certificates:
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem -extfile extfile-client.cnf
Set appropriate permissions on the certificate files:
chmod -v 0400 ca-key.pem key.pem server-key.pem
chmod -v 0444 ca.pem server-cert.pem cert.pem
Configure your Docker host to use tlsverify mode with the certificates that were created earlier:
sudo vi /etc/docker/daemon.json
{
  "tlsverify": true,
  "tlscacert": "/home/cloud_user/ca.pem",
  "tlscert": "/home/cloud_user/server-cert.pem",
  "tlskey": "/home/cloud_user/server-key.pem"
}
sudo vi /lib/systemd/system/docker.service
Look for the line that begins with ExecStart and change the -H so that it looks like this:
ExecStart=/usr/bin/dockerd -H=0.0.0.0:2376 --containerd=/run/containerd/containerd.sock
sudo systemctl daemon-reload
sudo systemctl restart docker
Copy the CA cert and client certificate files to the client machine:
scp ca.pem cert.pem key.pem cloud_user@<client private IP>:/home/cloud_user
On the client machine, configure the client to securely connect to the remote Docker daemon:
mkdir -pv ~/.docker
cp -v {ca,cert,key}.pem ~/.docker
export DOCKER_HOST=tcp://<docker server private IP>:2376 DOCKER_TLS_VERIFY=1
Test the connection:
docker version
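The tlsverify handshake boils down to standard certificate verification: the daemon accepts a client only if its certificate chains to the configured CA. A self-contained sketch of that trust check with throwaway keys (no Docker required; the names and validity periods here are arbitrary):

```shell
# Generate a throwaway CA, then a client cert signed by it, and confirm
# the cert verifies against the CA -- the same chain-of-trust check the
# daemon performs on client certificates in tlsverify mode.
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 1 -key ca-key.pem -subj "/CN=test-ca" -out ca.pem
openssl genrsa -out key.pem 2048
openssl req -new -key key.pem -subj "/CN=client" -out client.csr
openssl x509 -req -days 1 -in client.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out cert.pem
openssl verify -CAfile ca.pem cert.pem   # prints: cert.pem: OK
```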

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Docker Enterprise Edition

Using Docker Enterprise Edition

Installing Docker EE

00:08:09

Lesson Description:

Docker Enterprise Edition (EE) offers many additional features on top of the standard feature set included with Community Edition. In this lesson, we will briefly discuss Docker EE, and we will demonstrate how to install Docker EE on a Linux system. Relevant Documentation https://docs.docker.com/install/linux/docker-ee/ubuntu/ Lesson Reference Provision a playground server with the Ubuntu 18.04 Bionic Beaver LTS image and a size of Large. You will need a Docker Hub account. You can create one at https://hub.docker.com. Start a Docker EE free trial: https://hub.docker.com/editions/enterprise/docker-ee-trial. Get your unique Docker EE URL from your trial: Go to https://hub.docker.com/my-content. Click Setup. Copy your URL. On the playground server, set up some temporary environment variables. Enter your unique Docker EE URL for the DOCKER_EE_URL variable.

DOCKER_EE_URL=<your docker ee url>
DOCKER_EE_VERSION=18.09
Install required packages.
sudo apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  software-properties-common
Add the GPG and apt repository using your Docker EE URL.
curl -fsSL "${DOCKER_EE_URL}/ubuntu/gpg" | sudo apt-key add -

sudo add-apt-repository \
  "deb [arch=$(dpkg --print-architecture)] $DOCKER_EE_URL/ubuntu \
  $(lsb_release -cs) \
  stable-$DOCKER_EE_VERSION"
Install Docker EE.
sudo apt-get update

sudo apt-get install -y docker-ee=5:18.09.4~3-0~ubuntu-bionic
Give cloud_user access to use Docker.
sudo usermod -a -G docker cloud_user
Log out of the server and log back in again, then test your Docker EE installation.
docker version

Setting up Universal Control Plane (UCP)

00:07:55

Lesson Description:

Docker Universal Control Plane (UCP) provides a robust interface for managing your Docker applications in a cluster. In this lesson, we will introduce UCP and demonstrate the process of setting up a basic UCP cluster with one manager and two worker nodes. Relevant Documentation https://docs.docker.com/ee/ucp/admin/install/ https://docs.docker.com/ee/ucp/ Lesson Reference To complete this lesson, you will need three playground servers. All should have Docker EE installed. UCP Manager: Ubuntu 18.04 Bionic Beaver LTS, Size: Large. UCP Worker with DTR: Ubuntu 18.04 Bionic Beaver LTS, Size: Medium. UCP Worker: Ubuntu 18.04 Bionic Beaver LTS, Size: Small. On the UCP Manager server, pull the UCP image.

docker image pull docker/ucp:3.1.5
Set a temporary environment variable containing the private IP address of your UCP manager server.
PRIVATE_IP=<your UCP manager private IP>
Run the UCP installer.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.5 install \
  --host-address $PRIVATE_IP \
  --interactive
When prompted, create some admin credentials. You will also be prompted for Additional Aliases; when this happens, just hit Enter to accept the default value. Once the installation is complete, access UCP in a web browser using the UCP manager's public IP address: https://<your UCP manager public IP>. Log in to the UCP manager using the credentials you created earlier.

You will be prompted to provide a license:
1. Open another tab and go to https://hub.docker.com/my-content.
2. Click Setup.
3. Download the license using the License Key link.
4. Go back to the UCP tab and click Upload License. Select the license file you just downloaded and upload it.

In your browser, on the UCP dashboard, click Shared Resources, Nodes, then Add Node. Make sure the Node type is Linux and the Node role is Worker. Then, copy the docker swarm join command that appears on the page. Run the docker swarm join command obtained from the UCP manager on both worker nodes. If you go to Shared Resources, then Nodes on the UCP dashboard in your browser, you should see both worker nodes appear in the list.
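The join command UCP generates follows the standard swarm join pattern: a token plus the manager's address on the swarm management port (2377). A sketch of its shape, with a placeholder token and IP (copy the real command from the UCP dashboard rather than constructing it by hand):

```shell
# Token and manager IP below are placeholders for illustration only.
TOKEN="SWMTKN-1-exampletoken-exampledigest"
MANAGER_IP="10.0.1.10"
JOIN_CMD="docker swarm join --token $TOKEN $MANAGER_IP:2377"
echo "$JOIN_CMD"
```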

Security in UCP

00:07:53

Lesson Description:

Universal Control Plane offers a robust role-based access control system, allowing you complete control over what users and teams can do within your UCP cluster. In this lesson, we will discuss some of the basic concepts behind role-based access control in UCP. We will also demonstrate how to manage permissions within UCP.

Relevant Documentation
https://docs.docker.com/datacenter/ucp/2.2/guides/access-control/permission-levels/#roles
https://docs.docker.com/datacenter/ucp/2.2/guides/access-control/
https://docs.docker.com/datacenter/ucp/2.2/guides/admin/configure/external-auth/
https://docs.docker.com/ee/ucp/user-access/cli/

Setting Up Docker Trusted Registry (DTR)

00:05:02

Lesson Description:

Docker Trusted Registry (DTR) is an enterprise-ready private registry which includes additional features that are not available with a free private registry. In this lesson, we will introduce DTR. We will also demonstrate how to set up a Docker Trusted Registry within your UCP cluster.

Relevant Documentation
https://docs.docker.com/ee/dtr/admin/install/
https://docs.docker.com/ee/dtr/

Lesson Reference
In the Universal Control Plane interface, go to admin, Admin Settings, then Docker Trusted Registry. Under UCP Node, select the worker node where you want to install DTR. Check the checkbox labeled Disable TLS verification for UCP. Copy the command provided on the page, then use a text editor to change the --ucp-url to specify the private IP of the UCP Manager server, not the public IP. Run the modified command on the worker node where you want to install DTR. The command should look like this:

docker run -it --rm docker/dtr install \
  --ucp-node <DTR node hostname> \
  --ucp-username admin \
  --ucp-url https://<UCP Manager private IP> \
  --ucp-insecure-tls
When prompted for a password, enter your UCP admin password. Once the installation is complete, access DTR in your browser at https://<DTR server public IP>. Log in using your UCP admin credentials.

Sizing Requirements for Docker, UCP, and DTR

00:01:55

Lesson Description:

Docker Enterprise Edition components such as Universal Control Plane and Docker Trusted Registry require sufficient resources in order to perform well in a production scenario. In this lesson, we will briefly discuss the published sizing requirements for these components, which you may need to be aware of for the Docker Certified Associate exam.

Relevant Documentation
https://docs.docker.com/ee/ucp/admin/install/system-requirements/
https://docs.docker.com/ee/dtr/admin/install/system-requirements/
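Since the published minimums are simple CPU and memory thresholds, they are easy to check in a script before installing. A minimal sketch of such a check; the threshold values here are placeholders, not Docker's official numbers (see the linked system-requirements pages for those):

```shell
# check_node CPUS MEM_GB MIN_CPUS MIN_MEM_GB -> prints "ok" or "insufficient".
# The minimums you pass in are placeholders; use Docker's documented values.
check_node() {
  if [ "$1" -ge "$3" ] && [ "$2" -ge "$4" ]; then
    echo "ok"
  else
    echo "insufficient"
  fi
}

check_node 4 16 2 8   # a 4-CPU, 16 GB node checked against a 2-CPU / 8 GB minimum
check_node 1 2 2 8    # an undersized node checked against the same minimum
```

On a real node, you could feed in live values, e.g. `check_node "$(nproc)" "$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)" 2 8`.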

Configuring Backups for UCP and DTR

00:12:52

Lesson Description:

Backups are an important task when maintaining any system, including a UCP cluster. In this lesson, we will demonstrate how to back up and restore both Universal Control Plane and Docker Trusted Registry.

Relevant Documentation
https://docs.docker.com/engine/swarm/admin_guide/#back-up-the-swarm
https://docs.docker.com/datacenter/ucp/2.2/guides/admin/backups-and-disaster-recovery/
https://docs.docker.com/datacenter/dtr/2.3/guides/admin/backups-and-disaster-recovery/

Lesson Reference
Back up UCP
On the UCP server, get your UCP instance ID.

docker container run --rm \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.5 \
  id
Create an encrypted backup. Enter your UCP instance ID from the previous command for the --id flag.
docker container run \
  --log-driver none --rm \
  --interactive \
  --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.5 backup \
  --passphrase "secretsecret" \
  --id <Your UCP instance ID> > /home/cloud_user/ucp-backup.tar
List the contents of the backup file.
gpg --decrypt /home/cloud_user/ucp-backup.tar | tar --list
Restore UCP from Backup
First, uninstall UCP on the UCP manager server.
docker container run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name ucp \
  docker/ucp:3.1.5 uninstall-ucp --interactive
Restore UCP from the backup.
docker container run --rm -i --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.5 restore --passphrase "secretsecret" < /home/cloud_user/ucp-backup.tar
Back up DTR
On the DTR server, get the DTR replica ID.
docker volume ls
Look for a volume name that begins with dtr-registry-. The string of letters and numbers at the end of this volume name is your DTR replica ID. Back up the registry images.
sudo tar -zvcf dtr-backup-images.tar \
  $(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-<replica-id>))
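If you want to script the extraction of the replica ID rather than reading it off by eye, it is just the volume name with the fixed dtr-registry- prefix stripped. A small sketch using shell parameter expansion (the volume name below is hypothetical):

```shell
# Hypothetical volume name as it would appear in `docker volume ls` output.
volume_name="dtr-registry-a1b2c3d4e5f6"
# ${var#pattern} strips the shortest matching prefix, leaving the replica ID.
replica_id="${volume_name#dtr-registry-}"
echo "$replica_id"
```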
Back up DTR metadata.
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run --log-driver none -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  docker/dtr:2.6.6 backup \
  --ucp-url https://<UCP Manager Private IP> \
  --ucp-insecure-tls \
  --ucp-username admin \
  --existing-replica-id <replica-id> > dtr-backup-metadata.tar
Restore DTR Backup
Stop the existing DTR replica.
docker run -it --rm \
  docker/dtr:2.6.6 destroy \
  --ucp-insecure-tls \
  --ucp-username admin \
  --ucp-url https://<UCP Manager Private IP>
Restore images.
sudo tar -xzf dtr-backup-images.tar -C /var/lib/docker/volumes
Restore DTR metadata.
read -sp 'ucp password: ' UCP_PASSWORD; \
docker run -i --rm \
  --env UCP_PASSWORD=$UCP_PASSWORD \
  docker/dtr:2.6.6 restore \
  --dtr-use-default-storage \
  --ucp-url https://<UCP Manager Private IP> \
  --ucp-insecure-tls \
  --ucp-username admin \
  --ucp-node <hostname> \
  --replica-id <replica-id> \
  --dtr-external-url <dtr-external-url> < dtr-backup-metadata.tar

DTR Security Features

00:06:32

Lesson Description:

There are a few special features of Docker Trusted Registry that you may need to be aware of for the Docker Certified Associate exam. In this lesson, we will briefly discuss security scanning and tag immutability in the context of DTR. We will push an image to DTR and explore these features within the DTR web interface.

Relevant Documentation
https://docs.docker.com/datacenter/dtr/2.4/guides/admin/configure/set-up-vulnerability-scans/
https://docs.docker.com/ee/dtr/user/manage-images/prevent-tags-from-being-overwritten/

Lesson Reference
Turn on vulnerability scanning. Choose Online under Select a method for installation and updates. It will take some time to update the vulnerability database and perform scans. Trust the DTR certificate and authenticate the local daemon against DTR.

curl -k https://<DTR server private IP>:443/ca > dtr.crt
sudo mkdir -p /etc/docker/certs.d/<DTR server private IP>
sudo cp dtr.crt /etc/docker/certs.d/<DTR server private IP>/
docker login <DTR server private IP>
Pull an image, re-tag it, and push it to DTR.
docker pull alpine:latest
docker tag alpine:latest <DTR server private IP>/admin/alpine:latest
docker push <DTR server private IP>/admin/alpine:latest
Within DTR, you can now locate your new tag and perform a vulnerability scan.
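The tag pushed above follows the standard registry reference format: the DTR hostname, a namespace (your DTR user or organization), a repository name, and a tag. A sketch of how the parts compose, with placeholder values:

```shell
# All values below are placeholders for illustration.
registry="10.0.1.20"   # DTR server private IP (or hostname)
namespace="admin"      # DTR user or organization
repository="alpine"
tag="latest"
image_ref="${registry}/${namespace}/${repository}:${tag}"
echo "$image_ref"
```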

Managing Certificates with UCP and DTR

00:03:27

Lesson Description:

Universal Control Plane and Docker Trusted Registry have some built-in features for managing different kinds of certificates via the UI. For example, in both UCP and DTR, you can provide your own external certificates via the UI to replace the self-signed certificates generated during the installation process. You can also download client bundles in UCP: archives that contain client certificates and configuration scripts, making it easy to set up your local environment to authenticate and interact with UCP via the command line. In this lesson, we will briefly introduce these concepts and demonstrate how to access these features via the UCP and DTR web interfaces.

Relevant Documentation
https://docs.docker.com/ee/ucp/admin/configure/use-your-own-tls-certificates/#configure-ucp-to-use-your-own-tls-certificates-and-keys
https://docs.docker.com/ee/dtr/admin/configure/use-your-own-tls-certificates/
https://docs.docker.com/ee/ucp/user-access/cli/

Lesson Reference
To upload your own certificates in UCP, go to admin, Admin Settings, then Certificates. In DTR, click System, General, then under Domain & Proxies select Show TLS settings. To generate client bundles in UCP, go to admin, My Profile, Client Bundles.
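A client bundle works by pointing your local Docker CLI at the UCP controller: its env.sh script exports a handful of environment variables referencing the bundled client certificates. Roughly, sourcing it does something like the following sketch (the endpoint value is a placeholder, and the exact script contents may differ between UCP versions):

```shell
# Approximation of what a client bundle's env.sh exports; values are placeholders.
export DOCKER_HOST="tcp://203.0.113.10:443"   # UCP controller endpoint
export DOCKER_CERT_PATH="$PWD"                # directory holding the bundle's certs
export DOCKER_TLS_VERIFY=1                    # require TLS verification
echo "$DOCKER_HOST"
# After sourcing the real env.sh, commands such as `docker node ls`
# are sent to the UCP cluster instead of the local daemon.
```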

Course Conclusion

Wrap-Up

How to Prepare for the Exam

00:03:29

Lesson Description:

If you are planning to earn your Docker Certified Associate certification, you'll need to spend some time practicing for the certification exam. In this video, we'll briefly discuss the overall format of the exam. We'll also dive into some general study tips for preparing for the exam. Remember to read through all of the exam procedures on the official exam site: https://success.docker.com/certification

What's Next After Certification?

00:02:02

Lesson Description:

We have reached the end of the Docker Certified Associate course, but there is still plenty of additional content available at Linux Academy to explore! In this video, we'll introduce you to a few of the topics and courses that might be of interest after the completion of the Docker Certified Associate course.

Get Recognized

00:01:01

Lesson Description:

Remember that you can receive recognition for earning your Docker Certified Associate certification!

Practice Exam - Docker Certified Associate

Docker Certified Associate (DCA) - Practice Exam

01:30:00