Docker – Deep Dive

Course

Intro Video

Photo of Travis Thomsen

Travis Thomsen

Course Development Director in Content

I have over 17 years of experience in all phases of the software development life cycle, which includes software analysis, design, development, testing, implementation, debugging, maintenance and documentation. I am passionate about learning new technologies, methodologies, languages and automation.

Length

13:18:41

Difficulty

Intermediate

Course Details

Are you new to Docker or need a refresher on the current version? If so, then this course is for you!

This course will cover Docker 18.09.4, and you will learn:

The basics of how Docker works
How to install the Docker Community Edition
How to manage images, containers, networks, and volumes
Build images using a Dockerfile
Tag images and push them to Docker Hub
Use Docker Compose to deploy microservices to Docker
Run Docker in Swarm Mode for a more production-ready environment
Use Secrets to protect sensitive data like passwords

This course has a number of hands-on labs that will put your Docker knowledge to the test!

Interactive Diagram: https://interactive.linuxacademy.com/diagrams/DockerDeepDive.html

Syllabus

Introduction

Introduction

About This Course

00:02:27

Lesson Description:

Here we'll see who this course is geared toward, and the material that it will cover.

About the Instructor

00:01:16

Lesson Description:

Before we get started on the course, let's learn a little about who is teaching it!

Introduction to Docker

00:04:33

Lesson Description:

Introduction to Docker

What is Docker? "Docker" can refer to three things:
Docker, Inc., the company
Docker, the container runtime and orchestration engine
Docker, the open-source project (Moby)

The Company

Docker, Inc.:
Based in San Francisco
Founded by Solomon Hykes
Started as a PaaS provider called dotCloud
dotCloud leveraged Linux containers
Their internal tool used to manage containers was nicknamed Docker
In 2013 dotCloud was rebranded as Docker

The Runtime and Orchestration Engine

The Docker runtime and orchestration engine:
When most people say "Docker", they are referring to the Docker Engine
Two main editions: Enterprise Edition (EE) and Community Edition (CE)
Both are released quarterly:
CE is supported for 4 months
EE is supported for 12 months

The Open-Source Project

Moby:
The upstream project of Docker
Breaks Docker down into more modular components
Code is available on GitHub: https://github.com/moby/moby

Why Use Docker?

Docker use cases:
Dev/Prod parity:
Dev and Production environments are the same
Bugs in Production can be replicated in Development
Simplifying Configuration:
Lets you put your environment and configuration into code and deploy it
Allows the same Docker configuration to be used in a variety of environments
Decouples infrastructure requirements from the application environment
Code Pipeline Management:
Build standards and repeatable processes
Developer Productivity
App Isolation
Server Consolidation
Debugging Capabilities
Multi-tenancy

Setting Up Your Environment

Using Cloud Playground

00:02:38

Lesson Description:

In this video, we'll walk you through using Cloud Playground to create the server where we will be installing Docker.

Installing Docker

00:01:57

Lesson Description:

Installing Docker

In this lesson, we'll install the latest version of Docker CE. The commands used throughout this video are below.

Prerequisites

Uninstall old versions:
sudo yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

Install Docker CE

Add the utilities needed for Docker:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Set up the stable repository:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install Docker CE:
sudo yum -y install docker-ce

Enable and start Docker:
sudo systemctl start docker && sudo systemctl enable docker

Add cloud_user to the docker group:
sudo usermod -aG docker cloud_user
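
Once the install finishes, it is worth confirming that the daemon is running and that cloud_user can reach it. A minimal check, assuming you log out and back in (or use newgrp) so the new docker group membership takes effect:

# Apply the group change, then confirm the client can talk to the daemon:
newgrp docker
docker version

# Run a throwaway container to verify the engine end to end:
docker container run --rm hello-world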

Docker Under the Hood

Docker Architecture

00:05:25

Lesson Description:

Docker Architecture

In this lesson we will take a high-level look at the Docker architecture.

Architecture Overview

Docker architecture:
Client-server architecture
The client talks to the Docker daemon
The Docker daemon handles building, running, and distributing containers
Both communicate using a REST API, over UNIX sockets or a network interface

The Docker daemon (dockerd):
Listens for Docker API requests and manages Docker objects: images, containers, networks, and volumes

The Docker client (docker):
Is how users interact with Docker
Sends commands to dockerd

Docker registries:
Store Docker images
Public registries such as Docker Hub
Lets you run your own private registry

Docker objects:
Images:
Read-only template with instructions for creating a Docker container
An image can be based on another image
Create your own images
Use images published to a registry
Use a Dockerfile to build images
Containers:
A runnable instance of an image
Connect a container to networks
Attach storage
Create a new image based on its current state
Isolated from other containers and the host machine
Services:
Scale containers across multiple Docker daemons
Docker Swarm
Define the desired state
The service is load-balanced

Docker Swarm:
Multiple Docker daemons (managers and workers)
The daemons all communicate using the Docker API
Supported in Docker 1.12 and higher

The Docker Engine

00:07:06

Lesson Description:

The Docker Engine

In this lesson, we will go over the Docker Engine and its various components.

Under the Hood

The Docker Engine:
Modular in design: batteries included but replaceable
Based on open standards outlined by the Open Container Initiative
The major components: Docker client, Docker daemon, containerd, and runc
The components work together to create and run containers

A Brief History of the Docker Engine

The first release of Docker:
The Docker daemon was a monolithic binary: Docker client, Docker API, container runtime, image builds, and much more
LXC provided namespaces and control groups (cgroups), and was Linux-specific

Refactoring of the Docker Engine

LXC was later replaced with libcontainer:
Docker 0.9
Platform agnostic

Issues with the monolithic Docker daemon:
Harder to innovate
Slow
Not what the ecosystem wanted

Docker became more modular:
Smaller, more specialized tools
Pluggable architecture

Open Container Initiative:
Image spec
Container runtime spec
Version 1.0 released in 2017
Docker Inc. heavily contributed
Docker 1.11 (2016) used the specification as much as possible

runc:
Implementation of the OCI container runtime spec
Lightweight CLI wrapper for libcontainer
Creates containers

containerd:
Manages the container lifecycle: start, stop, pause, delete
Image management
Part of the 1.11 release

shim:
Implementation of daemonless containers
containerd forks an instance of runc for each new container
The runc process exits after the container is created
The shim process becomes the container's parent
Responsible for STDIN and STDOUT, and for reporting the exit status to the Docker daemon

Running Containers

docker container run -it --name <NAME> <IMAGE>:<TAG>

Creating a container:
The CLI is used for executing a command
The Docker client builds the appropriate API payload
POSTs to the correct API endpoint
The Docker daemon receives the instructions
The Docker daemon calls containerd to start a new container
The Docker daemon uses gRPC (a CRUD-style API)
containerd creates an OCI bundle from the Docker image
containerd tells runc to create a container using the OCI bundle
runc interfaces with the OS kernel to get the constructs needed to create a container (namespaces, cgroups, etc.)
The container process starts as a child process
runc exits once the container starts
The process is complete, and the container is running
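
To see this component chain on a running host, you can start a container and look at the process list. A rough check; the exact process names (for example containerd-shim) can vary slightly between Docker versions, so treat the grep pattern as an assumption:

# Start a detached container, then look for the engine components in the process list.
docker container run -d --name engine-demo nginx
ps -ef | grep -E 'dockerd|containerd|shim' | grep -v grep

# Clean up the demo container when finished.
docker container rm -f engine-demo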

Docker Images and Containers

00:03:00

Lesson Description:

Docker Images and Containers

What are Docker images?

Docker images:
Files comprised of multiple layers
Used to execute code in a Docker container
Built from the instructions in a Dockerfile
Use images to create an instance of a container

Docker images and layers:
Images are made of multiple layers.
Each layer represents an instruction in the image's Dockerfile.
Each layer, except the very last one, is read-only.
Each layer is only a set of differences from the layer before it.
Layers are stacked on top of each other.
Containers add a new writable layer on top of the underlying layers.
All changes made to a running container are made to that container layer.

What are containers?

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Containers and layers:
Top writable layer
All changes are stored in the writable layer
The writable layer is deleted when the container is deleted
The image remains unchanged
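
One quick way to see this layer structure is to look at an image's history and at a container's filesystem changes. A small sketch using the nginx image (any image works):

# Each line of the history corresponds to a layer created by a Dockerfile instruction.
docker image pull nginx
docker image history nginx

# Start a container, change a file, and diff it against the image:
docker container run -d --name layer-demo nginx
docker container exec layer-demo touch /tmp/changed.txt
docker container diff layer-demo    # shows only the writable-layer changes

docker container rm -f layer-demo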

Docker Hub

00:01:53

Lesson Description:

Docker Hub

In this lesson, we will take a quick look at Docker Hub and its available features.

What is Docker Hub?

Docker Hub:
The public Docker registry
Provided by Docker
Features:
Repositories
Teams and Organizations
Official Images
Publisher Images
Builds
Webhooks

https://hub.docker.com/signup

Learning the Basics of Docker

Docker Basics

Docker Commands

00:12:16

Lesson Description:

Docker Commands

In this lesson we will start working with Docker commands. We'll see the management commands, along with the ones for managing images and containers.

Get a list of all of the Docker commands:
docker -h

Management commands were introduced in Docker Engine v1.13.

Management Commands:
builder    Manage builds
config     Manage Docker configs
container  Manage containers
engine     Manage the docker engine
image      Manage images
network    Manage networks
node       Manage Swarm nodes
plugin     Manage plugins
secret     Manage Docker secrets
service    Manage services
stack      Manage Docker stacks
swarm      Manage Swarm
system     Manage Docker
trust      Manage trust on Docker images
volume     Manage volumes

docker image:
build      Build an image from a Dockerfile
history    Show the history of an image
import     Import the contents from a tarball to create a filesystem image
inspect    Display detailed information on one or more images
load       Load an image from a tar file or STDIN
ls         List images
prune      Remove unused images
pull       Pull an image or a repository from a registry
push       Push an image or a repository to a registry
rm         Remove one or more images
save       Save one or more images to a tar file (streamed to STDOUT by default)
tag        Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

docker container:
attach     Attach local standard input, output, and error streams to a running container
commit     Create a new image from a container's changes
cp         Copy files/folders between a container and the local filesystem
create     Create a new container
diff       Inspect changes to files or directories on a container's filesystem
exec       Run a command in a running container
export     Export a container's filesystem as a tar archive
inspect    Display detailed information on one or more containers
kill       Kill one or more running containers
logs       Fetch the logs of a container
ls         List containers
pause      Pause all processes within one or more containers
port       List port mappings or a specific mapping for the container
prune      Remove all stopped containers
rename     Rename a container
restart    Restart one or more containers
rm         Remove one or more containers
run        Run a command in a new container
start      Start one or more stopped containers
stats      Display a live stream of container(s) resource usage statistics
stop       Stop one or more running containers
top        Display the running processes of a container
unpause    Unpause all processes within one or more containers
update     Update configuration of one or more containers
wait       Block until one or more containers stop, then print their exit codes

Creating Containers

00:04:50

Lesson Description:

Creating Containers

In this lesson, we will take a deeper look into creating containers by exploring a few of the flags that alter a container's behavior when it is created.

docker container run flags:
--help                Print usage
--rm                  Automatically remove the container when it exits
-d, --detach          Run container in background and print container ID
-i, --interactive     Keep STDIN open even if not attached
--name string         Assign a name to the container
-p, --publish list    Publish a container's port(s) to the host
-t, --tty             Allocate a pseudo-TTY
-v, --volume list     Mount a volume (the bind type of mount)
--mount mount         Attach a filesystem mount to the container
--network string      Connect a container to a network (default "default")

Create a container and attach to it:
docker container run -it busybox

Create a container and run it in the background:
docker container run -d nginx

Create a container that you name and run it in the background:
docker container run -d --name myContainer busybox

Exposing Container Ports

00:04:55

Lesson Description:

Exposing and Publishing Container Ports

Building on what we've already learned, this lesson will focus on exposing ports on a container, as well as how to publish them.

Exposing:
Exposes a port or a range of ports
This does not publish the port
Use --expose [PORT]

docker container run --expose 1234 [IMAGE]

Publishing:
Maps a container's port to a host's port
-p, or --publish, publishes a container's port(s) to the host
-P, or --publish-all, publishes all exposed ports to random ports

docker container run -p [HOST_PORT]:[CONTAINER_PORT] [IMAGE]
docker container run -p [HOST_PORT]:[CONTAINER_PORT]/tcp -p [HOST_PORT]:[CONTAINER_PORT]/udp [IMAGE]
docker container run -P [IMAGE]

List all port mappings or a specific mapping for a container:
docker container port [CONTAINER_NAME]
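
A quick way to tie these flags together is to publish an nginx container and check the resulting mapping. A minimal sketch (host port 8080 and the container name are just example values):

# Publish container port 80 to host port 8080 and verify the mapping.
docker container run -d --name port-demo -p 8080:80 nginx
docker container port port-demo    # e.g. 80/tcp -> 0.0.0.0:8080
curl localhost:8080                # should return the nginx welcome page

docker container rm -f port-demo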

Executing Container Commands

00:04:55

Lesson Description:

Executing Container Commands

In this lesson we'll see three different ways to execute commands on containers.

Executing a command:
In a Dockerfile
During a docker run
Using the exec command

Commands can be:
One-and-done commands
Long-running commands

Start a container with a command:
docker container run [IMAGE] [CMD]

Execute a command on a container:
docker container exec -it [NAME] [CMD]

Example:
docker container run -d -p 8080:80 nginx
docker container ps
docker container exec -it [NAME] /bin/bash
docker container exec -it [NAME] ls /usr/share/nginx/html/

Nginx Dockerfile

Docker Logging

00:05:42

Lesson Description:

Container Logging

In this lesson, you will learn how to view the logs of a container to get vital output of your application. You will also learn about some of the logging best practices of containerized applications.

Create a container using the weather-app image:
docker container run --name weather-app -d -p 80:3000 linuxacademycontent/weather-app

Show information logged by a running container:
docker container logs [NAME]

Show information logged by all containers participating in a service:
docker service logs [SERVICE]

Logs need to be output to STDOUT and STDERR.

Nginx example:
RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log

Debug a failed container deploy:
docker container run -d --name ghost_blog -e database__client=mysql -e database__connection__host=mysql -e database__connection__user=root -e database__connection__password=P4sSw0rd0! -e database__connection__database=ghost -p 8080:2368 ghost:1-alpine

Useful links:
12 Factor Logs
Weather App Code
Ruby Logging
Python Logging

Networking and Storage

Networking

Networking Overview

00:04:26

Lesson Description:

Networking Overview

In this lesson, we will go over the components and theory of how networking functions in Docker.

Docker Networking 101

Docker networking:
Open-source pluggable architecture
Container Network Model (CNM)
libnetwork implements the CNM
Drivers extend the network topologies

Network drivers:
bridge
host
overlay
macvlan
none
Network plugins

Container Network Model

Defines three building blocks:
Sandboxes
Endpoints
Networks
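
You can see several of these drivers on a fresh install by listing the default networks Docker creates. A small sketch (output will vary by host):

# The default networks map to the bridge, host, and none drivers.
docker network ls

# Inspect the default bridge network to see its endpoints and IPAM configuration.
docker network inspect bridge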

Networking Commands

00:06:06

Lesson Description:

Networking Commands Networking Basics ifconfig List all Docker network commands: docker network -h connect Connect a container to a network create Create a network disconnect Disconnect a container from a network inspect Display detailed information on one or more networks ls List networks prune Remove all unused networks rm Remove one or more networks List all Docker networks on the host: docker network ls docker network ls --no-trunc Getting detailed info on a network: docker network inspect [NAME] Creating a network: docker network create br00 Deleting a network: docker network rm [NAME] Remove all unused networks: docker network prune Adding and Removing containers to a network Create a container with no network: docker container run -d --name network-test03 -p 8081:80 nginx Create a new network: docker network create br01 Add the container to the bridge network: docker network connect br01 network-test03 Inspect network-test03 to see the networks: docker container inspect network-test03 Remove network-test03 from br01: docker network disconnect br01 network-test03

Networking Containers

00:10:34

Lesson Description:

Networking Containers In this lesson, we will dig deeper into container networking by supplying our own subnet and gateway when creating a new network. We will then move on to networking two different containers using an internal network. This will allow one container to be publicly accessible while the other one is not. Creating a network and defining a Subnet and Gateway Create a bridge network with a subnet and gateway: docker network create --subnet 10.1.0.0/24 --gateway 10.1.0.1 br02Run ifconfig to view the bridge interface for br02: ifconfigInspect the br02 network: docker network inspect br02Prune all unused networks: docker network pruneCreate a network with an IP range: docker network create --subnet 10.1.0.0/16 --gateway 10.1.0.1 --ip-range=10.1.4.0/24 --driver=bridge --label=host4network br04Inspect the br04 network: docker network inspect br04Create a container using the br04 network: docker container run --name network-test01 -it --network br04 centos /bin/bashInstall Net Tools: yum update -y yum install -y net-toolsGet the IP info for the container: ifconfigGet the gateway info the container: netstat -rnGet the DNS info for the container: cat /etc/resolv.confAssigning IPs to a container: Create a new container and assign an IP to it: docker container run -d --name network-test02 --ip 10.1.4.102 --network br04 nginxGet the IP info for the container: docker container inspect network-test02 | grep IPAddrInspect network-test03 to see that br01 was removed: docker container inspect network-test04Networking two containers Create an internal network: docker network create -d bridge --internal localhostCreate a MySQL container that is connected to localhost: docker container run -d --name test_mysql -e MYSQL_ROOT_PASSWORD=P4sSw0rd0 --network localhost mysql:5.7Create a container that can ping the MySQL container: docker container run -it --name ping-mysql --network bridge centosConnect ping-mysql to the localhost network: docker network connect localhost ping-mysqlRestart and attach to container: docker container start -ia ping-mysqlCreate a container that can't ping the MySQL container: docker container run -it --name cant-ping-mysql centosCreate a Nginx container that is not publicly accessible: docker container run -d --name private-nginx -p 8081:80 --network localhost nginxInspect private-nginx: docker container inspect private-nginx

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

01:00:00

Storage

Storage Overview

00:04:21

Lesson Description:

Storage Overview

In this lesson, we will look at how Docker handles storage for persistent and non-persistent data.

Docker Storage 101

Categories of data storage:
Non-persistent:
Local storage
Data that is ephemeral
Every container has it
Tied to the lifecycle of the container
Persistent:
Volumes
Volumes are decoupled from containers

Non-persistent Data

Non-persistent data:
By default all containers use local storage
Storage locations:
Linux: /var/lib/docker/[STORAGE-DRIVER]/
Windows: C:\ProgramData\Docker\windowsfilter

Storage drivers:
RHEL uses overlay2.
Ubuntu uses overlay2 or aufs.
SUSE uses btrfs.
Windows uses its own.

Persistent Data Using Volumes

Volumes:
Use a volume for persistent data:
Create the volume first, then create your container.
Mounted to a directory in the container
Data is written to the volume
Deleting a container does not delete the volume
Volumes are first-class citizens
Uses the local driver
Third-party drivers:
Block storage
File storage
Object storage
Storage locations:
Linux: /var/lib/docker/volumes/
Windows: C:\ProgramData\Docker\volumes
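
To make the volume storage location concrete, you can create a volume and look at where its data lives on a Linux host. A minimal sketch; the volume name is arbitrary and the paths assume the default local driver:

# Create a volume and write to it from a container.
docker volume create demo-data
docker container run --rm -v demo-data:/data busybox sh -c 'echo hello > /data/hello.txt'

# The data persists under Docker's storage directory even though the container is gone.
sudo ls /var/lib/docker/volumes/demo-data/_data
sudo cat /var/lib/docker/volumes/demo-data/_data/hello.txt

docker volume rm demo-data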

Volume Commands

00:03:23

Lesson Description:

Volume Commands Volumes are the preferred method of maintaining persistent data in Docker. In this lesson, we will begin learning how to use the volume subcommand to list, create, and remove volumes. Volume Basics List all Docker volume commands: docker volume -h create: Create a volume.inspect: Display detailed information on one or more volumes.ls: List volumes.prune: Remove all unused local volumes.rm: Remove one or more volumes. List all volumes on a host: docker volume ls Create two new volumes: docker volume create test-volume1 docker volume create test-volume2 Get the flags available when creating a volume: docker volume create -h Inspecting a volume: docker volume inspect test-volume1 Deleting a volume: docker volume rm test-volume Removing all unused volumes: docker volume prune

Using Bind Mounts

00:06:57

Lesson Description:

Using Bind Mounts Bind mounts have been around since the early days of Docker. They have limited functionality compared to volumes. With bind mount, a file or directory on the host machine is mounted into a container. Volumes use a new directory that is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents. Using the mount flag: mkdir target docker container run -d --name nginx-bind-mount1 --mount type=bind,source="$(pwd)"/target,target=/app nginx docker container ls Bind mounts won't show up when listing volumes: docker volume ls Inspect the container to find the bind mount: docker container inspect nginx-bind-mount1 Create a new file in /app on the container: docker container exec -it nginx-bind-mount1 /bin/bash cd target touch file1.txt ls exit Using the volume flag: docker container run -d --name nginx-bind-mount2 -v "$(pwd)"/target2:/app nginx Create /app/file3.txt in the container: docker container exec -it nginx-bind-mount2 touch /app/file3.txt ls target2 Create an nginx.conf file: mkdir nginx cat << EOF > nginx/nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } EOF Create an Nginx container that creates a bind mount to nginx.conf: docker container run -d --name nginx-bind-mount3 -v "$(pwd)"/nginx/nginx.conf:/etc/nginx/nginx.conf nginx Look at the bind mount by inspecting the container: docker container inspect nginx-bind-mount3

Using Volumes for Persistent Storage

00:06:28

Lesson Description:

Using Volumes for Persistent Storage

In this lesson, we will take a deeper look into using volumes with our Docker containers. Volumes are the preferred method for maintaining persistent data:
Volumes are easier to back up or migrate than bind mounts.
You can manage volumes using Docker CLI commands or the Docker API.
They work on both Linux and Windows containers.
Volumes can be more safely shared among multiple containers.

Volume drivers allow for:
Storing volumes on remote hosts or cloud providers
Encrypting the contents of volumes
Adding other functionality

New volumes can have their content pre-populated by a container.

Create a new volume for an Nginx container:
docker volume create html-volume

Create a container using the --mount flag with that volume:
docker container run -d --name nginx-volume1 --mount type=volume,source=html-volume,target=/usr/share/nginx/html/ nginx

Inspect the volume:
docker volume inspect html-volume

List the contents of html-volume:
sudo ls /var/lib/docker/volumes/html-volume/_data

Create a container using the -v (volume) flag:
docker container run -d --name nginx-volume2 -v html-volume:/usr/share/nginx/html/ nginx

Edit index.html:
sudo vi /var/lib/docker/volumes/html-volume/_data/index.html

Inspect nginx-volume2 to get the private IP:
docker container inspect nginx-volume2

Log in to nginx-volume1 and go to the html directory:
docker container exec -it nginx-volume1 /bin/bash
cd /usr/share/nginx/html
cat index.html

Install Vim:
apt-get update -y
apt-get install vim -y

Using a read-only volume:
docker run -d --name=nginx-volume3 --mount source=html-volume,target=/usr/share/nginx/html,readonly nginx

Log in to nginx-volume3 and go to the html directory:
docker container exec -it nginx-volume3 /bin/bash
cd /usr/share/nginx/html
cat index.html

Install Vim:
apt-get update -y
apt-get install vim -y

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Docker Images

Dockerfile

Introduction to the Dockerfile

00:04:27

Lesson Description:

Introduction to the Dockerfile

In this lesson we will start learning about building images using a Dockerfile.

What is a Dockerfile? Dockerfiles are instructions. They contain all of the commands used to build an image.
Docker images consist of read-only layers.
Each layer represents a Dockerfile instruction.
Layers are stacked.
Each layer is a result of the changes from the previous layer.
Images are built using the docker image build command.

Dockerfile Layers

Dockerfile:
FROM ubuntu:15.04
COPY . /app
RUN make /app
CMD python /app/app.py

FROM creates a layer from the ubuntu:15.04 Docker image.
COPY adds files from your Docker client's current directory.
RUN builds your application with make.
CMD specifies what command to run within the container.

Best Practices

General guidelines:
Keep containers as ephemeral as possible.
Follow Principle 6 of the 12 Factor App.
Avoid including unnecessary files.
Use .dockerignore.
Use multi-stage builds.
Don't install unnecessary packages.
Decouple applications.
Minimize the number of layers.
Sort multi-line arguments.
Leverage the build cache.
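
To see the one-layer-per-instruction idea in practice, you can build a tiny image and view its history. A minimal sketch; the alpine base image, the tag layer-demo:v1, and the echo commands are just illustrative choices:

# Build a throwaway image from an inline Dockerfile.
docker image build -t layer-demo:v1 -<<EOF
FROM alpine:3.9
RUN echo "layer one" > /one.txt
RUN echo "layer two" > /two.txt
CMD ["cat", "/one.txt", "/two.txt"]
EOF

# Each RUN/CMD instruction shows up as its own layer in the history.
docker image history layer-demo:v1
docker container run --rm layer-demo:v1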

Working with Instructions

00:06:49

Lesson Description:

Working with Instructions FROM: Initializes a new build stage and sets the Base Image RUN: Will execute any commands in a new layer CMD: Provides a default for an executing container. There can only be one CMD instruction in a Dockerfile LABEL: Adds metadata to an image EXPOSE: Informs Docker that the container listens on the specified network ports at runtime ENV: Sets the environment variable <key> to the value <value> ADD: Copies new files, directories or remote file URLs from <src> and adds them to the filesystem of the image at the path <dest>. COPY: Copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>. ENTRYPOINT: Allows for configuring a container that will run as an executable VOLUME: Creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers USER: Sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD, and ENTRYPOINT instructions that follow it in the Dockerfile WORKDIR: Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile ARG: Defines a variable that users can pass at build-time to the builder with the docker build command, using the --build-arg <varname>=<value> flag ONBUILD: Adds a trigger instruction to the image that will be executed at a later time, when the image is used as the base for another build HEALTHCHECK: Tells Docker how to test a container to check that it is still working SHELL: Allows the default shell used for the shell form of commands to be overridden To set up the environment: sudo yum install git -y mkdir docker_images cd docker_images mkdir weather-app cd weather-app git clone https://github.com/linuxacademy/content-weather-app.git src Create the Dockerfile: vi Dockerfile Dockerfile contents: # Create an image for the weather-app FROM node LABEL org.label-schema.version=v1.1 RUN mkdir -p /var/node ADD src/ /var/node/ WORKDIR /var/node RUN npm install EXPOSE 3000 CMD ./bin/www Build the weather-app image: docker image build -t linuxacademy/weather-app:v1 . List the images: docker image ls Create the weather-app container: docker container run -d --name weather-app1 -p 8081:3000 linuxacademy/weather-app:v1 List all running containers: docker container ls ADD or Copy

Environment Variables

00:07:56

Lesson Description:

Environment Variables

To make new software easier to run, you can use ENV to update the PATH environment variable for the software that your container installs.

Setup your environment:
cd docker_images
mkdir env
cd env

Use the --env flag to pass an environment variable when starting a container:
--env [KEY]=[VALUE]

Use the ENV instruction in the Dockerfile:
ENV [KEY]=[VALUE]
ENV [KEY] [VALUE]

Clone the weather-app:
git clone https://github.com/linuxacademy/content-weather-app.git src

Create the Dockerfile:
vi Dockerfile

Dockerfile contents:
# Create an image for the weather-app
FROM node
LABEL org.label-schema.version=v1.1
ENV NODE_ENV="development"
ENV PORT 3000
RUN mkdir -p /var/node
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install
EXPOSE $PORT
CMD ./bin/www

Build the weather-app image:
docker image build -t linuxacademy/weather-app:v2 .

Inspect the image to see the environment variables:
docker image inspect linuxacademy/weather-app:v2

Deploy the weather-dev application:
docker container run -d --name weather-dev -p 8082:3001 --env PORT=3001 linuxacademy/weather-app:v2

Inspect the development container to see the environment variables:
docker container inspect weather-dev

Deploy the weather-app to production:
docker container run -d --name weather-app2 -p 8083:3001 --env PORT=3001 --env NODE_ENV=production linuxacademy/weather-app:v2

Inspect the production container to see the environment variables:
docker container inspect weather-app2

Get the logs for weather-app2:
docker container logs weather-app2

docker container run -d --name weather-prod -p 8084:3000 --env NODE_ENV=production linuxacademy/weather-app:v2

Build Arguments

00:04:31

Lesson Description:

Build Arguments

In this lesson, we will explore using build arguments to parameterize an image build.

Use the --build-arg flag when building an image:
--build-arg [NAME]=[VALUE]

Use the ARG instruction in the Dockerfile:
ARG [NAME]=[DEFAULT_VALUE]

Navigate to the args directory:
cd docker_images
mkdir args
cd args

Clone the weather-app:
git clone https://github.com/linuxacademy/content-weather-app.git src

Create the Dockerfile:
vi Dockerfile

Dockerfile:
# Create an image for the weather-app
FROM node
LABEL org.label-schema.version=v1.1
ARG SRC_DIR=/var/node
RUN mkdir -p $SRC_DIR
ADD src/ $SRC_DIR
WORKDIR $SRC_DIR
RUN npm install
EXPOSE 3000
CMD ./bin/www

Build the weather-app image:
docker image build -t linuxacademy/weather-app:v3 --build-arg SRC_DIR=/var/code .

Inspect the image:
docker image inspect linuxacademy/weather-app:v3 | grep WorkingDir

Create the weather-app container:
docker container run -d --name weather-app3 -p 8085:3000 linuxacademy/weather-app:v3

Verify that the container is working by executing curl:
curl localhost:8085

Working with Non-privileged User

00:04:56

Lesson Description:

Working with Non-privileged Users In this lesson, you will learn how to use the USER instruction to create a non-privileged user. Rather than using root, we can use a non-privileged user to configure and run an application. Setup your environment: cd docker_images mkdir non-privileged-user cd non-privileged-user Create the Dockerfile: vi Dockerfile Dockerfile contents: # Creates a CentOS image that uses cloud_user as a non-privileged user FROM centos:latest RUN useradd -ms /bin/bash cloud_user USER cloud_user Build the new image: docker image build -t centos7/nonroot:v1 . Create a container using the new image: docker container run -it --name test-build centos7/nonroot:v1 /bin/bash Connecting as a privileged user: docker container start test-build docker container exec -u 0 -it test-build /bin/bash Set up the environment: cd ~/docker_images mkdir node-non-privileged-user cd node-non-privileged-user Create the Dockerfile: vi Dockerfile Dockerfile contents: # Create an image for the weather-app FROM node LABEL org.label-schema.version=v1.1 RUN useradd -ms /bin/bash node_user USER node_user ADD src/ /home/node_user WORKDIR /home/node_user RUN npm install EXPOSE 3000 CMD ./bin/www git clone https://github.com/linuxacademy/content-weather-app.git src Build the weather-app image using the non-privileged user node_user: docker image build -t linuxacademy/weather-app-nonroot:v1 . Create a container using the linuxacademy/weather-app-nonroot:v1 image: docker container run -d --name weather-app-nonroot -p 8086:3000 linuxacademy/weather-app-nonroot:v1

Order of Execution

00:02:12

Lesson Description:

Order of Execution

This lesson focuses on the order in which instructions are executed when building an image. Some instructions may have unintended consequences that can cause your build to fail.

Setup your environment:
cd docker_images
mkdir centos-conf
cd centos-conf

Create the Dockerfile:
vi Dockerfile

Dockerfile contents:
# Creates a CentOS image that uses cloud_user as a non-privileged user
FROM centos:latest
RUN mkdir -p ~/new-dir1
RUN useradd -ms /bin/bash cloud_user
USER cloud_user
RUN mkdir -p ~/new-dir2
RUN mkdir -p /etc/myconf
RUN echo "Some config data" >> /etc/myconf/my.conf

Build the new image:
docker image build -t centos7/myconf:v1 .
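
As written, this build should fail on the RUN mkdir -p /etc/myconf step, because it runs after USER cloud_user and a non-privileged user cannot write to /etc. One hedged fix is to reorder the Dockerfile so the root-only steps happen before switching users; the v2 tag below is just an example name:

# Reordered Dockerfile: all root-only steps happen before the USER instruction.
docker image build -t centos7/myconf:v2 -<<EOF
FROM centos:latest
RUN mkdir -p /etc/myconf
RUN echo "Some config data" >> /etc/myconf/my.conf
RUN useradd -ms /bin/bash cloud_user
USER cloud_user
RUN mkdir -p ~/new-dir1
RUN mkdir -p ~/new-dir2
EOF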

Using the Volume Instruction

00:02:12

Lesson Description:

Using the Volume Instruction In this lesson, we will use the VOLUME instruction to automatically create a mount point in a Docker image. When a container is created using this image, a volume will be created and mounted to the specified directory. Set up your environment: cd docker_images mkdir volumes cd volumes Create the Dockerfile: vi Dockerfile Build an Nginx image that uses a volume: FROM nginx:latest VOLUME ["/usr/share/nginx/html/"] Build the new image: docker image build -t linuxacademy/nginx:v1 . Create a new container using the linuxacademy/nginx:v1 image: docker container run -d --name nginx-volume linuxacademy/nginx:v1 Inspect nginx-volume: docker container inspect nginx-volume List the volumes: docker volume ls | grep [VOLUME_NAME] Inspect the volumes: docker volume inspect [VOLUME_NAME]

Entrypoint vs. Command

00:05:22

Lesson Description:

Entrypoint vs. Command

In this lesson, we will begin working with the ENTRYPOINT instruction. Though ENTRYPOINT functions very similarly to CMD, its behavior is very different.
ENTRYPOINT allows us to configure a container that will run as an executable.
We can override all elements specified using CMD.
Using the docker run --entrypoint flag will override the ENTRYPOINT instruction.

Setup your environment:
cd docker_images
mkdir entrypoint
cd entrypoint

Create the Dockerfile:
vi Dockerfile

Dockerfile contents:
# Create an image for the weather-app
FROM node
LABEL org.label-schema.version=v1.1
ENV NODE_ENV="production"
ENV PORT 3001
RUN mkdir -p /var/node
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install
EXPOSE $PORT
ENTRYPOINT ./bin/www

Clone the weather-app source:
git clone https://github.com/linuxacademy/content-weather-app.git src

Build the image:
docker image build -t linuxacademy/weather-app:v4 .

Deploy the weather-app:
docker container run -d --name weather-app4 linuxacademy/weather-app:v4

Inspect weather-app4:
docker container inspect weather-app4 | grep Cmd
docker container inspect weather-app-nonroot
docker container inspect weather-app4

Create the weather-app container:
docker container run -d --name weather-app5 -p 8083:3001 linuxacademy/weather-app:v4 echo "Hello World"

Inspect weather-app5:
docker container inspect weather-app5

Create the volumes for Prometheus:
docker volume create prometheus
docker volume create prometheus_data
sudo chown -R nfsnobody:nfsnobody /var/lib/docker/volumes/prometheus/
sudo chown -R nfsnobody:nfsnobody /var/lib/docker/volumes/prometheus_data/

Create the Prometheus container:
docker run --name prometheus -d -p 8084:9090 -v prometheus:/etc/prometheus -v prometheus_data:/prometheus/data prom/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus/data

Inspect Prometheus:
docker container inspect prometheus

Prometheus Dockerfile
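
The difference is easiest to see when ENTRYPOINT and CMD are combined in exec form: ENTRYPOINT stays fixed as the executable, while CMD only supplies default arguments that are replaced by anything passed to docker run. A small sketch; the busybox base and the entrypoint-demo tag are just example values:

# ENTRYPOINT is the executable; CMD provides default arguments to it.
docker image build -t entrypoint-demo -<<EOF
FROM busybox
ENTRYPOINT ["echo", "greeting:"]
CMD ["hello from CMD"]
EOF

docker container run --rm entrypoint-demo                    # greeting: hello from CMD
docker container run --rm entrypoint-demo "override args"    # greeting: override args
docker container run --rm --entrypoint date entrypoint-demo -u    # --entrypoint swaps the executable; -u replaces CMD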

Using .dockerignore

00:03:01

Lesson Description:

Using .dockerignore In this lesson, we'll create a .dockerignore file, so that we can exclude files we don't want copied over when building an image. Setup your environment: cd docker_images mkdir dockerignore cd dockerignore git clone https://github.com/linuxacademy/content-weather-app.git src cd src git checkout dockerignore cd ../ Create the .dockerignore file: vi .dockerignore Add the following to .dockerignore: # Ignore these files */*.md */.git src/docs/ */tests/ Create the Dockerfile: vi Dockerfile Dockerfile contents: # Create an image for the weather-app FROM node LABEL org.label-schema.version=v1.1 ENV NODE_ENV="production" ENV PORT 3000 RUN mkdir -p /var/node ADD src/ /var/node/ WORKDIR /var/node RUN npm install EXPOSE $PORT ENTRYPOINT ["./bin/www"] Build the image: docker image build -t linuxacademy/weather-app:v5 . Create the weather-app container: docker container run -d --name weather-app-ignore linuxacademy/weather-app:v5 List the contents of /var/node: docker container exec weather-app-ignore ls -la /var/node

Building and Distributing Images

Building Images

00:05:47

Lesson Description:

Building Images

In this lesson, we will learn some alternate ways of building images.

To build one:
docker image build -t <NAME>:<TAG> .

Useful flags:
-f, --file string: The name of the Dockerfile (default is PATH/Dockerfile).
--force-rm: Always remove intermediate containers.
--label list: Set metadata for an image.
--rm: Remove intermediate containers after a successful build (default is true).
--ulimit ulimit: Set ulimit options (default is []).

cd docker_images/weather-app
cp Dockerfile Dockerfile.test
docker image build -t linuxacademy/weather-app:path-example1 -f Dockerfile.test .
docker image build -t linuxacademy/weather-app:path-example2 --label com.linuxacademy.version=v1.8 -f Dockerfile.test .

Building an image by piping the Dockerfile through STDIN:
docker image build -t <NAME>:<TAG> -<<EOF
Build instructions
EOF

Example:
docker image build -t linuxacademy/nginx:stind --rm -<<EOF
FROM nginx:latest
VOLUME ["/usr/share/nginx/html/"]
EOF

Building an image using a URL:
docker image build -t <NAME>:<TAG> <GIT_URL>#<REF>
docker image build -t <NAME>:<TAG> <GIT_URL>#:<DIRECTORY>
docker image build -t <NAME>:<TAG> <GIT_URL>#<REF>:<DIRECTORY>

Example:
docker image build -t linuxacademy/weather-app:github https://github.com/linuxacademy/content-weather-app.git#remote-build

Building an image from a tar file:
docker image build -t <NAME>:<TAG> - < <FILE>.tar.gz

Example:
cd docker_images
mkdir tar_image
cd tar_image
git clone https://github.com/linuxacademy/content-weather-app.git
cd content-weather-app
git checkout remote-build
tar -zcvf weather-app.tar.gz Dockerfile src
docker image build -t linuxacademy/weather-app:from-tar - < weather-app.tar.gz

Using Multi-Stage Builds

00:03:57

Lesson Description:

Using Multi-Stage Builds In this lesson, we will learn how to build smaller images using multi-stage builds. By default, the stages are not namedStages are numbered with integersStarting with 0 for the first FROM instructionName the stage by adding as to the FROM instructionReference the stage name in the COPY instruction Set up your environment: cd docker_images mkdir multi-stage-builds cd multi-stage-builds git clone https://github.com/linuxacademy/content-weather-app.git src Create the Dockerfile: vi Dockerfile Dockerfile contents: # Create an image for the weather-app using multi-stage build FROM node AS build RUN mkdir -p /var/node/ ADD src/ /var/node/ WORKDIR /var/node RUN npm install FROM node:alpine ARG VERSION=V1.1 LABEL org.label-schema.version=$VERSION ENV NODE_ENV="production" COPY --from=build /var/node /var/node WORKDIR /var/node EXPOSE 3000 ENTRYPOINT ["./bin/www"] Build the image: docker image build -t linuxacademy/weather-app:multi-stage-build --rm --build-arg VERSION=1.5 . List images to see the size difference: docker image ls Create the weather-app container: docker container run -d --name multi-stage-build -p 8087:3000 linuxacademy/weather-app:multi-stage-build

Tagging

00:02:07

Lesson Description:

Tagging

In this lesson, we will talk about how to use the tag command, and best practices to keep in mind when tagging.

Add a name and an optional tag with -t or --tag, in the name:tag format:
docker image build -t <name>:<tag>
docker image build --tag <name>:<tag>

List your images:
docker image ls

Use the Git commit hash as the image tag:
git log -1 --pretty=%H

Use docker tag to create a new tagged image:
docker tag <SOURCE_IMAGE>:<TAG> <TARGET_IMAGE>:<TAG>

Get the commit hash:
cd docker_images/weather-app/src
git log -1 --pretty=%H
cd ../

Build the image using the Git hash as the tag:
docker image build -t linuxacademy/weather-app:<GIT_HASH> .

Tag the weather-app as latest using the image tagged with the commit hash:
docker image tag linuxacademy/weather-app:<GIT_HASH> linuxacademy/weather-app:latest

Distributing Images on Docker Hub

00:04:19

Lesson Description:

Distributing Images on Docker Hub In this lesson, we'll walk through how to tag and push an image to Docker Hub. You will need a Docker Hub account. Create a Docker Hub account: https://hub.docker.com/ Docker Push: docker image push <USERNAME>/<IMAGE_NAME>:<TAG> Creating an image for Docker Hub: docker image tag <IMAGE_NAME>:<TAG> <linuxacademy>/<IMAGE_NAME>:<TAG> Set up your environment: cd docker_images mkdir dockerhub cd dockerhub Create the Dockerfile: vi Dockerfile Dockerfile contents: # Create an image for the weather-app using multi-stage build FROM node AS build RUN mkdir -p /var/node/ ADD src/ /var/node/ WORKDIR /var/node RUN npm install FROM node:alpine ARG VERSION=V1.1 LABEL org.label-schema.version=$VERSION ENV NODE_ENV="production" COPY --from=build /var/node /var/node WORKDIR /var/node EXPOSE 3000 ENTRYPOINT ["./bin/www"] Git the weather-app code: git clone https://github.com/linuxacademy/content-weather-app.git src Use the Git commit hash as the image tag: cd src git log -1 --pretty=%H cd ../ Build the image: docker image build -t <USERNAME>/weather-app:<HASH> --build-arg VERSION=1.5 . Tag the image before pushing it to Docker Hub: docker image tag linuxacademy/weather-app:<HASH> <USERNAME>/weather-app:<HASH> Push the image to Docker Hub: docker login docker image push <USERNAME>/weather-app:<HASH> Tag the latest image: docker image tag <USERNAME>/weather-app:<HASH> <USERNAME>/weather-app:latest Push the latest image to Docker Hub: docker login <USERNAME> docker image push <USERNAME>/weather-app:latest

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Managing Images

Image History

00:03:21

Lesson Description:

Image History

In this lesson, we'll see how to get more information about an image by looking at its history.

Show the history of an image:
docker image history <IMAGE>
docker image history --no-trunc <IMAGE>
docker image history --quiet <IMAGE>
docker image history --quiet --no-trunc <IMAGE>

Get the image history for Node:
docker image history node:latest

Get the image history for weather-app:
docker image history rivethead42/weather-app:latest

Get the image history for weather-app:v1 with the --no-trunc flag:
docker image history --no-trunc linuxacademy/weather-app:v1

Save the output of the --no-trunc flag to a file:
docker image history --no-trunc linuxacademy/weather-app:v1 > output.txt

View the contents:
vi output.txt

Use the quiet flag to list only the image IDs:
docker image history --quiet linuxacademy/weather-app:v1

Combine the quiet and --no-trunc flags to list the full image IDs:
docker image history --quiet --no-trunc linuxacademy/weather-app:v1

Saving and Loading Images

00:03:00

Lesson Description:

Saving and Loading Images In this lesson, we will learn how to save an image to a tar file, and see how to load it back in. Save one or more images to a tar file: docker image save <IMAGE> > <FILE>.tar docker image save <IMAGE> -o <FILE>.tar docker image save <IMAGE> --output <FILE>.tar Load an image from a tar file: docker image load < <FILE>.tar docker image load -i <FILE>.tar docker image load --input <FILE>.tar Setup: mkdir output cd output Archive the rivethead42/weather-app:latest image: docker image save rivethead42/weather-app:latest --output weather-app-latest.tar Inspect the tar file: tar tvf weather-app-latest.tar Compress the tar file: gzip weather-app-latest.tar Delete the image: docker image rm [USERNAME]/weather-app:latest Load the weather-app image from a tar file: docker image load --input weather-app-latest.tar.gz docker image ls | grep [USERNAME]/weather-app docker image rm rivethead42/weather-app:latest docker image ls | grep rivethead42/weather-app

Beyond the Docker Basics

Container Management

Inspecting Container Processes

00:03:45

Lesson Description:

Inspecting Container Processes In this lesson, we'll take a look at a few ways we can examine the running processes in a container. Docker Top: docker container top <NAME> Docker Stats: docker container stats <NAME> Create a new CentOS container: docker container run -itd --name container_process centos /bin/bash Execute docker container top: docker container top container_process Attach to container_process: docker container exec -it container_process /bin/bash Attach to the container using attach: docker container attach container_process Restart the container: docker container start container_process Attach to the container_process container: docker container exec -it container_process /bin/bash Run top on the container: top exit Get stats on a container: docker container stats container_process

Having Containers Start Automatically

00:03:16

Lesson Description:

Having Containers Start Automatically

In this lesson, we will look at how to set restart policies for containers, and how that will affect their behavior when the Docker service is restarted.

To configure the restart policy for a container, use the --restart flag:
no: Do not automatically restart the container (the default).
on-failure: Restart the container if it exits due to an error, which manifests as a non-zero exit code.
always: Always restart the container if it stops.
unless-stopped: Similar to always, except that when the container is stopped, it is not restarted even after the Docker daemon restarts.

Automatically restarting a container:
docker container run -d --name <NAME> --restart <RESTART> <IMAGE>

Make sure a container always restarts:
docker container run -d --name always-restart --restart always rivethead42/weather-app:latest

Make sure a container restarts unless it's stopped:
docker container run -d --name unless-stopped --restart unless-stopped rivethead42/weather-app:latest

Stop and restart your Docker service:
sudo systemctl restart docker

List your containers:
docker container ls

Stop the unless-stopped container:
docker container stop unless-stopped

Stop and restart your Docker service:
sudo systemctl restart docker

List your containers:
docker container ls

Stop the always-restart container:
docker container stop always-restart

Stop and restart your Docker service:
sudo systemctl restart docker

List your containers:
docker container ls

Docker Events

00:05:26

Lesson Description:

Docker Events In this lesson, we'll see how to listen for events using the events command. Get real-time events from the server: docker system events docker system events --since '<TIME_PERIOD>' Start a new CentOS container: docker container run -itd --name docker_events centos /bin/bash Listen for events: docker system events Generate Events: docker container exec docker_events /bin/bash docker container attach docker_events docker container start docker_events Filters Events: docker system events --filter <FILTER_NAME>=<FILTER> Filter for container events: docker system events --filter type=container --since '1h' Generate an event: docker container exec docker_events ls / Filter for container events: docker system events --filter type=container --filter event=start --since '1h' List / on docker_events: docker container exec docker_events ls / Filter for attach events: docker system events --filter type=container --filter event=attach Connect to docker_events using /bin/bash: docker container exec -it docker_events /bin/bash Attach to docker_events: docker container attach docker_events Connect to docker_events using /bin/bash: docker container exec -it docker_events /bin/bash Attach to docker_events: docker container attach docker_events Use multiple filters: docker system events --filter type=container --filter event=attach --filter event=die --filter event=stop Start docker_events: docker container start docker_events Attach to docker_events: docker container attach docker_events Documentation: docker events Engine API v1.24

Managing Stopped Container

00:03:12

Lesson Description:

Managing Stopped Containers

In this lesson, we will manage stopped containers by starting, deleting, or pruning them.

Remove one or more containers:
docker container rm <NAME>

List the rm flags:
docker container rm -h

Start one or more stopped containers:
docker container start <NAME>

Remove all stopped containers:
docker container prune

List the IDs of all containers:
docker container ls -a -q

List all stopped containers:
docker container ls -a -f status=exited

List the IDs of stopped containers:
docker container ls -a -q -f status=exited

Get a count of all stopped containers:
docker container ls -a -q -f status=exited | wc -l

Get a listing of our stopped containers:
docker container ls -a -f status=exited | grep prometheus

Start Prometheus:
docker container start prometheus

Find stopped weather-app containers with grep:
docker container ls -a -f status=exited | grep weather-app

Remove the stopped weather-app containers:
docker container rm [CONTAINER_IDS]

Prune all stopped containers:
docker container prune
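
The ID-listing and removal steps above are often combined into a single command. A hedged one-liner; it removes every exited container on the host, so treat it as an example rather than something to run on a shared machine:

# Remove every container whose status is "exited" in one pass.
docker container rm $(docker container ls -a -q -f status=exited)

# docker container prune does the same thing interactively; -f skips the confirmation prompt.
docker container prune -f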

Managing Docker with Portainer

00:08:29

Lesson Description:

Managing Docker with Portainer

In this lesson, we'll install Portainer and use it to manage our Docker host.

Create a volume for Portainer's data:
docker volume create portainer_data

Create the Portainer container:
docker container run -d --name portainer -p 8080:9000 --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer

List the containers:
docker container ls

The listing should include the container deployed from the rivethead42/weather-app:latest image, with NODE_ENV set to production.

Updating Containers with Watchtower

00:05:01

Lesson Description:

Updating Containers with Watchtower In this lesson, we'll see how to use Watchtower to keep a container up-to-date when its image gets updated. Clone Express app: git clone https://github.com/linuxacademy/content-express-demo-app.git watchtower cd watchtower git checkout dockerfile Build the Docker image: docker login -u [USERNAME] docker image build -t rivethead42/my-express . docker image push rivethead42/my-express Create the container: docker container run -d --name watched-app -p 80:3000 --restart always rivethead42/my-express Create Watchtower: docker container run -d --name watchtower --restart always -v /var/run/docker.sock:/var/run/docker.sock v2tec/watchtower -i 15 Add a .dockerignore file: vi .dockerignore .dockerignore contents: Dockerfile .git .gitignore Edit app.js and add a comment: vi app.js app.js contents: //This is a comment // ... Add the file newfile.js: touch newfile.js Rebuild the image: docker image build -t rivethead42/my-express --no-cache . docker image push rivethead42/my-express Check to see if the container was restarted with the new image: docker container ls Verify the changes by attaching to watched-app: docker container exec -it watched-app /bin/bash

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

00:30:00

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

01:00:00

Docker Compose

Installing Docker Compose

00:01:44

Lesson Description:

Installing Docker Compose In this lesson, we will learn about installing Docker Compose and why we should use it. Download the latest version of Docker Compose: sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose Apply executable permissions: sudo chmod +x /usr/local/bin/docker-compose Test Docker Compose: docker-compose --version

Compose Commands

00:03:50

Lesson Description:

Compose Commands

In this lesson, we will start using Compose by creating a compose file. Then we will create and manage the services by using the most commonly used commands:

build: Build or rebuild services
bundle: Generate a Docker bundle from the Compose file
config: Validate and view the Compose file
create: Create services
down: Stop and remove containers, networks, images, and volumes
events: Receive real time events from containers
exec: Execute a command in a running container
help: Get help on a command
images: List images
kill: Kill containers
logs: View output from containers
pause: Pause services
port: Print the public port for a port binding
ps: List containers
pull: Pull service images
push: Push service images
restart: Restart services
rm: Remove stopped containers
run: Run a one-off command
scale: Set number of containers for a service
start: Start services
stop: Stop services
top: Display the running processes
unpause: Unpause services
up: Create and start containers
version: Show the Docker Compose version information

Setup your environment:
mkdir -p compose/commands
cd compose/commands

Create a docker-compose file:
vi docker-compose.yml

docker-compose.yml contents:
version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    volumes:
      - nginx_html:/usr/share/nginx/html/
    links:
      - redis
  redis:
    image: redis
volumes:
  nginx_html: {}

Create a compose service:
docker-compose up -d

List containers created by compose:
docker-compose ps

Stopping a compose service:
docker-compose stop

Starting a compose service:
docker-compose start

Restarting a compose service:
docker-compose restart

Delete a compose service:
docker-compose down

Creating a Compose File

00:05:12

Lesson Description:

Creating a Compose File

In this lesson we will look at the basics of creating a compose file.

Setup your environment:
cd compose
git clone https://github.com/linuxacademy/content-weather-app.git weather-app
cd weather-app
git checkout compose

Create a docker-compose.yml file:
vi docker-compose.yml

docker-compose.yml contents:
version: '3'
services:
  weather-app:
    build:
      context: .
      args:
        - VERSION=v2.0
    ports:
      - "8081:3000"
    environment:
      - NODE_ENV=production

Create the compose container:
docker-compose up -d

List compose services:
docker-compose ps

Verify the weather-app is working:
curl http://localhost:8081

Rebuild the image:
docker-compose build

Rebuild the image with no cache:
docker-compose build --no-cache

Using Volumes and Networking with Compose

00:06:51

Lesson Description:

Using Volumes and Networking with Compose

In this lesson, we will learn how to use volumes and networks in a Docker Compose file.

Setup your environment:
mkdir -p compose/ghost
cd compose/ghost

Create a docker-compose.yml file:
vi docker-compose.yml

docker-compose.yml contents:
version: '3'
services:
  ghost:
    container_name: ghost
    image: ghost:latest
    ports:
      - "80:2368"
    environment:
      - database__client=mysql
      - database__connection__host=mysql
      - database__connection__user=root
      - database__connection__password=P4SSw0rd0!
      - database__connection__database=ghost
    volumes:
      - ghost-volume:/var/lib/ghost
    networks:
      - ghost_network
      - mysql_network
    depends_on:
      - mysql
  mysql:
    container_name: mysql
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=P4SSw0rd0!
    volumes:
      - mysql-volume:/var/lib/mysql
    networks:
      - mysql_network
volumes:
  ghost-volume:
  mysql-volume:
networks:
  ghost_network:
  mysql_network:

Create the compose container:
docker-compose up -d

List compose services:
docker-compose ps

List the volumes:
docker volume ls

List the networks:
docker network ls

Docker Compose Documentation

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

01:00:00

Docker Swarm

Introduction to Docker Swarm

00:02:49

Lesson Description:

Introduction to Docker Swarm

In this lesson, we'll look at the highlights of Docker Swarm before we start working with it.

Swarm 101
Swarm has two major components:
An enterprise-grade secure cluster:
Manage one or more Docker nodes as a cluster
Encrypted distributed cluster store
Encrypted networks
Secure join tokens
An orchestration engine for creating microservices:
API for deploying and managing microservices
Declarative manifest files for defining apps
Provides the ability to scale apps and perform rolling updates and rollbacks
Swarm was initially a separate product layered on top of Docker; since Docker 1.12 it has been part of the Docker Engine.

The Cluster
A swarm consists of one or more Docker nodes.
Nodes are either managers or workers.
Managers:
Manage the state of the cluster
Dispatch tasks to workers
Workers:
Accept and execute tasks
State is held in etcd.
Swarm uses Transport Layer Security (TLS):
Encrypted communication
Authenticated nodes
Authorized roles

Orchestration
The atomic unit of scheduling is a swarm service.
The service construct adds the following to a container: scaling, rolling updates, rollbacks, and updates.
A container wrapped in a service is a task or a replica.

Running Docker in Swarm Mode

00:03:47

Lesson Description:

Running Docker in Swarm Mode

In this lesson, we will create two new Docker servers. These servers will be used in a swarm configuration. Then we will initialize the swarm manager and have the two new nodes join the swarm.

Install Docker on the Worker Nodes
Create two new servers in Cloud Playground that will be used as worker nodes.

Prerequisites
Uninstall old versions:
sudo yum remove -y docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

Install Docker CE
Install the utilities needed for Docker:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Set up the stable repository:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE:
sudo yum -y install docker-ce
Enable and start Docker:
sudo systemctl start docker && sudo systemctl enable docker
Add cloud_user to the docker group:
sudo usermod -aG docker cloud_user

Initialize the manager:
docker swarm init --advertise-addr [PRIVATE_IP]
Add the workers to the cluster:
docker swarm join --token [TOKEN] [PRIVATE_IP]:2377
List the nodes in the swarm:
docker node ls
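If the join token was not captured when the swarm was initialized, it can be regenerated on the manager at any time; docker swarm join-token prints the complete join command. A minimal sketch:

# On the manager: print the full join command (including the token) for workers
docker swarm join-token worker
# On each worker: run the command that was printed, for example:
# docker swarm join --token SWMTKN-1-... [MANAGER_PRIVATE_IP]:2377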

Managing Swarm Nodes

00:04:55

Lesson Description:

Managing Swarm Nodes

In this lesson, we will learn how to manage the nodes in the swarm.

Docker node commands:
demote: Demotes one or more nodes from manager in the swarm
inspect: Displays detailed information on one or more nodes
ls: Lists nodes in the swarm
promote: Promotes one or more nodes to manager in the swarm
ps: Lists tasks running on one or more nodes, defaults to current node
rm: Removes one or more nodes from the swarm
update: Updates a node

Docker swarm commands:
ca: Displays and rotates the root CA
init: Initializes a swarm
join: Joins a swarm as a node and/or manager
join-token: Manages join tokens
leave: Leaves the swarm
unlock: Unlocks a swarm
unlock-key: Manages the unlock key
update: Updates the swarm

Managing swarm nodes:
Listing nodes: docker node ls
Inspecting a node: docker node inspect [NODE_NAME]
Promoting a worker to a manager: docker node promote [NODE_NAME]
Demoting a manager to a worker: docker node demote [NODE_NAME]
Removing a node from the swarm (node must be demoted first): docker node rm -f [NODE_NAME]
Make a node leave the swarm: docker swarm leave
Getting the join-token: docker swarm join-token [worker|manager]
Make the node rejoin the swarm: docker swarm join --token [TOKEN] [PRIVATE_IP]:2377
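One common use of docker node update that is not shown above is changing a node's availability, for example to drain a node before maintenance. A minimal sketch, run from a manager:

# Stop scheduling new tasks on the node and move its existing tasks elsewhere
docker node update --availability drain [NODE_NAME]
# Return the node to normal scheduling
docker node update --availability active [NODE_NAME]
# Confirm the AVAILABILITY column
docker node ls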

Working with Services

00:06:31

Lesson Description:

Working with Services

In this lesson, we'll see how to create and manage a service running in Docker Swarm.

Docker service commands:
create: Creates a new service
inspect: Displays detailed information on one or more services
logs: Fetches the logs of a service or task
ls: Lists services
ps: Lists the tasks of one or more services
rm: Removes one or more services
rollback: Reverts changes to a service's configuration
scale: Scales one or multiple replicated services
update: Updates a service

Creating a service:
docker service create -d --name [NAME] -p [HOST_PORT]:[CONTAINER_PORT] --replicas [REPLICAS] [IMAGE] [CMD]
List services: docker service ls
Inspecting a service: docker service inspect [NAME]
Get logs for a service: docker service logs [NAME]
List all tasks of a service: docker service ps [NAME]
Scale a service up or down: docker service scale [NAME]=[REPLICAS]
Update a service: docker service update [OPTIONS] [NAME]

Create nginx_service:
docker service create -d --name nginx_service -p 8080:80 --replicas 2 nginx:latest
List the swarm services: docker service ls
Inspect nginx_service: docker service inspect nginx_service
Find the network: docker network ls --no-trunc | grep [NETWORK_ID]
View the running tasks for nginx_service: docker service ps nginx_service
Scale nginx_service to 3 replicas: docker service scale nginx_service=3
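The update and rollback subcommands listed above are not demonstrated in the walkthrough; a rolling update of nginx_service might look like the following sketch (the image tag is only an example):

# Roll the service to a different image, one task at a time, pausing 10s between tasks
docker service update --image nginx:1.17 --update-parallelism 1 --update-delay 10s nginx_service
# Watch the tasks being replaced
docker service ps nginx_service
# Revert to the previous service definition if the update misbehaves
docker service rollback nginx_service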

Using Networks in Swarm Mode

00:05:55

Lesson Description:

Using Networks in Swarm Mode

In this lesson, we'll look more into overlay networks and how they are used with a swarm.

Create an overlay network:
docker network create -d overlay [NAME]
Creating a service with an overlay network:
docker service create -d --name [NAME] --network [NETWORK] -p [HOST_PORT]:[CONTAINER_PORT] --replicas [REPLICAS] [IMAGE] [CMD]
Add a service to a network: docker service update --network-add [NETWORK] [SERVICE]
Remove a service from a network: docker service update --network-rm [NETWORK] [SERVICE]

Create an overlay network: docker network create -d overlay my_overlay
Create an encrypted overlay network: docker network create -d overlay --opt encrypted encrypted_overlay
Inspect encrypted_overlay: docker network inspect encrypted_overlay
Inspect my_overlay: docker network inspect my_overlay
Create a service using my_overlay: docker service create -d --name nginx_overlay --network my_overlay -p 8081:80 --replicas 2 nginx:latest
Adding the my_overlay network to nginx_service: docker service update --network-add my_overlay nginx_service
Inspect nginx_service: docker service inspect nginx_service
Removing the my_overlay network from nginx_service: docker service update --network-rm my_overlay nginx_service
Inspect nginx_service: docker service inspect nginx_service
Remove encrypted_overlay: docker network rm encrypted_overlay
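A quick way to confirm which networks a service is attached to after the updates above is to filter the inspect output rather than reading the full JSON; a minimal sketch using Go-template formatting:

# Show only the virtual IPs (and therefore the networks) attached to the service
docker service inspect --format '{{json .Endpoint.VirtualIPs}}' nginx_overlay
# List the overlay-scoped networks on this node
docker network ls --filter driver=overlay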

Using Volumes in Swarm Mode

00:06:53

Lesson Description:

Using Volumes in Swarm Mode

In this lesson, we will start learning about plugins and using volumes in swarm mode. The local driver only creates a volume on the node where the command is executed, so sharing volume data across the swarm requires a third-party driver that is specific to the environment.

Add plugins: docker plugin install [PLUGIN] [OPTIONS]
List plugins: docker plugin ls

Volume plugins:
Hedvig
Pure Storage
HPE Nimble Storage
Nutanix DVP
Blockbridge
NexentaStor
StorageOS
Rex-Ray

Install the Splunk plugin:
docker plugin install store/splunk/docker-logging-plugin:2.0.0 --alias splunk-logging-plugin
Disable a plugin: docker plugin disable [ID]
Remove a plugin: docker plugin rm [ID]
Digital Ocean example:
docker plugin install rexray/dobs DOBS_REGION=<DO_REGION> DOBS_TOKEN=<DIGITAL_OCEAN_TOKEN> DOBS_CONVERTUNDERSCORES=true

Create a volume using a driver:
docker volume create -d [DRIVER] [NAME]
docker service create -d --name [NAME] --mount type=[TYPE],src=[SOURCE],dst=[DESTINATION] -p [HOST_PORT]:[CONTAINER_PORT] --replicas [REPLICAS] [IMAGE] [CMD]

Create a volume on the manager:
docker volume create -d local portainer_data
Create a Portainer service that uses a volume:
docker service create --name portainer --publish 8000:9000 --constraint 'node.role == manager' --mount type=volume,src=portainer_data,dst=/data --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock portainer/portainer -H unix:///var/run/docker.sock

Volume Drivers
Rex-Ray Volume Driver
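Because the local driver only created portainer_data on the manager, it is worth confirming that the Portainer task was actually scheduled there and that the volume exists. A minimal sketch, run on the manager:

# Confirm the task was constrained to the manager node
docker service ps portainer
# Confirm the volume exists on this node
docker volume inspect portainer_data
# The Portainer UI is published on port 8000 via the routing mesh
curl http://localhost:8000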

Deploying Stacks in Docker Swarm

00:09:50

Lesson Description:

Deploying Stacks in Docker Swarm

In this lesson, we will learn how to deploy stacks to Docker Swarm using Docker Compose.

Docker stack commands:
deploy: Deploys a new stack or updates an existing stack
ls: Lists stacks
ps: Lists the tasks in the stack
rm: Removes one or more stacks
services: Lists the services in the stack

Set up the environment:
mkdir -p swarm/prometheus
cd swarm/prometheus

Create the prometheus.yml file:
vi prometheus.yml

prometheus.yml contents:
global:
  scrape_interval: 15s
  scrape_timeout: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: prometheus
    scrape_interval: 5s
    static_configs:
      - targets:
        - prometheus_main:9090
  - job_name: nodes
    scrape_interval: 5s
    static_configs:
      - targets:
        - [MANAGER]:9100
        - [WORKER1]:9100
        - [WORKER2]:9100
  - job_name: cadvisor
    scrape_interval: 5s
    static_configs:
      - targets:
        - [MANAGER]:8081
        - [WORKER1]:8081
        - [WORKER2]:8081

Create a compose file:
vi docker-compose.yml

docker-compose.yml contents:
version: '3'
services:
  main:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - 8080:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus/data
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - data:/prometheus/data
    depends_on:
      - cadvisor
      - node-exporter
  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    deploy:
      mode: global
    restart: unless-stopped
    ports:
      - 8081:8080
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    deploy:
      mode: global
    restart: unless-stopped
    ports:
      - 9100:9100
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 8082:3000
    volumes:
      - grafana_data:/var/lib/grafana
      - grafana_plugins:/var/lib/grafana/plugins
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=P4ssW0rd0!
    depends_on:
      - prometheus
      - cadvisor
      - node-exporter
volumes:
  data:
  grafana_data:
  grafana_plugins:

Deploy the stack: docker stack deploy --compose-file docker-compose.yml prometheus
List stacks: docker stack ls
List services: docker service ls
Fix volume permissions: sudo chown nfsnobody:nfsnobody -R /var/lib/docker/volumes/prometheus_data
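Once the stack is up, the remaining stack subcommands listed above can be used to inspect it and eventually tear it down; a minimal sketch:

# List the services that belong to the prometheus stack
docker stack services prometheus
# List the individual tasks and which nodes they landed on
docker stack ps prometheus
# Remove the stack when finished (named volumes are left in place)
docker stack rm prometheus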

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

01:00:00

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

01:30:00

Docker Security

Introduction to Docker Security

00:06:13

Lesson Description:

Introduction to Docker Security

We'll begin exploring ways to secure Docker by using security features native to both the operating system and Docker itself.

Docker Security 101
Security is all about layers.
Linux security:
Namespaces
Control Groups
Mandatory Access Control (MAC)
Seccomp
Docker security:
Docker Swarm
Docker Content Trust
Docker Security Scanner
Docker secrets

Namespaces
Docker creates a set of namespaces and control groups for the container. Docker containers are an organized collection of namespaces. Namespaces provide isolation. Each container also gets its own network stack.
Docker on Linux namespaces:
Process ID (pid)
Network (net)
Filesystem/mount (mount)
Inter-process Communication (ipc)
User (user)
UTS (uts)

Control Groups
Control Groups are about setting limits for:
CPU
RAM
Disk I/O
They help to mitigate denial-of-service attacks, and are important on multi-tenant platforms.

Capabilities
Capabilities turn the binary "root/non-root" dichotomy into a fine-grained access control system. In most cases, containers do not need "real" root privileges at all. This means root within a container has far fewer privileges than the real root. It also means that even if an intruder manages to escalate to root within a container, it is much harder to do serious damage or to escalate to the host.

Mandatory Access Control Systems
Two major MAC technologies are:
SELinux
AppArmor

Seccomp
This limits the syscalls a container can make to the host's kernel. All new containers get a default seccomp profile configured.

Docker Swarm
Swarm Mode:
Cryptographic node IDs
Mutual authentication via TLS
Secure join tokens
CA configuration with automatic certificate rotation
Encrypted cluster store
Encrypted networks
docker swarm update --cert-expiry [INT]h

Docker Secrets
These store sensitive data like:
Passwords
TLS certificates
API keys
Secrets workflow:
A secret is created and posted to the Swarm.
The secret is encrypted and stored.
A service is created and the secret is attached.
Secrets are encrypted in-flight.
The secret is mounted into the container of a service.
When the task is complete, the in-memory filesystem is torn down.
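As a concrete illustration of the certificate settings above, the node certificate lifetime can be shortened and the swarm root CA rotated from a manager; a minimal sketch using an example 48-hour expiry:

# Shorten the node certificate lifetime to 48 hours (the default is 90 days)
docker swarm update --cert-expiry 48h
# Rotate the swarm root CA and reissue node certificates
docker swarm ca --rotate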

Working with Docker Security

00:08:05

Lesson Description:

Working with Docker Security

In this lesson, we will start implementing some of the Docker security practices.

Seccomp Profile
docker container run --security-opt seccomp=[PROFILE] [IMAGE] [CMD]
Testing Seccomp:
docker container run --rm -it alpine sh
whoami
mount /dev/sda1 /tmp
swapoff -a
Using a custom Seccomp profile:
mkdir -p seccomp/profiles/chmod
cd seccomp/profiles/chmod
wget https://raw.githubusercontent.com/moby/moby/master/profiles/seccomp/default.json
Remove chmod, fchmod and fchmodat from the syscalls whitelist. The syscalls whitelist starts at line 52.
Applying the custom Seccomp profile:
docker container run --rm -it --security-opt seccomp=./default.json alpine sh
chmod +r /usr

Capabilities
Dropping capabilities: docker container run --cap-drop=[CAPABILITY] [IMAGE] [CMD]
Test mknod:
docker container run --rm -it alpine sh
mknod /dev/random2 c 1 8
Disable mknod:
docker container run --rm -it --cap-drop=MKNOD alpine sh
mknod /dev/random2 c 1 8
Runtime privilege and Linux capabilities

Control Groups
Limiting CPU and memory:
docker container run -it --cpus=[VALUE] --memory=[VALUE][SIZE] --memory-swap [VALUE][SIZE] [IMAGE] [CMD]
Setting memory limits on a container:
docker container run -d --name resource-limits --cpus=".5" --memory=512M --memory-swap=1G rivethead42/weather-app
Inspect resource-limits: docker container inspect resource-limits
Runtime constraints on resources
More info on resource constraints

Running Docker Bench for Security
Running Docker Bench Security:
docker container run --rm -it --network host --pid host --userns host --cap-add audit_control -e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST -v /var/lib:/var/lib -v /var/run/docker.sock:/var/run/docker.sock -v /usr/lib/systemd:/usr/lib/systemd -v /etc:/etc --label docker_bench_security docker/docker-bench-security
Docker Bench Security
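To confirm that the CPU and memory limits were actually applied to the resource-limits container, the relevant fields can be pulled straight out of the inspect output; a minimal sketch:

# Memory limit and memory+swap limit in bytes (512M = 536870912, 1G = 1073741824)
docker container inspect -f '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' resource-limits
# NanoCpus reflects --cpus=".5" (500000000 nanoCPUs)
docker container inspect -f '{{.HostConfig.NanoCpus}}' resource-limits
# One-shot view of live resource usage
docker stats --no-stream resource-limits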

Docker Content Trust

00:05:36

Lesson Description:

Docker Content Trust

In this lesson, you will learn how to sign images and enable Docker Content Trust to ensure that the images you are pulling have been verified.

Commands
Creating a key: docker trust key generate [NAME]
Importing a key: docker trust key load [PEM] --name [NAME]
Add a signer: docker trust signer add --key [PEM] [NAME] [REPOSITORY]
Remove a signer: docker trust signer remove [NAME] [REPOSITORY]
Signing an image: docker trust sign [IMAGE]:[TAG]

Using DCT to sign an image
Tag the image that will be signed:
docker image tag [USERNAME]/weather-app:latest [USERNAME]/dct:latest
Create a key: docker trust key generate [NAME]
Add your signer user: docker trust signer add --key [NAME].pub [NAME] [USERNAME]/dct
Sign and push your image to Docker Hub:
docker trust sign [USERNAME]/dct:latest
export DOCKER_CONTENT_TRUST=1
docker image push [USERNAME]/dct:latest
Remove the Docker image [USERNAME]/weather-app: docker image rm [USERNAME]/weather-app:latest
Pull [USERNAME]/weather-app: docker image pull [USERNAME]/weather-app
Remove the Nginx image: docker image rm nginx:latest
Pull the image: docker image pull nginx:latest

Enabling DCT
vi /etc/docker/daemon.json
/etc/docker/daemon.json:
{
  "content-trust": {
    "mode": "enforced"
  }
}
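After signing, the trust data that Docker Hub holds for the repository can be reviewed with docker trust inspect; a minimal sketch:

# Show signers, keys, and signed tags for the repository in a readable form
docker trust inspect --pretty [USERNAME]/dct:latest
# With DOCKER_CONTENT_TRUST=1 exported, pulls of unsigned tags will be refused
docker image pull [USERNAME]/dct:latest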

Working with Secrets

00:11:15

Lesson Description:

Working with Secrets

In this lesson, we will start working with Docker Secrets to protect sensitive data, such as passwords and API keys.

Docker secrets commands:
create: Create a secret from a file or STDIN as content
inspect: Display detailed information on one or more secrets
ls: List secrets
rm: Remove one or more secrets

Creating a secret: STDIN | docker secret create [NAME] -
List secrets: docker secret ls
Inspecting a secret: docker secret inspect [NAME]
Using secrets: docker service create --name [NAME] --secret [SECRET] [IMAGE]
Deleting a secret: docker secret rm [NAME]

Set up the environment:
mkdir secrets
cd secrets

Create a secret using STDIN:
openssl rand -base64 20 | docker secret create my_secret_data -
Create a secret using a file:
openssl rand -base64 20 > secret.txt
docker secret create my_secret_data2 secret.txt
Create a service using a secret:
docker service create --name redis --secret my_secret_data redis:alpine
Find the node the service is running on: docker service ps redis
Remove secret my_secret_data2: docker secret rm my_secret_data2

Generate password files:
openssl rand -base64 20 > db_password.txt
openssl rand -base64 20 > db_root_password.txt

Create a WordPress stack:
vi docker-compose.yml

docker-compose.yml contents:
version: '3.1'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    networks:
      mysql_internal:
        aliases: ["db"]
    environment:
      MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_root_password
      - db_password
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    networks:
      mysql_internal:
        aliases: ["wordpress"]
      wordpress_public:
    ports:
      - "8001:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
secrets:
  db_password:
    file: db_password.txt
  db_root_password:
    file: db_root_password.txt
volumes:
  db_data:
networks:
  mysql_internal:
    driver: "overlay"
    internal: true
  wordpress_public:
    driver: "overlay"

Deploy the stack: docker stack deploy --compose-file docker-compose.yml wp
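To see that a secret is delivered to a service as an in-memory file rather than an environment variable, it can be read from inside a running task. A minimal sketch, run on the node where the redis task landed (the real container name includes the task slot and ID, so substitute it for the placeholder):

# Find the container backing the redis task on this node
docker container ls --filter name=redis
# Read the secret mounted under /run/secrets inside the container
docker container exec [CONTAINER_ID] cat /run/secrets/my_secret_data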

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned without any other extra charge or account to manage.

01:00:00

Conclusion

Final Steps

Next Steps

00:00:35

Lesson Description:

In this video, we will take a look at where to go from here.