Using Docker-Compose to Simplify Multi-Container Builds

One of the primary advantages of containers is the ability to quickly deploy a single, discrete application with a known configuration, fully supported within the container. However, as we all know, most applications in our enterprise organizations do not run on their own; they have to connect to and use search services, databases, and more. Let's take a look at how to build a multi-container environment, managed with a single utility, in order to simplify your deployments.

Getting Started with Docker-Compose

Installation is easy enough in most cases, as the docker-compose utility has made it into most modern distribution repositories. Once Docker itself is installed (a requirement for docker-compose), on a Red Hat or CentOS system you can do:

sudo yum install docker-compose

OR

sudo dnf install docker-compose

Whereas on Debian or Ubuntu systems, you do the equivalent:

sudo apt-get install docker-compose

OR

sudo apt install docker-compose

This installs the default version supported in the distribution repositories, although you can also add the vendor repository and install a later release (which may be important if you want to take advantage of newer features or a newer version of the YAML configuration file format).
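
If you do want a newer release than your distribution packages, one common approach is to download the binary directly from the project's GitHub releases (the version number below is just an example; substitute the release you actually want):

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version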

A Basic Docker-Compose YAML File

Docker-Compose uses YAML as its configuration file format. This format lends itself well to the linear nature of building Docker containers: YAML files are 'outline' formatted and allow you to organize services, actions, and options in a natural order that is easy to read. In our case, we are going to deploy an Apache web container and a Redis message broker. We will create a simple Dockerfile for our web server from which to build the container, and we will use the official Redis image that can be pulled from Docker Hub. Assuming we have created an 'index.html' file in a directory for our web site, we will need to create two other files in that same directory: the Dockerfile (for building our 'web' container) and docker-compose.yml (for detailing how we want all of our containers to be built).
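
For reference, assuming a project directory named 'compose-demo' (the name is arbitrary), the layout ends up looking like this:

compose-demo/
├── Dockerfile
├── docker-compose.yml
└── index.html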

Here is a simple Dockerfile that can be used to build the Apache2 web container:

FROM centos:latest
MAINTAINER admin@linuxacademy.com

RUN yum update -y
RUN yum install -y httpd
ADD ./index.html /var/www/html

ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]

EXPOSE 80

This simply pulls the latest base CentOS image (v7 at the time of writing) as a template and applies the remaining steps to it. We run an update, install the Apache web server, copy our 'index.html' file into the default web server directory, and set an 'entrypoint' for the container so that, unless overridden, it will start the Apache web server when a container is instantiated from this image. Finally, we expose HTTP port 80 so that it can be remapped to a host port, if we choose, when we start the container.
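
If you would like to verify the Dockerfile on its own before handing it to docker-compose, you can build and run it manually (the 'web-test' image and container names here are just placeholders):

docker build -t web-test .
docker run -d --name web-test -p 8080:80 web-test
curl http://localhost:8080
docker rm -f web-test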

Next, we will create a simple 'docker-compose.yml' file that tells docker-compose to build a 'web' image and pull a 'redis' image, then launch containers from them locally. Although there are a TON of options you can provide within a docker-compose configuration (see the Docker Compose documentation for the full list), we are going to define a 'web' and a 'redis' service, telling docker-compose to build one image from the Dockerfile provided above and to pull the other from the default repository. As a result, our YAML file should look like:

version: '2'
services:
  web:
    build: .
    ports:
      - "8080:80"
  redis:
    image: "redis:alpine"

This will define 'web' and 'redis' as the 'services' for which we are building images. In the 'web' service, we build an image based on the Dockerfile (by default, a 'build' value of '.' will look for a file called Dockerfile in the current directory and build the image following those instructions). When we launch a container from that image, we publish the container's port 80 (our Apache web server) to port 8080 on the underlying host (so we can see the container's web site via localhost on port 8080). Finally, for the 'redis' service we simply pull the 'redis:alpine' image and launch it with whatever default entrypoint that image defines.
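
Because YAML is whitespace-sensitive, it is worth validating the file before launching anything. The docker-compose utility can parse the file and echo back the resolved configuration:

docker-compose config

If the file is valid, this prints the fully resolved configuration; if not, it reports a parse error instead.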

Ok, so what do we do now?

Build Images and Launch Containers at Once

Here is where the power of docker-compose comes in. Now that we have set things up, instead of manually building or pulling each image, then configuring via command line and running containers with options, we do it all with a single, simple command:

docker-compose up

OR

docker-compose build && docker-compose up

This performs a build (if the images are not already built and available locally) and will start containers based on those images for you! You can then view your web page on localhost:8080 and, if you obtain the private IP of your Redis container, connect to Redis via port 6379 on that IP. You made all of that available with just a few commands and a couple of easy-to-create, easy-to-read configuration files.
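
To find that private IP, inspect the Redis container. Compose names containers after the project directory and the service, so the container name below (based on our example 'compose-demo' directory) is an assumption; run 'docker-compose ps' to confirm the actual name:

docker-compose ps
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' compose-demo_redis_1

If you have the redis-cli tool installed on the host, 'redis-cli -h <that IP> ping' should answer with 'PONG'.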

Next Steps

Explore! Docker-compose is a very powerful utility that can help you deploy and manage many containers from a single configuration. That configuration can become part of a build for your entire software stack in development and test environments, and it is a natural precursor to something more complex like Kubernetes (which expands your deployment and management of containers across a cluster of servers).
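
A few day-to-day subcommands worth trying as you explore:

docker-compose ps      # list the containers in this project
docker-compose logs    # view output from the services
docker-compose stop    # stop the containers without removing them
docker-compose down    # stop and remove the containers and default network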

Terrence T. Cox

A veteran of twenty years in Information Technology in a variety of roles, Terrence has worked in development, security, and infrastructure since well before they merged into what we now call DevOps. He provides training in Linux, VMware, and DevOps tools (Ansible, Jenkins, etc.), as well as containers and AWS topics.
