Bridging the gap from Docker to something like Kubernetes can be daunting, and it might not even be necessary, depending on your use case. If you want a repeatable process for creating and orchestrating a small number of containers, then Docker Compose could be perfect for you.
Before following along, be sure to install Docker Compose.
Defining Docker Compose Services
When we start running containers using more and more of the functionality of
docker run, the commands can start to get pretty lengthy and hard to remember. Thankfully, Docker Compose allows us to write these out in YAML using specific keys that map to the options we can pass to various Docker commands. Let’s take a sufficiently complicated
docker run command to start a MySQL container configured using environment variables and turn it into YAML in a
docker-compose.yml. Here’s the command we’re starting with:
$ docker run -d \
    --name mysql \
    -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=secret_password \
    -e MYSQL_DATABASE=linuxacademy \
    mysql:5.7
We can create this exact container as a “service” inside of a
docker-compose.yml file after we set up a little of the boilerplate. Here’s our start:
version: "3"
services:
  mysql:
    image: "mysql:5.7"
    environment:
      MYSQL_ROOT_PASSWORD: "secret_password"
      MYSQL_DATABASE: "linuxacademy"
    ports:
      - "3306:3306"
There are a few different versions of the syntax used for
docker-compose.yml files, but we should almost always use the newest, which is 3.x at the time of this writing. From there, we declare our “services,” and each item defined under that key becomes a container that Docker Compose will create and run for us.
Within the mysql service definition, we have keys we can set that map pretty closely to the options we used when manually running
docker run. It’s worth noting that if an option takes a list and the values don’t include an equal sign (=), then we use a YAML list (each entry starting with -); otherwise, we use YAML key/value pairs, as we did under the environment key.
From the directory where we created the
docker-compose.yml file, we can create and run our container(s) in the background using
docker-compose up with the
-d flag to daemonize the containers:
$ docker-compose up -d
Creating network "docker-compose-example_default" with the default driver
Creating docker-compose-example_mysql_1 ... done
My directory is called
docker-compose-example, and Docker Compose will take that information to create a unique container name based on the directory name and the name of our service. In my case, I get
docker-compose-example_mysql_1. The trailing
_1 is there to handle the case where we want to run multiple replicas of our container.
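That naming rule is simple enough to sketch in shell. A quick illustration (the values below come from this article’s example; your project name will be your own directory’s basename):

```shell
# Compose names containers <project>_<service>_<index>, where the
# project name defaults to the basename of the project directory.
project="docker-compose-example"   # basename of the directory holding docker-compose.yml
service="mysql"                    # the key under "services:"
index=1                            # replica number, starting at 1
echo "${project}_${service}_${index}"   # docker-compose-example_mysql_1
```

If you want a different prefix, you can override the project name with docker-compose’s -p flag instead of renaming the directory.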
Removing Hardcoded Environment Variables
One improvement we can make to our
mysql service would be to not hardcode the environment variables. We’ll ideally want to add our
docker-compose.yml files to source control, and we definitely don’t want to commit our credentials into git. Thankfully, Docker Compose provides a great way for us to pass in our existing environment variable values. The first thing we need to do is remove the values from our
docker-compose.yml file while keeping the keys:
version: "3"
services:
  mysql:
    image: "mysql:5.7"
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_DATABASE:
    ports:
      - "3306:3306"
To test this out, we’ll want to remove the container Docker Compose created by running docker-compose down:
$ docker-compose down
Stopping docker-compose-example_mysql_1 ... done
Removing docker-compose-example_mysql_1 ... done
Removing network docker-compose-example_default
Next, we’ll define our environment variables in-line when we’re running docker-compose up:
$ MYSQL_ROOT_PASSWORD=secret_password MYSQL_DATABASE=linuxacademy docker-compose up -d
Creating network "docker-compose-example_default" with the default driver
Creating docker-compose-example_mysql_1 ... done
We’re no longer setting these values in our file, but if we inspect the container and grep for
MYSQL_ environment variables, we can see the values are set properly.
Note: You can find the name of your container using docker ps.
Here’s what I see:
$ docker inspect docker-compose-example_mysql_1 | grep MYSQL_
    "MYSQL_ROOT_PASSWORD=secret_password",
    "MYSQL_DATABASE=linuxacademy",
    "MYSQL_MAJOR=5.7",
    "MYSQL_VERSION=5.7.25-1debian9"
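Prefixing every command with the credentials gets tedious. One alternative worth knowing: Docker Compose also reads a .env file from the project directory and uses those values for variable substitution in docker-compose.yml. A minimal sketch (.env is Compose’s default file name; just make sure it never gets committed):

```shell
# Write the credentials to .env in the project directory; Compose
# picks this file up automatically for variable substitution.
cat > .env <<'EOF'
MYSQL_ROOT_PASSWORD=secret_password
MYSQL_DATABASE=linuxacademy
EOF

# Keep the credentials out of source control.
echo '.env' >> .gitignore
```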
Orchestrating Multiple Containers
Knowing how to create a container using Docker Compose gets us pretty close to being able to orchestrate multiple containers. We’ll follow the same steps by creating another object at the same level as our
mysql key. In this example, let’s create a Ghost blog container that will require, and connect to, our mysql service:
version: "3"
services:
  mysql:
    image: "mysql:5.7"
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_DATABASE:
    ports:
      - "3306:3306"
  blog:
    image: "ghost:2-alpine"
    ports:
      - "8080:2368"
    environment:
      DATABASE__CLIENT: mysql
      DATABASE__CONNECTION__HOST: mysql
      DATABASE__CONNECTION__USER: root
      DATABASE__CONNECTION__DATABASE:
      DATABASE__CONNECTION__PASSWORD:
    depends_on:
      - mysql
The big thing to note here is our
depends_on key in the
blog service. By specifying this, we’re telling Docker Compose the
mysql service needs to be started before the
blog service, since the blog depends on it to run. Additionally, when we’re configuring the database connection for our blog, we specify the “host” of our database as being
mysql. This seems odd at first, but on the internal network that Docker Compose created for our containers, the
mysql container is reachable by the other containers (namely our
blog container) at the hostname mysql, since Docker’s embedded DNS resolves each service name.
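One caveat: depends_on only controls start order; it does not wait for MySQL to actually be ready to accept connections. If the blog races ahead of the database, a small polling helper can bridge the gap. This is a sketch; the container name and mysqladmin invocation in the usage comment are assumptions based on this article’s example:

```shell
# wait_for: run a command repeatedly until it succeeds or the attempts
# are exhausted; returns 0 on success, 1 on timeout.
wait_for() {
  tries=$1
  shift
  while [ "$tries" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# Hypothetical usage: poll the MySQL container until it answers a ping.
# wait_for 30 docker exec docker-compose-example_mysql_1 \
#   mysqladmin ping -psecret_password --silent
```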
We already have the
mysql container running, but we can still run
docker-compose up and it will run our additional container:
$ MYSQL_ROOT_PASSWORD=secret_password \
  MYSQL_DATABASE=linuxacademy \
  DATABASE__CONNECTION__PASSWORD=secret_password \
  DATABASE__CONNECTION__DATABASE=linuxacademy \
  docker-compose up -d
Pulling blog (ghost:2-alpine)...
2-alpine: Pulling from library/ghost
169185f82c45: Pull complete
62154f231947: Pull complete
acf10a8404b6: Pull complete
111312b8db58: Pull complete
5a8b3dd4622e: Pull complete
77aff150715c: Pull complete
46a1198e6d9b: Pull complete
d94cdfbec967: Pull complete
0007942647d7: Pull complete
Recreating docker-compose-example_mysql_1 ... done
Creating docker-compose-example_blog_1 ... done
Now we can access our blog by connecting to our Docker host’s IP address on port 8080.
Persisting Data with Volumes
The last thing we’re going to take a look at is how to improve the way we store data for our blog by mounting a volume for our database to write to. This change will allow us to keep our data even if we remove the
mysql container. Docker Compose makes this easy because it considers volumes and networks to be top-level objects just like “services.” Let’s create a volume called
db-data and mount it in our mysql service:
version: "3"
volumes:
  db-data:
    external: false
services:
  mysql:
    image: "mysql:5.7"
    environment:
      MYSQL_ROOT_PASSWORD:
      MYSQL_DATABASE:
    ports:
      - "3306:3306"
    volumes:
      - "db-data:/var/lib/mysql"
  blog:
    image: "ghost:2-alpine"
    ports:
      - "8080:2368"
    environment:
      DATABASE__CLIENT: mysql
      DATABASE__CONNECTION__HOST: mysql
      DATABASE__CONNECTION__USER: root
      DATABASE__CONNECTION__DATABASE:
      DATABASE__CONNECTION__PASSWORD:
    depends_on:
      - mysql
This works nicely: Docker Compose will create the volume for us the first time we bring the stack up. The external: false setting tells Docker Compose we didn’t create the volume ourselves “externally” to Docker Compose. To get our mysql service to use the volume, we need to create a new container. An easy way to do this is to run docker-compose down:
$ docker-compose down
Stopping docker-compose-example_blog_1 ... done
Stopping docker-compose-example_mysql_1 ... done
Removing docker-compose-example_blog_1 ... done
Removing docker-compose-example_mysql_1 ... done
Removing network docker-compose-example_default
Now we can recreate our containers so the data will be written to a volume.
$ MYSQL_ROOT_PASSWORD=secret_password \
  MYSQL_DATABASE=linuxacademy \
  DATABASE__CONNECTION__PASSWORD=secret_password \
  DATABASE__CONNECTION__DATABASE=linuxacademy \
  docker-compose up -d
Creating network "docker-compose-example_default" with the default driver
Creating docker-compose-example_mysql_1 ... done
Creating docker-compose-example_blog_1 ... done
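With the volume in place, the data survives docker-compose down; named volumes are only removed if you also pass the -v flag to down. Note that, like containers, the volume name gets the project prefix, so this (using this article’s directory name) is what docker volume ls will show:

```shell
# Named volumes follow the same <project>_<name> convention as containers.
project="docker-compose-example"
volume="db-data"
echo "${project}_${volume}"   # docker-compose-example_db-data
```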
Digging Deeper with Docker in General
Docker Compose is great, especially as a stopgap between getting started with Docker and using a more complex system like Docker Swarm or Kubernetes. If you think you’re ready to take that next step or you’d simply like to learn more, I would encourage you to check out these courses: