
One of the most common use cases for Docker containers these days is in continuous integration and continuous deployment. The key to CI/CD is being able to automate the entire workflow, from development and testing to staging and production. Build tools like Jenkins are used widely, while Docker and its APIs can be used to generate automated Docker images and push them to registries quickly and automatically.

Azure Container Service is a container hosting solution provided by Azure, which you can use to deploy production-ready Kubernetes, Docker Swarm, or DC/OS clusters. You can quickly create a container image from an application and upload it to Azure Container Registry, a private Docker registry in Azure that lets you store and manage container images for all types of container deployments.


In this guide, we will leverage Azure Container Registry and Visual Studio Team Services (VSTS) for CI/CD for containers. The following are the prerequisites for this guide:

  1. A Git Repository (local or GitHub)
  2. An Azure Account
  3. Visual Studio Team Services (VSTS account)
  4. A VM with Docker installed
  5. Azure Container Registry
  6. The Replace Tokens utility (and any other required extensions) installed in VSTS from the Marketplace
Initial Setup

If you don’t have a Visual Studio Team Services account, sign up for one for free. You’ll get a URL in the form of https://<youraccountname> Create a Team Project in that account, as shown below:


Now let’s navigate to Azure to create an Azure Container Registry and a VM on which to install Docker. VSTS will use this VM as a Docker host to pull required images and build new images, which will be pushed to the Container Registry upon a successful build of our project.

To create an Azure Container Registry, click on + New > Compute > Azure Container Registry.


Once done, launch a new Ubuntu 16.04 LTS virtual machine in the same resource group as the Azure Container Registry. Refer to “Launching Linux-based VMs in Azure” for more details. Make sure port 2376 is open in the security group for inbound connections from VSTS to the Docker daemon running on this machine.
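If you prefer the Azure CLI, the security group rule can be created in one step; a sketch (the resource group and VM names here are placeholders, substitute your own):

```
az vm open-port --resource-group myResourceGroup --name docker-host --port 2376
```

This adds an inbound rule to the network security group attached to the VM's NIC or subnet.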

Also configure the DNS name for the virtual machine, as shown.

1. Add Docker’s official GPG key and the stable repository:

[root@docker-host ~]# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

[root@docker-host ~]# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

2. Run apt-get update to refresh the package database with the Docker packages, then install Docker with the command below:

[root@docker-host ~]# sudo apt-get update

[root@docker-host ~]# sudo apt-get install -y docker-ce

3. Once installed, check Docker status using:

[root@docker-host ~]# systemctl status docker

4. Add your user to the docker group to execute Docker commands without root privileges. To do so, run the command below (log out and back in for the group change to take effect):

[root@docker-host ~]# sudo usermod -aG docker ${USER}

5. Now set your machine’s DNS name (configured above) in the $HOST variable. For example:

[root@docker-host ~]# export HOST=<your-vm-dns-name>

6. Next, generate the CA private and public keys. Enter a passphrase when prompted:

[root@docker-host ~]# openssl genrsa -aes256 -out ca-key.pem 4096
[root@docker-host ~]# openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem


Note: Provide $HOST when prompted for the Common Name (FQDN of the server)

7. Now we can create a server key and a certificate signing request. Execute the below commands:

[root@docker-host ~]# openssl genrsa -out server-key.pem 4096
[root@docker-host ~]# openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr

8. Since connections can be made using IP addresses as well as DNS names, it is necessary to specify them when creating a certificate. Also, we will set extendedKeyUsage for server authentication. For example, to allow connections from our DNS name, private IP and localhost, execute the following command:

[root@docker-host ~]# echo subjectAltName = DNS:$HOST,IP:<private-IP>,IP:127.0.0.1 > extfile.cnf
[root@docker-host ~]# echo extendedKeyUsage = serverAuth >> extfile.cnf

9. Generate the signed server certificate using the command below:

[root@docker-host ~]# openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf

10. Now, for client authentication, we create a client key and a client CSR:

[root@docker-host ~]# openssl genrsa -out key.pem 4096
[root@docker-host ~]# openssl req -subj '/CN=client' -new -key key.pem -out client.csr

11. Make the key suitable for client authentication by adding extendedKeyUsage = clientAuth to extfile.cnf:

[root@docker-host ~]# echo extendedKeyUsage = clientAuth >> extfile.cnf

12. Now we can generate the signed client certificate using:

[root@docker-host ~]# openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile.cnf
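One step worth spelling out: the Docker daemon itself must be configured to listen on TCP port 2376 with these certificates before VSTS can reach it. A minimal sketch using a systemd drop-in (the certificate paths assume the files remain in /root; adjust to where you generated them):

```
[root@docker-host ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@docker-host ~]# cat > /etc/systemd/system/docker.service.d/tls.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --tlsverify --tlscacert=/root/ca.pem --tlscert=/root/server-cert.pem --tlskey=/root/server-key.pem -H fd:// -H tcp://0.0.0.0:2376
EOF
[root@docker-host ~]# systemctl daemon-reload
[root@docker-host ~]# systemctl restart docker
```

The empty ExecStart= line clears the packaged start command before replacing it, and -H fd:// preserves the default local socket alongside the TLS-protected TCP listener.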

13. To verify that everything worked, use the command given below. The response should be as shown.

[root@docker-host ~]# docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem version


So all the necessary pieces are now in place. Let’s look at the workflow we will follow. The execution will be as follows:

  1. Get sources from our repository.
  2. Enable a “Continuous Integration” trigger for the project for every change made in specified branch of our repo.
  3. Build a Docker image with updated sources using a Dockerfile.
  4. Push the newly-created Docker image to Azure Container Registry.
  5. Use an ARM template to launch a Container Instance/Container Group from the new image.
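Step 3 assumes a Dockerfile at the root of the repository. For illustration, a minimal one for a Node.js app might look like the following (the base image, port, and start command are assumptions; adapt them to your application):

```
FROM node:8-alpine
WORKDIR /app
COPY package.json .
RUN npm install --production
COPY . .
EXPOSE 80
CMD ["npm", "start"]
```

Copying package.json and installing dependencies before copying the rest of the sources lets Docker cache the install layer between builds.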
Build definition

1. Let’s kick off this new project in VSTS by creating a new empty build definition. Click on Build and Release > Builds > + New definition > Empty.

2. Under Get sources, select GitHub as the source.

3. Click Authorize to let VSTS connect to your GitHub account.

4. Once the connection is authorized, select the repository and the branch, as shown below.


5. Now click on Add Task, select Docker, and click Add. The display name will be Build an Image, and the Container Registry Type will be Azure Container Registry. Under Azure Subscription, select your Azure subscription (where the ACR was created initially) and click Authorize.

6. Once the subscription is authorized, we can see the ACR created previously. Select that registry.


7. The Action will be Build an image, and under Dockerfile, select the Dockerfile available in our repository. Change the image name to “anyname:$(Build.BuildId)”.


8. The Include Latest Tag option should be checked.

9. Click on New beside Docker Host Connection and enter the required details. The Server URL will be the DNS name of the Docker machine we created previously, along with port 2376. Paste the contents of the ca.pem file into the CA Certificate box, cert.pem into the Certificate field, and key.pem into the Key field, then save the connection.


10. Leave other settings as default.


11. Again, click on Add Task, select Docker, and click Add. Change the display name to Push an Image.

12. Select the same Azure subscription and Azure Container Registry as the build step. The Action here will be Push an Image, and specify the image name as in Step 7. Check Include Latest Tag option, and select the same Docker Host Connection we created earlier.


13. Navigate to the Triggers tab. We will enable Continuous Integration for our project from here. This will trigger a build on any change made on the specified branch.


Release Definition

1. Click on Build and Release > Releases > + New definition > Empty, and provide an environment name.

2. Click on Add artifact, select the Build definition we created, and click Add.


3. Click on the Continuous deployment trigger on the Artifact box and enable it.


4. Click on Add artifact and select GitHub. Select the GitHub connection we created earlier, select the repository from the dropdown, select the master branch, and click Add.


5. Once GitHub is added, click on the Continuous deployment trigger and enable it by selecting the master branch.

6. Click on Tasks and click the + sign in the Agent phase. Go to the Marketplace and add the Replace Tokens utility, which we will use in the release definition to replace tokens in JSON files (our ARM templates). The root directory will be the armtemplate directory inside our GitHub artifacts.



7. Add a new task: Azure Deployment: Create or Update Resource Group. Select your current Azure subscription, select Create or update resource group as the Action, and select the resource group and location.


8. Under the Template section, set the Template location to Linked artifact. In the Template box, specify the azuredeploy.json file, and in Template parameters, specify the azuredeploy.parameters.json file.


9. Navigate to the Variables tab. We will create a few variables here that will be substituted into the ARM template used for continuous deployment: the username and password of the container registry, and the image. Refer to the image below.
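To connect these variables to the template, the Replace Tokens task (with its default #{...}# token pattern) rewrites placeholders in the parameters file at release time. A sketch of what azuredeploy.parameters.json might contain — the parameter names here are assumptions and must match those declared in your template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "image": { "value": "#{image}#" },
    "registryUsername": { "value": "#{username}#" },
    "registryPassword": { "value": "#{password}#" }
  }
}
```

Each #{...}# token is replaced with the release variable of the same name before the Azure Deployment task consumes the file.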


10. Save this release definition.

Now that everything is in place, our continuous integration and continuous deployment pipeline for Docker containers is ready. It will trigger a build as soon as we commit any change to the master branch.

The build will generate a Docker image using the Dockerfile with the updated sources of the application from GitHub. This image will be pushed to Azure Container Registry. Once the build succeeds, deployment will trigger with the release definition we created. This will launch an Azure Container Instance using the image from Azure Container Registry that contains our updated application.

So it’s time to kick off our first build and see VSTS and Azure in action. Let’s commit a change to the master branch.


Navigate to the Build section of our project in VSTS. We can see that the build has been queued automatically, without being triggered manually from VSTS. Let the build complete.


Once the build is successful, navigate to the Release section. You’ll notice that the release will be in progress, and upon successful completion it will trigger the deployment job.


So this is how we can create a CI/CD pipeline for Docker containers using VSTS, Azure Container Registry, and Azure Container Instances.

Navigate to the Azure Portal and open the resource group blade that we used for this guide. We can see the Container Group has been deployed with almost no manual effort beyond setting up this pipeline. Use the public IP of the container to play with the application we deployed.



Scope of Improvement

Here, we created a simple application that was easy to deploy on a single container. What if we have an n-tier application? docker-compose comes to the rescue. Like a Dockerfile, we can create a docker-compose file and use it in our build.
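A minimal sketch of such a docker-compose.yml for a two-service app (the service names, images, and ports are illustrative, not taken from this guide):

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "80:80"
    depends_on:
      - db
  db:
    image: redis:alpine
```

With this in place, the build can run docker-compose build instead of a single docker build, producing one image per service.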

Also, we used multiple artifact sources in the release definition. Instead of using GitHub as the source for our ARM templates, we could package them by adding a new step in the build definition and leveraging Build Artifacts in the release definition. This has the added benefit of versioning our ARM templates.

Sources / Resources

This is how we can leverage VSTS and Azure to create a CI/CD pipeline for containers. It automates the entire build and deployment process for dev and test environments. With this in place, developers can focus on optimizing the application, and DevOps engineers can focus on other workflows without dedicating their time to dev and test workloads. This can even be extended to production environments as well.

All the required resources/file can be downloaded from my GitHub repo.
