Introduction

Infrastructure as Code (IaC) is a common DevOps practice for managing configurations and automatically provisioning infrastructure along with deployments. IaC defines infrastructure and network components through descriptive or high-level code; i.e., programmable infrastructure. Tools such as Vagrant, Ansible, Docker, Chef, Terraform, and Puppet, used independently or in combination, make life easier by automating infrastructure provisioning and deployment.

Terraform is one such tool. In this guide, we will use Terraform to provision a complete infrastructure on Azure. We will create all the components from scratch, including the resource group, VNet, subnets, NICs, security groups, and VMs.

Prerequisites

Terraform must be installed on your system. You can find installation instructions in the article “Introduction to Terraform modules” by @pippopeppe83.

For Terraform to provision resources in Azure, there are two ways to authenticate:

  1. Using an Azure PublishSettings file (ASM)
  2. Using a Client ID and Client Secret for Microsoft Azure AD (ARM)

Azure PowerShell is required to download the Azure PublishSettings file. Use the following cmdlet:

Get-AzurePublishSettingsFile

We can use this file in Terraform as credentials to provision resources in Azure (ASM).

The other option is to create a Client ID and Client Secret in Azure AD that Terraform can use as credentials to provision resources in Azure. This is the approach we will use in this guide.

To generate a Client ID and Client Secret, follow these steps:

1. Log in to the Azure Portal and select Azure Active Directory.
2. Click on App Registrations and select New Application Registration.
(Screenshot: App Registrations blade in Azure Active Directory)
3. Enter Application Name, set the Application type as Web app / API, and enter any custom URLs, as shown below:
(Screenshot: New application registration form)
4. Now click on the newly created App, and click Keys. Enter the Description, set the Duration, and click on Save to display the key.

(Screenshot: Generating a key for the registered application)

NOTE: If you followed the previous guide, “Leverage Azure Automation and Automatically Shut down VMs”, Azure Automation will have auto-registered an application. You can reuse it by clicking on that application and generating a new key, as described in the step above.

Note the Application ID and Key. These will be used as credentials in Terraform to provision resources in Azure.
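
Alternatively, if you have the Azure CLI installed, a single command creates an equivalent service principal and prints its credentials (terraform-sp is just an example name; substitute your own subscription ID):

az ad sp create-for-rbac --name terraform-sp --role Contributor --scopes /subscriptions/YOUR_SUBSCRIPTION_ID

In its output, appId, password, and tenant correspond to the client_id, client_secret, and tenant_id that Terraform needs.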

When you invoke any Terraform command that loads configuration, all files with a .tf extension in the working directory are loaded in alphabetical order. Terraform configurations are declarative, so the order in which resources and variables are defined doesn’t matter. However, it is convenient to organize resources into logical groups and create a separate file for each. Let’s get started.
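
By the end of this guide, the working directory will contain the following files:

variables.tf
var_values.tfvars
networks.tf
storage.tf
security_group.tf
pip_nic.tf
vm_prop.tf
vm_ext.tf
dns_zone.tf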

Architecture Overview

Using Terraform, we will provision the following things:

  • A resource group with one virtual network in it
  • Two subnets within the virtual network; one to deploy the web server and the other to deploy the database server
  • Two VMs; each in a different subnet
  • Two network interfaces, one for each VM. The network interface for the web server will have a public IP
  • A DNS zone for our domain with an A record

First, we will create a variables file. In the VARIABLES.TF file, we will configure the Azure provider and declare all the variables that we will use throughout our Terraform configurations.
VARIABLES.TF
/* Configure Azure Provider and declare all the Variables that will be used in Terraform configurations */
provider "azurerm" {
subscription_id = "${var.subscription_id}"
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
tenant_id = "${var.tenant_id}"
}

variable "subscription_id" {
description = "Enter Subscription ID for provisioning resources in Azure"
}

variable "client_id" {
description = "Enter Client ID for Application created in Azure AD"
}

variable "client_secret" {
description = "Enter Client secret for Application in Azure AD"
}

variable "tenant_id" {
description = "Enter Tenant ID / Directory ID of your Azure AD. Run Get-AzureSubscription to know your Tenant ID"
}

variable "location" {
description = "The default Azure region for the resource provisioning"
}

variable "resource_group_name" {
description = "Resource group name that will contain various resources"
}

variable "vnet_cidr" {
description = "CIDR block for Virtual Network"
}

variable "subnet1_cidr" {
description = "CIDR block for Subnet within a Virtual Network"
}

variable "subnet2_cidr" {
description = "CIDR block for Subnet within a Virtual Network"
}

variable "vm_username" {
description = "Enter admin username to SSH into Linux VM"
}

variable "vm_password" {
description = "Enter admin password to SSH into VM"
}

In the VARIABLES.TF file, we haven’t specified any default values for the variables we created. We will assign values to variables by declaring them in another file: VAR_VALUES.TFVARS.
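
As an aside, a .tfvars file is not the only way to assign values: Terraform also reads environment variables prefixed with TF_VAR_, which is handy for keeping secrets such as client_secret and vm_password out of files. For example:

export TF_VAR_client_secret="YOUR_KEY_HERE"
export TF_VAR_vm_password="YOUR_PASSWORD_HERE"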

VAR_VALUES.TFVARS
subscription_id = "YOUR_SUBSCRIPTION_ID_HERE"
client_id = "YOUR_CLIENT-ID/APPLICATION-ID_HERE"
client_secret = "YOUR_KEY_HERE(From Step-4)"
tenant_id = "AZURE_TENANT_ID"
location = "East US"
resource_group_name = "Terraform-LinuxAcademy"
vnet_cidr = "192.168.0.0/16"
subnet1_cidr = "192.168.1.0/24"
subnet2_cidr= "192.168.2.0/24"
vm_username = "arman"
vm_password = "$up3rSecur3P@ss"

When you start from scratch, the first thing you need is a resource group. So in this file, we will create a resource group, a virtual network, and two subnets. Note that resources in Azure can be tagged with key-value pairs, so we will add a group tag with the value LinuxAcademy to all the resources we create.

NETWORKS.TF
resource "azurerm_resource_group" "terraform_rg" {
name = "${var.resource_group_name}"
location = "${var.location}"
}

resource "azurerm_virtual_network" "la_vnet" {
name = "LA-Terraform-VNet"
address_space = ["${var.vnet_cidr}"]
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"

tags {
group = "LinuxAcademy"
}
}

resource "azurerm_subnet" "la_subnet_1" {
name = "LA-Subnet-1"
address_prefix = "${var.subnet1_cidr}"
virtual_network_name = "${azurerm_virtual_network.la_vnet.name}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
}

resource "azurerm_subnet" "la_subnet_2" {
name = "LA-Subnet-2"
address_prefix = "${var.subnet2_cidr}"
virtual_network_name = "${azurerm_virtual_network.la_vnet.name}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
}

We declared the resource_group_name variable previously, so we use it here to name the resource group:

name = "${var.resource_group_name}"

The virtual network has to be placed inside this resource group, so the azurerm_virtual_network block references the resource group by name:

resource_group_name = "${azurerm_resource_group.terraform_rg.name}"

We create two subnets: one for the front-end server and the other for the back-end (database) server. The subnets must live inside the virtual network and resource group created above, so each azurerm_subnet block uses:

virtual_network_name = "${azurerm_virtual_network.la_vnet.name}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"

Next, we need a Storage account with a container to store the VHDs of the virtual machines. Note that storage account names must be globally unique across Azure and may contain only lowercase letters and numbers, so replace laterraform1 below with a unique name of your own. Create the Terraform configuration for the Storage account and container.

STORAGE.TF
resource "azurerm_storage_account" "la_storage" {
name = "laterraform1"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
location = "${var.location}"
account_type = "Standard_LRS"

tags {
group = "LinuxAcademy"
}
}

resource "azurerm_storage_container" "la_cont" {
name = "vhds"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
storage_account_name = "${azurerm_storage_account.la_storage.name}"
container_access_type = "private"
}

The container_access_type can be private, blob, or container. The storage_account_name property in the azurerm_storage_container block defines which storage account the container belongs to.
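
For reference, the VM configurations later compose each OS disk's vhd_uri from this account's primary_blob_endpoint and the container name; with the names above, the first disk's URI would resolve to:

https://laterraform1.blob.core.windows.net/vhds/osdisk-1.vhd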

The next file defines the security groups for our virtual machines. A security group can be attached to a subnet as well as to a VM's network interface, and contains inbound and outbound rules for traffic flow. We will create two security groups: one for the web server and the other for the database server.

SECURITY_GROUP.TF
resource "azurerm_network_security_group" "nsg_web" {
name = "LA-Terraform-Web-NSG"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"

security_rule {
name = "AllowSSH"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}

security_rule {
name = "AllowHTTP"
priority= 200
direction= "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "80"
source_address_prefix = "Internet"
destination_address_prefix = "*"
}

tags {
group = "LinuxAcademy"
}
}

resource "azurerm_network_security_group" "terraform_nsg_db" {
name = "LA-Terraform-DB-NSG"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"

security_rule {
name = "AllowMySQL"
priority= 200
direction= "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "3306"
source_address_prefix = "${var.subnet1_cidr}"
destination_address_prefix = "*"
}

tags {
group = "LinuxAcademy"
}
}

Thus, traffic from LA-Subnet-1 is explicitly allowed to reach the database server on port 3306, while inbound traffic from the internet is blocked by the NSG's default rules.
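
Note that the NSG default rules still allow the database server outbound internet access. If you want to block that as well, you could append an outbound rule like the following sketch to the DB security group (not part of the original guide; be aware the VM extension below needs outbound access to download its script, so such a rule would have to be added after provisioning):

security_rule {
  name                       = "DenyInternetOutbound"
  priority                   = 300
  direction                  = "Outbound"
  access                     = "Deny"
  protocol                   = "*"
  source_port_range          = "*"
  destination_port_range     = "*"
  source_address_prefix      = "*"
  destination_address_prefix = "Internet"
}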

Now we need a public IP address to connect to the web VM from the internet. A public IP has to be associated with a network interface, which in turn is attached to a VM. We will create a network interface for each server. Ideally, only the web server would have a public IP; however, we create one for the database server too because we want to configure it using a VM extension. A best practice is to place a NAT instance or jump host in front, which the database server can use to reach the internet. Public and private IP addresses can be static or dynamic; ideally, the web server's public IP should be static unless it sits behind a load balancer. Without an associated public IP, we would not be able to connect to the VM remotely after deployment.

PIP_NIC.TF
resource "azurerm_public_ip" "la_pip" {
name = "LA-Terraform-PIP"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
public_ip_address_allocation = "static"

tags {
group = "LinuxAcademy"
}
}

resource "azurerm_network_interface" "public_nic" {
name = "LA-Terraform-Web"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
network_security_group_id = "${azurerm_network_security_group.nsg_web.id}"

ip_configuration {
name = "LA-Terraform-WebPrivate"
subnet_id = "${azurerm_subnet.la_subnet_1.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id= "${azurerm_public_ip.la_pip.id}"
}
tags {
group = "LinuxAcademy"
}
}

resource "azurerm_public_ip" "la_db_pip" {
name = "LA-Terraform-DB-PIP"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
public_ip_address_allocation = "static"

tags {
group = "LinuxAcademy"
}
}

resource "azurerm_network_interface" "private_nic" {
name = "LA-Terraform-DB"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
network_security_group_id = "${azurerm_network_security_group.terraform_nsg_db.id}"

ip_configuration {
name = "LA-Terraform-DBPrivate"
subnet_id = "${azurerm_subnet.la_subnet_2.id}"
private_ip_address_allocation = "static"
private_ip_address = "192.168.2.5"
public_ip_address_id = "${azurerm_public_ip.la_db_pip.id}"
}
tags {
group = "LinuxAcademy"
}
}
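
As an optional addition (not part of the original files), a Terraform output makes terraform apply print the web server's public IP so you don't have to look it up in the portal:

output "web_public_ip" {
  value = "${azurerm_public_ip.la_pip.ip_address}"
}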

Almost all the resources required to provision a VM are now in place. We can write the Terraform configuration for the VMs, with their OS disks stored in the Storage account created above. Note that the data disk is optional, and we are not using Azure Managed Disks.

VM_PROP.TF
resource "azurerm_virtual_machine" "la_web" {
name = "LA-Terraform-WebVM"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
network_interface_ids = ["${azurerm_network_interface.public_nic.id}"]
vm_size = "Standard_DS1_v2"

#This will delete the OS disk automatically when deleting the VM
delete_os_disk_on_termination = true

storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}

storage_os_disk {
name = "osdisk-1"
vhd_uri = "${azurerm_storage_account.la_storage.primary_blob_endpoint}${azurerm_storage_container.la_cont.name}/osdisk-1.vhd"
caching = "ReadWrite"
create_option = "FromImage"
}

os_profile {
computer_name = "ubuntuweb"
admin_username = "${var.vm_username}"
admin_password = "${var.vm_password}"
}

os_profile_linux_config {
disable_password_authentication = false
}

tags {
group = "LinuxAcademy"
}
}

resource "azurerm_virtual_machine" "la_db" {
name = "LA-Terraform-DBVM"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
network_interface_ids = ["${azurerm_network_interface.private_nic.id}"]
vm_size = "Standard_DS1_v2"

#This will delete the OS disk and data disk automatically when deleting the VM
delete_os_disk_on_termination = true
#delete_data_disks_on_termination = true

storage_image_reference {
publisher = "Canonical"
offer = "UbuntuServer"
sku = "16.04-LTS"
version = "latest"
}

storage_os_disk {
name = "osdisk-2"
vhd_uri = "${azurerm_storage_account.la_storage.primary_blob_endpoint}${azurerm_storage_container.la_cont.name}/osdisk-2.vhd"
caching = "ReadWrite"
create_option = "FromImage"
}

# Optional data disks
storage_data_disk {
name = "datadisk-2"
vhd_uri = "${azurerm_storage_account.la_storage.primary_blob_endpoint}${azurerm_storage_container.la_cont.name}/datadisk-2.vhd"
disk_size_gb = "100"
create_option = "Empty"
lun = 0
}

os_profile {
computer_name = "ubuntudb"
admin_username = "${var.vm_username}"
admin_password = "${var.vm_password}"
}

os_profile_linux_config {
disable_password_authentication = false
}

tags {
group = "LinuxAcademy"
}
}

Now, to install the Apache2 web server on LA-Terraform-WebVM, we can make use of the Custom Script Extension, which downloads and executes a script on a VM. The script can be hosted in an Azure Storage account or on GitHub; we will upload ours to an Azure Storage account and pass its link in the Terraform configuration for the virtual machine extension. Similarly, we will use the Custom Script Extension to install MySQL server on LA-Terraform-DBVM.

VM_EXT.TF
resource "azurerm_virtual_machine_extension" "cs_apache" {
name = "custom_apache"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
virtual_machine_name = "${azurerm_virtual_machine.la_web.name}"
publisher = "Microsoft.OSTCExtensions"
type = "CustomScriptForLinux"
type_handler_version = "1.2"

settings = <<SETTINGS
{
"fileUris": [
"LINK-TO-FILE-IN-AZURE-STORAGE-ACCOUNT"
],
"commandToExecute": "sh apache_php.sh"
}
SETTINGS

tags {
group = "LinuxAcademy"
}
}

resource "azurerm_virtual_machine_extension" "cs_mysql" {
name = "custom_mysql"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
virtual_machine_name = "${azurerm_virtual_machine.la_db.name}"
publisher = "Microsoft.OSTCExtensions"
type = "CustomScriptForLinux"
type_handler_version = "1.2"

settings = <<SETTINGS
{
"fileUris": [
"LINK-TO-FILE-IN-AZURE-STORAGE-ACCOUNT"
],
"commandToExecute": "sh mysql_install.sh"
}
SETTINGS

tags {
group = "LinuxAcademy"
}
}

The apache_php.sh script will do the following tasks (a sketch of the script follows the list):

  • Update the system
  • Download and install the Apache2 web server
  • Download and install PHP and other PHP modules
  • Download and copy the demo.php file to /var/www/html
  • Change ownership of /var/www to www-data
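
A minimal sketch of what apache_php.sh might look like, assuming Ubuntu 16.04; the demo.php URL is a placeholder you must substitute, and the real script is in the GitHub repo linked at the end:

#!/bin/bash
# Update package lists and upgrade installed packages
apt-get update && apt-get -y upgrade
# Install Apache2, PHP, and the PHP MySQL module
apt-get -y install apache2 php libapache2-mod-php php-mysql
# Fetch the demo page into the web root (substitute your own URL)
wget -O /var/www/html/demo.php "LINK-TO-DEMO.PHP-FILE"
# Give the web server user ownership of the web root
chown -R www-data:www-data /var/www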

The mysql_install.sh script will do the following tasks (again, a sketch follows):

  • Update the system
  • Download and install the MySQL packages (mysql-server and mysql-client)
  • Create a database
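
A corresponding sketch for mysql_install.sh, again assuming Ubuntu 16.04 and a non-interactive install; the root password and database name are placeholders, and the real script is in the GitHub repo:

#!/bin/bash
apt-get update
# Preseed the root password so the MySQL install runs non-interactively
debconf-set-selections <<< 'mysql-server mysql-server/root_password password YOUR_DB_PASSWORD'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password YOUR_DB_PASSWORD'
apt-get -y install mysql-server mysql-client
# Create an empty database for the demo application
mysql -u root -pYOUR_DB_PASSWORD -e 'CREATE DATABASE demodb;'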

The full scripts can be downloaded from the GitHub links provided at the end. Upload them to an Azure Storage account and update the fileUris links in the Terraform configuration above. Also make sure to update the demo.php file's link inside the apache_php.sh script.
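
If you use the Azure CLI, uploading a script looks like the following (account and container names are examples; you will also need an account key or an authenticated session):

az storage blob upload --account-name YOUR_STORAGE_ACCOUNT --container-name scripts --name apache_php.sh --file apache_php.sh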

We will now create a DNS zone with an A record for LA-Terraform-WebVM, so that typing your domain takes you to the website above. For this to work, you must own a domain and do some groundwork: verify the domain and add Azure's name servers in your domain provider's control panel.

DNS_ZONE.TF
resource "azurerm_dns_zone" "la_dns" {
name = "yourdomain.com"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
}

resource "azurerm_dns_a_record" "la_a" {
name = "LA-A_Record"
zone_name = "${azurerm_dns_zone.la_dns.name}"
resource_group_name = "${azurerm_resource_group.terraform_rg.name}"
ttl = 300
records = ["${azurerm_public_ip.la_pip.ip_address}"]
}
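
The azurerm_dns_zone resource exports the name servers Azure assigns to the zone; as an optional addition, an output makes them easy to copy into your registrar's control panel:

output "dns_name_servers" {
  value = "${azurerm_dns_zone.la_dns.name_servers}"
}
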
Terraform Execution

Once all the files are created, place them in a single directory and navigate to that directory from PowerShell (Windows users) or a terminal. Depending on your Terraform version, you may first need to run terraform init to download the azurerm provider plugin. Then execute:

terraform plan -var-file=var_values.tfvars

Once the terraform plan run is successful, you'll see a summary like the one below, showing the number of resources to add, change, or destroy.

(Screenshots: terraform plan output listing the resources to be created, ending with the plan summary)

You can now run the command:

terraform apply -var-file=var_values.tfvars

Wait a few minutes for the process to complete. Once it is done, navigate to the Azure Portal and check the resource group and resources we created. Note down the public IP of the web server and browse to the following URL:

http://PUBLIC-IP/demo.php

Play around with that page, enter some data, and check whether it gets inserted into the database. To delete the entire infrastructure, simply run:

terraform destroy -var-file=var_values.tfvars

This is how we can leverage Terraform to manage our infrastructure as code; it makes repetitive provisioning tasks easy. If you modify your infrastructure, change the respective file and run terraform plan again. It will show you what differs from the previous state, and terraform apply will make only those changes to the environment.

Sources / Resources

All the Terraform configurations and required scripts/files can be downloaded from my GitHub repo.
