
How to: Introduction to Terraform Modules

What are Terraform modules?

Modules in the Terraform ecosystem are a way to organize the code to: 
  • be more reusable
  • avoid code duplication
  • improve the code organization and its readability
If we compare them to a programming language, writing a Terraform module is the equivalent of writing a function and calling it many times with different parameters.
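To make the analogy concrete, here is a minimal sketch (the module path and the modname variable here are hypothetical):

module "web1" {
  source  = "./my-module"
  modname = "web1"
}

module "web2" {
  source  = "./my-module"
  modname = "web2"
}

Each module block behaves like a function call: the same module code runs twice, each time with a different modname parameter.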

Why use Terraform modules

By using modules, you will save time: you write the code once, test it, and reuse it many times with different parameters. You can also use modules written by other people. In this git repo you can find some example community modules: https://github.com/terraform-community-modules

When you want to change something, you change it only in the module, and all the parts of the infrastructure where the module is used will reflect the change the next time "terraform apply" is run.

On the other hand, you need to be very careful when you change something, or when you use a module maintained by another person and want to update to a new version.

How to install Terraform in 3 commands

Installing Terraform is pretty easy. You can find the download page of the latest version here: https://www.terraform.io/downloads.html. At the moment version 0.7.8 is available. To install it on Linux, these 3 commands are enough:

wget https://releases.hashicorp.com/terraform/0.7.8/terraform_0.7.8_linux_amd64.zip
unzip terraform_0.7.8_linux_amd64.zip
sudo mv terraform /usr/bin/

Test that it is OK; with the version command you should obtain something like this:

$ terraform version
Terraform v0.7.8

Using an advanced code editor

You can of course edit Terraform code using Gedit, Vim, or Notepad, but as the code grows, having a syntax checker and color highlighting will surely simplify your life. My advice for a Terraform code editing tool is Atom.

You can download it for free from the main website https://atom.io/, and you will also need to add the additional Terraform package published at https://github.com/GiantToast/language-terraform

After installing Atom, to add the package on Linux you only need to run the command:

apm install language-terraform

For the purposes of this tutorial you can also use Vim or another editor, because this is a proof of concept and a fairly simple example.

Possible module repositories

A module is an independent piece of code, and you can get it from many sources. There is a full list of possible sources on the official page https://www.terraform.io/docs/modules/sources.html. In this guide we will consider four kinds of repositories:

  1. Local File System
  2. AWS S3
  3. HTTP URLs
  4. Standard code repos (Mercurial/Git), with direct connections to Bitbucket/GitHub also available

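As a sketch, the kind of repository is selected by the "source" argument of the module block (the paths and repository names below are placeholders):

# 1. Local file system
source = "../tf-module"

# 2. AWS S3 (an archive stored in a bucket)
source = "s3::https://s3-eu-west-1.amazonaws.com/examplebucket/module.zip"

# 3. HTTP URL
source = "https://example.com/module.zip"

# 4. Git repo, with a GitHub shorthand also available
source = "git::ssh://git@example.com/module.git"
source = "github.com/terraform-community-modules/tf_aws_vpc"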
How to create your first basic module

Let’s create our first module to see this useful Terraform feature in action. We will use AWS as the provider. In the main Terraform code we will create a security group, call a module, and pass the security group as a parameter. Later we will see a more complex module, but for our first steps in the Terraform module world this example is complex enough.

We will do this twice using two kinds of repositories: “Local File System” and “Standard git repos”.

The module code

For the module using the “Local File System” as the repo, we create a directory and place the code inside:

cd ~
mkdir tf-module
cd tf-module
touch main-module.tf
touch variables.tf

The main-module.tf code is:

resource "aws_elb" "mybalancer" {
  name = "${var.modname}"

  subnets         = ["${var.SubnetId}"]
  security_groups = ["${var.SecGroupId}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

The variables.tf code is (find a subnet id in your default vpc for the chosen region):

variable modname {}

variable SubnetId {
  default     = "subnet-xxxxxxx"
  description = "The subnet id of a default vpc in the working region"
}

variable SecGroupId {
  description = "this variable doesn't have a default value but it is passed from the main code"
}

Run terraform validate to check for possible mistakes.

The main Terraform code

For our Terraform project let’s create a directory and create the file resources.tf. Please remember that all the .tf files present in the directory (but not in subdirectories) will be considered by the command-line utility:

cd ~
mkdir test-tf-project
cd test-tf-project
touch resources.tf
touch variables.tf

Insert this code in resources.tf:

provider "aws" {
  shared_credentials_file = "/home/vagrant/.aws/credentials"
  region                  = "${var.region}"
  profile                 = "default"
}

resource "aws_security_group" "ForBalancer" {
  name        = "For_Balancer"
  description = "A security group that will apply to the balancer"
  vpc_id      = "${var.myvpcid}"

  # Allow HTTP from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

module "balancer1" {
  source     = "../tf-module"
  modname    = "Balancer1"
  SecGroupId = "${aws_security_group.ForBalancer.id}"
}

Note that your shared_credentials_file may be in a different directory. Check ~/.aws/credentials on Linux, OS X, or Unix. Check C:\Users\USERNAME\.aws\credentials on Windows. We’ll take a look at creating this file in a moment if you don’t have one already.

Choose a region (I chose Mumbai) and use the default VPC ID for that region. Insert in the variables.tf this code:

variable region {
  default     = "ap-south-1"
  description = "the aws region where we want to create the resources"
}

variable myvpcid {
  default     = "vpc-xxxxxxx"
  description = "you can choose the default vpc of the chosen region"
}

Run terraform validate to check for mistakes.

You also need to configure the Amazon credentials file and make sure you have the permissions to create resources in the AWS account:

cat ~/.aws/credentials
[default]
aws_access_key_id = XXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYY

If you don’t have access key credentials, here’s how you can create them.

If you check the directory at this point with ls -a, you will notice that there are no hidden directories, and if you run terraform apply you will get an error like this:

terraform apply

Error downloading modules: module balancer: not found, may need to be downloaded using 'terraform get'

This happens the first time you use the module in this Terraform project because you haven’t imported the module yet.

This is what happens when you run the terraform get command for the first time:

terraform get
Get: file:///home/vagrant/mod-example/tf-module

test-tf-project> ll -a
total 20K
drwxrwxrwx 3 vagrant vagrant 4.0K Nov 10 14:24 ./
drwxrwxrwx 4 vagrant vagrant 4.0K Nov 10 14:08 ../
-rwxrwxrwx 1 vagrant vagrant 216 Nov 10 14:18 resources.tf*
drwxrwxrwx 3 vagrant vagrant 4.0K Nov 10 14:24 .terraform/
-rwxrwxrwx 1 vagrant vagrant 116 Nov 10 14:11 variables.tf*

test-tf-project> cd .terraform/modules

test-tf-project/.terraform/modules> ls

test-tf-project/.terraform/modules> ll
total 0
lrwxrwxrwx 1 vagrant vagrant 35 Nov 10 14:25 05050e8862d9145225af71de66085340 -> /home/vagrant/mod-example/tf-module/

A new directory, .terraform/modules, is created, and inside it there is a symlink to the module. This happens because we are using a module on the same file system. If you are running on a mounted file system where you don’t have the rights to create symlinks, you will get an error like this:

Error loading Terraform: Error downloading modules: error downloading 'file:///vagrant/terraform/mod-example/tf-module': symlink /vagrant/terraform/mod-example/tf-module .terraform/modules/05050e8862d9145225af71de66085340: protocol error

Now you can apply the configuration and you will obtain something like:

> terraform apply
aws_security_group.ForBalancer: Creating... description: "" => "A security group that will apply to the balancer"
module.balancer1.aws_elb.mybalancer: Creation complete

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

As you can see, a new file called terraform.tfstate was stored in the directory. This file contains information about the state of the infrastructure you just created. If you don’t change the .tf files or the infrastructure in AWS and you run terraform apply again, you will see that nothing changes, but a new file called terraform.tfstate.backup will appear.

test-tf-project> terraform apply
aws_security_group.ForBalancer: Refreshing state... (ID: sg-6ee67896)
module.balancer1.aws_elb.mybalancer: Refreshing state... (ID: Balancer1)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

This happens because, just like CloudFormation, Terraform tries to modify the infrastructure so that it matches the one described by the .tf files; if it already matches, no change is performed.

In future runs as well, only the last two copies of the tfstate file will be kept.

If you check in your AWS console you can find two resources: the Security Group and the Elastic Load Balancer.

Now that we have a working module let’s clarify some points:

  • You can pass parameters from the command line to the main Terraform code. In a similar way, you can pass values from the main Terraform code to the module
  • If you specify a default value in the variables file, you can omit the value when you call the module, or you can override that default value
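For example, since SubnetId has a default value in the module's variables.tf, the call can omit it or override it (the subnet id below is a placeholder):

module "balancer1" {
  source     = "../tf-module"
  modname    = "Balancer1"
  SecGroupId = "${aws_security_group.ForBalancer.id}"

  # optional: overrides the default defined in the module's variables.tf
  SubnetId = "subnet-yyyyyyy"
}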

How to use git repos to store the modules

Let’s test the same module using a git repo. For this purpose I have used AWS CodeCommit, but for Terraform there is no difference between CodeCommit and a standard git repo.

Commit the two module files (main-module.tf and variables.tf) in your personal git repo.

Set up a passwordless connection between your machine and your git repo. More info on using git: https://linuxacademy.com/cp/modules/view/id/88

Modify the resources.tf this way:

provider "aws" {
  shared_credentials_file = "/home/vagrant/.aws/credentials"
  region                  = "${var.region}"
  profile                 = "default"
}

resource "aws_security_group" "ForBalancer" {
  name        = "For_Balancer"
  description = "A security group that will apply to the balancer"
  vpc_id      = "${var.myvpcid}"

  # Allow HTTP from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

module "balancer1" {
  source     = "git::ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/test-module"
  modname    = "Balancer1"
  SecGroupId = "${aws_security_group.ForBalancer.id}"
}

module "balancer2" {
  source     = "git::ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/test-module"
  modname    = "Balancer2"
  SecGroupId = "${aws_security_group.ForBalancer.id}"
}

Replace both “source” values with your own repository source.

I have declared the module twice; the only differences are the module label and the modname parameter.

Now run terraform get again, this time with the --update option. This option is convenient because, if the module is updated in the future, you can download the latest version and replace the one you have saved:

test-terraform> terraform get --update
Get: git::ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/test-module (update)
Get: git::ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/test-module (update)

If you look inside the .terraform/modules directory, you will notice two subdirectories instead of one, because the module is used twice. There is no symbolic link this time, because the module source is not on the same file system.

ls .terraform/modules/
2cc681f1005f9bfdec3c198e910ab2e5/ be314b5008b8dd431e1f85b19ce13ecb/

Now, by running terraform apply, three resources will be created instead of two:

/test-terraform> terraform  apply
aws_security_group.ForBalancer: Creating…
module.balancer2.aws_elb.mybalancer: Creation complete
module.balancer1.aws_elb.mybalancer: Creation complete

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Terraform destroy

Don’t forget to destroy all the resources at the end of the tutorial, so the cost of this exercise stays close to zero dollars:

test-tf-project> terraform destroy
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes

aws_security_group.ForBalancer: Refreshing state... (ID: sg-6eed6a07)
module.balancer1.aws_elb.mybalancer: Refreshing state... (ID: Balancer1)
aws_elb.mybalancer: Destroying...
aws_elb.mybalancer: Destruction complete
aws_security_group.ForBalancer: Destroying...
aws_security_group.ForBalancer: Still destroying... (10s elapsed)
aws_security_group.ForBalancer: Destruction complete

Apply complete! Resources: 0 added, 0 changed, 2 destroyed.


As you can see, if you extend this example using these module features, you can create a complex module and use it many times with different parameters. The code becomes shorter and simpler to manage than repeating the same code many times.

A more complex example

In this example you can see more complex code which uses a Terraform module.

Inside the module there are an Auto Scaling group, a launch configuration, an Elastic Load Balancer, and a Route 53 record definition, and all these components take different values every time you call the module with different parameters.

This is the code that calls the module from the main Terraform code:

module "BoxBackEnd" {
  source                 = "git::ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/mybox"
  boxname                = "Box1"
  vpcid                  = "${var.destination-vpc}"
  SubnetPrivate1         = "${aws_subnet.PrivateAZB.id}"
  SubnetPrivate2         = "${aws_subnet.PrivateAZC.id}"
  AMI                    = "${var.AmiBox1BackEnd}"
  Machine_Key_Name       = "${var.key_name}"
  Max_Size               = "3"
  Min_Size               = "2"
  ZoneID                 = "${aws_route53_zone.main.zone_id}"
  RecordName             = "box1.mydomain.com"
  instance_port          = "8080"
  instance_protocol      = "tcp"
  load_balancer_port     = "80"
  load_balancer_protocol = "tcp"
  Health_check_Target    = "TCP:8080"
  inst_type              = "t2.micro"
  secGroup-ELB           = "${aws_security_group.box1-ELB.id}"
  secGroup-AUTOSCALE     = "${aws_security_group.box1-AUTOSCALE.id}"

  userdata = <<HEREDOC
docker restart logstash
HEREDOC
}

This is the module code in variables.tf:

variable boxname {}
variable vpcid {}
variable SubnetPrivate1 {}
variable SubnetPrivate2 {}
variable AMI {}
variable Machine_Key_Name {}
variable Max_Size {}
variable Min_Size {}
variable ZoneID {}
variable RecordName {}
variable instance_port {}
variable instance_protocol {}
variable load_balancer_port {}
variable load_balancer_protocol {}
variable Health_check_Target {}
variable inst_type {}
variable secGroup-ELB {}
variable secGroup-AUTOSCALE {}
variable userdata {}

This is the module code; you can put it in any file with a .tf extension:

resource "aws_elb" "ELB" {
  name            = "${var.boxname}${var.vpcid}"
  subnets         = ["${var.SubnetPrivate1}", "${var.SubnetPrivate2}"]
  security_groups = ["${var.secGroup-ELB}"]
  internal        = true

  listener {
    instance_port     = "${var.instance_port}"
    instance_protocol = "${var.instance_protocol}"
    lb_port           = "${var.load_balancer_port}"
    lb_protocol       = "${var.load_balancer_protocol}"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "${var.Health_check_Target}"
    interval            = 30
  }
}

resource "aws_launch_configuration" "LaunchConfiguration" {
  name            = "${var.boxname}${var.vpcid}"
  image_id        = "${var.AMI}"
  instance_type   = "${var.inst_type}"
  security_groups = ["${var.secGroup-AUTOSCALE}"]
  key_name        = "${var.Machine_Key_Name}"
  user_data       = "${var.userdata}"
}

resource "aws_autoscaling_group" "AutoScaling" {
  name                 = "${var.boxname}${var.vpcid}"
  max_size             = "${var.Max_Size}"
  min_size             = "${var.Min_Size}"
  force_delete         = true
  launch_configuration = "${aws_launch_configuration.LaunchConfiguration.name}"
  load_balancers       = ["${aws_elb.ELB.name}"]
  vpc_zone_identifier  = ["${var.SubnetPrivate1}", "${var.SubnetPrivate2}"]

  tag {
    key                 = "Name"
    value               = "${var.boxname}-AutoScaling"
    propagate_at_launch = "true"
  }
}

resource "aws_route53_record" "www" {
  zone_id = "${var.ZoneID}"
  name    = "${var.RecordName}"
  type    = "CNAME"
  ttl     = "300"
  records = ["${aws_elb.ELB.dns_name}"]
}

Feel free to play with this more advanced example to get an understanding of how the different pieces work together, based on what we learned with our basic example.

That concludes it for this guide. Thank you for reading!

  • Sean G

    Nice, great guide!

  • Sathish Kumar D

    Nice guide !!!

  • Vic K

    Does Terraform have an "ls" equivalent? What I mean is display the existing infrastructure in Terraform format

  • Giuseppe B

    Hi, maybe what you are looking for is the import feature https://www.terraform.io/docs/import/index.html. Personally I don't believe in this approach, so I didn't spend time on it. CloudFormation also has CloudFormer, but both are reverse-engineering processes; it is like obtaining source code from a binary.

  • George T

    Interesting, thanks for this guide, it's very helpful!
    You have some typos, for example you write 

    We will use as AWS as the provider.

  • Giuseppe B

    Hi George Tasiolis, I corrected the "as AWS as" mistake; if you find any other typos let me know so I can fix them. Thank you

  • Abhishek S

    Awesome guide! Can you please explain how to add RabbitMQ and a crawler of any type in a private subnet in a VPC?

  • Giuseppe B

    Hi, I'm not sure I have fully understood your question. Anyway, you can find info about private subnets and Terraform in my other article https://linuxacademy.com/cp/socialize/index/type/community_post/id/13922 . If you don't find what you are looking for, write here again with more details.

  • John Reilly P

    Nice primer!

  • Arman K

    Hi  @pippopeppe83 ,

    Suppose that I have created 4 modules: IAM, VPC, and two EC2 for MongoDB and ELK.

    Now I have used userdata to install and configure MongoDB and ELK stack in EC2 instances. IAM module creates IAM role and profile for

    S3 access that can be attached to EC2.

    So how can I use that IAM module in my EC2 modules? Normally we refer to some resources like this:

    resource "aws_instance" "terraformreference" {
    iam_instance_profile = "${aws_iam_instance_profile.apigateway_s3-ec2_profile.name}"
    }

    So in iam_instance_profile how can we refer other module?

  • Giuseppe B

    Hello @armank, it is convenient to use modules to avoid code repetition and to group similar resources. From what you write, you should create the role in the main Terraform code and pass it as a parameter to the module. I'm not sure what you want to do is possible: creating a resource in one module and using it in another. I hope I have understood your question.

  • Arman K

    Hi @pippopeppe83

    Yes, you understood my question. I'll have to search for some workaround then, using the Terraform data provider, if any.

    I guess I found a workaround. How about this?
    In one of our module, let's say IAM module (iam-mod) we give output like this:

    output "iam_role_out" {
    value = "${aws_iam_instance_profile.example.name}"
    }

    And in our EC2 module, we use it like this:

    resource "aws_instance" "test_instance" {
    iam_instance_profile = "${module.iam-mod.iam_role_out}"
    }

    Something of the form


  • George T

     @armank this should work exactly as you have indicated. Are you seeing some error when you try this?

  • Arman K

     @georgetasioulis I did that and I am getting resource 'aws_instance.arman-vault' config: unknown module referenced: iam
    The directory structure is as follows:
    - ProjectDir

        - Environment
               - Stage (main.tf, outputs.tf, vars.tf, creds.tfvars)
         - Modules
                - IAM (main.tf, outputs.tf)
                - Vault (main.tf, outputs.tf, vars.tf)

    I am running terraform get in Stage directory to import all modules. My main.tf of Stage directory is as follows:

    module "iam" {
    source = "../modules/iam"
    }

    module "vault" {
    source = "../modules/vault"
    region = "${var.region}"
    subnet_id = "${element(var.subnet_ids, 0)}"
    }

    Done. I was making a noob mistake of not importing the IAM module in my Vault module, and that's why it was failing. Thanks!

    Any idea how I can avoid duplicate creation of IAM roles/profiles? Let's say I have two other modules, mod-2 and mod-3, where I am using the same IAM role. So I'll import the IAM module into the main.tf of those modules too, and during terraform plan it will show duplicate IAM resources. How can I avoid that? One way would be creating a global directory, keeping the IAM module there, and configuring remote state. But how can I achieve this with a single terraform apply?

  • George T


     the recommended way to achieve this is using terraform remote state: https://www.terraform.io/docs/providers/terraform/d/remote_state.html 

    You can use something like an S3 bucket for your state and then simply grab your "output" any place you need it in your terraform code.

  • Giuseppe B


    The way to avoid duplicate resources is to use a variable for naming. If you take a look at the latest example in the article, it is the one called ${var.boxname}.

  •

    I am getting the below error when trying to create the SG using the first part of the code:

    * aws_security_group.ForBalancer: Error creating Security Group: UnauthorizedOperation: You are not authorized to perform this operation.

    status code: 403, request id: 5b809465-ed6e-400d-b65e-fff25bcf89ef

    I have mentioned the credentials under ~/.aws/credentials

  • Giuseppe B

    You need to use the pathexpand interpolation function, or use an absolute path like /home/giuseppe/.aws/credentials

  • Van J

    Thank you for doing this.  The advanced module is quite helpful.

  • Greg M

    Well done. Thanks...

  •

    Thank you
