Puppet is one of the leading automation tools in the industry. Its success lies in its ability to ensure predictability and reliability through configuration management. It also enables engineers to perform application deployments and orchestration in environments from development all the way down the pipeline to production. It is widely used in organizations from startups to Fortune 500 companies, and it is perhaps one of the most sought-after skills in the DevOps industry.
In addition to all that, it can also be used as a provisioning tool to create infrastructure, both private and public. In this guide, we will demonstrate how to deploy Amazon Web Services EC2 instances using the free puppetlabs-aws module from Puppet Labs, which can be found here:

To follow this guide, you should have the following skills:
  • Managing Linux or Unix-like systems.
  • Managing AWS infrastructure.
  • Writing and deploying Puppet.
Feel free to take advantage of the resources available from Linux Academy to get up to speed. Those resources are listed here:

  • Linux Essentials Certification –
  • AWS Certified Solutions Architect – Associate Level –
  • Learning Puppet DevOps Deployment (Puppet Professional Cert) –
In addition, you should have on hand the following:
  • Access to Linux or Mac terminal shell environment.
  • A locally installed copy of the Puppet agent (we strongly recommend version 4.6.2).
  • AWS credentials with access to provision instances.
Assuming that you have all that, you may proceed.
To get started, verify that you have installed a puppet agent locally on your machine:
stardust:~ rilindo$ which puppet
Next, verify the version of puppet (again, we recommend version 4.6.2):
stardust:~ rilindo$ puppet -V
stardust:~ rilindo$
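If you want to script this version check, here is a small shell sketch. It uses a hard-coded sample value in place of live `puppet -V` output, and relies on `sort -V` to order version strings numerically:

```shell
# Sketch: check that an installed Puppet version meets a minimum.
# "installed" is a sample value here; in practice use: installed="$(puppet -V)"
installed="4.6.2"
required="4.6.2"

# sort -V orders version strings numerically; if the required version sorts
# first (or ties), the installed version is new enough.
lowest="$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
  echo "puppet $installed meets the $required minimum"
else
  echo "puppet $installed is older than $required" >&2
fi
```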
With both done and verified, you can now install the puppetlabs-aws module. Before that, though, you need to install the aws-sdk-core and retries gems as root (or a superuser):
sudo /opt/puppetlabs/puppet/bin/gem install aws-sdk-core retries
It will return the following output:
Fetching: jmespath-1.3.1.gem (100%)
Successfully installed jmespath-1.3.1
Fetching: aws-sdk-core-2.6.0.gem (100%)
Successfully installed aws-sdk-core-2.6.0
Parsing documentation for aws-sdk-core-2.6.0
Installing ri documentation for aws-sdk-core-2.6.0
Parsing documentation for jmespath-1.3.1
Installing ri documentation for jmespath-1.3.1
Done installing documentation for aws-sdk-core, jmespath after 12 seconds
Fetching: retries-0.0.5.gem (100%)
Successfully installed retries-0.0.5
Parsing documentation for retries-0.0.5
Installing ri documentation for retries-0.0.5
Done installing documentation for retries after 0 seconds
3 gems installed
With both requirements satisfied, you can install the puppetlabs-aws module. Install the module by running the following as your normal login user:
stardust:basic rilindo$ /opt/puppetlabs/bin/puppet module install puppetlabs-aws
The output will return the path to the installed module at the end to indicate that the installation is successful:
Notice: Preparing to install into /Users/rilindo/.puppetlabs/etc/code/modules ...
Notice: Created target directory /Users/rilindo/.puppetlabs/etc/code/modules
Notice: Downloading from ...
Notice: Installing -- do not interrupt ...
└── puppetlabs-aws (v1.4.0)
As an aside, Puppet will look at modules installed in your home directory before the modules in the Puppet installation directory. You can verify the configured path at any time by running:
stardust:~ rilindo$ puppet config print modulepath
It will return the currently configured module path (which, on this author's machine, is under the /Users/rilindo/.puppetlabs home directory):
stardust:~ rilindo$
With all that done, you will now need to export your AWS credentials in your shell (if you have not already done so) in order for the Puppet module to be able to communicate with AWS. This is done by running the following commands in the terminal:
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
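As a quick sanity check, a sketch like the following (reusing the placeholder values above) confirms that both variables are actually exported, and therefore visible to child processes such as puppet:

```shell
# Placeholder values; substitute your real keys.
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key

# Confirm both variables are exported (visible to child processes).
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
  if [ -n "$(printenv "$v")" ]; then
    echo "$v is set"
  else
    echo "$v is MISSING" >&2
  fi
done
```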
You can also put your AWS keys in a credentials file, usually located in your home directory under .aws:
stardust:~ rilindo$ cat ~/.aws/credentials 
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
stardust:~ rilindo$
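If you go the credentials-file route, a sketch like this creates the file with restrictive permissions. The keys are placeholders, and a throwaway directory stands in for your real ~/.aws:

```shell
# Use a throwaway directory for illustration; in practice use "$HOME/.aws".
AWS_DIR="/tmp/aws_demo_home/.aws"
mkdir -p "$AWS_DIR"

# Placeholder keys -- substitute your own.
cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = your_access_key_id
aws_secret_access_key = your_secret_access_key
EOF

# Credentials should be readable by you alone.
chmod 600 "$AWS_DIR/credentials"
ls -l "$AWS_DIR/credentials"
```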
Finally, to verify that the setup is complete and configured, run puppet resource ec2_instance:
stardust:~ rilindo$ puppet resource ec2_instance
A properly configured setup will return the list of instances (whether running or stopped):
ec2_instance { 'server01':
  ensure                   => 'running',
  availability_zone        => 'us-west-1b',
  block_devices            => [{'delete_on_termination' => 'true', 'device_name' => '/dev/sda1'}],
  ebs_optimized            => 'false',
  hypervisor               => 'xen',
  iam_instance_profile_arn => 'arn:aws:iam::0123456789:instance-profile/aws-ec2-read-only',
  image_id                 => 'ami-df6a8b9b',
  instance_id              => 'i-ae93fc65',
  instance_type            => 't2.micro',
  key_name                 => 'mykey',
  monitoring               => 'true',
  private_dns_name         => '',
  private_ip_address       => '',
  public_ip_address        => '',
  region                   => 'us-west-1',
  security_groups          => ['app_internal', 'Access'],
  subnet                   => 'Public',
  tags                     => {'role' => 'bastion'},
  virtualization_type      => 'hvm',
}
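Because `puppet resource` prints plain text, ordinary shell tools can summarize it. The sketch below works on a saved sample of that output (the file path and instance names are illustrative) and lists each instance with its state:

```shell
# A trimmed sample of `puppet resource ec2_instance` output, saved to a file.
cat > /tmp/ec2_resource_sample.txt <<'EOF'
ec2_instance { 'server01':
  ensure        => 'running',
  instance_type => 't2.micro',
  region        => 'us-west-1',
}
ec2_instance { 'server02':
  ensure        => 'absent',
  instance_type => 't2.micro',
  region        => 'us-west-1',
}
EOF

# Pair each resource title with its ensure value, splitting on single quotes.
awk -F"'" '/^ec2_instance/ {name=$2} /ensure/ {print name ": " $2}' /tmp/ec2_resource_sample.txt
```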
At this point, you are ready to start writing code.
From your home directory, create the src/puppet/modules directory and change over to that location:
stardust:~ rilindo$ mkdir -p src/puppet/modules
stardust:~ rilindo$ cd src/puppet/modules/
Then create the aws_demo directory:
stardust:modules rilindo$ mkdir aws_demo
In the aws_demo directory, start a new file called create.pp:
stardust:modules rilindo$ vi aws_demo/create.pp
And insert the following puppet code:
ec2_instance { 'myinstancename':
  ensure          => present,
  region          => 'us-west-1',
  image_id        => 'ami-48db9d28',
  instance_type   => 't2.micro',
  security_groups => ['Access'],
  subnet          => 'Public',
}
Save the file, then verify that there are no syntax issues with the code by running puppet parser validate against the file. A manifest with no syntax errors will return you to the command prompt with no messages:
stardust:modules rilindo$ puppet parser validate aws_demo/create.pp 
stardust:modules rilindo$
Let us review the components that went into the code.
In this manifest, we are using a custom resource type called ec2_instance (made available by the module we installed earlier). To deploy the instance, Puppet uses the value of the namevar to tag the instance with the name myinstancename and then applies the following parameters:
ensure: This either creates (present) or removes (absent) the instance.
region: Where the instance will be provisioned (and deprovisioned). In this code, we will deploy into us-west-1.
image_id: The ID of the AMI that ec2_instance will use. The AMI in question is the official Ubuntu 14.04 AMI, with the ID ami-48db9d28.
instance_type: The instance type, which could be anything from burstable instances to compute-optimized ones. In this demonstration, we will use t2.micro, an inexpensive instance type that is perfect for testing.
That is generally all the code you need.
However, since you are a Linux Academy user (like this author), you have likely set up a VPC with a public and a private subnet, as well as customized security groups. Hence, in our case, it is necessary to add the last two parameters:
security_groups: The name of the security group that you will attach to your instance. Multiple security groups can be passed to this parameter as an array. It is very important to specify the name of the security group, not the security group ID, or provisioning will fail. Here, we are using the Access security group as a single element in an array.
subnet: The name of the subnet you will provision your instance into. As with the security group, you need to use the name (not the ID) of the subnet, so you will need to tag it. In this case, we are using the Public subnet.
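To see how these parameters compose, here is a hedged variant of the manifest that attaches two security groups, a key pair, and a role tag. The 'app_internal' group, 'mykey' key pair, and 'demo' tag are illustrative and would need to exist (or be valid) in your account:

```puppet
# A sketch only: the 'Access' and 'app_internal' security groups, the 'Public'
# subnet, and the 'mykey' key pair are assumed to already exist in us-west-1.
ec2_instance { 'myinstancename':
  ensure          => present,
  region          => 'us-west-1',
  image_id        => 'ami-48db9d28',
  instance_type   => 't2.micro',
  key_name        => 'mykey',
  security_groups => ['Access', 'app_internal'],  # names, not group IDs
  subnet          => 'Public',                    # subnet name tag, not ID
  tags            => { 'role' => 'demo' },
}
```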
Incidentally, a walkthrough of a non-default AWS VPC creation can be found at the following address:

At this point, we are ready to deploy the instance.

Deploying the Instance
From the command line, run puppet apply aws_demo/create.pp:
stardust:modules rilindo$ puppet apply aws_demo/create.pp 
It will return the following output:
Notice: Compiled catalog for stardust.local in environment production in 0.28 seconds
Notice: /Stage[main]/Main/Ec2_instance[myinstancename]/ensure: changed absent to running
Log in to your AWS web console and go to EC2, then Instances. You will see your instance being created:
At this point, you have successfully created an instance through Puppet.
Terminating the Instance
Now that you have created a new instance, it is time to terminate it. Copy the file aws_demo/create.pp to aws_demo/destroy.pp:
cp aws_demo/create.pp aws_demo/destroy.pp
Then in the file aws_demo/destroy.pp, change ensure from present to absent:
ec2_instance { 'myinstancename':
  ensure          => absent,
  region          => 'us-west-1',
  image_id        => 'ami-48db9d28',
  instance_type   => 't2.micro',
  security_groups => ['Access'],
  subnet          => 'Public',
}
Once again, verify that there are no syntax issues with the code:
stardust:modules rilindo$ puppet parser validate aws_demo/destroy.pp 
stardust:modules rilindo$
Then run it:
stardust:modules rilindo$ puppet apply aws_demo/destroy.pp
It will return with:
Notice: Compiled catalog for stardust.local in environment production in 0.30 seconds
Log back into the AWS console and verify that it is shutting down:
This will take some time. When it is done, it will look something like this:
stardust:modules rilindo$ puppet apply aws_demo/destroy.pp
Notice: Compiled catalog for stardust.local in environment production in 0.30 seconds
Notice: /Stage[main]/Main/Ec2_instance[myinstancename]/ensure: changed running to absent
Notice: Applied catalog in 63.87 seconds
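If you would rather watch the teardown from the terminal than from the console, a sketch like this polls `puppet resource` until the instance reports absent. It requires puppet plus the AWS setup from earlier, so it is guarded to no-op where puppet is missing:

```shell
# Sketch: poll until the instance reports absent. Guarded so it does nothing
# on machines without puppet installed.
if command -v puppet >/dev/null 2>&1; then
  for attempt in 1 2 3 4 5; do
    # Extract the quoted ensure value from the resource output.
    state="$(puppet resource ec2_instance myinstancename | awk -F"'" '/ensure/ {print $2}')"
    echo "attempt $attempt: state=$state"
    [ "$state" = "absent" ] && break
    sleep 15
  done
else
  echo "puppet not found; skipping poll"
fi
```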
That is how you deploy and terminate EC2 instances with Puppet.
Advanced Usage
You can easily customize your Puppet code in order to streamline your EC2 deployments. For instance, you can use Hiera to separate your data from your code, making your manifests reusable. Here is a demonstration of how that can be done.
Setup with Hiera
Assuming that you are still in ~/src/puppet/modules, change over to ~/src/puppet and create the directory hieradata:
stardust:modules rilindo$ cd ..
stardust:puppet rilindo$ mkdir hieradata
Within the hieradata directory, create a file called common.yaml:

stardust:puppet rilindo$ vi hieradata/common.yaml
And insert the following attributes:
ami: ami-48db9d28
region: us-west-1
Now we need to make sure that Puppet knows where to find it, so create the file hiera.yaml under ~/src/puppet and insert the following:
---
:backends:
  - yaml
:hierarchy:
  - common
:yaml:
  :datadir: 'hieradata'
Save the file.
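If you prefer to script the setup, a sketch like this writes both files from the shell. A scratch directory stands in for ~/src/puppet here, and the values match the ones used above:

```shell
# Write the Hiera config and data files from the shell. A scratch directory
# stands in for ~/src/puppet for illustration.
BASE="/tmp/aws_demo_puppet"
mkdir -p "$BASE/hieradata"

cat > "$BASE/hiera.yaml" <<'EOF'
---
:backends:
  - yaml
:hierarchy:
  - common
:yaml:
  :datadir: 'hieradata'
EOF

cat > "$BASE/hieradata/common.yaml" <<'EOF'
ami: ami-48db9d28
region: us-west-1
EOF

cat "$BASE/hiera.yaml" "$BASE/hieradata/common.yaml"
```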
From that directory, copy modules/aws_demo/create.pp to modules/aws_demo/create_with_hiera.pp:
stardust:puppet rilindo$ cp modules/aws_demo/create.pp modules/aws_demo/create_with_hiera.pp
stardust:puppet rilindo$
At this point, open modules/aws_demo/create_with_hiera.pp and change the values of the region and image_id parameters to look up the data from Hiera:
ec2_instance { 'myinstancename_withhiera':
  ensure          => present,
  region          => hiera('region'),
  image_id        => hiera('ami'),
  instance_type   => 't2.micro',
  security_groups => ['Access'],
  subnet          => 'Public',
}
Save the file and verify that there are no syntax errors:
stardust:puppet rilindo$ puppet parser validate modules/aws_demo/create_with_hiera.pp 
stardust:puppet rilindo$
Finally, execute the code with the YAML file by adding --hiera_config and the name of the Hiera config file to the end of the puppet command string:
stardust:puppet rilindo$ puppet apply modules/aws_demo/create_with_hiera.pp --hiera_config hiera.yaml
It should return with:
Notice: Compiled catalog for stardust.local in environment production in 0.29 seconds
Notice: /Stage[main]/Main/Ec2_instance[myinstancename_withhiera]/ensure: changed absent to running
Notice: Applied catalog in 2.56 seconds
Log into your AWS Console. You will be able to verify that the instance has been created:
This is just one way you can customize your Puppet AWS deployments. Please review the examples included with Puppet Labs' AWS module to get an understanding of how you can further customize your deployments, as well as deploy other EC2 resources such as ELBs or security groups.