Using Join and Count to Create Multiple S3 Buckets

Hands-On Lab

 

Travis Thomsen

Course Development Director in Content

Length: 01:00:00

Difficulty: Intermediate

The Scenario

After building a Terraform module to deploy a single S3 bucket, our manager has asked us to modify it so we can deploy multiple buckets. We also need to output the name of each bucket deployed, but rather than outputting them all as a plain list, our manager wants them in a comma-separated list.

Logging In

Use the IP address and credentials provided on the hands-on lab overview page to log in to the server with SSH.
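For reference, the connection will look something like this (the username and IP address here are placeholders; use the values from your lab overview page):

ssh <username>@<public-ip-address>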

There's a fair bit of creating and editing files in this lab. All of the examples here use Vim, but use whichever text editor you're most comfortable with.

Update the Terraform Files

If we run ls right off, we'll see a lab directory. That's where we'll be doing a lot of the work, so let's get in there:

cd lab

ls will show us the Terraform files we're working with.
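Based on the files we'll be editing below, that listing should look something like this:

main.tf  outputs.tf  variables.tf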

Update main.tf:

vi main.tf

main.tf contents:

provider "aws" {
  region = "${var.aws_region}"
}

resource "random_id" "tf_bucket_id" {
  count       = "${var.number_of_instances}"
  byte_length = 2
}

resource "aws_s3_bucket" "tf_code" {
  count         = "${var.number_of_instances}"
  bucket        = "${var.project_name}-${random_id.tf_bucket_id.*.dec[count.index]}"
  acl           = "private"
  force_destroy =  true

  tags {
    Name = "tf_bucket${count.index+1}"
  }
}
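A quick note on what's happening here: count on both resources tells Terraform to create var.number_of_instances copies of each, and random_id.tf_bucket_id.*.dec[count.index] is a splat expression that pulls the decimal form of the random ID matching the current bucket's index, so every bucket name ends up with its own unique suffix. With the defaults, the first bucket name would resolve to something like the following (the numeric suffix is random, so yours will differ):

la-terraform-23871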

Update variables.tf:

vi variables.tf

variables.tf contents:

variable "aws_region" {
  default = "us-east-1"
}

variable "number_of_instances" {
  default = "2"
}

variable "project_name" {
  default = "la-terraform"
}
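If we ever want a different number of buckets without editing this file, the defaults can be overridden on the command line. This is optional and not needed for the lab, but it would look something like:

terraform apply -var="number_of_instances=3" -auto-approve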

Update outputs.tf:

vi outputs.tf

outputs.tf contents:

output "bucketname" {
  value = "${join(", ", aws_s3_bucket.tf_code.*.id)}"
}
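Here join() takes the separator ", " and the list of bucket IDs produced by the splat expression aws_s3_bucket.tf_code.*.id, and glues them into a single string. Conceptually (the bucket names below are just placeholders):

join(", ", ["la-terraform-23871", "la-terraform-4410"])   # => "la-terraform-23871, la-terraform-4410"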

Deploy the S3 Bucket

Initialize Terraform:

terraform init

Validate the files:

terraform validate
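If we'd like to preview exactly what's going to be created before applying, an optional plan step works here too (not required for the lab):

terraform plan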

Deploy the S3 bucket:

terraform apply -auto-approve
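When the apply completes, the end of the output should include our bucketname output with both bucket names in a single comma-separated string, something like the following (the random suffixes will differ):

Outputs:

bucketname = la-terraform-23871, la-terraform-4410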

Conclusion

The last bit of output, before we landed back at the command prompt, was a comma-delimited list of bucket names, so we're done. To verify on the AWS side, log in to the AWS Management Console and navigate to the S3 section. We'll see two buckets whose names correspond with what we saw in the output from our apply command. We can peek into each one and check its tags; those will correspond with the Name we set in main.tf, like tf_bucket1 and tf_bucket2. Congratulations!
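If the lab server happens to have the AWS CLI installed and configured (an assumption; checking in the console works just as well), we can also confirm the buckets and their tags from the command line:

aws s3 ls
aws s3api get-bucket-tagging --bucket <one-of-the-bucket-names>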