
Configure and Test a Placement Group on AWS

Hands-On Lab

 


Tia Williams

AWS Training Architect II in Content

Length

01:00:00

Difficulty

Intermediate

In this learning activity, we will create a spread placement group with instances in two different Availability Zones. We are going to verify that enhanced networking is enabled on that instance type, and then we will set up the AWS CLI with aws configure and review the placement group from the command line. Our final step will be using iperf3 to benchmark network throughput between the Linux instances. Let's get our hands dirty and jump into this Hands-On Lab.


Configure and Test a Placement Group on AWS

Welcome. Get comfortable, because we're going to be creating what's called a spread placement group, with instances in more than one Availability Zone. We'll make sure enhanced networking is enabled on the instances, and we'll test connection speeds with iperf3.

Let's grab the credentials from the lab's home page and get logged in. Then once we're logged in, let's make sure we are sitting in the us-east-1 region.

Existing configuration

VPC

In the lab environment, we've already been provisioned a VPC. We've also got two subnets (PG1Public in us-east-1a, and PG2Public in us-east-1b), and those two subnets are associated with a route table, PublicRT, which has a route out to the internet. Finally, there is an internet gateway attached to the VPC.
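If we want to double-check that layout ourselves later, once the AWS CLI is configured on the bastion host (we'll do that further down), something like the following sketch would work. The <VPC ID> placeholder is ours, not part of the lab; grab the real ID from the VPC Dashboard:

[cloud_user@host]$ # List the lab VPC's subnets along with their Availability Zones
[cloud_user@host]$ aws ec2 describe-subnets --filters "Name=vpc-id,Values=<VPC ID>" --query "Subnets[].[SubnetId,AvailabilityZone]" --output table

[cloud_user@host]$ # Confirm an internet gateway is attached to the VPC
[cloud_user@host]$ aws ec2 describe-internet-gateways --filters "Name=attachment.vpc-id,Values=<VPC ID>"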

EC2 instances

There's one up and running already, a t2.micro sitting in us-east-1a. This is going to be our bastion host. We still need to set up the other two instances, though, one in each AZ.

Create a Spread Placement Group

In the EC2 Dashboard, find Placement Groups in the left-hand menu, and let's click the big blue Create Placement Group button. We're going to name it MySpreadPlacementGroup, pick Spread from the Strategy dropdown, then click Create.
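As an aside, the console wizard isn't the only way: once the CLI is configured (we do that later in this lab), the same group could be created with a single command. This is just an equivalent sketch, not a lab step:

[cloud_user@host]$ # Create a spread placement group named MySpreadPlacementGroup
[cloud_user@host]$ aws ec2 create-placement-group --group-name MySpreadPlacementGroup --strategy spread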

Now, click on Instances (again, in the left-hand menu), then Launch Instances. We're going to click Select next to the Amazon Linux 2 AMI (HVM) choice (making sure not to grab the one with a similar name).

In the Instance Type screen, we're going to use t3.medium, then click Next: Configure Instance Details. On the Configure Instance Details page, leave Network set to our PGVPC, leave Subnet set to our PG1Public, and set Auto-assign Public IP to Enable. Check the box next to Add instance to placement group. A dropdown will appear, where we can select MySpreadPlacementGroup.

Click Next: Add Storage. Everything is fine here, so just click Next: Add Tags. On the Add Tags page, add the following tag:

  • Key: Name
  • Value: PGInst1

Click Next: Configure Security Group. In here, click Select an existing security group, and then select the one that is not named default from the table below. It has a Description of bastion-hosts. Click Review and Launch, and then Launch.

In the key pair popup, select Create a new key pair and give it a Key pair name of mypgkeypair. Click Download Key Pair, and then Launch Instances.

Click View Instances, and we'll see it firing up. We've got to launch another instance though. Go through all of those "launch" steps again, with these differences:

  • Make sure it's in the PG2Public subnet that's sitting in us-east-1b.
  • When we make a tag, the Value is going to be PGInst2.
  • At the key pair screen, select Choose an existing key pair, since we created one during the last instance launch. Check the I acknowledge... box.

Click View Instances, and hit refresh every so often until the two we created are in a running state. While we're there, we ought to name that third EC2 instance (the one that was already sitting there when we showed up) Bastion, just so we know at a glance when we're looking through things.
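As another aside, everything we just clicked through in the launch wizard maps onto a single run-instances call. Here's a sketch of what PGInst2's launch would look like from the CLI; every <...> placeholder is a value from your own environment, and the lab doesn't ask for this:

[cloud_user@host]$ # Launch a t3.medium into the spread placement group, in the PG2Public subnet
[cloud_user@host]$ aws ec2 run-instances \
    --image-id <AMAZON LINUX 2 AMI ID> \
    --instance-type t3.medium \
    --subnet-id <PG2Public SUBNET ID> \
    --associate-public-ip-address \
    --key-name mypgkeypair \
    --security-group-ids <SECURITY GROUP ID> \
    --placement "GroupName=MySpreadPlacementGroup" \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=PGInst2}]'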

Use the CLI to Validate the Placement Group

We need to log into the bastion host, but for that we don't use the key we downloaded earlier; we use what was in the lab credentials. Grab the cloud_user password from there, and log in (as cloud_user) with SSH to the bastion host's public IP.

Once we're in, the OS is telling us it needs some updates. Let's run those:

[cloud_user@host]$ sudo yum update

When that's done, we can set up the AWS CLI by running aws configure and entering these values when we're prompted:

  • AWS Access Key ID: Leave it blank
  • AWS Secret Access Key: Leave it blank
  • Default region name: us-east-1
  • Default output format: Leave it blank

(Leaving the access keys blank works here because the bastion host presumably has an IAM role attached that already grants the EC2 permissions we need.) Now let's check whether we can see the placement group from the CLI. Run aws ec2 describe-placement-groups, and we should see MySpreadPlacementGroup in the output. We can also run aws ec2 describe-instances --instance-ids <INSTANCE ID>. We'll need to have grabbed an instance ID ahead of time, back in the browser window that had the EC2 Dashboard open. This will show us information about the instance itself.
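The lab brief also promised we'd verify enhanced networking. Two handy checks from this same CLI session: one for ENA support, and one that lists just our placement group's members (swap in a real instance ID; the --query expressions are just our own filtering):

[cloud_user@host]$ # A t3.medium should report "EnaSupport": { "Value": true }
[cloud_user@host]$ aws ec2 describe-instance-attribute --instance-id <INSTANCE ID> --attribute enaSupport

[cloud_user@host]$ # List only the instances that are members of our placement group, with their AZs
[cloud_user@host]$ aws ec2 describe-instances --filters "Name=placement-group-name,Values=MySpreadPlacementGroup" --query "Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone]" --output table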

OK, now log out of the bastion host, and let's open up a second terminal. We've got to log into both of the newer instances, and it's better to have the sessions open side-by-side when we're running commands. Before we can log in, though, we've got to change permissions on the key we downloaded earlier:

[cloud_user@host]$ chmod 400 mypgkeypair.pem

Now we can log into each of them:

[cloud_user@host]$ ssh -i "mypgkeypair.pem" ec2-user@<PGInst1 IP ADDRESS>

and...

[cloud_user@host]$ ssh -i "mypgkeypair.pem" ec2-user@<PGInst2 IP ADDRESS>

Run a sudo yum -y update on each EC2 instance, to get their software packages up to date. Afterward, we need to install the EPEL repository on each, then iperf3. Let's run these two commands on each instance:

[ec2-user@PGInst*]$ sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
[ec2-user@PGInst*]$ sudo yum -y install iperf3 --enablerepo epel
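While we're logged into the instances anyway, this is a convenient moment to confirm enhanced networking from the OS side too. On an ENA-enabled instance, the primary network interface should be using the ena driver (the interface name, eth0 below, may differ on your instance):

[ec2-user@PGInst*]$ # "driver: ena" in the output means enhanced networking is active
[ec2-user@PGInst*]$ ethtool -i eth0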

Testing Iperf

PGInst1 is going to be our Iperf server, so let's make sure we're in that terminal, then fire up Iperf with this:

[ec2-user@PGInst1]$ sudo iperf3 -s -p 5201

Now, on PGInst2, we'll start the client:

[ec2-user@PGInst2]$ sudo iperf3 -c <PGInst1 PRIVATE IP ADDRESS> -P 10 -i 1 -t 30 -V

PGInst1's private IP address is over in the EC2 Dashboard; we can just copy it and paste it into the command. The flags tell iperf3 to run 10 parallel streams (-P 10), report once per second (-i 1), run for 30 seconds (-t 30), and print verbose output (-V).

Conclusion

With that command output, we can see that the two instances are talking to each other just fine, and quickly: we're using enhanced networking and getting the nearly 5 Gbits/sec of throughput that we want. We're done. Congratulations!