
AWS Security Essentials – VPC Endpoints and Securing S3

Hands-On Lab

 

Training Architect

Length: 01:00:00

Difficulty: Intermediate

AWS S3 and DynamoDB are fantastic managed services. They allow you to focus on what's important while AWS handles the backend processes. Unfortunately, because these services are managed by AWS, traffic normally has to leave your protected VPC to reach them. Enter VPC Endpoints! VPC Endpoints let you create endpoints within your VPC that keep traffic on a private link between your VPC resources and these AWS services. Accessing DynamoDB and S3 privately from your own VPC CIDR range is crucial to maintaining a secure environment that is resistant to hackers and data thieves. In this learning activity, you will configure a VPC Endpoint and use encryption to ensure your data is secure.


VPC Endpoints and Securing S3

In this lab, we learn how to create VPC endpoints for S3 and DynamoDB; we will also practice securing and encrypting files in S3.

Before getting started, make sure you're logged in to the AWS web console and have selected the N. Virginia region (us-east-1).

Securing S3

To begin, navigate to S3 from the services menu. We'll see two buckets: one whose title includes "companywebsite" and one whose title includes "companydata". Select the companydata bucket and click the Permissions tab at the top of the page.

The owner of the bucket has "list," "write," and "read" permissions. As such, each file in the bucket inherits these permissions as well.

Click the Properties tab at the top of the page. Click Versioning, select Enable versioning, and click Save. Click Default encryption, select AWS-KMS, choose aws/s3 from the dropdown menu, and click Save.
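If you'd rather script these settings, roughly equivalent AWS CLI calls are shown below. This is only a sketch: run it from a machine whose credentials can modify the bucket, and substitute your actual companydata bucket name for the placeholder.

$ aws s3api put-bucket-versioning --bucket xxxxxxxxxxxx-companydata-xxxxx --versioning-configuration Status=Enabled
$ aws s3api put-bucket-encryption --bucket xxxxxxxxxxxx-companydata-xxxxx --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'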

Default encryption only applies to objects uploaded after it is enabled, so the files already in our bucket are not yet encrypted. Return to the Overview tab, select the my_account_number.txt file, and go to the Properties tab for this object. Click Encryption, select AWS-KMS, choose aws/s3 from the dropdown menu, and click Save.

Go back to the bucket overview page and follow the same steps to enable encryption on the other file, personal_identification_in_filename.txt.
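The same re-encryption can be done from the CLI by copying each object over itself with SSE-KMS requested. This is a sketch only; the bucket name below is a placeholder, and the second command simply repeats the first for the other file.

$ aws s3 cp s3://xxxxxxxxxxxx-companydata-xxxxx/my_account_number.txt s3://xxxxxxxxxxxx-companydata-xxxxx/my_account_number.txt --sse aws:kms
$ aws s3 cp s3://xxxxxxxxxxxx-companydata-xxxxx/personal_identification_in_filename.txt s3://xxxxxxxxxxxx-companydata-xxxxx/personal_identification_in_filename.txt --sse aws:kms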

Next, go back to the main S3 page and select the companywebsite bucket. This bucket has the same permissions as the companydata bucket, as do each of its files.

Click the Properties tab at the top of the page. Click Versioning, select Enable versioning, and click Save.

Since this bucket includes website files, we'll also enable static web hosting. Click Static website hosting and select Use this bucket to host a website. Enter "index.html" as the index document and "error.html" as the error document. Click Save.
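If you prefer the CLI, the same static hosting configuration can be applied with a single command; the bucket name below is a placeholder for your companywebsite bucket.

$ aws s3 website s3://xxxxxxxxxxxx-companywebsite-xxxxx/ --index-document index.html --error-document error.html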

We want users of the site to be able to view its pages, so we need to make each file public. From the bucket overview page, select the index.html document and click Make public. Return to the bucket, select error.html, and click Make public for it as well.
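The CLI equivalent uses object ACLs. This sketch assumes the bucket permits public ACLs, and the bucket name is a placeholder.

$ aws s3api put-object-acl --bucket xxxxxxxxxxxx-companywebsite-xxxxx --key index.html --acl public-read
$ aws s3api put-object-acl --bucket xxxxxxxxxxxx-companywebsite-xxxxx --key error.html --acl public-read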

Configure EC2

Navigate to EC2 from the services menu. Select BastionHost1 from the list of instances, and copy its IPv4 Public IP address to your clipboard. Open a terminal window and connect to the instance via SSH:

$ ssh cloud_user@12.34.56.78

Be sure to substitute the instance's IP address for the one in the command. The instance login password can be found on the lab page with other credentials for this lab.

Back on the EC2 dashboard, select the AppServer1 instance and copy its private IP address to your clipboard. Then return to the terminal, where we're logged in to the bastion host, and connect to the app server via SSH:

$ ssh cloud_user@10.0.0.1

As before, substitute the app server's private IP address for the one in the command; the login password is the same one listed on the lab page.

Check Networking

Navigate to VPC from the services menu and select Subnets from the menu on the left side of the page. Select the AppLayer1 subnet (which contains the AppServer1 instance) from the list and click the Route Table tab in the lower panel. The table should show a route whose target is a NAT Gateway ID, meaning the subnet currently has a route to the internet.

This means we should be able to reach our S3 buckets using the CLI from our app server.
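You can verify the same thing from the CLI by inspecting the subnet's route table. The route table ID below is a placeholder; the output should include a 0.0.0.0/0 route whose target is a nat- ID.

$ aws ec2 describe-route-tables --route-table-ids rtb-0123456789abcdef0 --query 'RouteTables[].Routes'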

Return to the terminal window, where we're logged in to the AppServer1 instance, and run:

$ aws configure

At the first two prompts, enter the keys provided on the lab page. At the third prompt (Default region name), enter "us-east-1".
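The prompts look roughly like this; the access key shown is a placeholder, and the output format can be left at its default:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: <secret access key from the lab page>
Default region name [None]: us-east-1
Default output format [None]: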

We can now check connectivity from this instance to our S3 buckets. In the terminal, run:

$ aws s3 ls

This will output the names of our S3 buckets. Copy the name of the companydata bucket and run:

$ aws s3 ls xxxxxxxxxxxx-companydata-xxxxx

In this command, substitute the name of your companydata S3 bucket. The output lists the files the bucket contains. This can be insecure if, as in the lab example, the file names themselves contain sensitive information.

Removing the NAT Gateway

Return to the VPC dashboard and select NAT Gateways from the menu on the left side of the page. Select the NAT Gateway from the list, click Actions at the top of the page, and select Delete NAT Gateway. Confirm this action by clicking Delete NAT Gateway.
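The CLI equivalent is a single call; the NAT Gateway ID below is a placeholder for the one shown in your console.

$ aws ec2 delete-nat-gateway --nat-gateway-id nat-0123456789abcdef0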

Select Route Tables from the menu on the left side of the page. Select the route table that has four subnets associated with it, and select the Routes tab from the bottom panel. We'll see the route that used to be associated with the NAT Gateway is now listed as a "Black hole."

Since the route is no longer needed, we can remove it. Click Edit and click the X next to the route associated with the NAT Gateway to remove it. Click Save.
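From the CLI, the same cleanup would look roughly like this; the route table ID is a placeholder, and 0.0.0.0/0 is the destination the NAT Gateway route used.

$ aws ec2 delete-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0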

Back in the terminal, run the S3 CLI commands from the previous section again. This time, the commands will fail because the instance is in a private subnet that no longer has a route to the internet.

Create a Secure Endpoint

When it comes to security, we don't have to choose between having open access or no access at all; we can create a VPC endpoint to give us the same accessibility without the risk.

In the VPC dashboard, select Endpoints from the menu on the left side of the page and click Create Endpoint. From the service list, select com.amazonaws.us-east-1.s3 and select the SecurityEssentials VPC from the VPC dropdown menu. In the list of route tables, select the one associated with four subnets (this is the one we deleted the NAT Gateway route from previously). Click Create Endpoint.
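The same gateway endpoint can be created from the CLI, assuming you substitute your own IDs; the VPC and route table IDs below are placeholders.

$ aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0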

Go back to the Route Tables section by selecting it from the menu on the left. Choose the route table with four associated subnets and wait several minutes for the endpoint to be detected and added to the route table.

Finally, return to the terminal and run the same S3 CLI commands as before:

$ aws s3 ls
$ aws s3 ls xxxxxxxxxxxx-companydata-xxxxx

This time, the commands will work. The difference is that the request is sent (and the response returned) over the Amazon private network rather than the public internet.

Review

In this lab, we learned to securely use the S3 CLI by provisioning a VPC endpoint. Instead of sending traffic over the public internet, where it could be read by unintended parties, the endpoint lets us reach our AWS resources over a private network.

Congratulations! You've completed the lab on VPC endpoints and securing S3!