
Automating AWS with Lambda, Python, and Boto3

Course


Mark Richman

AWS Training Architect II in Content

Length: 11:29:44

Difficulty: Advanced

Videos: 36

Hands-on Labs: 7

Course Details

This course will explore AWS automation using Lambda and Python. We'll be using the AWS SDK for Python, better known as Boto3.

You will learn how to integrate Lambda with many popular AWS services, such as EC2, S3, SQS, DynamoDB, and more.

Once we cover the basics, we'll dive into some more advanced use cases to really uncover the power of Lambda.

To chat with the instructor and other students, check out the Linux Academy community Slack and join the #aws channel.

Check out the Interactive Diagram here: https://interactive.linuxacademy.com/diagrams/ThePontoporiaToolbox.html

Syllabus

Automating AWS with Lambda, Python, and Boto3

Introduction

Course Overview

00:01:00

Lesson Description:

This video introduces the Automating AWS with Lambda, Python, and Boto3 course.

About the Training Architect

00:00:58

Lesson Description:

Get to know a little bit more about me, the instructor for this course!

Course Prerequisites

00:01:16

Lesson Description:

This video covers the necessary prerequisites for this course.

Working with the Interactive Diagrams

00:02:14

Lesson Description:

In this video, we'll talk about how to access, navigate, and use the interactive diagram for this course. Interactive Diagram: The Pontoporia Toolbox

Getting Started

Overview of AWS Lambda

00:14:59

Lesson Description:

In this video, we will go over a variety of essential Lambda concepts as well as learn how to create our own Lambda functions in the AWS Management Console. This will allow you to start developing with AWS Lambda and take advantage of its availability, fault tolerance, elasticity, and cost efficiency. We will also learn about the handler, event triggers, event data, the context object, and logging in Amazon CloudWatch. By the end of this lesson, you will be able to create your own simple Lambda functions in the AWS Console and feel confident that you understand the basics of Lambda and serverless compute options. You will know how to modify a Lambda function's code to take advantage of event data and the context object, and you will be able to add events to a Lambda function using the AWS Console.
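
To make these concepts concrete, here is a minimal handler (an illustrative sketch, not this lesson's code) that reads event data, uses the context object, and logs to CloudWatch:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Event data arrives as a dict; context carries runtime metadata
    logger.info('Request ID: %s', context.aws_request_id)
    logger.info('Time remaining (ms): %s', context.get_remaining_time_in_millis())
    name = event.get('name', 'world')
    return {'statusCode': 200, 'body': json.dumps(f'Hello, {name}!')}
```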

Introduction to Boto3

00:13:01

Lesson Description:

Boto3 is the AWS SDK for Python. It enables Python developers to create, configure, and manage AWS services, such as EC2 and S3. Boto3 provides an easy-to-use, object-oriented API, as well as low-level access to AWS services.
Installing the AWS CLI and Boto3
Amazon Linux 2
The AWS CLI is already installed on Amazon Linux 2. Install Python 3:

sudo yum install -y python3-pip python3 python3-setuptools
Install Boto3:
pip3 install boto3 --user
macOS
Install Homebrew:
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Install Python 3:
brew install python
Insert the Homebrew Python directory at the top of your PATH environment variable:
export PATH="/usr/local/opt/python/libexec/bin:$PATH"
Verify you are using Python 3:
python --version
Install the AWS CLI and Boto3:
pip install awscli boto3 --upgrade --user
The --upgrade option tells pip to upgrade any requirements that are already installed. The --user option tells pip to install the program to a subdirectory of your user directory to avoid modifying libraries used by your operating system. You may need to add the user directory to your PATH, for example, ~/Library/Python/3.7/bin. Run the following command at the terminal, and add it to your .bashrc, .zshrc, or other shell configuration file:
export PATH=~/Library/Python/3.7/bin:$PATH
Configuring Your AWS Environment
Obtain your AWS access key and secret access key from the AWS Management Console. Run the following command:
aws configure
This sets up a text file that the AWS CLI and Boto3 libraries look at by default for your credentials: ~/.aws/credentials. The file should look like this:
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Test Your Credentials
AWS CLI
Run the following command:
aws sts get-caller-identity
The output should look like this:
{
    "UserId": "AIDAJKLMNOPQRSTUVWXYZ",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/devuser"
}
Boto3
Run python3 interactively:
python3
Run the following commands:
>>> import boto3
>>> sts = boto3.client('sts')
>>> sts.get_caller_identity()
The output should look like this:
{'UserId': 'AIDAJKLMNOPQRSTUVWXYZ', 'Account': '123456789012', 'Arn': 'arn:aws:iam::123456789012:user/devuser', 'ResponseMetadata': {'RequestId': '3e310806-50c9-11e9-93ae-dbac86675630', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '3e310806-50c9-11e9-93ae-dbac86675630', 'content-type': 'text/xml', 'content-length': '404', 'date': 'Wed, 27 Mar 2019 19:48:06 GMT'}, 'RetryAttempts': 0}}
Quit using exit() or Ctrl+D:
>>> exit()
Resources
Boto3 Documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
Botocore Documentation: https://botocore.amazonaws.com/v1/documentation/api/latest/index.html
AWS Community Python Forum: https://forums.aws.amazon.com/forum.jspa?forumID=132

Elastic Compute Cloud (EC2)

Stopping EC2 Instances Nightly

00:08:14

Lesson Description:

In this video, we'll look at an effective cost-saving technique: shutting down EC2 instances on a nightly basis. If you consider a development team of around 130 people, each using m4.large instances, this can save you well over $2,000 per month in usage fees. Download the source code for this lesson here.
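
As a rough sketch of the approach (the lesson's actual code is in the download above; the tag filter here is an assumption), a nightly-shutdown function might look like this:

```python
import boto3

def lambda_handler(event, context):
    ec2 = boto3.resource('ec2')
    # Find running instances tagged for shutdown (tag name/value are assumptions)
    instances = ec2.instances.filter(Filters=[
        {'Name': 'tag:Environment', 'Values': ['dev']},
        {'Name': 'instance-state-name', 'Values': ['running']},
    ])
    ids = [instance.id for instance in instances]
    instances.stop()  # batch-stop the whole collection
    print(f'Stopped instances: {ids}')
```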

Backing Up EC2 Instances

00:11:35

Lesson Description:

In this video, we'll demonstrate an alternative to the EBS Lifecycle Manager by creating an EBS backup solution from scratch. This solution consists of a pair of Lambda functions: one to perform the EBS snapshots, and the other to prune stale snapshots. Download the full source code for this lesson here.
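
A minimal sketch of the snapshot half of such a solution (illustrative only; the Backup tag filter is an assumption, and the lesson's downloadable source also handles pruning):

```python
import boto3
from datetime import datetime

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')
    # Snapshot every volume attached to instances tagged Backup=true (tag assumed)
    reservations = ec2.describe_instances(
        Filters=[{'Name': 'tag:Backup', 'Values': ['true']}])['Reservations']
    for reservation in reservations:
        for instance in reservation['Instances']:
            for mapping in instance.get('BlockDeviceMappings', []):
                volume_id = mapping['Ebs']['VolumeId']
                ec2.create_snapshot(
                    VolumeId=volume_id,
                    Description=f'Backup of {volume_id} at '
                                f'{datetime.utcnow().isoformat()}')
```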

Removing Unattached EBS Volumes

00:08:06

Lesson Description:

Deleting detached EBS volumes on a regular basis can help lower your AWS bill. In this lesson, we'll demonstrate a technique for deleting unattached EBS volumes across regions with Lambda, Boto3, and a CloudWatch rule. Download the full source code for this lesson here. Changing the root volume of an EC2 instance to persist:

aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --block-device-mappings file://mapping.json
JSON file (remember to replace the value for DeviceName with the correct value for your device name):
[{
  "DeviceName": "/dev/xvda",
  "Ebs": {
    "DeleteOnTermination": false
  }
}]
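
As an illustrative sketch of the cross-region cleanup logic described above (not the lesson's downloadable source), a function might iterate over every region and delete volumes in the available state:

```python
import boto3

def lambda_handler(event, context):
    regions = [r['RegionName']
               for r in boto3.client('ec2').describe_regions()['Regions']]
    for region in regions:
        ec2 = boto3.resource('ec2', region_name=region)
        # Volumes in the 'available' state have no attachments
        for volume in ec2.volumes.filter(
                Filters=[{'Name': 'status', 'Values': ['available']}]):
            print(f'Deleting {volume.id} in {region}')
            volume.delete()
```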

Deregistering Old AMIs

00:06:15

Lesson Description:

Deregistering old, unused AMIs on a regular basis can help lower your AWS bill. In this lesson, we'll demonstrate a technique for deregistering AMIs older than a certain age across regions with Lambda, Boto3, and a CloudWatch rule. Download the full source code for this lesson here.
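
A hedged sketch of the deregistration logic (the 90-day threshold is an assumption; the lesson's source is in the download above):

```python
import boto3
from datetime import datetime, timedelta, timezone

def lambda_handler(event, context):
    cutoff = datetime.now(timezone.utc) - timedelta(days=90)  # assumed threshold
    regions = [r['RegionName']
               for r in boto3.client('ec2').describe_regions()['Regions']]
    for region in regions:
        ec2 = boto3.client('ec2', region_name=region)
        # Only consider AMIs owned by this account
        for image in ec2.describe_images(Owners=['self'])['Images']:
            created = datetime.fromisoformat(
                image['CreationDate'].replace('Z', '+00:00'))
            if created < cutoff:
                print(f"Deregistering {image['ImageId']} in {region}")
                ec2.deregister_image(ImageId=image['ImageId'])
```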

AWS Instance Scheduler

00:11:36

Lesson Description:

The AWS Instance Scheduler is a solution that automates the starting and stopping of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instances. The Instance Scheduler leverages AWS resource tags and AWS Lambda to automatically stop and restart instances across multiple AWS regions and accounts on a customer-defined schedule. The solution is easy to deploy and can help reduce operational costs. For example, an organization can use the Instance Scheduler in a production environment to automatically stop instances every day outside of business hours. For customers who leave all of their instances running at full utilization, this solution can result in up to 70% cost savings for those instances that are only necessary during regular business hours (weekly utilization reduced from 168 hours to 50 hours).
Deployment
Sign in to the AWS Management Console and launch the solution's AWS CloudFormation template. You can also download the template as a starting point for your own implementation.

Hands-on Labs are real, live environments that put you in a real-world scenario to practice what you've learned, with no extra charge and no separate account to manage.

00:30:00

DynamoDB

Working with DynamoDB Tables

00:17:18

Lesson Description:

In this video, we will talk about working with DynamoDB tables. We will be using DynamoDB in several lessons throughout this course, so it's important for us to establish a good foundation before we move on. If you choose to use Cloud9, there are a couple of extra steps to get Python 3 and Boto3 working: under Settings, go to Python Support > Python Version and select Python 3.
Install Boto3 for Python 3 in Cloud9:
sudo pip-3.6 install boto3
Clone the GitHub repository for this course:
git clone https://github.com/linuxacademy/content-lambda-boto3.git
Boto3 DynamoDB documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/dynamodb.html
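
As a quick illustration of the Boto3 DynamoDB API covered here (table and attribute names are hypothetical):

```python
import boto3

dynamodb = boto3.resource('dynamodb')

# Create an on-demand table with a simple (partition-key-only) primary key
table = dynamodb.create_table(
    TableName='Movies',  # hypothetical table name
    KeySchema=[{'AttributeName': 'title', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'title', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST')
table.wait_until_exists()

# Write and read back a single item
table.put_item(Item={'title': 'The Big Lebowski', 'year': 1998})
print(table.get_item(Key={'title': 'The Big Lebowski'})['Item'])
```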

Hands-on Lab

01:00:00

Simple Storage Service (S3)

Resizing Images

00:13:00

Lesson Description:

In this video, we'll go over how to create a Lambda function that will automatically resize images uploaded to an S3 bucket.
Pillow - Python Imaging Library (Fork): https://pypi.org/project/Pillow/
Download the Pillow-5.4.1-cp37-cp37m-manylinux1_x86_64.whl file, and extract the wheel file in the same folder as lambda_handler.py:

unzip Pillow-5.4.1-cp37-cp37m-manylinux1_x86_64.whl
The Pillow-5.4.1.dist-info isn't needed:
rm -rf Pillow-5.4.1.dist-info
Zip the PIL directory along with lambda_handler.py:
zip -r9 lambda.zip PIL lambda_handler.py
Upload lambda.zip to AWS Lambda.
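
For reference, a condensed sketch of the resize handler (the bucket suffix and target size are assumptions; see the lesson's lambda_handler.py for the real implementation):

```python
import io
import boto3
from PIL import Image

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Triggered by an S3 upload event; resize and store in a second bucket
    record = event['Records'][0]['s3']
    bucket, key = record['bucket']['name'], record['object']['key']
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    image = Image.open(io.BytesIO(body))
    image.thumbnail((128, 128))  # assumed target size
    buffer = io.BytesIO()
    image.save(buffer, format=image.format or 'PNG')
    buffer.seek(0)
    # Destination bucket name is an assumption
    s3.put_object(Bucket=f'{bucket}-resized', Key=key, Body=buffer)
```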

Importing CSV Files into DynamoDB

00:11:34

Lesson Description:

In this lesson, we'll talk about how to bulk import data from CSV files into DynamoDB. We will create a trigger from an S3 bucket, invoking a Lambda function on upload. The Lambda function will parse the CSV data, and using Boto3, import this data into DynamoDB.
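
A minimal sketch of such a function (the table name is hypothetical, and it assumes the CSV headers match the table's attribute names, including the partition key):

```python
import csv
import boto3

s3 = boto3.resource('s3')
table = boto3.resource('dynamodb').Table('Movies')  # hypothetical table name

def lambda_handler(event, context):
    # Triggered when a CSV file lands in the bucket
    record = event['Records'][0]['s3']
    obj = s3.Object(record['bucket']['name'], record['object']['key'])
    lines = obj.get()['Body'].read().decode('utf-8').splitlines()
    # batch_writer buffers and sends items in efficient batches
    with table.batch_writer() as batch:
        for row in csv.DictReader(lines):
            batch.put_item(Item=row)
```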

Transcribing Audio

00:13:55

Lesson Description:

In this lesson, we will learn how to automate Amazon Transcribe, parsing out its raw JSON data and storing transcription results in S3. This solution consists of two Lambda functions and a CloudWatch event.

Detecting Faces with Rekognition

00:08:06

Lesson Description:

In this lesson, we will learn how to build our own facial recognition service by combining the capabilities of Amazon Rekognition with Lambda, S3, and DynamoDB.

Hands-on Lab

01:00:00

Simple Queue Service (SQS)

Triggering Lambda from SQS

00:13:24

Lesson Description:

In this lesson, we'll look at a common use case where we invoke a Lambda function that is triggered by SQS. Whenever a message is placed in the queue, the Lambda function will be triggered, reading the contents of that message and inserting its data as a record into DynamoDB. (A minimal sketch of such a handler appears after the lesson links below.)
Create a DynamoDB Table

aws dynamodb create-table --table-name Message \
  --attribute-definitions AttributeName=MessageId,AttributeType=S \
  --key-schema AttributeName=MessageId,KeyType=HASH \
  --billing-mode=PAY_PER_REQUEST
Create an SQS Queue
aws sqs create-queue --queue-name Messages
Sending Messages to SQS
Run the provided script send_message.py to send messages to SQS. Example: send a message containing random text to the Messages queue every 0.1 seconds (10 messages per second):
./send_message.py -q Messages -i 0.1
Press Ctrl+C to quit.
Lesson Links
Send Messages to SQS: https://github.com/linuxacademy/content-lambda-boto3/tree/master/SQS/Triggering-Lambda-from-SQS
Faker: https://pypi.org/project/Faker/
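
As referenced above, a minimal sketch of the SQS-triggered handler (attribute names beyond MessageId are illustrative):

```python
import boto3

table = boto3.resource('dynamodb').Table('Message')

def lambda_handler(event, context):
    # SQS may deliver a batch of records per invocation
    for record in event['Records']:
        table.put_item(Item={
            'MessageId': record['messageId'],
            'Body': record['body'],  # illustrative attribute
        })
```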

Creating a Queue Using Cross-Account Permissions

00:08:30

Lesson Description:

SQS does not allow API calls such as CreateQueue using cross-account permissions. A workaround is to create and invoke a Lambda function in another account in order to call that API.
Create AWS CLI Profiles
Development account admin:

aws configure --profile devadmin
Production account admin:
aws configure --profile prodadmin
Create a Lambda Function in the Production Account
Function name: CreateSQSQueue
See lambda_function.py and assign the role lambda_execution_role.json.
Assign Permissions to the Lambda Function
Add permissions to the production Lambda function that allow it to be invoked by the development account user:
aws lambda add-permission \
--function-name CreateSQSQueue \
--statement-id DevAccountAccess \
--action 'lambda:InvokeFunction' \
--principal 'arn:aws:iam::__DEVELOPMENT_ACCOUNT_NUMBER__:user/devadmin' \
--region us-east-2 \
--profile prodadmin
To view the policy:
aws lambda get-policy \
--function-name CreateSQSQueue \
--region us-east-2 \
--profile prodadmin
To remove the policy:
aws lambda remove-permission \
--function-name CreateSQSQueue \
--statement-id DevAccountAccess \
--region us-east-2 \
--profile prodadmin
Invoke the Production Lambda Function from the Development Account
aws lambda invoke \
--function-name '__LAMBDA_FUNCTION_ARN__' \
--payload '{"QueueName": "MyQueue" }' \
--invocation-type RequestResponse \
--profile devadmin \
--region us-east-2 \
output.txt

Hands-on Lab

01:00:00

Automating Security

Enabling VPC Flow Logs

00:11:01

Lesson Description:

VPC flow logs enable you to capture information about the IP traffic going to and from network interfaces in your VPC. By default, VPC flow logs are not enabled. However, in our scenario, let's say you have a policy that requires they be enabled for any new VPC that gets created in your account. In this lesson, we will automate the creation of VPC flow logs whenever a new VPC is created. (A minimal sketch of the function appears at the end of this lesson.)
lambda_function.py creates VPC flow logs for the VPC ID in the event.
event-pattern.json is the CloudWatch Rule event pattern for monitoring the CreateVpc API call.
test-event.json is a sample CloudTrail event that can be used with the Lambda function, as it contains the VPC ID.
Create an IAM Role with Permission to Log to CloudWatch Logs
Allow the VPC Flow Logs service to assume this role:

 aws iam create-role --role-name VPCFlowLogsRole --assume-role-policy-document file://trust-policy.json
Note the ARN for VPCFlowLogsRole. Example: arn:aws:iam::123456789012:role/VPCFlowLogsRole Grant this role permission to access CloudWatch Logs:
 aws iam put-role-policy --role-name VPCFlowLogsRole --policy-name VPCFlowLogsPolicy --policy-document file://vpc-flow-logs-iam-role.json
Create the Lambda Function
Name: EnableVPCFlowLogs
Runtime: Python 3.7
Role: Create a custom role (use lambda_execution_role.json)
Code: lambda_function.py
Create a CloudWatch Event Rule to Trigger Lambda
Select Event Pattern.
Service Name: EC2
Event Type: AWS API Call via CloudTrail
Specific operation(s): CreateVpc
Event Pattern:
 {
     "source": [
         "aws.ec2"
     ],
     "detail-type": [
         "AWS API Call via CloudTrail"
     ],
     "detail": {
         "eventSource": [
             "ec2.amazonaws.com"
         ],
         "eventName": [
             "CreateVpc"
         ]
     }
 }
Click Add target and select the EnableVPCFlowLogs Lambda function. Click Configure details.
Create a New VPC
Run the following command:
 aws ec2 create-vpc --cidr-block 172.20.0.0/16 --region us-east-2
Wait up to a minute for the CloudWatch rule to invoke the Lambda function.
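
As referenced above, a minimal sketch of what lambda_function.py does (the log group name is an assumption, and the event path assumes the shape of a CreateVpc CloudTrail event delivered via CloudWatch Events):

```python
import boto3

def lambda_handler(event, context):
    # The CloudWatch rule delivers the CreateVpc CloudTrail event;
    # the new VPC's ID rides along in responseElements
    vpc_id = event['detail']['responseElements']['vpc']['vpcId']
    ec2 = boto3.client('ec2')
    ec2.create_flow_logs(
        ResourceIds=[vpc_id],
        ResourceType='VPC',
        TrafficType='ALL',
        LogGroupName='VPCFlowLogs',  # assumed log group name
        # Placeholder account ID, matching the VPCFlowLogsRole created above
        DeliverLogsPermissionArn='arn:aws:iam::123456789012:role/VPCFlowLogsRole')
```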

Responding to Invalid SSH Logins

00:18:21

Lesson Description:

In this lesson, we'll explore a real-world scenario in which an EC2 instance is experiencing multiple failed SSH logins, and we want to automatically take the instance offline in response to this potential security event. Use the Web Server Log Group and the Invalid SSH Login metric filter to trigger a CloudWatch alarm set for 2 data points within 1 minute. This alarm should publish to an alarm notification SNS topic and send you an email as well as trigger the Lambda function to stop the instance.
Configure the EC2 Instance
The EC2 instance must have an IAM role that can communicate with both CloudWatch and Systems Manager.
Create an IAM Instance Role
Select IAM > Create Role > AWS Service > EC2 > Next: Permissions. Select the CloudWatchAgentAdminPolicy managed policy. Select the AmazonSSMManagedInstanceCore managed policy. Name the role "CloudWatchAgentAdminRole".
Launch the EC2 Instance
Select Amazon Linux 2. Create or select a security group with SSH (port 22) open to the public (0.0.0.0/0).
Attach the IAM Role to the Instance
Assign the CloudWatchAgentAdminRole IAM role to the EC2 instance.
Install the CloudWatch Agent Using Systems Manager
Run command: AWS-ConfigureAWSPackage
Action: Install
Name: AmazonCloudWatchAgent
Configure the CloudWatch Agent
Create a new session using SSM Session Manager, then run the CloudWatch Agent Configuration Wizard:

 sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
Note: Do not select CollectD unless you already installed it using sudo yum install collectd. Specify /var/log/secure at the "Do you want to monitor any log files?" prompt.
Validate the Configuration
```sh
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
```
Create an SNS Topic
The CloudWatch alarm will notify this topic, and the topic will trigger the Lambda function.
Topic name: AlarmNotificationTopic
Configure the CloudWatch Alarm
Add a Metric Filter to the Web Server Log Group
Click Secure log group, then click Create metric filter.
Filter pattern: [Mon, day, timestamp, ip, id, status = Invalid*]
Click Test pattern, then click Assign metric.
Filter name: InvalidSSHLogin
Metric namespace: SSH
Metric name: InvalidSSHLogin
Click Create filter.
Create the Alarm
Metric filter: SSH/InvalidSSHLogin
Click Create alarm.
Name: InvalidSSHLoginAlarm
Description: Invalid login attempts >2 in 1 min for instance <append instance ID>
Note: The description is critical, as the instance ID at the end is used by the Lambda function to stop the instance.
Trigger whenever InvalidSSHLogin >= 2 for 1 out of 1 datapoints.
Subscribe to the SNS Topic
Select AlarmNotificationTopic and click Create alarm.
Create an IAM Role for the Lambda Function
Create the role StopInstancesRole using the policy lambda_execution_role.json.
Create the Lambda Function
Name: StopInstance
Role: StopInstancesRole
Trigger Lambda from SNS
Select Trigger > SNS > AlarmNotificationTopic.
Trigger the CloudWatch Alarm
Make 3 invalid SSH login attempts within 2 minutes. Verify that the secure log contains the Invalid user string, that the CloudWatch alarm entered the ALARM state, that the CloudWatch log for the Lambda function shows it ran, and that the EC2 instance is stopped.

Remediating Inspector Findings

00:18:08

Lesson Description:

In this lesson, we will learn to automate the detection and remediation of security vulnerabilities using Amazon Inspector, SNS, and Lambda.
Helpful Links
Common Vulnerabilities and Exposures (CVE): https://cve.mitre.org
EC2 Instance Role Policy: arn:aws:iam::aws:policy/AmazonSSMFullAccess
Lambda Execution Role Policies:
arn:aws:iam::aws:policy/AmazonSSMFullAccess
arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Additional statement for Inspector:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "inspector:DescribeFindings"
      ],
      "Resource": "*"
    }
  ]
}
Create an SNS Topic
Create an SNS topic, then grant the regional Inspector account permission to publish to the SNS topic. These accounts are listed here: https://docs.aws.amazon.com/inspector/latest/userguide/inspector_assessments.html#sns-topic
Examples:
US East (Northern Virginia) — arn:aws:iam::316112463485:root
US East (Ohio) — arn:aws:iam::646659390643:root
Lambda Function
Adapted from https://github.com/awslabs/amazon-inspector-auto-remediate. The Lambda function will automatically patch EC2 instances when an Inspector assessment generates a CVE finding. The function requires that the EC2 instance to be patched have the Systems Manager (SSM) agent installed, and the agent must have a role attached with the necessary SSM permissions. For details, see https://docs.aws.amazon.com/systems-manager/latest/APIReference/Welcome.html.
SNS Trigger
The Lambda function is triggered by an SNS notification of a new finding from Inspector. The function checks to make sure that the finding is a CVE missing-patch finding, and if so, it checks to ensure that the SSM agent is running. It then uses SSM to issue the appropriate patch-and-reboot commands to either Ubuntu or Amazon Linux.
Run the Inspector
Ensure that you've tagged the EC2 instance(s) you want to scan.
Create an Assessment Target
Click Create. Choose a name. Use tags, selecting the EC2 instance(s) you've tagged for inspection. Click Save.
Create an Assessment Template
Click Create. Choose a name (e.g., "All Assessments"). Select the Assessment Target created in the previous step. Select all rules packages. Set the duration to 1 hour. Select the SNS topic we created previously. Deselect Assessment Schedule. Click Create and run.
Run the Assessment
Watch the assessment run progress, periodically refreshing for the full hour to update the Findings count. If there are any findings, check the CloudWatch log for the Lambda function to observe that SSM ran the update script against the EC2 instance. For example: yum update -q -y; yum upgrade -y.

Making Public S3 Objects Private

00:10:24

Lesson Description:

In this lesson, we'll learn how to detect unintended public access permissions in the ACL of an S3 object and how to revoke them automatically using Lambda, Boto3, and CloudWatch events.
Create S3 Buckets
Create the S3 buckets:

 aws s3 mb s3://123456789012-everything-must-be-private
 aws s3 mb s3://123456789012-bucket-for-my-object-level-s3-trail
Apply a bucket policy:
 aws s3api put-bucket-policy \
 --bucket 123456789012-bucket-for-my-object-level-s3-trail \
 --policy file://bucket_policy.json
Create a CloudTrail Trail and Start Logging
Run the following commands:
 aws cloudtrail create-trail \
 --name my-object-level-s3-trail \
 --s3-bucket-name 123456789012-bucket-for-my-object-level-s3-trail

 aws cloudtrail start-logging --name my-object-level-s3-trail
Create the event selectors:
 aws cloudtrail put-event-selectors \
 --trail-name my-object-level-s3-trail \
 --event-selectors file://event_selectors.json
Create an IAM Execution Role for Lambda
Create the IAM role:
 aws iam create-role \
 --role-name AllowLogsAndS3ACL \
 --assume-role-policy-document file://trust_policy.json
Define the access policy:
 aws iam put-role-policy \
 --role-name AllowLogsAndS3ACL \
 --policy-name AllowLogsAndS3ACL \
 --policy-document file://access_policy.json
Create a Lambda Function
For a PutObjectAcl API event, the function gets the bucket and key name from the event. If the object is not private, it makes the object private with another PutObjectAcl call. (A minimal sketch of this handler appears after the cleanup commands at the end of this lesson.)
Zip the Lambda function:
 zip -r9 RemediateObjectACL.zip lambda_function.py
Create the Lambda function:
 aws lambda create-function \
 --function-name RemediateObjectACL \
 --zip-file fileb://RemediateObjectACL.zip \
 --role arn:aws:iam::123456789012:role/AllowLogsAndS3ACL \
 --handler lambda_function.lambda_handler \
 --runtime python3.7 \
 --environment Variables={BUCKET_NAME=123456789012-everything-must-be-private}
Allow CloudWatch events to invoke Lambda:
 aws lambda add-permission \
 --function-name RemediateObjectACL \
 --statement-id AllowCloudWatchEventsToInvoke \
 --action 'lambda:InvokeFunction' \
 --principal events.amazonaws.com \
 --source-arn arn:aws:events:us-east-2:123456789012:rule/S3ObjectACLAutoRemediate
Create a CloudWatch Events Rule
Create the rule:
 aws events put-rule \
 --name S3ObjectACLAutoRemediate \
 --event-pattern file://event_pattern.json
Set the Lambda function as the target:
 aws events put-targets \
 --rule S3ObjectACLAutoRemediate \
 --targets Id=1,Arn=arn:aws:lambda:us-east-2:123456789012:function:RemediateObjectACL
Testing
Run the following commands:
 aws s3api put-object \
 --bucket 123456789012-everything-must-be-private \
 --key MyPersonalInfo

 aws s3api get-object-acl \
 --bucket 123456789012-everything-must-be-private \
 --key MyPersonalInfo
The above should return 1 grantee, the owner (you). This indicates that the object is private. Add public read access, violating our policy:
 aws s3api put-object-acl \
 --bucket 123456789012-everything-must-be-private \
 --key MyPersonalInfo \
 --acl public-read
Quickly check access again:
 aws s3api get-object-acl \
 --bucket 123456789012-everything-must-be-private \
 --key MyPersonalInfo
You will see another grantee, allowing everyone to read the object:
 {
   "Grantee": {
     "Type": "Group",
     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
   },
   "Permission": "READ"
 }
Describe the ACL again, and you'll see the Lambda function has removed public read access. Verify this in CloudWatch Logs.
Cleanup
aws events remove-targets --rule S3ObjectACLAutoRemediate --ids "1"
aws events delete-rule --name S3ObjectACLAutoRemediate
aws lambda delete-function --function-name RemediateObjectACL
aws iam delete-role-policy --role-name AllowLogsAndS3ACL --policy-name AllowLogsAndS3ACL
aws iam delete-role --role-name AllowLogsAndS3ACL
aws cloudtrail delete-trail --name my-object-level-s3-trail
aws s3 rb s3://123456789012-bucket-for-my-object-level-s3-trail --force
aws s3 rb s3://123456789012-everything-must-be-private --force
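
As referenced in the Lambda function step above, a minimal sketch of the remediation handler (the event shape assumes a CloudTrail-based S3 data event delivered via CloudWatch Events):

```python
import os
import boto3

s3 = boto3.client('s3')
BUCKET = os.environ['BUCKET_NAME']

def lambda_handler(event, context):
    # Invoked by the CloudWatch Events rule on PutObjectAcl calls
    params = event['detail']['requestParameters']
    bucket, key = params['bucketName'], params['key']
    if bucket != BUCKET:
        return
    acl = s3.get_object_acl(Bucket=bucket, Key=key)
    is_public = any(
        grant['Grantee'].get('URI', '').endswith('AllUsers')
        for grant in acl['Grants'])
    if is_public:
        # Reset the object's ACL to private
        s3.put_object_acl(Bucket=bucket, Key=key, ACL='private')
```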

Automating Resource Tagging

00:13:07

Lesson Description:

In this lesson, we'll learn how to automate the tagging of EC2 instances and their corresponding resources using a Lambda function with CloudTrail and CloudWatch. The function will ensure that users can work only on those resources that they have created, based on resource tags. This is enforced via an IAM policy.
Create the IAM Policy
This policy allows Start/Stop/Reboot/Terminate for EC2 instances where the tag Owner matches the current requester's user ID. Run the following command:

 aws iam create-policy \
 --policy-name TagBasedEC2RestrictionsPolicy \
 --policy-document file://TagBasedEC2RestrictionsPolicy.json
Note the policy ARN.
Attach the IAM Policy to a Group
Create a group called developers:
 aws iam create-group --group-name developers
Attach the policy to the group:
 aws iam attach-group-policy \
 --policy-arn arn:aws:iam::123456789012:policy/TagBasedEC2RestrictionsPolicy \
 --group-name developers
Create an IAM Role for the Lambda Function
Create the IAM role:
 aws iam create-role \
 --role-name LambdaAllowTaggingEC2Role \
 --assume-role-policy-document file://trust_policy.json
Define the access policy:
 aws iam put-role-policy \
 --role-name LambdaAllowTaggingEC2Role \
 --policy-name LambdaAllowTaggingEC2Policy \
 --policy-document file://access_policy.json
Create the Lambda Function
Create the function TagEC2Resources.
Create a CloudWatch Rule
Create the rule:
 aws events put-rule \
 --name AutoTagResources \
 --event-pattern file://event_pattern.json
Set the Lambda function as the target:
 aws events put-targets \
 --rule AutoTagResources \
 --targets Id=1,Arn=arn:aws:lambda:us-east-2:123456789012:function:TagEC2Resources
Create an EC2 Instance as a User
Create an EC2 instance as an administrative/root user. Observe the Owner tag. Try working with EC2 instances that are untagged or owned by other users, and observe the "Access Denied" errors.
What Next?
Now that you know you can tag resources with a Lambda function in response to events, you can apply the same logic to other resources such as RDS databases or S3 buckets. With resource groups, each user can focus on just their resources, and the IAM policy provided in this lesson ensures that no unauthorized action is possible on someone else's instance. Additionally, tags are useful in custom billing reports to project costs and determine how much money each individual owner is spending. You can activate the Owner tag from the Cost Allocation Tags section of your billing console to include it in your detailed billing reports. For more information, see Applying Tags.

Rotating IAM Access Keys

00:12:06

Lesson Description:

In this lesson, we'll create a Lambda function that revokes user access keys periodically to enforce rotation and mitigate risk. The Lambda function will perform the following:
Collect IAM users using pagination
Scan each user for existing IAM access keys older than 90 days
Deactivate the keys
Send email alerts to the administrator
Scheduled CloudWatch Rule: triggers the Lambda function to run (e.g., weekly).
Amazon Simple Email Service (Amazon SES): be sure to use an SES-verified email address to ensure proper delivery of emails. SES API endpoints are not available in all regions; see the SES documentation for a list of supported endpoints.
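
A hedged sketch of such a function (the email addresses are placeholders and must be SES-verified):

```python
import boto3
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)

def lambda_handler(event, context):
    iam = boto3.client('iam')
    ses = boto3.client('ses')
    deactivated = []
    # Paginate through every IAM user and inspect each access key's age
    for page in iam.get_paginator('list_users').paginate():
        for user in page['Users']:
            keys = iam.list_access_keys(UserName=user['UserName'])
            for key in keys['AccessKeyMetadata']:
                age = datetime.now(timezone.utc) - key['CreateDate']
                if key['Status'] == 'Active' and age > MAX_AGE:
                    iam.update_access_key(UserName=user['UserName'],
                                          AccessKeyId=key['AccessKeyId'],
                                          Status='Inactive')
                    deactivated.append(key['AccessKeyId'])
    if deactivated:
        # The addresses are placeholders; SES requires verified identities
        ses.send_email(
            Source='admin@example.com',
            Destination={'ToAddresses': ['admin@example.com']},
            Message={'Subject': {'Data': 'Deactivated IAM access keys'},
                     'Body': {'Text': {'Data': '\n'.join(deactivated)}}})
```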

Hands-on Lab

00:30:00

AWS Config

Finding Outdated Instance Types

00:11:35

Lesson Description:

In this lesson, we'll create a custom AWS Config rule to find outdated EC2 instance types. Moving off these old instance types can help us reduce costs and improve performance.
Create a Lambda Function
Create a role called "LambdaCheckInstanceTypeRole" and specify AWS Config Rules permissions as the policy template. Create the Lambda function CheckInstanceType. Note the ARN for the next steps.
Create an AWS Config Rule
In the AWS Config console, select Rules, then click the Add rule button. Next, click Add custom rule.
Name: DesiredInstanceTypes
Description: Checks that all EC2 instances are of the type specified.
AWS Lambda function ARN: (Copy and paste from the Lambda console. It should look something like this: arn:aws:lambda:us-east-1:123456789012:function:CheckInstanceType)
Trigger type: Configuration changes
Scope of changes: Resources
Resources: EC2: Instance
Rule parameters: Key: desiredInstanceType, Value: t2.micro, or a comma-separated list (e.g., t2.micro,t3.micro). Note: Any instance type not in this list will evaluate as noncompliant.
Click Save.
References
Example Events for AWS Config: https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_example-events.html

Hands-on Lab

01:30:00

Building Serverless Web Applications

Building a Contact Form with API Gateway and SES

00:14:05

Lesson Description:

This lesson demonstrates a web page with a typical contact form. Using API Gateway and a Lambda function as a backend for this form, we will send the form post contents via email using SES, and also write the contact data to DynamoDB.
Source code for this lesson: https://github.com/linuxacademy/content-lambda-boto3/tree/master/WebApps/Contact-Form
DynamoDB
Create a table named Contact with a primary partition key named id:

aws dynamodb create-table --table-name Contact \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode=PAY_PER_REQUEST
Lambda Function
Create a function named ContactEmail.
API Gateway
Create an API named ContactEmailAPI.
Create Method
Select POST and check the check mark.
Integration Type: Lambda Function
Use Lambda Proxy Integration: Checked (stores request data in event)
Lambda region: Same region as the Lambda function
Lambda function: ContactEmail
Enable CORS
Select the POST method. Under Actions, select Enable CORS. Leave the default options, click Enable CORS and replace existing CORS headers, then click Yes, replace existing values.
Deploy API
Under Actions, select Deploy API.
Deployment stage: [New stage]
Stage name: prod
Note the Invoke URL and update form.js.
Test Locally
cd Contact-Form
python3 -m http.server
Navigate to http://localhost:8000

CloudFormation

Creating Lambda-Backed Custom Resources

00:08:52

Lesson Description:

When you associate a Lambda function with a custom resource, the function is invoked whenever the custom resource is created, updated, or deleted. CloudFormation calls the Lambda API to invoke the function and passes all the request data (such as the request type and resource properties) to the function. In this lesson, we will learn how to implement password confirmation logic in a CloudFormation template.
AWS CloudFormation Deep Dive: https://linuxacademy.com/cp/modules/view/id/157
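
One possible shape of the password-confirmation handler described here (a sketch; the property names are illustrative, and the cfnresponse module is only bundled automatically for inline ZipFile functions):

```python
import cfnresponse  # bundled automatically for inline (ZipFile) functions

def lambda_handler(event, context):
    try:
        if event['RequestType'] in ('Create', 'Update'):
            props = event['ResourceProperties']
            # Illustrative check: fail the stack when the two passwords differ
            if props.get('Password') != props.get('PasswordConfirm'):
                cfnresponse.send(event, context, cfnresponse.FAILED,
                                 {'Reason': 'Passwords do not match'})
                return
        # Always succeed on Delete so stacks can roll back and clean up
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```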

Using Custom Resources to Deploy an S3-Hosted Website

00:10:42

Lesson Description:

In this lesson, we will extend the capabilities of Lambda-backed custom resources, deploying a static website to S3 from a GitHub repository. GitHub Repository https://github.com/linuxacademy/content-lambda-boto3/tree/master/CloudFormation/S3-Hosted-Website Sample Web Application https://github.com/linuxacademy/content-lambda-boto3/tree/master/WebApps/Contact-Form/webapp

Lambda

Working with Lambda Layers

00:06:07

Lesson Description:

Lambda layers allow functions to easily share code: upload a layer once, and reference it from any function. Layers can contain anything (dependencies, data, configuration files, etc.) and can be shared publicly or with specific AWS accounts. Collect common components in a ZIP file and upload it as a Lambda layer. A function may reference up to 5 layers.
Working with Lambda Layers
Create a new virtual environment using Pipenv and install the required libraries:

 pipenv --python 3.7
 pipenv shell
 pipenv install requests
Note: If you are on macOS, you can install Pipenv using Homebrew:
 brew install pipenv
On Amazon Linux, or another environment, you can install using pip:
 pip3 install pipenv --user
Create the ZIP deployment package:
 PY_DIR='build/python/lib/python3.7/site-packages'
 # Create temporary build directory
 mkdir -p $PY_DIR
 # Generate requirements file
 pipenv lock -r > requirements.txt
 # Install packages into the target directory
 pip install -r requirements.txt --no-deps -t $PY_DIR
 cd build
 # Zip files
 zip -r ../requests_layer.zip .
 cd ..
 # Remove temporary directory
 rm -r build
Create the Lambda layer:
 aws lambda publish-layer-version \
 --layer-name requests \
 --compatible-runtimes python3.7 \
 --zip-file fileb://requests_layer.zip
Note the LayerArn in the output.

Automating CodeCommit Change Notifications

00:14:52

Lesson Description:

In this lesson, we'll demonstrate how to receive detailed email notifications about file changes and commit messages when a code update is pushed to CodeCommit. A code reviewer may subscribe to the SNS topic and receive updates for any changes.
Create the CodeCommit Repository

aws codecommit create-repository --repository-name ChangeNotification
Note the cloneUrlHttp and Arn values in the response.
Create and Subscribe to the SNS Topic
aws sns create-topic --name CodeCommitChangeNotification

aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:123456789012:CodeCommitChangeNotification \
--protocol email \
--notification-endpoint my-email@example.com
Create an IAM Lambda Execution Role
Add AWSLambdaBasicExecutionRole, then add the following policy as LambdaCodeCommitSnsPolicy:
 {
     "Version": "2012-10-17",
     "Statement": [{
         "Effect": "Allow",
         "Action": [
         "codecommit:*",
         "sns:*"
         ],
         "Resource": "*"
     }]
 }
Create the Lambda Function
Name it "CodeCommitChangeNotification" and set the following environment variables:
REPOSITORY_NAME = ChangeNotification
SNS_TOPIC_ARN = arn:aws:sns:us-east-1:123456789012:CodeCommitChangeNotification
Create the CloudWatch Event Rule
This rule will detect branch or repository changes. Choose Event Pattern.
Service Name: CodeCommit
Event Type: CodeCommit Repository State Change
Select Specific resource(s) by ARN, and enter the CodeCommit repository ARN. Select the referenceCreated and referenceUpdated events.
Event Pattern:
 {
 "source": [
     "aws.codecommit"
 ],
 "detail-type": [
     "CodeCommit Repository State Change"
 ],
 "resources": [
     "arn:aws:codecommit:us-east-1:123456789012:ChangeNotification"
 ]
 }
Under Target, select the CodeCommitChangeNotification function.
Commit a Change
Create and commit a file, then edit the file and commit it again.

Third-Party Integrations

Creating Slack Notifications for CloudWatch Alarms

00:11:34

Lesson Description:

Configure Webhooks in Slack
Create a Slack app: https://api.slack.com/apps/new
Search for and select Incoming Webhooks. Set Activate Incoming Webhooks to On. Select Add New Webhook to Workspace. Choose the default channel where messages will be sent and click Authorize. Note the webhook URL from the Webhook URLs for Your Workspace section. For example: https://hooks.slack.com/services/T0HABCGK/BDEFG93SS/BeBSKJHDHmWwyv2SYV4apv1O

 WEBHOOK_URL=https://hooks.slack.com/services/T0HABCGK/BDEFG93SS/BeBSKJHDHmWwyv2SYV4apv1O
Test the webhook:
 curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello, World!"}' $WEBHOOK_URL
Create an SNS Topic
aws sns create-topic --name high-cpu-alarm
Note the TopicArn.
Create a CloudWatch Alarm
Send notifications to the SNS topic when CPU utilization exceeds 40%:
 aws cloudwatch put-metric-alarm \
     --alarm-name cpu-mon \
     --alarm-description "Alarm when CPU exceeds 40%" \
     --metric-name CPUUtilization \
     --namespace AWS/EC2 \
     --statistic Average \
     --period 60 \
     --evaluation-periods 1 \
     --threshold 40 \
     --comparison-operator GreaterThanThreshold \
     --dimensions Name=InstanceId,Value=i-12345678901234567 \
     --alarm-actions arn:aws:sns:us-east-1:123456789012:high-cpu-alarm \
     --unit Percent
Create an SSM Parameter
aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "SlackWebHookURL", "Value": "'"$WEBHOOK_URL"'"}'
Create a Lambda Execution Role
Attach the following managed policies:
AmazonSSMFullAccess
AWSLambdaBasicExecutionRole
Create a Lambda Function
Use the SNS topic as a trigger. (A minimal sketch of this function appears after the commands below.)
Stress the CPU
# Install Extra Packages for Enterprise Linux
sudo amazon-linux-extras install epel
# Install stress
sudo yum install -y stress
# Beat it up for 5 mins
stress --cpu 2 --timeout 300s
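
As referenced above, a minimal sketch of the Slack-notifier function (it reads the webhook URL from the SlackWebHookURL parameter created earlier):

```python
import json
import urllib.request
import boto3

def lambda_handler(event, context):
    # Fetch the webhook URL stored in the SlackWebHookURL SSM parameter
    ssm = boto3.client('ssm')
    url = ssm.get_parameter(Name='SlackWebHookURL',
                            WithDecryption=True)['Parameter']['Value']
    # SNS delivers the CloudWatch alarm details in the message body
    message = event['Records'][0]['Sns']['Message']
    payload = {'text': f'CloudWatch alarm: {message}'}
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode('utf-8'),
        headers={'Content-Type': 'application/json'})
    urllib.request.urlopen(req)
```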

Creating a Twitter App

00:09:30

Lesson Description:

In this lesson, we will learn how to create a Twitter app that automatically sends tweets on a schedule.
Create a Twitter Application
More info here: https://developer.twitter.com/en/docs/basics/apps/overview
Note the consumer API keys and the access token and access token secret. You can set these as environment variables, should you wish to create the SSM parameters using the AWS CLI:

CONSUMER_KEY=5YAAipG8DLKjhdkjfhfQMZyuYhJ
CONSUMER_SECRET=qi3YF6hzdflkjLKJFZgJ6J3WwmEktMfX0IwPQMas
ACCESS_TOKEN=110809243434444423-pHHgCWvWOhc1KJHSLKJHFCAyrDOq0cR
ACCESS_TOKEN_SECRET=QXesCU9LKJHFLKJHKFF9FFVOjymg9UIDvRa58ySaQjd7357
Create the SSM Parameters
Use the keys from the previous step:
aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "/TwitterBot/consumer_key", "Value": "'"$CONSUMER_KEY"'"}'

aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "/TwitterBot/consumer_secret", "Value": "'"$CONSUMER_SECRET"'"}'

aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "/TwitterBot/access_token", "Value": "'"$ACCESS_TOKEN"'"}'

aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "/TwitterBot/access_token_secret", "Value": "'"$ACCESS_TOKEN_SECRET"'"}'
Create an S3 Bucket and Upload the Data File
aws s3 mb s3://123456789012-twitterbot
aws s3 cp data.txt s3://123456789012-twitterbot
Create an IAM Execution Role for Lambda
Grant access to your S3 bucket and SSM parameters:
 {
   "Version": "2012-10-17",
   "Statement": [{
       "Effect": "Allow",
       "Action": [
         "s3:GetObject"
       ],
       "Resource": "arn:aws:s3:::123456789012-twitterbot/*"
     },
     {
       "Effect": "Allow",
       "Action": [
         "ssm:GetParameter"
       ],
       "Resource": "arn:aws:ssm:us-east-2:123456789012:parameter/TwitterBot*"
     }
   ]
 }
Create a Deployment Package
Create a new virtual environment using Pipenv and install the required libraries:
 pipenv --python 3.7
 pipenv shell
 pipenv install tweepy
 mkdir package
 pipenv lock -r > requirements.txt
 pip install -r requirements.txt --no-deps -t package
 cd package
 zip -r9 ../package.zip .
 cd ..
 zip -g package.zip lambda_function.py
 rm -rf package/*
Create a Lambda Function
aws lambda create-function \
--function-name TwitterBot \
--zip-file fileb://package.zip \
--role arn:aws:iam::123456789012:role/LambdaTwitterBotRole \
--handler lambda_function.lambda_handler \
--runtime python3.7 \
--environment Variables={BUCKET_NAME=123456789012-twitterbot}
Create a CloudWatch Scheduled Rule
aws events put-rule \
--name TwitterBot \
--schedule-expression 'rate(1 hour)'

aws lambda add-permission \
--function-name TwitterBot \
--statement-id TwitterBot \
--action 'lambda:InvokeFunction' \
--principal events.amazonaws.com \
--source-arn arn:aws:events:us-east-1:123456789012:rule/TwitterBot

aws events put-targets --rule TwitterBot --targets file://targets.json

Monitoring and Debugging

Publishing Custom Metrics from Lambda

00:10:50

Lesson Description:

CloudWatch custom metrics are coded into your applications and can be used to report custom business and operational data. In this lesson, we will learn how to publish custom CloudWatch metric data from a Lambda function, reporting several key performance indicators (KPIs). These data can help us monitor progress toward sales, marketing, and customer service goals. Install the demo app using the AWS Serverless Application Model (AWS SAM); you can find the instructions for installing the AWS SAM CLI in the AWS documentation.
Create the Lambda Deployment Package
Run the following command:

cd shopping-cart-app
Create an S3 bucket in the location where you want to save the packaged code. (If you want to use an existing S3 bucket, skip this step.)
 aws s3 mb s3://123456789012-shopping-cart-app
Create the deployment artifacts with dependencies.
 sam build
Create the Lambda function deployment package by running the following AWS SAM CLI package command at the command prompt:
 sam package \
     --output-template-file packaged.yaml \
     --s3-bucket 123456789012-shopping-cart-app
In the AWS SAM CLI, use the deploy command to deploy all of the resources that you defined in the template.
 sam deploy \
     --template-file packaged.yaml \
     --stack-name shopping-cart-app \
     --capabilities CAPABILITY_IAM
Locate the API Gateway Endpoint URLs
Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation. Choose the AWS CloudFormation stack that you created in the preceding step from the list. Under Outputs, note the API Gateway endpoint URLs. Browse each one and observe the JSON responses.
Generate Traffic
Using the API Gateway endpoint URLs from the previous step, generate traffic against each of these endpoints. Run an HTTP testing tool like vegeta to generate traffic to your API Gateway endpoints. Modify URLS.txt to use the endpoint URLs in your account. Run a test for 60 minutes:
 cat URLS.txt | vegeta attack -duration=60m | tee results.bin | vegeta report
View Custom Metrics
You may view custom metric data while a load test is in progress. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch. Navigate to Metrics. Under All metrics, select ShoppingCartApp. Select Metrics with no dimensions, then select ItemsAddedToCart, OrderTotal, and ViewProduct.
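
For context, custom metrics like these are published with CloudWatch's PutMetricData API. A minimal sketch of how an app might report a KPI (the helper function is hypothetical):

```python
import boto3

cloudwatch = boto3.client('cloudwatch')

def publish_kpi(metric_name, value):
    # Namespace and metric names mirror those selected in the console above
    cloudwatch.put_metric_data(
        Namespace='ShoppingCartApp',
        MetricData=[{'MetricName': metric_name,
                     'Value': value,
                     'Unit': 'Count'}])

# e.g., record that two items were added to a cart
publish_kpi('ItemsAddedToCart', 2)
```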

Tracing with X-Ray

00:10:49

Lesson Description:

In this video, we're going to learn about tracing our application with AWS X-Ray. We are going to deploy a sample Flask application that is instrumented with the X-Ray SDK, make some sample requests, and then examine the traces and service maps in the AWS Management Console.
Zappa GitHub Repo: https://github.com/Miserlou/Zappa
Create a DynamoDB Table
Create a table called Flask with a primary partition key called key:

 aws dynamodb create-table --table-name Flask \
   --attribute-definitions AttributeName=key,AttributeType=S \
   --key-schema AttributeName=key,KeyType=HASH \
   --billing-mode=PAY_PER_REQUEST
Configure Zappa
Run the following commands:
 cd example
 pipenv --python 3.7
 pipenv shell
 pipenv install aws-xray-sdk flask zappa requests
 pipenv lock -r > requirements.txt
 zappa init
Add the following property to zappa_settings.json:
 "xray_tracing": true
Deploy the application.
 zappa deploy
Enable X-Ray Tracing for API Gateway
In this step, we will interact with the API Gateway console to enable X-Ray tracing. Sign in to the AWS Management Console and open the API Gateway console at https://console.aws.amazon.com/apigateway/. Select your API (e.g., example-dev). Select Stages. Choose the name of your deployment stage (e.g., dev). On the Logs/Tracing tab, select the Enable X-Ray Tracing box. Click Save Changes. Navigate to the endpoint in your browser.

Conclusion

What's Next?

00:01:35

Lesson Description:

Congratulations! If you've made it this far, you have successfully completed the Automating AWS with Lambda, Python, and Boto3 course. In this video, I'll recommend other Linux Academy courses you can take next and show you how to get involved in the Linux Academy community.

Get Recognized

00:00:46

Lesson Description:

Now that you have completed this course, take some time to share your success and get recognized in our community and on LinkedIn.
