Automating AWS with Lambda, Python, and Boto3




Mark Richman

AWS Training Architect II in Content









Course Details

This course will explore AWS automation using Lambda and Python. We'll be using the AWS SDK for Python, better known as Boto3.

You will learn how to integrate Lambda with many popular AWS services, such as EC2, S3, SQS, DynamoDB, and more.

Once we cover the basics, we'll dive into some more advanced use cases to really uncover the power of Lambda.

To chat with the instructor and other students, check out the Linux Academy community Slack and join the #aws channel.

Check out the Interactive Diagram here:


Automating AWS with Lambda, Python, and Boto3


Course Overview


Lesson Description:

This video introduces the Automating AWS with Lambda, Python, and Boto3 course.

About the Training Architect


Lesson Description:

Get to know a little bit more about me, the instructor for this course!

Course Prerequisites


Lesson Description:

This video covers the necessary prerequisites for this course.

Working with the Interactive Diagrams


Lesson Description:

In this video, we'll talk about how to access, navigate, and use the interactive diagram for this course.

Interactive Diagram: The Pontoporia Toolbox

Getting Started

Overview of AWS Lambda


Lesson Description:

In this video, we will go over a variety of essential Lambda concepts as well as learn how to create our own Lambda functions in the AWS Management Console. This will allow you to start developing with AWS Lambda and take advantage of its availability, fault tolerance, elasticity, and cost efficiency. We will also learn about the handler, event triggers, event data, the context object, and logging in Amazon CloudWatch. By the end of this lesson, you will be able to create your own simple Lambda functions in the AWS Console and feel confident that you understand the basics of Lambda and serverless compute options. You will know how to modify a Lambda function's code to take advantage of event data and the context object, and you will be able to add events to a Lambda function using the AWS Console.
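As a minimal sketch of the pieces named above (the function name, event shape, and return value here are hypothetical, not the lesson's actual code): the handler receives the event as a plain dict plus a context object, and anything logged goes to CloudWatch.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Event data arrives as a plain dict; logged output lands in CloudWatch.
    logger.info("Received event: %s", json.dumps(event))
    # The context object carries runtime metadata such as the request ID.
    if context is not None:
        logger.info("Request ID: %s", context.aws_request_id)
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Invoking this with a test event like `{"name": "Lambda"}` in the console returns the greeting and writes both log lines to the function's CloudWatch log group.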

Introduction to Boto3


Lesson Description:

Boto3 is the AWS SDK for Python. It enables Python developers to create, configure, and manage AWS services, such as EC2 and S3. Boto provides an easy-to-use, object-oriented API, as well as low-level access to AWS services. Installing the AWS CLI and Boto3 Amazon Linux 2 The AWS CLI is already installed on Amazon Linux 2. Install Python 3: sudo yum install -y python3-pip python3 python3-setuptools Install Boto3: pip3 install boto3 --user macOS Install Python3 using Homebrew: ruby -e "$(curl -fsSL" Install Python 3: brew install python Insert the Homebrew Python directory at the top of your PATH environment variable: export PATH="/usr/local/opt/python/libexec/bin:$PATH" Verify you are using Python 3: python --version Install the AWS CLI and Boto3: pip install awscli boto3 --upgrade --user The --upgrade option tells pip to upgrade any requirements that are already installed. The --user option tells pip to install the program to a subdirectory of your user directory to avoid modifying libraries used by your operating system. You may need to add the user directory to your PATH, for example, ~/Library/Python/3.7/bin. Run the following command at the terminal, and add it to your .bashrc, .zshrc, or other shell configuration file: export PATH=~/Library/Python/3.7/bin:$PATH Configuring your AWS environment Obtain your AWS access key and secret access key from the AWS Management Console. Run the following command: aws configure This sets up a text file that the AWS CLI and Boto3 libraries look at by default for your credentials: ~/.aws/credentials. 
The file should look like this: [default] aws_access_key_id = AKIAIOSFODNN7EXAMPLE aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Test Your Credentials AWS CLI Run the following command: aws sts get-caller-identity The output should look like this: { "UserId": "AIDAJKLMNOPQRSTUVWXYZ", "Account": "123456789012", "Arn": "arn:aws:iam::123456789012:user/devuser" } Boto3 Run python3 interactively: python3 Run the following commands: >>> import boto3 >>> sts = boto3.client('sts') >>> sts.get_caller_identity() The output should look like this: {'UserId': 'AIDAJKLMNOPQRSTUVWXYZ', 'Account': '123456789012', 'Arn': 'arn:aws:iam::123456789012:user/devuser', 'ResponseMetadata': {'RequestId': '3e310806-50c9-11e9-93ae-dbac86675630', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '3e310806-50c9-11e9-93ae-dbac86675630', 'content-type': 'text/xml', 'content-length': '404', 'date': 'Wed, 27 Mar 2019 19:48:06 GMT'}, 'RetryAttempts': 0}} Quit using exit() or Ctrl+D: >>> exit() Resources Boto3 Documentation: Documentation: Community Python Forum:

Elastic Compute Cloud (EC2)

Stopping EC2 Instances Nightly


Lesson Description:

In this video, we'll look at an effective cost-saving technique: shutting down EC2 instances on a nightly basis. If you consider a development team of around 130 people, each using m4.large instances, this can save you well over $2,000 per month in usage fees. Download the source code for this lesson here.
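A hedged sketch of how such a function might look with Boto3 (the region is an assumption, and this is not the lesson's downloadable source): describe running instances, collect their IDs, and stop them.

```python
def running_instance_ids(reservations):
    # Flatten describe_instances output into a flat list of instance IDs.
    return [inst["InstanceId"]
            for res in reservations
            for inst in res["Instances"]]

def lambda_handler(event, context):
    import boto3  # imported lazily so the helper above is testable without AWS
    ec2 = boto3.client("ec2", region_name="us-east-2")  # assumed region
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    ids = running_instance_ids(resp["Reservations"])
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    print(f"Stopped instances: {ids}")
    return ids
```

A scheduled CloudWatch rule (e.g., a nightly cron expression) would invoke this handler.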

Backing Up EC2 Instances


Lesson Description:

In this video, we'll demonstrate an alternative to the EBS Lifecycle Manager by creating an EBS backup solution from scratch. This solution consists of a pair of Lambda functions: one to perform the EBS snapshots, and the other to prune stale snapshots. Download the full source code for this lesson here.
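The pruning half of such a pair might look like this sketch (the seven-day retention window is an assumption; the lesson's real code is in the download): decide whether each snapshot is older than the retention window and delete it if so.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 7  # assumed retention window

def is_stale(start_time, now=None, retention_days=RETENTION_DAYS):
    # A snapshot is stale if its StartTime is older than the retention window.
    now = now or datetime.now(timezone.utc)
    return start_time < now - timedelta(days=retention_days)

def prune_snapshots(event, context):
    import boto3  # lazy import keeps is_stale testable without AWS
    ec2 = boto3.client("ec2")
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        if is_stale(snap["StartTime"]):
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```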

Removing Unattached EBS Volumes


Lesson Description:

Deleting detached EBS volumes on a regular basis can help lower your AWS bill. In this lesson, we'll demonstrate a technique for deleting unattached EBS volumes across regions with Lambda, Boto3, and a CloudWatch rule. Download the full source code for this lesson here.

Changing the root volume of an EC2 instance to persist:

aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --block-device-mappings file://mapping.json

The JSON file (remember to replace the value for DeviceName with the correct value for your device name):

[{ "DeviceName": "/dev/xvda", "Ebs": { "DeleteOnTermination": false } }]

Deregistering Old AMIs


Lesson Description:

Deregistering old, unused AMIs on a regular basis can help lower your AWS bill. In this lesson, we'll demonstrate a technique for deregistering AMIs older than a certain age across regions with Lambda, Boto3, and a CloudWatch rule. Download the full source code for this lesson here.
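A sketch of the age check such a function needs (the 90-day cutoff is an assumption; the lesson's real code is in the download): AMI `CreationDate` values are ISO-8601 strings, so they must be parsed before comparing against the cutoff.

```python
from datetime import datetime, timedelta, timezone

def is_older_than(creation_date, days, now=None):
    # AMI CreationDate strings look like '2019-03-27T19:48:06.000Z'.
    created = datetime.strptime(creation_date, "%Y-%m-%dT%H:%M:%S.%fZ")
    created = created.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return created < now - timedelta(days=days)

def deregister_old_amis(event, context, max_age_days=90):
    import boto3  # lazy import so the date helper is testable without AWS
    ec2 = boto3.client("ec2")
    for image in ec2.describe_images(Owners=["self"])["Images"]:
        if is_older_than(image["CreationDate"], max_age_days):
            ec2.deregister_image(ImageId=image["ImageId"])
```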

AWS Instance Scheduler


Lesson Description:

The AWS Instance Scheduler is a solution that automates the starting and stopping of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instances. The Instance Scheduler leverages AWS resource tags and AWS Lambda to automatically stop and restart instances across multiple AWS regions and accounts on a customer-defined schedule. The solution is easy to deploy and can help reduce operational costs. For example, an organization can use the Instance Scheduler in a production environment to automatically stop instances every day outside of business hours. For customers who leave all of their instances running at full utilization, this solution can result in up to 70% cost savings for those instances that are only necessary during regular business hours (weekly utilization reduced from 168 hours to 50 hours).

Deployment: Sign in to the AWS Management Console and launch the AWS CloudFormation template. You can also download the template as a starting point for your own implementation.

Hands-on Labs are real live environments that put you in a real scenario to practice what you have learned, without any extra charge or account to manage.



Working with DynamoDB Tables


Lesson Description:

In this video, we will talk about working with DynamoDB tables. We will be using DynamoDB in several lessons throughout this course, so it's important for us to establish a good foundation before we move on.

If you choose to use Cloud9, there are a couple of extra steps to get Python 3 and Boto3 working:

Under Settings, go to Python Support > Python Version and select Python 3.

Install Boto3 for Python 3 in Cloud9:

sudo pip-3.6 install boto3

Clone the GitHub repository for this course:

git clone

Boto3 DynamoDB documentation:
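As a hedged sketch of the basic table operations (the `Users` table name and item attributes are hypothetical, not from the course repository): with the Boto3 resource API, items are plain Python dicts.

```python
def build_item(user_id, name):
    # DynamoDB items are plain dicts when using the resource API.
    return {"UserId": user_id, "Name": name}

def put_user(user_id, name, table_name="Users"):
    import boto3  # lazy import so build_item is testable without AWS
    table = boto3.resource("dynamodb").Table(table_name)
    item = build_item(user_id, name)
    table.put_item(Item=item)
    return item
```

Reading the item back is the mirror image: `table.get_item(Key={"UserId": user_id})["Item"]`.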



Simple Storage Service (S3)

Resizing Images


Lesson Description:

In this video, we'll go over how to create a Lambda function that will automatically resize images uploaded to an S3 bucket. We'll use Pillow, the Python Imaging Library (fork).

Download the Pillow-5.4.1-cp37-cp37m-manylinux1_x86_64.whl file, and extract the wheel file in the same folder as your function code:

unzip Pillow-5.4.1-cp37-cp37m-manylinux1_x86_64.whl

The Pillow-5.4.1.dist-info directory isn't needed:

rm -rf Pillow-5.4.1.dist-info

Zip the PIL directory along with your function code (the archive name is truncated in the source):

zip -r9 PIL

Upload the archive to AWS Lambda.
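A sketch of the resize logic (the 128-pixel maximum dimension and file paths are assumptions, not the lesson's code): Pillow's `thumbnail` preserves aspect ratio, which the pure helper below reproduces so the math is testable without Pillow installed.

```python
def scaled_size(width, height, max_dim=128):
    # Shrink so the longest side fits max_dim, preserving aspect ratio.
    scale = max_dim / max(width, height)
    if scale >= 1:
        return width, height  # already small enough; don't upscale
    return max(1, round(width * scale)), max(1, round(height * scale))

def resize_image(in_path, out_path, max_dim=128):
    from PIL import Image  # provided by the bundled Pillow wheel
    with Image.open(in_path) as img:
        img.thumbnail((max_dim, max_dim))
        img.save(out_path)
```

In the Lambda handler, `in_path` would be a file downloaded from the triggering S3 event to `/tmp`, and the resized result would be uploaded to a destination bucket.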

Importing CSV Files into DynamoDB


Lesson Description:

In this lesson, we'll talk about how to bulk import data from CSV files into DynamoDB. We will create a trigger from an S3 bucket, invoking a Lambda function on upload. The Lambda function will parse the CSV data, and using Boto3, import this data into DynamoDB.
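A hedged sketch of that flow (the `Movies` table name is an assumption; the CSV's column names become item attributes): the parsing step is a pure function, and the handler wires it to the S3 event and a DynamoDB batch writer.

```python
import csv
import io

def csv_to_items(csv_text):
    # Each CSV row becomes one DynamoDB item, keyed by the header row.
    return [dict(row) for row in csv.DictReader(io.StringIO(csv_text))]

def lambda_handler(event, context):
    import boto3  # lazy import so csv_to_items is testable without AWS
    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("Movies")  # assumed table name
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
        with table.batch_writer() as batch:
            for item in csv_to_items(body):
                batch.put_item(Item=item)
```

`batch_writer` buffers and flushes `BatchWriteItem` calls automatically, which keeps the import fast for large files.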

Transcribing Audio


Lesson Description:

In this lesson, we will learn how to automate Amazon Transcribe, parsing out its raw JSON data and storing transcription results in S3. This solution consists of two Lambda functions and a CloudWatch event.

Detecting Faces with Rekognition


Lesson Description:

In this lesson, we will learn how to build our own facial recognition service by combining the capabilities of Amazon Rekognition with Lambda, S3, and DynamoDB.



Simple Queue Service (SQS)

Triggering Lambda from SQS


Lesson Description:

In this lesson, we'll look at a common use case where we invoke a Lambda function that is triggered by SQS. Whenever a message is placed in the queue, the Lambda function will be triggered, reading the contents of that message and inserting its data as a record into DynamoDB.

Create a DynamoDB table:

aws dynamodb create-table --table-name Message --attribute-definitions AttributeName=MessageId,AttributeType=S --key-schema AttributeName=MessageId,KeyType=HASH --billing-mode=PAY_PER_REQUEST

Create an SQS queue:

aws sqs create-queue --queue-name Messages

Sending messages to SQS: run the provided script. For example, send a message containing random text to the Messages queue every 0.1 seconds (10 messages per second):

./ -q Messages -i 0.1

Press Ctrl+C to quit.

Lesson Links

Send Messages to SQS:
Faker:
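A sketch of the SQS-triggered handler (the JSON message shape with a `MessageId` attribute is an assumption matching the table's key): Lambda delivers a batch of records, and each record's `body` holds the raw message.

```python
import json

def parse_sqs_records(event):
    # Each SQS record's body is the raw message; here we assume JSON payloads.
    return [json.loads(r["body"]) for r in event.get("Records", [])]

def lambda_handler(event, context):
    import boto3  # lazy import so parse_sqs_records is testable without AWS
    table = boto3.resource("dynamodb").Table("Message")
    for msg in parse_sqs_records(event):
        # Each message must include the MessageId partition key.
        table.put_item(Item=msg)
```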

Creating a Queue Using Cross-Account Permissions


Lesson Description:

SQS does not allow API calls such as CreateQueue using cross-account permissions. A workaround is to create and invoke a Lambda function in another account in order to call that API.

Create AWS CLI profiles

Development account admin:

aws configure --profile devadmin

Production account admin:

aws configure --profile prodadmin

Create a Lambda function in the production account

Function name: CreateSQSQueue

See and assign the role lambda_execution_role.json.

Assign permissions to the Lambda function

Add permissions to the production Lambda function that allow it to be invoked by the development account user:

aws lambda add-permission --function-name CreateSQSQueue --statement-id DevAccountAccess --action 'lambda:InvokeFunction' --principal 'arn:aws:iam::__DEVELOPMENT_ACCOUNT_NUMBER__:user/devadmin' --region us-east-2 --profile prodadmin

To view the policy:

aws lambda get-policy --function-name CreateSQSQueue --region us-east-2 --profile prodadmin

To remove the policy:

aws lambda remove-permission --function-name CreateSQSQueue --statement-id DevAccountAccess --region us-east-2 --profile prodadmin

Invoke the production Lambda function from the development account

aws lambda invoke --function-name '__LAMBDA_FUNCTION_ARN__' --payload '{"QueueName": "MyQueue" }' --invocation-type RequestResponse --profile devadmin --region us-east-2 output.txt



Automating Security

Enabling VPC Flow Logs


Lesson Description:

VPC flow logs enable you to capture information about the IP traffic going to and from network interfaces in your VPC. By default, VPC flow logs are not enabled. However, in our scenario, let's say you have a policy that requires them to be enabled for any new VPC created in your account. In this lesson, we will automate the creation of VPC flow logs whenever a new VPC is created.

Lesson files:

The Lambda function creates VPC flow logs for the VPC ID in the event.
event-pattern.json is the CloudWatch rule event pattern for monitoring the CreateVpc API call.
test-event.json is a sample CloudTrail event that can be used with the Lambda function, as it contains the VPC ID.

Create an IAM role with permission to log to CloudWatch Logs

Allow the VPC Flow Logs service to assume this role:

aws iam create-role --role-name VPCFlowLogsRole --assume-role-policy-document file://trust-policy.json

Note the ARN for VPCFlowLogsRole, for example: arn:aws:iam::123456789012:role/VPCFlowLogsRole

Grant this role permission to access CloudWatch Logs:

aws iam put-role-policy --role-name VPCFlowLogsRole --policy-name VPCFlowLogsPolicy --policy-document file://vpc-flow-logs-iam-role.json

Create the Lambda function

Name: EnableVPCFlowLogs
Runtime: Python 3.7
Role: Create a custom role (use lambda_execution_role.json)
Code:

Create a CloudWatch event rule to trigger Lambda

Select Event Pattern.
Service Name: EC2
Event Type: AWS API Call via CloudTrail
Specific operation(s): CreateVpc

Event Pattern:

{ "source": [ "aws.ec2" ], "detail-type": [ "AWS API Call via CloudTrail" ], "detail": { "eventSource": [ "" ], "eventName": [ "CreateVpc" ] } }

Click Add target and select the EnableVpcFlowLogs Lambda function, then click Configure details.

Create a new VPC

aws ec2 create-vpc --cidr-block --region us-east-2

Wait up to a minute for the CloudWatch rule to invoke the Lambda function.
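A sketch of what the Lambda function does (the log group name and role ARN below are placeholders, not the lesson's values): pull the new VPC's ID out of the CloudTrail event and call `create_flow_logs` on it.

```python
def vpc_id_from_event(event):
    # CloudTrail's CreateVpc event carries the new VPC's ID here.
    return event["detail"]["responseElements"]["vpc"]["vpcId"]

def lambda_handler(event, context):
    import boto3  # lazy import so the extractor is testable without AWS
    ec2 = boto3.client("ec2")
    ec2.create_flow_logs(
        ResourceIds=[vpc_id_from_event(event)],
        ResourceType="VPC",
        TrafficType="ALL",
        LogGroupName="VPCFlowLogs",  # assumed log group name
        DeliverLogsPermissionArn=(
            "arn:aws:iam::123456789012:role/VPCFlowLogsRole"  # placeholder ARN
        ),
    )
```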

Responding to Invalid SSH Logins


Lesson Description:

In this lesson, we'll explore a real-world scenario in which an EC2 instance is experiencing multiple failed SSH logins, and we want to automatically take the instance offline in response to this potential security event. Use the web server log group and the Invalid SSH Login metric filter to trigger a CloudWatch alarm set for 2 data points within 1 minute. This alarm should publish to an alarm notification SNS topic, sending you an email and triggering the Lambda function that stops the instance.

Configure the EC2 instance

The EC2 instance must have an IAM role that can communicate with both CloudWatch and Systems Manager.

Create an IAM instance role:

Select IAM > Create Role > AWS Service > EC2 > Next: Permissions.
Select the CloudWatchAgentAdminPolicy managed policy.
Select the AmazonEC2RoleforSSM managed policy.
Name the role "CloudWatchAgentAdminRole".

Launch the EC2 instance:

Select Amazon Linux 2.
Create or select a security group with SSH (port 22) open to the public.
Attach the CloudWatchAgentAdminRole IAM role to the instance.

Install the CloudWatch agent using Systems Manager:

Run command: AWS-ConfigureAWSPackage
Action: Install
Name: AmazonCloudWatchAgent

Configure the CloudWatch agent:

Create a new session using SSM Session Manager, then run the CloudWatch agent configuration wizard:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

Note: Do not select CollectD unless you have already installed it using sudo yum install collectd.

Specify /var/log/secure at the "Do you want to monitor any log files?" prompt.

Validate the configuration:

```sh
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
```

Create an SNS topic

The CloudWatch alarm will notify this topic, and the topic will trigger the Lambda function.

Topic name: AlarmNotificationTopic

Configure the CloudWatch alarm

Add a metric filter to the web server log group:

Click the secure log group.
Click Create metric filter.
Filter pattern: [Mon, day, timestamp, ip, id, status = Invalid*]
Click Test pattern, then Assign metric.
Filter name: InvalidSSHLogin
Metric namespace: SSH
Metric name: InvalidSSHLogin
Click Create filter.

Create the alarm:

Metric filter: SSH/InvalidSSHLogin
Click Create alarm.
Name: InvalidSSHLoginAlarm
Description: Invalid login attempts >2 in 1 min for instance
Note: The description is critical, as the instance ID at the end is used by the Lambda function to stop the instance.
Condition: Whenever InvalidSSHLogin >= 2 for 1 out of 1 datapoints.
Subscribe to the SNS topic: select AlarmNotificationTopic and click Create alarm.

Create an IAM role for the Lambda function

Create the role LambdaStopInstances using the policy lambda_execution_role.json.

Create the Lambda function

Name: StopInstance
Role: StopInstancesRole

Trigger Lambda from SNS: select Trigger > SNS > AlarmNotificationTopic.

Trigger the CloudWatch alarm

Make 3 invalid SSH login attempts within 2 minutes. Then:

Verify that the secure log contains the Invalid user string.
Verify that the CloudWatch alarm fired.
Verify in CloudWatch Logs that the Lambda function ran.
Verify that the EC2 instance is stopped.
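Since the instance ID rides at the end of the alarm description, the stopping function can be sketched like this (the description format is the lesson's convention; the SNS message shape is standard for CloudWatch alarm notifications):

```python
import json

def instance_id_from_description(description):
    # The alarm description ends with the instance ID, e.g.
    # "Invalid login attempts >2 in 1 min for instance i-0abc123"
    return description.strip().split()[-1]

def lambda_handler(event, context):
    import boto3  # lazy import so the parser is testable without AWS
    # SNS wraps the CloudWatch alarm notification as a JSON string.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    instance_id = instance_id_from_description(message["AlarmDescription"])
    boto3.client("ec2").stop_instances(InstanceIds=[instance_id])
```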

Remediating Inspector Findings


Lesson Description:

In this lesson, we will learn to automate the detection and remediation of security vulnerabilities using Amazon Inspector, SNS, and Lambda.

Helpful Links

Common Vulnerabilities and Exposures (CVE):

EC2 instance role policy:

arn:aws:iam::aws:policy/AmazonSSMFullAccess

Lambda execution role policies:

arn:aws:iam::aws:policy/AmazonSSMFullAccess
arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

Additional statement for Inspector:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "inspector:DescribeFindings" ], "Resource": "*" } ] }

Create an SNS topic

Create an SNS topic, then grant the regional Inspector account permission to publish to it. Examples:

US East (Northern Virginia) — arn:aws:iam::316112463485:root
US East (Ohio) — arn:aws:iam::646659390643:root

Lambda function

Adapted from:

The Lambda function will automatically patch EC2 instances when an Inspector assessment generates a CVE finding. The function requires that the EC2 instance to be patched have the Systems Manager (SSM) agent installed, and the agent must have a role attached with the necessary SSM permissions. For details on this, see:

SNS trigger

The Lambda function is triggered by an SNS notification of a new finding from Inspector. The function checks that the finding is a CVE missing-patch finding and, if so, ensures that the SSM agent is running. It then uses SSM to issue the appropriate patch-and-reboot commands to either Ubuntu or Amazon Linux.

Run the Inspector assessment

Ensure that you've tagged the EC2 instance(s) you want to scan.

Create an assessment target:

Click Create.
Choose a name.
Use tags, selecting the EC2 instance(s) you've tagged for inspection.
Click Save.

Create an assessment template:

Click Create.
Choose a name (e.g., "All Assessments").
Select the assessment target created in the previous step.
Select all rules packages.
Set the duration to 1 hour.
Select the SNS topic we created previously.
Deselect Assessment Schedule.
Click Create and run.

Run the assessment: watch the assessment run progress, periodically refreshing for the full hour to update the findings count. If there are any findings, check the CloudWatch log for the Lambda function to observe that SSM ran the update script against the EC2 instance, for example: yum update -q -y; yum upgrade -y.

Making Public S3 Objects Private


Lesson Description:

In this lesson, we'll learn how to detect unintended public access permissions in the ACL of an S3 object and how to revoke them automatically using Lambda, Boto3, and CloudWatch events.

Create the S3 buckets:

aws s3 mb s3://123456789012-everything-must-be-private
aws s3 mb s3://123456789012-bucket-for-my-object-level-s3-trail

Apply a bucket policy:

aws s3api put-bucket-policy --bucket 123456789012-bucket-for-my-object-level-s3-trail --policy file://bucket_policy.json

Create a CloudTrail trail and start logging:

aws cloudtrail create-trail --name my-object-level-s3-trail --s3-bucket-name 123456789012-bucket-for-my-object-level-s3-trail
aws cloudtrail start-logging --name my-object-level-s3-trail

Create the event selectors:

aws cloudtrail put-event-selectors --trail-name my-object-level-s3-trail --event-selectors file://event_selectors.json

Create an IAM execution role for Lambda:

aws iam create-role --role-name AllowLogsAndS3ACL --assume-role-policy-document file://trust_policy.json

Define the access policy:

aws iam put-role-policy --role-name AllowLogsAndS3ACL --policy-name AllowLogsAndS3ACL --policy-document file://access_policy.json

Create a Lambda function

For a PutObjectAcl API event, the function gets the bucket and key name from the event. If the object is not private, it makes the object private with a PutObjectAcl call.

Zip the Lambda function:

zip -r9

Create the Lambda function:

aws lambda create-function --function-name RemediateObjectACL --zip-file fileb:// --role arn:aws:iam::123456789012:role/AllowLogsAndS3ACL --handler lambda_function.lambda_handler --runtime python3.7 --environment Variables={BUCKET_NAME=123456789012-everything-must-be-private}

Allow CloudWatch events to invoke Lambda:

aws lambda add-permission --function-name RemediateObjectACL --statement-id AllowCloudWatchEventsToInvoke --action 'lambda:InvokeFunction' --principal --source-arn arn:aws:events:us-east-2:123456789012:rule/S3ObjectACLAutoRemediate

Create a CloudWatch events rule:

aws events put-rule --name S3ObjectACLAutoRemediate --event-pattern file://event_pattern.json

Set the Lambda function as the target:

aws events put-targets --rule S3ObjectACLAutoRemediate --targets Id=1,Arn=arn:aws:lambda:us-east-2:123456789012:function:RemediateObjectACL

Testing

Upload an object and check its ACL:

aws s3api put-object --bucket 123456789012-everything-must-be-private --key MyPersonalInfo
aws s3api get-object-acl --bucket 123456789012-everything-must-be-private --key MyPersonalInfo

The above should return one grantee, the owner (you), indicating that the object is private.

Add public read access, violating our policy:

aws s3api put-object-acl --bucket 123456789012-everything-must-be-private --key MyPersonalInfo --acl public-read

Quickly check access again:

aws s3api get-object-acl --bucket 123456789012-everything-must-be-private --key MyPersonalInfo

You will see another grantee, allowing everyone to read the object:

{ "Grantee": { "Type": "Group", "URI": "" }, "Permission": "READ" }

Describe the ACL again, and you'll see the Lambda function has removed public read access. Verify this in CloudWatch Logs.

Cleanup

aws events remove-targets --rule S3ObjectACLAutoRemediate --ids "1"
aws events delete-rule --name S3ObjectACLAutoRemediate
aws lambda delete-function --function-name RemediateObjectACL
aws iam delete-role-policy --role-name AllowLogsAndS3ACL --policy-name AllowLogsAndS3ACL
aws iam delete-role --role-name AllowLogsAndS3ACL
aws cloudtrail delete-trail --name my-object-level-s3-trail
aws s3 rb s3://123456789012-bucket-for-my-object-level-s3-trail --force
aws s3 rb s3://123456789012-everything-must-be-private --force
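The remediation logic can be sketched as follows (a simplified stand-in for the lesson's function, not its actual source): check the object's ACL for grants to the public groups, and reset the ACL to private if any are found.

```python
PUBLIC_GROUPS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

def has_public_grant(acl):
    # True if any grant targets the AllUsers or AuthenticatedUsers groups.
    return any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GROUPS
        for grant in acl.get("Grants", [])
    )

def remediate(bucket, key):
    import boto3  # lazy import so has_public_grant is testable without AWS
    s3 = boto3.client("s3")
    acl = s3.get_object_acl(Bucket=bucket, Key=key)
    if has_public_grant(acl):
        s3.put_object_acl(Bucket=bucket, Key=key, ACL="private")
```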

Automating Resource Tagging


Lesson Description:

In this lesson, we'll learn how to automate the tagging of EC2 instances and their corresponding resources using a Lambda function with CloudTrail and CloudWatch. The function will ensure that users can work only on those resources that they have created, based on resource tags. This is enforced via an IAM policy.

Create the IAM policy

This policy allows Start/Stop/Reboot/Terminate for EC2 instances where the tag Owner matches the current requester's user ID.

aws iam create-policy --policy-name TagBasedEC2RestrictionsPolicy --policy-document file://TagBasedEC2RestrictionsPolicy.json

Note the policy ARN.

Attach the IAM policy to a group

Create a group called developers:

aws iam create-group --group-name developers

Attach the policy to the group:

aws iam attach-group-policy --policy-arn arn:aws:iam::123456789012:policy/TagBasedEC2RestrictionsPolicy --group-name developers

Create an IAM role for the Lambda function

aws iam create-role --role-name LambdaAllowTaggingEC2Role --assume-role-policy-document file://trust_policy.json

Define the access policy:

aws iam put-role-policy --role-name LambdaAllowTaggingEC2Role --policy-name LambdaAllowTaggingEC2Policy --policy-document file://access_policy.json

Create the Lambda function

Create the function TagEC2Resources.

Create a CloudWatch rule

aws events put-rule --name AutoTagResources --event-pattern file://event_pattern.json

Set the Lambda function as the target:

aws events put-targets --rule AutoTagResources --targets Id=1,Arn=arn:aws:lambda:us-east-2:123456789012:function:TagEC2Resources

Create an EC2 instance as a user

Create an EC2 instance as an administrative/root user and observe the Owner tag. Try working with EC2 instances that are untagged or owned by other users, and observe the "Access Denied" errors.

What Next?
Now that you know you can tag resources with a Lambda function in response to events, you can apply the same logic to other resources such as RDS databases or S3 buckets. With resource groups, each user can focus on just their resources, and the IAM policy provided in this lesson ensures that no unauthorized action is possible on someone else's instance. Additionally, tags are useful in custom billing reports to project costs and determine how much money each individual owner is spending. You can activate the Owner tag from the Cost Allocation Tags section of your billing console to include it in your detailed billing reports. For more information, see Applying Tags.
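The tagging function itself can be sketched like this (a simplified stand-in for TagEC2Resources, covering only the RunInstances case): pull the caller's identity and the new instance IDs from the CloudTrail event, then apply an Owner tag.

```python
def owner_from_event(event):
    # CloudTrail events include the caller identity; use the IAM ARN.
    return event["detail"]["userIdentity"]["arn"]

def instance_ids_from_event(event):
    # RunInstances reports the launched instances in responseElements.
    items = event["detail"]["responseElements"]["instancesSet"]["items"]
    return [i["instanceId"] for i in items]

def lambda_handler(event, context):
    import boto3  # lazy import so the extractors are testable without AWS
    ec2 = boto3.client("ec2")
    ec2.create_tags(
        Resources=instance_ids_from_event(event),
        Tags=[{"Key": "Owner", "Value": owner_from_event(event)}],
    )
```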

Rotating IAM Access Keys


Lesson Description:

In this lesson, we'll create a Lambda function that revokes user access keys periodically to enforce rotation and mitigate risk. The Lambda function will:

Collect IAM users using pagination
Scan each user for existing IAM access keys older than 90 days
Deactivate the keys
Send email alerts to the administrator

A scheduled CloudWatch rule triggers the Lambda function to run (e.g., weekly).

Amazon Simple Email Service (Amazon SES): be sure to use an SES-verified email address to ensure proper delivery of emails. SES API endpoints are not available in all regions; see the list of supported endpoints.
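The steps above can be sketched as follows (a simplified stand-in for the lesson's function; the SES email step is only indicated by a comment):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE_DAYS = 90

def key_expired(create_date, now=None, max_age_days=MAX_AGE_DAYS):
    # A key is due for rotation once it is older than the age limit.
    now = now or datetime.now(timezone.utc)
    return create_date < now - timedelta(days=max_age_days)

def lambda_handler(event, context):
    import boto3  # lazy import so key_expired is testable without AWS
    iam = boto3.client("iam")
    # Paginate through all users in the account.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                if key["Status"] == "Active" and key_expired(key["CreateDate"]):
                    iam.update_access_key(
                        UserName=user["UserName"],
                        AccessKeyId=key["AccessKeyId"],
                        Status="Inactive",
                    )
                    # An SES email to the administrator would go here.
```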



AWS Config

Finding Outdated Instance Types


Lesson Description:

In this lesson, we'll create a custom AWS Config rule to find outdated EC2 instance types. Moving off these old instance types can help us reduce costs and improve performance.

Create a Lambda function

Create a role called "LambdaCheckInstanceTypeRole" and specify AWS Config Rules permissions as the policy template. Create the Lambda function CheckInstanceType and note its ARN for the next steps.

Create an AWS Config rule

In the AWS Config console, select Rules, then click the Add rule button.
Click Add custom rule.
Name: DesiredInstanceTypes
Description: Checks that all EC2 instances are of the type specified.
AWS Lambda function ARN: copy and paste from the Lambda console. It should look something like arn:aws:lambda:us-east-1:123456789012:function:CheckInstanceType
Trigger type: Configuration changes
Scope of changes: Resources
Resources: EC2: Instance
Rule parameters: Key: desiredInstanceType, Value: t2.micro, or a comma-separated list (e.g., t2.micro,t3.micro). Note: Any values not in this list will evaluate to noncompliant.
Click Save.

References

Example Events for AWS Config:
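A sketch of the custom-rule Lambda function (a simplified stand-in for CheckInstanceType): a configuration-change rule receives the configuration item in `invokingEvent`, compares it against the `desiredInstanceType` parameter, and reports the result via `put_evaluations`.

```python
import json

def evaluate_compliance(configuration_item, desired_types):
    # Compare the recorded instance type against the allowed list.
    instance_type = configuration_item["configuration"]["instanceType"]
    return "COMPLIANT" if instance_type in desired_types else "NON_COMPLIANT"

def lambda_handler(event, context):
    import boto3  # lazy import so evaluate_compliance is testable without AWS
    invoking_event = json.loads(event["invokingEvent"])
    rule_parameters = json.loads(event["ruleParameters"])
    desired = [t.strip()
               for t in rule_parameters["desiredInstanceType"].split(",")]
    item = invoking_event["configurationItem"]
    boto3.client("config").put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": evaluate_compliance(item, desired),
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```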



Building Serverless Web Applications

Building a Contact Form with API Gateway and SES


Lesson Description:

This lesson demonstrates a web page with a typical contact form. Using API Gateway and a Lambda function as a backend for this form, we will send the form post contents via email using SES, and also write the contact data to DynamoDB.

Source code for this lesson

DynamoDB

Create a table named Contact with a primary partition key named id:

aws dynamodb create-table --table-name Contact --attribute-definitions AttributeName=id,AttributeType=S --key-schema AttributeName=id,KeyType=HASH --billing-mode=PAY_PER_REQUEST

Lambda function

Create a function named ContactEmail.

API Gateway

Create an API named ContactEmailAPI.

Create method:

Select POST and check the check mark.
Integration Type: Lambda Function
Use Lambda Proxy Integration: Checked (passes request data in the event)
Lambda region: same region as the Lambda function
Lambda function: ContactEmail

Enable CORS:

Select the POST method.
Under Actions, select Enable CORS.
Leave the default options and click Enable CORS and replace existing CORS headers.
Click Yes, replace existing values.

Deploy the API:

Under Actions, select Deploy API.
Deployment stage: [New stage]
Stage name: prod
Note the Invoke URL and update form.js.

Test locally:

cd Contact-Form
python3 -m http.server

Navigate to http://localhost:8000


Creating Lambda-Backed Custom Resources


Lesson Description:

When you associate a Lambda function with a custom resource, the function is invoked whenever the custom resource is created, updated, or deleted. CloudFormation calls a Lambda API to invoke the function and to pass all the request data (such as the request type and resource properties) to the function. In this lesson, we will learn how to implement password confirmation logic in a CloudFormation template. AWS CloudFormation Deep Dive

Using Custom Resources to Deploy an S3-Hosted Website


Lesson Description:

In this lesson, we will extend the capabilities of Lambda-backed custom resources, deploying a static website to S3 from a GitHub repository. GitHub Repository Sample Web Application


Working with Lambda Layers


Lesson Description:

Lambda layers allow functions to easily share code. Upload a layer once, and reference it from any function. Layers can contain anything (dependencies, data, configuration files, etc.) and can be shared publicly or with specific AWS accounts. Collect common components in a ZIP file and upload it as a Lambda layer. A function may reference up to 5 layers.

Create a new virtual environment using Pipenv and install the required libraries:

pipenv --python 3.7
pipenv shell
pipenv install requests

Note: If you are on macOS, you can install Pipenv using Homebrew: brew install pipenv. On Amazon Linux, or another environment, you can install it using pip: pip3 install pipenv --user

Create the ZIP deployment package:

PY_DIR='build/python/lib/python3.7/site-packages'
# Create temporary build directory
mkdir -p $PY_DIR
# Generate requirements file
pipenv lock -r > requirements.txt
# Install packages into the target directory
pip install -r requirements.txt --no-deps -t $PY_DIR
cd build
# Zip files
zip -r ../ .
cd ..
# Remove temporary directory
rm -r build

Create the Lambda layer:

aws lambda publish-layer-version --layer-name requests --compatible-runtimes python3.7 --zip-file fileb://

Note the LayerArn in the output.

Automating CodeCommit Change Notifications


Lesson Description:

In this lesson, we'll demonstrate how to receive detailed email notifications about file changes and commit messages when a code update is pushed to CodeCommit. A code reviewer may subscribe to the SNS topic and receive updates for any changes.

Create the CodeCommit Repository

aws codecommit create-repository --repository-name ChangeNotification

Note the cloneUrlHttp and Arn values in the response.

Create and Subscribe to the SNS Topic

aws sns create-topic --name CodeCommitChangeNotification
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:CodeCommitChangeNotification --protocol email --notification-endpoint

Create an IAM Lambda Execution Role

Attach the AWSLambdaBasicExecutionRole managed policy, then add the following policy, named LambdaCodeCommitSnsPolicy:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "codecommit:*",
      "sns:*"
    ],
    "Resource": "*"
  }]
}

Create the Lambda Function

Name it "CodeCommitChangeNotification" and set the following environment variables:

REPOSITORY_NAME = ChangeNotification
SNS_TOPIC_ARN = arn:aws:sns:us-east-1:123456789012:CodeCommitChangeNotification

Create the CloudWatch Event Rule

This rule will detect branch or repository changes. Choose Event Pattern.

Service Name: CodeCommit
Event Type: CodeCommit Repository State Change

Select Specific resource(s) by ARN, and enter the CodeCommit repository ARN. Select the referenceCreated and referenceUpdated events.

Event Pattern:

{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Repository State Change"],
  "resources": ["arn:aws:codecommit:us-east-1:123456789012:ChangeNotification"]
}

Under Target, select the CodeCommitChangeNotification function.

Commit a Change

Create and commit a file, then edit the file and commit it again.
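Inside the function, the CloudWatch event carries commit IDs rather than full details, so the handler typically calls CodeCommit's GetCommit and publishes a human-readable summary to SNS. A sketch of the formatting step (field names follow the GetCommit response shape; the commented handler wiring is an assumption, not the lesson's exact code):

```python
def format_commit_message(repo_name, commit):
    """Render a CodeCommit get_commit 'commit' dict as an email body."""
    author = commit.get("author", {})
    return (
        f"Repository: {repo_name}\n"
        f"Commit: {commit.get('commitId', 'unknown')}\n"
        f"Author: {author.get('name', 'unknown')} <{author.get('email', '')}>\n"
        f"Message: {commit.get('message', '').strip()}\n"
    )

# Hypothetical handler wiring (boto3 clients; env vars as configured above):
# codecommit = boto3.client("codecommit")
# sns = boto3.client("sns")
# def lambda_handler(event, context):
#     commit = codecommit.get_commit(
#         repositoryName=os.environ["REPOSITORY_NAME"],
#         commitId=event["detail"]["commitId"])["commit"]
#     sns.publish(TopicArn=os.environ["SNS_TOPIC_ARN"],
#                 Subject="CodeCommit change notification",
#                 Message=format_commit_message(os.environ["REPOSITORY_NAME"], commit))
```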

Third-Party Integrations

Creating Slack Notifications for CloudWatch Alarms


Lesson Description:

Configure Webhooks in Slack

Create a Slack app, then select Incoming Webhooks.
Set Activate Incoming Webhooks to On.
Select Add New Webhook to Workspace.
Choose the default channel where messages will be sent and click Authorize.
Note the webhook URL from the Webhook URLs for Your Workspace section. For example:

WEBHOOK_URL=

Test the webhook:

curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello, World!"}' $WEBHOOK_URL

Create an SNS Topic

aws sns create-topic --name high-cpu-alarm

Note the TopicArn.

Create a CloudWatch Alarm

Send notifications to the SNS topic when CPU utilization > 40%:

aws cloudwatch put-metric-alarm --alarm-name cpu-mon --alarm-description "Alarm when CPU exceeds 40%" --metric-name CPUUtilization --namespace AWS/EC2 --statistic Average --period 60 --evaluation-periods 1 --threshold 40 --comparison-operator GreaterThanThreshold --dimensions Name=InstanceId,Value=i-12345678901234567 --alarm-actions arn:aws:sns:us-east-1:123456789012:high-cpu-alarm --unit Percent

Create an SSM Parameter

aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "SlackWebHookURL", "Value": "'"$WEBHOOK_URL"'"}'

Create a Lambda Execution Role

Attach the following managed policies:

AmazonSSMFullAccess
AWSLambdaBasicExecutionRole

Create a Lambda Function

Use the SNS topic as a trigger.

Stress the CPU

# Install Extra Packages for Enterprise Linux
sudo amazon-linux-extras install epel
# Install stress
sudo yum install -y stress
# Beat it up for 5 mins
stress --cpu 2 --timeout 300s
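The Lambda function behind the SNS trigger unpacks the CloudWatch alarm JSON from the SNS record and POSTs a Slack-formatted payload to the webhook URL. A sketch of that transformation (the alarm fields follow the standard SNS-delivered alarm event shape; the exact message text and the commented POST are illustrative):

```python
import json

def slack_payload_from_sns(event):
    """Turn an SNS-delivered CloudWatch alarm event into a Slack message body."""
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    text = (f"Alarm {alarm['AlarmName']} is {alarm['NewStateValue']}: "
            f"{alarm['NewStateReason']}")
    return {"text": text}

# Hypothetical POST to the webhook (URL read from the SSM parameter above):
# req = urllib.request.Request(webhook_url,
#     data=json.dumps(slack_payload_from_sns(event)).encode(),
#     headers={"Content-Type": "application/json"})
# urllib.request.urlopen(req)
```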

Creating a Twitter App


Lesson Description:

In this lesson, we will learn how to create a Twitter app that automatically sends tweets on a schedule.

Create a Twitter Application

More info here:

Note the consumer API keys and the access token & access token secret. You can set these as environment variables, should you wish to create the SSM parameters using the AWS CLI:

CONSUMER_KEY=5YAAipG8DLKjhdkjfhfQMZyuYhJ
CONSUMER_SECRET=qi3YF6hzdflkjLKJFZgJ6J3WwmEktMfX0IwPQMas
ACCESS_TOKEN=110809243434444423-pHHgCWvWOhc1KJHSLKJHFCAyrDOq0cR
ACCESS_TOKEN_SECRET=QXesCU9LKJHFLKJHKFF9FFVOjymg9UIDvRa58ySaQjd7357

Create the SSM Parameters

Use the keys from the previous step:

aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "/TwitterBot/consumer_key", "Value": "'"$CONSUMER_KEY"'"}'
aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "/TwitterBot/consumer_secret", "Value": "'"$CONSUMER_SECRET"'"}'
aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "/TwitterBot/access_token", "Value": "'"$ACCESS_TOKEN"'"}'
aws ssm put-parameter --cli-input-json '{"Type": "SecureString", "KeyId": "alias/aws/ssm", "Name": "/TwitterBot/access_token_secret", "Value": "'"$ACCESS_TOKEN_SECRET"'"}'

Create an S3 Bucket and Upload the Data File

aws s3 mb s3://123456789012-twitterbot
aws s3 cp data.txt s3://123456789012-twitterbot

Create an IAM Execution Role for Lambda

Grant access to your S3 bucket and the SSM parameters:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::123456789012-twitterbot/*"
  }, {
    "Effect": "Allow",
    "Action": ["ssm:GetParameter"],
    "Resource": "arn:aws:ssm:us-east-2:123456789012:parameter/TwitterBot*"
  }]
}

Create a Deployment Package

Create a new virtual environment using Pipenv and install the required libraries:

pipenv --python 3.7
pipenv shell
pipenv install tweepy
mkdir package
pipenv lock -r > requirements.txt
pip install -r requirements.txt --no-deps -t package
cd package
zip -r9 ../ .
cd ..
zip -g
rm -rf package/*

Create a Lambda Function

aws lambda create-function --function-name TwitterBot --zip-file fileb:// --role arn:aws:iam::123456789012:role/LambdaTwitterBotRole --handler lambda_function.lambda_handler --runtime python3.7 --environment Variables={BUCKET_NAME=123456789012-twitterbot}

Create a CloudWatch Scheduled Rule

aws events put-rule --name TwitterBot --schedule-expression 'rate(1 hour)'
aws lambda add-permission --function-name TwitterBot --statement-id TwitterBot --action 'lambda:InvokeFunction' --principal --source-arn arn:aws:events:us-east-1:123456789012:rule/TwitterBot
aws events put-targets --rule TwitterBot --targets file://targets.json
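Inside the function, the bot typically pulls the four credentials from SSM, reads data.txt from the S3 bucket, and picks one line to tweet. A sketch of the line-picking step (the commented tweepy/boto3 wiring is an assumption about the handler, not the lesson's exact code):

```python
import random

def pick_status(data, rand=random):
    """Pick one non-empty line from the data file's contents to tweet."""
    lines = [line.strip() for line in data.splitlines() if line.strip()]
    if not lines:
        raise ValueError("data file is empty")
    return rand.choice(lines)

# Hypothetical handler wiring:
# s3 = boto3.client("s3")
# data = s3.get_object(Bucket=os.environ["BUCKET_NAME"],
#                      Key="data.txt")["Body"].read().decode()
# auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
# auth.set_access_token(access_token, access_token_secret)
# tweepy.API(auth).update_status(pick_status(data))
```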

Monitoring and Debugging

Publishing Custom Metrics from Lambda


Lesson Description:

CloudWatch custom metrics are coded into your applications and can be used to report custom business and operational data. In this lesson, we will learn how to publish custom CloudWatch metric data from a Lambda function, reporting several key performance indicators (KPIs). These data can help us monitor progress toward sales, marketing, and customer service goals.

Install the demo app using the AWS Serverless Application Model (SAM). You can find the instructions for installing the AWS SAM CLI here.

Create the Lambda Deployment Package

Run the following command:

cd shopping-cart-app

Create an S3 bucket in the location where you want to save the packaged code. (If you want to use an existing S3 bucket, skip this step.)

aws s3 mb s3://123456789012-shopping-cart-app

Create the deployment artifacts with dependencies:

sam build

Create the Lambda function deployment package by running the package AWS SAM CLI command at the command prompt:

sam package --output-template-file packaged.yaml --s3-bucket 123456789012-shopping-cart-app

Use the deploy command to deploy all of the resources that you defined in the template:

sam deploy --template-file packaged.yaml --stack-name shopping-cart-app --capabilities CAPABILITY_IAM

Locate the API Gateway Endpoint URLs

Open the AWS CloudFormation console.
Select the AWS CloudFormation stack that you created in the preceding step from the list.
Under Outputs, note the API Gateway endpoint URLs.
Browse to each one and observe the JSON responses.

Generate Traffic

Using the API Gateway endpoint URLs from the previous step, generate traffic against each of these endpoints with an HTTP testing tool such as vegeta. Modify URLS.txt to use the endpoint URLs in your account, then run a test for 60 minutes:

cat URLS.txt | vegeta attack -duration=60m | tee results.bin | vegeta report

View Custom Metrics

You may view custom metric data while a load test is in progress.

Open the CloudWatch console and navigate to Metrics.
Under All metrics, select ShoppingCartApp.
Select Metrics with no dimensions.
Select ItemsAddedToCart, OrderTotal, and ViewProduct.
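Under the hood, the app's KPI reporting boils down to cloudwatch:PutMetricData calls from the Lambda functions. A sketch of how such a metric entry is assembled (the namespace and metric names mirror the ones viewed above; the helper name and sample values are illustrative):

```python
def build_metric_data(metric_name, value, unit="Count"):
    """Build one MetricData entry for cloudwatch.put_metric_data."""
    return {"MetricName": metric_name, "Value": value, "Unit": unit}

# Hypothetical publish call from inside a handler:
# cloudwatch = boto3.client("cloudwatch")
# cloudwatch.put_metric_data(
#     Namespace="ShoppingCartApp",
#     MetricData=[build_metric_data("ItemsAddedToCart", 1),
#                 build_metric_data("OrderTotal", 42.50, unit="None")])
```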

Tracing with X-Ray


Lesson Description:

In this video, we're going to learn about tracing our application with AWS X-Ray. We are going to deploy a sample Flask application that is instrumented with the X-Ray SDK, make some sample requests, and then examine the traces and service maps in the AWS Management Console.

Zappa GitHub Repo:

Create a DynamoDB Table

Create a table called Flask with a primary partition key called key:

aws dynamodb create-table --table-name Flask --attribute-definitions AttributeName=key,AttributeType=S --key-schema AttributeName=key,KeyType=HASH --billing-mode=PAY_PER_REQUEST

Configure Zappa

Run the following commands:

cd example
pipenv --python 3.7
pipenv shell
pipenv install aws-xray-sdk flask zappa requests
pipenv lock -r > requirements.txt
zappa init

Add the following property to zappa_settings.json:

"xray_tracing": true

Deploy the application:

zappa deploy

Enable X-Ray Tracing for API Gateway

In this step, we will use the API Gateway console to enable X-Ray tracing.

Sign in to the AWS Management Console and open the API Gateway console.
Select your API (e.g., example-dev).
Select Stages.
Choose the name of your deployment stage (e.g., dev).
On the Logs/Tracing tab, select the Enable X-Ray Tracing box.
Click Save Changes.
Navigate to the endpoint in your browser.


What's Next?


Lesson Description:

Congratulations! If you've made it this far, you have successfully completed the Automating AWS with Lambda, Python, and Boto3 course. In this video, I'll recommend other Linux Academy courses you can take next and show you how to get involved in the Linux Academy community.

Get Recognized


Lesson Description:

Now that you have completed this course, take some time to share your success and get recognized in our community and on LinkedIn.