AWS Certified DevOps Engineer – Professional Level
AWS Training Architect II in Content
The AWS Certified DevOps Engineer - Professional exam tests the candidate's technical expertise in provisioning, operating, and managing distributed application systems on the AWS platform. For this exam, you should know how to:
- Implement and manage continuous delivery systems and methodologies on AWS
- Understand, implement, and automate security controls, governance processes, and compliance validation
- Define and deploy monitoring, metrics, and logging systems on AWS
- Implement systems that are highly available, scalable, and self-healing on the AWS platform
- Design, manage, and maintain tools to automate operational processes
This course is designed to provide you with the knowledge and skills required to pass the AWS Certified DevOps Engineer - Professional exam and to successfully apply these skills in the workplace.
NOTE: AWS transitioned to a new exam in February 2019, and we've updated our course material to reflect the changes.
Welcome to the DevOps Professional Course
This lesson discusses the AWS Certified DevOps Professional certification and covers the course objective of preparing the student to pass the certification exam. It also details the sections covered in the syllabus. As of October 11, 2018, AWS no longer requires you to hold an Associate or Foundational certification to sit for any Professional or Specialty certification exam. However, to ensure your success, we highly recommend you follow Linux Academy's suggested prerequisites (for both our coursework and AWS exams). These prerequisite suggestions can be found in the "Details" section of the course syllabus.
About the Training Architect
This video introduces the course instructor.
About the Exam
This lesson explains the format of the AWS Certified DevOps Professional exam and walks you through the exam registration process.
Introduction to the DevOps Doctrine
This lesson introduces you to the DevOps Doctrine, an interactive diagramming tool used throughout the course. The DevOps Doctrine is organized to match each section presented in the course, and although it is used in the video lessons, it can also be accessed as a standalone tool. You can access the tool in the Downloads section of the course at the provided URL. In this lesson, we'll learn how to navigate through the DevOps Doctrine by section and drill down further by lesson. DevOps Doctrine Link: https://interactive.linuxacademy.com/diagrams/DevopsDoctrine2019.html
Course Tools and Features
This lesson introduces all of the tools available to the student as they go through the AWS Certified DevOps Professional preparation course.
AWS Free Tier: Usage Tracking and Billing Widget
This video gives you a walkthrough on how to use the AWS Free Tier Tracking and Billing Widget for your own AWS account.
Deployment with EC2 and Auto Scaling
Auto Scaling Deployment Concepts
Deployment Concepts with Auto Scaling
This lesson will give you an overview of Auto Scaling and some of the key techniques for using Auto Scaling groups. This lesson covers the theoretical concepts behind Auto Scaling and sets the stage for upcoming lessons in the section. We will also walk through how to create a Launch Configuration and an Auto Scaling group.
Auto Scaling Termination Policies
This lesson provides a detailed explanation of the available termination policies for an Auto Scaling group as well as a detailed walkthrough of the default termination policy. Understanding the default termination policy will give you a solid foundation for understanding termination policies as a whole. We will also learn how to set up termination policies for an Auto Scaling group in the AWS Management Console. For the latest on customizing the Termination Policy, see: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#custom-termination-policy
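To make the default termination policy concrete, here is a simplified sketch of its selection logic in Python. The instance attributes are illustrative stand-ins (the real service works from launch configuration age and billing-hour proximity internally), but the order of the tie-breaking steps follows the documented default policy.

```python
import random
from collections import Counter

def choose_instance_to_terminate(instances):
    """Pick an instance to terminate following a simplified version of the
    default termination policy: prefer the Availability Zone with the most
    instances, then the oldest launch configuration, then the instance
    closest to the next billing hour, then random choice."""
    # 1. Narrow to the AZ with the most instances (rebalancing).
    az_counts = Counter(i["az"] for i in instances)
    busiest_az = max(az_counts, key=az_counts.get)
    candidates = [i for i in instances if i["az"] == busiest_az]

    # 2. Prefer instances using the oldest launch configuration.
    oldest_lc = min(i["lc_created"] for i in candidates)
    candidates = [i for i in candidates if i["lc_created"] == oldest_lc]

    # 3. Prefer the instance closest to the next full billing hour
    #    (highest minutes-past-the-hour value).
    max_minutes = max(i["minutes_past_hour"] for i in candidates)
    candidates = [i for i in candidates if i["minutes_past_hour"] == max_minutes]

    # 4. Break any remaining tie at random.
    return random.choice(candidates)["id"]
```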
Suspending Auto Scaling Processes
In this lesson, we’ll learn how to suspend Auto Scaling processes. We’ll discuss which Auto Scaling processes can be suspended and walk through how to suspend Auto Scaling processes from the AWS Management Console. In addition, we’ll monitor the behavior of an Auto Scaling group while the launch process is suspended.
Auto Scaling Lifecycle Hooks Part 1
This lesson explains the proper use and configuration of lifecycle hooks for Auto Scaling groups. We'll go through a step-by-step review of the lifecycle of an EC2 instance and discuss how and why a lifecycle hook can be applied to an EC2 instance during instance launch or termination. This lesson lays the groundwork for "Auto Scaling Lifecycle Hooks Part Two," which is a hands-on lesson.
Auto Scaling Lifecycle Hooks Part 2
This lesson provides a hands-on walkthrough of how to create a lifecycle hook for an Auto Scaling group. We’ll create a launch configuration, an Auto Scaling group, and a lifecycle hook for the Auto Scaling group. Finally, we’ll manually launch an instance to monitor and verify the proper execution of the lifecycle hook.
Auto Scaling API, CLI, and SDK Calls
This lesson reviews the AWS API, CLI, and SDK with a specific focus on Auto Scaling. We’ll discuss the documentation on each of these topics, including where to find it and how to use it. In addition, we’ll open up the Command Line Interface and use Auto Scaling commands to create an Auto Scaling group and a launch configuration.
Using Auto Scaling to Process SQS Messages
A common use case is to have an SQS queue backed by an AWS Auto Scaling group. In general, the queue will store data that needs to be processed (in this lesson, the queue will store customer orders from a web application). The Auto Scaling group can then be configured to scale out or in based on the size of the queue, with CloudWatch alarms configured to trigger on queue size. This lesson provides a hands-on walkthrough of this use case using both the AWS Management Console and the AWS Command Line Interface to configure the scaling parameters.
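The scaling math behind this pattern can be sketched in a few lines of Python. The queue depth comes from the SQS ApproximateNumberOfMessagesVisible metric; the acceptable backlog per instance is a value you choose based on how many messages one worker can drain between scaling evaluations (the numbers below are illustrative):

```python
import math

def scaling_decision(visible_messages, running_instances, msgs_per_instance_per_cycle):
    """Compute the backlog per instance and the instance count an SQS-backed
    Auto Scaling group would need to keep up with the queue.

    visible_messages: ApproximateNumberOfMessagesVisible from the queue
    running_instances: current InService instance count
    msgs_per_instance_per_cycle: messages one worker can process between
        scaling evaluations (the acceptable backlog per instance)
    """
    backlog_per_instance = visible_messages / max(running_instances, 1)
    desired = math.ceil(visible_messages / msgs_per_instance_per_cycle)
    return backlog_per_instance, max(desired, 1)
```

A CloudWatch alarm on the backlog-per-instance value (rather than raw queue size) reacts proportionally to how far behind the fleet actually is.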
QUIZ: AWS DevOps Auto Scaling Concepts
Deployment Concepts with EC2
Using IAM Roles with EC2
This lesson will teach you how to use IAM roles to delegate permissions with EC2. It is often necessary for applications to access resources in the AWS environment. Using IAM roles that can be attached to an EC2 instance is a best practice for allowing applications on an EC2 instance to access other resources and services. This lesson provides a hands-on walkthrough of how to create an IAM role, attach the role to an EC2 instance, and access an S3 bucket using the role. Update: AWS Systems Manager Parameter Store is a new, preferred way to store data such as passwords, database strings, license codes, and other sensitive information as parameter values. For details, see: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
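For reference, an instance role like the one created in this lesson needs a trust policy that allows the EC2 service to assume it. A minimal example looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Permissions for the role itself (for example, read access to a specific S3 bucket) are granted in separate permissions policies attached to the role.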
ELB Access Logs
When creating an Elastic Load Balancer, you have the option of storing the access logs for the Load Balancer in S3. This lesson provides a detailed discussion of access logs and their contents. It also provides a hands-on walkthrough of how to configure access logs for an Elastic Load Balancer and set up delivery of those access logs to a specified S3 bucket. During the hands-on part of the lesson, it is necessary to enter an Elastic Load Balancer ID. This ID can be found at: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html
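As a rough illustration of what these logs contain, the sketch below parses one Classic Load Balancer access log entry into named fields. The field names follow the documented Classic Load Balancer log format; the sample line in the usage note is fabricated.

```python
import shlex

# Field order in a Classic Load Balancer access log entry.
FIELDS = [
    "timestamp", "elb", "client", "backend",
    "request_processing_time", "backend_processing_time",
    "response_processing_time", "elb_status_code",
    "backend_status_code", "received_bytes", "sent_bytes",
    "request", "user_agent", "ssl_cipher", "ssl_protocol",
]

def parse_elb_log_line(line):
    """Parse one access log entry into a dict. shlex.split keeps the
    quoted "request" and "user_agent" fields together as single values."""
    return dict(zip(FIELDS, shlex.split(line)))
```

For example, feeding in a line like `2019-02-13T23:39:43.945958Z my-loadbalancer 192.168.131.39:2817 10.0.0.1:80 0.000073 0.001048 0.000057 200 200 0 29 "GET http://www.example.com:80/ HTTP/1.1" "curl/7.38.0" - -` yields a dict keyed by the field names above.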
ELB Health Checks
An Elastic Load Balancer performs health checks on all instances registered to it. In this lesson, we’ll discuss how to properly configure these health checks. We will create an Auto Scaling group; register the instances in the group with the Elastic Load Balancer; and configure, monitor, and troubleshoot health checks for this architecture.
ELB Listeners
In this lesson, we’ll learn about encrypted communication between clients and the ELB and between the ELB and backend instances, as well as how to configure ELB listeners. You’ll learn not only how to encrypt data in transit to the load balancer but also how to take it a step further by encrypting data end to end.
EC2 Backup Strategies
This lesson details the various options and techniques for backing up EC2. You’ll learn about EC2 snapshots, AMIs, and their various uses and creation techniques. We’ll use the Management Console to create snapshots and AMIs and launch an EC2 instance in a new region from an AMI (a common disaster recovery technique).
QUIZ: DevOps Deployment Techniques for AWS EC2 and ELB
Monitoring and Logging with CloudWatch
CloudWatch for DevOps
CloudWatch for DevOps
CloudWatch is an essential tool for the DevOps engineer. CloudWatch supports the DevOps tasks of automation, communication, and collaboration by providing access to monitoring and logging tools. CloudWatch metrics work with Elastic Load Balancers and can be used to determine the scaling actions of Auto Scaling groups. Custom metrics are a very powerful tool, allowing the DevOps engineer to leverage CloudWatch monitoring in a wide range of scenarios. This lesson introduces all of these concepts before each is covered in further detail later in the section.
CloudWatch Concepts and Terminology
This lesson covers the core concepts and terminology a DevOps engineer needs to know when working with CloudWatch. Becoming familiar with these concepts lays the foundation for moving forward in this section of the course.
Elastic Load Balancer Metrics
This lesson explains the differences between ELB logging and metrics, what metrics we have access to, and how we can pull statistics from those metrics. You will learn how to configure, capture, and interpret metrics for the Elastic Load Balancer.
Auto Scaling and EC2 Metrics
This lesson explains the different Auto Scaling and EC2 metrics and how they can be used to monitor your infrastructure and application. We then discuss how different scaling policies work, including how to use step adjustments in order to react more or less aggressively depending on the severity of an alarm breach.
EC2 Custom Metrics and Application Logging
In addition to EC2 metrics, logging is extremely powerful for troubleshooting and monitoring data that is not supported with CloudWatch EC2 metrics out of the box. In this lesson, you’ll learn how you can publish data specific to your application and requirements and why this is important.
Using SNS to Notify Personnel of CloudWatch Alarms
This lesson focuses on how to notify personnel of CloudWatch alarms. One way to do this is to create an SNS topic and have personnel subscribe to it. In this lesson, we’ll set up a CloudWatch alarm, link it to an SNS topic, and use it to notify the appropriate personnel of the alarm via email.
Using Kinesis to Process Log Data
This lesson explains how Kinesis can be used with CloudWatch to consume CloudWatch log data. DevOps engineers often have to stream CloudWatch log data to other services. Kinesis can be used both as an endpoint for CloudWatch logs and as a vehicle for streaming log data to another service, such as Elastic MapReduce, for more in-depth analysis.
QUIZ: Monitoring and Logging with CloudWatch
Application Deployments with CloudFormation
CloudFormation for DevOps
This lesson focuses on the core concepts of CloudFormation and lays the foundation for the rest of the section. We’ll take a look at how to create, update, and delete or roll back stacks as well as the techniques used to perform each of these tasks.
Deployment Strategies with CloudFormation
This lesson explains the different deployment strategies and methods available with CloudFormation. You will learn about bootstrapping EC2 instances and how helper scripts can be used to manage this process. Additionally, we will learn how CloudFormation can work with other technologies, such as Puppet, OpsWorks, and Elastic Beanstalk, for deployments.
CloudFormation: Wait Conditions
Wait conditions can be used to pause the creation of a CloudFormation stack and wait for a signal (or multiple signals) before continuing to create the stack. In this lesson, you will learn how to use wait conditions and to recognize the proper use cases for wait conditions.
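A minimal template fragment showing the wait condition pattern might look like this (the resource names and the DependsOn target are illustrative):

```yaml
Resources:
  AppWaitHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  AppWaitCondition:
    Type: AWS::CloudFormation::WaitCondition
    DependsOn: WebServerInstance   # resource defined elsewhere in the template
    Properties:
      Handle: !Ref AppWaitHandle
      Timeout: "900"   # seconds to wait for the signal(s)
      Count: 1         # number of success signals required
```

The instance signals success or failure by sending a request to the pre-signed URL that `!Ref AppWaitHandle` resolves to.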
CloudFormation: Creation Policies
Creation policies are very useful for creating and configuring instances with an application. In this lesson, you will learn how to use creation policies to pause the creation of EC2 instances or instances within an Auto Scaling group.
Using a Creation Policy with Auto Scaling
This lesson will provide a hands-on walkthrough of how to use the CreationPolicy attribute with an Auto Scaling group to make sure the instances were bootstrapped properly before creating the rest of the stack.
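As a sketch of what the walkthrough builds, a CreationPolicy on an Auto Scaling group might look like the fragment below; the instances are expected to run cfn-signal at the end of their bootstrap scripts. Resource names are illustrative, and required group properties such as availability zones are omitted for brevity:

```yaml
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "4"
      DesiredCapacity: "2"
      LaunchConfigurationName: !Ref LaunchConfig   # defined elsewhere
    CreationPolicy:
      ResourceSignal:
        Count: 2          # wait for both instances to signal success
        Timeout: PT15M    # give bootstrapping up to 15 minutes
```

If the required signals do not arrive within the timeout, the resource fails and the stack rolls back.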
CloudFormation: Helper Scripts
CloudFormation provides helper scripts to assist with provisioning EC2 instances. In this lesson, we will learn how to use the following four helper scripts: cfn-init, cfn-signal, cfn-get-metadata, and cfn-hup.
CloudFormation: Stack Policies
Stack policies are used to control which resources can be updated and by what actions. In this lesson, we will learn how to configure stack policies in the JSON format and allow or deny actions on specific resources. Files used in this lesson can be found here: https://github.com/natonic/DevOpsPro/tree/master/StackPolicies
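For example, a stack policy that allows all updates except replacement or deletion of a production database resource might look like this (the logical resource ID is illustrative):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "Update:*",
      "Principal": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": ["Update:Replace", "Update:Delete"],
      "Principal": "*",
      "Resource": "LogicalResourceId/ProductionDatabase"
    }
  ]
}
```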
CloudFormation: Update Policies
Update policies play a crucial role in deploying updates when using Auto Scaling groups in CloudFormation. In this lesson, we will learn about the different update options we have, how they differ from one another, and when to use each one.
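A typical rolling-update configuration looks like the fragment below (values are illustrative):

```yaml
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 1     # keep at least one instance serving traffic
        MaxBatchSize: 2              # replace up to two instances at a time
        PauseTime: PT5M              # wait between batches
        WaitOnResourceSignals: true  # require cfn-signal from new instances
```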
CloudFormation Cross-Stack References
In this lesson, the student will be introduced to cross-stack references as a way to share data between stacks. CloudFormation templates are code (Infrastructure as Code), and it is good practice to modularize code; it can often be helpful for an organization to create a separate template per functional group. When the need arises to share data between the stacks built from these templates, Cross-Stack References can be used. After a thorough introduction to Cross-Stack References, the student will be walked through the creation of stacks that use them.
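As a sketch of the mechanism, one stack exports a value and another imports it (stack, export, and resource names are illustrative):

```yaml
# In the network stack: export the VPC ID.
Outputs:
  VpcId:
    Value: !Ref VPC
    Export:
      Name: NetworkStack-VpcId

# In a consuming stack: import the exported value.
Resources:
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Web tier
      VpcId: !ImportValue NetworkStack-VpcId
```

Note that a stack cannot be deleted while another stack imports one of its exports.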
CloudFormation Drift Detection
This lesson instructs the student on CloudFormation Drift Detection. CloudFormation stack drift occurs when the resources in a stack have drifted from their original creation template. This can become a major problem, and Drift Detection can be used to identify it. After the problem is identified, the lesson covers the steps that can be taken to bring the CloudFormation stack back into sync with the original template; in most cases, a stack update is the appropriate way to do this.
This lesson talks about the options available for updating CloudFormation Stacks. Stacks can be updated by the direct method, which applies an updated template to the stack. Alternatively, stacks can be updated using Change Sets, which differ from direct updates in that the changes are first evaluated by CloudFormation, and there is an opportunity to back out of the update after the evaluation but before it is performed. After discussing update methods, the student will be walked through an update using a Change Set.
Template Architecture - Nested Stacks
This lesson focuses on Nested Stacks. In CloudFormation, Stacks can be nested to promote code reuse and to simplify the modification of commonly used elements. In the lesson, the student will learn about the use cases for nested stacks and how to construct a nested stack. The lesson will conclude with a walkthrough on creating a complex nested stack that has six child stacks below the root stack. After the nested stack is constructed, the stack will also be deleted to illustrate the interdependence of the stacks even during deletion.
CloudFormation Custom Resources
In this lesson, the student will learn about CloudFormation Custom Resources. Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs anytime you create, update, or delete stacks. The most common pattern is a Lambda function that executes and returns specific results back to CloudFormation during stack creation. This lesson will take an in-depth look at a Lambda-backed custom resource and all of the components needed for execution. The lesson will end with a walkthrough using a Lambda-backed custom resource to verify a password.
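A Lambda-backed custom resource must report back to CloudFormation by PUTting a JSON document to the pre-signed URL in the request event. The sketch below builds that response body as a pure function; the HTTP PUT itself is omitted, since helpers such as the cfn-response module normally handle it.

```python
import json

def build_cfn_response(event, status, data=None, physical_id=None, reason=""):
    """Build the JSON body a Lambda-backed custom resource must PUT back to
    the pre-signed S3 URL found in event["ResponseURL"].

    status must be "SUCCESS" or "FAILED"; the values in Data become
    available to the template via Fn::GetAtt on the custom resource."""
    return json.dumps({
        "Status": status,
        "Reason": reason or "See CloudWatch Logs for details",
        "PhysicalResourceId": physical_id or event.get("LogicalResourceId"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    })
```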
QUIZ: Deployments with CloudFormation
Application Deployments on Elastic Beanstalk
Elastic Beanstalk for DevOps
Introduction to Elastic Beanstalk
This lesson provides an introduction to Elastic Beanstalk and demonstrates how to create an Elastic Beanstalk environment.
Deployment Strategies Deep Dive with Elastic Beanstalk
This lesson explains the different application deployment strategies available in Elastic Beanstalk: all at once, rolling, rolling with an additional batch, and immutable. We’ll discuss the pros, cons, and use cases for each method.
Docker Deployments with Elastic Beanstalk
This lesson explains how Docker can be used with Elastic Beanstalk to perform deployments. We’ll discuss the differences between single- and multi-container deployments and walk through a single-container deployment.
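For a single-container deployment, the environment is described by a Dockerrun.aws.json file. A minimal version 1 example looks like this (the image name is a placeholder):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "my-account/my-web-app:latest",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": 80 }
  ]
}
```

Multi-container environments use version 2 of this file, which adds a containerDefinitions section.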
In this lesson, we’ll learn how to configure Elastic Beanstalk environments using configuration files and saved configurations, and by applying changes directly to the environment. We’ll take a look at some example config files, and we'll discuss precedence.
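As an example of the configuration-file approach, an .ebextensions file might set the deployment policy and Auto Scaling bounds like this (values are illustrative; settings applied directly to the environment take precedence over these):

```yaml
# .ebextensions/deploy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable    # AllAtOnce | Rolling | RollingWithAdditionalBatch | Immutable
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 4
```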
Using Elastic Beanstalk with CloudFormation
This lesson explains how CloudFormation can be used with Elastic Beanstalk for deployments. This type of deployment combines CloudFormation’s control over versioning and infrastructure with Elastic Beanstalk’s ease of application deployment. To demonstrate this, we’ll walk through an example of this type of deployment.
Elastic Beanstalk with RDS
In this lesson, the student is given options on deploying Elastic Beanstalk with an RDS database. As a DevOps Engineer, it is important to understand these different options and the use cases that correspond to them. An RDS database can be deployed directly in an Elastic Beanstalk environment. However, this means that when the Elastic Beanstalk environment is deleted, the RDS database will be deleted as well. This may be fine for dev or test environments but is probably not ideal for a prod environment. The other option, which more loosely couples RDS with Elastic Beanstalk, is to create the database external to Elastic Beanstalk and then have the application within Elastic Beanstalk connect to it.
QUIZ: AWS DevOps Deployments with Elastic Beanstalk
Application Deployments on OpsWorks
OpsWorks for DevOps
AWS OpsWorks Essentials
This lesson provides an in-depth look at the essential parts of OpsWorks, including stack creation, the purpose and methods of creating layers within a stack, instance creation, and application deployment to instances. We’ll also walk through how to create an OpsWorks stack.
Creating Our First Stack
In this lesson, we’ll discuss the fundamentals of creating an OpsWorks stack. Understanding stack architecture, including layers and instances, is an essential DevOps skill that is often tested on the AWS DevOps Professional Certification exam.
Deployment Strategies with AWS OpsWorks
This lesson explains the different deployment strategies for OpsWorks. OpsWorks deployments can be done manually, by rolling updates, or by blue/green deployments. In this lesson, we’ll go over each of these techniques and discuss the pros and cons of using each method. We'll also go through a hands-on walkthrough of a manual deployment. Note: A common cause for OpsWorks instances not starting is a problem with the VPC configuration for Internet access. For details, see: https://aws.amazon.com/premiumsupport/knowledge-center/linux-instance-start-opsworks/
QUIZ: Application Deployments on AWS OpsWorks
Using DynamoDB with Applications
This lesson provides a review of the essential concepts of AWS DynamoDB. We’ll walk through the structure and principles of DynamoDB using a few tables as well as review and create several indexes, partition keys, and sort keys. Finally, we’ll look at a scenario in which a secondary index would be useful, which sets the stage for the next lesson.
Local and Global Secondary Indexes Deep Dive
In this lesson, we’ll take a detailed look at local and global secondary indexes and walk through how to create each type of index. In the previous lesson, we saw a situation where we needed a secondary index on a table, and in this lesson we’ll learn how to create that secondary index. In addition, we’ll learn the difference between local and global secondary indexes. For more on AWS Secondary Indexes, see: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-indexes-general.html
Understanding Provisioned Throughput
This lesson explains provisioned throughput, including how to perform in-depth calculations of provisioned throughput. We’ll learn the difference between provisioned throughput calculations for read and write operations as well as the differences in provisioned throughput for local and global secondary indexes. We’ll also take a look at AWS Auto Scaling for DynamoDB (although this is not expected to be on the exam at the time of recording).
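The calculations covered in this lesson can be expressed directly in code. This sketch applies the standard rules: one read capacity unit covers one strongly consistent read per second of an item up to 4 KB (eventually consistent reads cost half), and one write capacity unit covers one write per second of an item up to 1 KB.

```python
import math

def read_capacity_units(item_size_bytes, reads_per_second, strongly_consistent=True):
    """RCUs needed: each read consumes ceil(item size / 4 KB) units;
    eventually consistent reads cost half as much."""
    units_per_read = math.ceil(item_size_bytes / 4096)
    rcu = units_per_read * reads_per_second
    return rcu if strongly_consistent else math.ceil(rcu / 2)

def write_capacity_units(item_size_bytes, writes_per_second):
    """WCUs needed: each write consumes ceil(item size / 1 KB) units."""
    return math.ceil(item_size_bytes / 1024) * writes_per_second
```

For example, 10 strongly consistent reads per second of a 6 KB item cost 20 RCUs (each read spans two 4 KB units), or 10 RCUs with eventual consistency.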
Using Web Identity Federation with DynamoDB
This lesson explains how to provide access to DynamoDB from outside an AWS account using web identity federation. We’ll walk through how to configure web identity federation and discuss how to use cross-account access with Data Pipeline (or DynamoDB Streams) to propagate DynamoDB data to other regions.
In-Memory Acceleration with DAX
In this lesson, the student will learn about Amazon DynamoDB Accelerator, otherwise known as DAX. DynamoDB is already optimized for performance and can deliver response times in the single-digit milliseconds, which is acceptable for most use cases. But some use cases require microsecond response times, and this is where DAX can be used. This lesson will detail the use cases for DAX and ultimately walk through the configuration of a DAX cluster: a cluster of in-memory nodes that work together to provide microsecond response times.
DynamoDB Streams and TTL
In this lesson, the student will learn about DynamoDB Streams, the use cases where implementing DynamoDB Streams is appropriate, and Time to Live (TTL). DynamoDB Streams capture changes to items stored in a DynamoDB table at the point in time when such changes occur. Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attribute(s) of the items that were modified. The lesson will go on to walk through the configuration of DynamoDB Streams. Time to Live can also be set on a table so that items are automatically removed when their lifetime expires. You can back up, or otherwise process, items deleted by Time to Live by enabling DynamoDB Streams on the table and processing the stream records of the expired items.
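Because TTL deletions show up in the stream as REMOVE records attributed to the DynamoDB service principal, a stream consumer can tell them apart from user deletes. A minimal sketch (the record shape follows the DynamoDB Streams event format):

```python
def expired_by_ttl(record):
    """Return True if a DynamoDB Streams record describes an item deleted
    by the TTL process rather than by a user. TTL deletions are REMOVE
    events whose userIdentity is the DynamoDB service principal."""
    identity = record.get("userIdentity") or {}
    return (
        record.get("eventName") == "REMOVE"
        and identity.get("type") == "Service"
        and identity.get("principalId") == "dynamodb.amazonaws.com"
    )
```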
Data Pipeline with DynamoDB
Data Pipeline with DynamoDB
This lesson demonstrates how AWS Data Pipeline can be used to import data from and export data to DynamoDB. We’ll walk through how to set up and activate a data pipeline and discuss other use cases for the Data Pipeline service.
QUIZ: AWS DynamoDB DevOps Concepts
S3 for DevOps
S3 Concepts for DevOps
Overview of S3 for DevOps
This lesson explains the different ways that S3 can be used for DevOps deployments. We’ll discuss credential storage and access using the IAM role, cross-account access, exporting load balancer access logs to S3, and S3 lifecycle management policies. In addition, we’ll walk through how to attach an IAM role to an EC2 instance for S3 access, which allows you to access S3 programmatically from an application on the EC2 instance.
Object Lifecycle Management
This lesson discusses AWS S3 lifecycle management. We provide an overview of S3 and discuss possible scenarios and key words that you may encounter on the AWS Certified DevOps Professional exam. Finally, we’ll walk through how to configure lifecycle management for an S3 bucket.
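A lifecycle configuration like the one built in this lesson might look like the following fragment, which transitions objects under a prefix to Standard-IA after 30 days, to Glacier after 90, and expires them after a year (the rule ID and prefix are illustrative):

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```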
Create a Cross-Account Role and S3 Bucket Policy
In this lesson, we’ll learn about granting cross-account access to an S3 bucket. We’ll take a look at a scenario in which a company has two AWS accounts: Development and Production. Instead of having to duplicate accounts for team members, we can use cross-account access to grant employees access to both accounts. To demonstrate this, we will give developers on the Development account access to an S3 bucket in the Production account.
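A bucket policy for this scenario might look like the fragment below, granting the Development account read access to a bucket in the Production account (the account ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDevAccountRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::prod-deploy-artifacts",
        "arn:aws:s3:::prod-deploy-artifacts/*"
      ]
    }
  ]
}
```

The Development account must also grant its users matching IAM permissions; the bucket policy alone is not sufficient for cross-account access.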
Exporting Log Data to S3
This lesson explains how S3 can be used to store log files from an Elastic Load Balancer. We’ll walk through how to configure the Elastic Load Balancer to send access logs to S3 as well as how to configure S3 to allow the ELB access to S3 to store the log files. Finally, we’ll verify that our configuration was successful.
S3 Intelligent Tiering
In this lesson, the student will learn about S3 Intelligent-Tiering, another option for storing objects in S3. Although S3 Standard, S3 Infrequent Access, and Glacier are still relevant storage options, S3 Intelligent-Tiering provides a valuable alternative: the user can turn lifecycle decision-making over to AWS, and based on usage patterns, S3 Intelligent-Tiering will store objects in the most appropriate and cost-effective tier. This lesson will discuss the features of Intelligent-Tiering and its use cases and conclude with configuring Intelligent-Tiering on an object in an S3 bucket.
QUIZ: AWS S3 DevOps Concepts
A/B Testing and Blue/Green Deployments
A/B Testing and Blue/Green Deployments for DevOps
Introduction to Blue/Green Deployments
This lesson introduces the core concepts of blue/green deployments. After learning the basics, we’ll discuss the different patterns used to perform blue/green deployments. Finally, we’ll walk through two simple patterns, setting the stage for a deeper dive into the remaining patterns later in the section.
Update DNS Routing with Route 53
In this lesson, we will learn how to use Route 53 to perform blue/green deployments. This pattern uses a DNS or IP address endpoint to switch over from a blue environment to a green environment. We can also perform a switchover from within the OpsWorks and Elastic Beanstalk consoles instead of directly interacting with Route 53. We will walk through how to change the alias record in Route 53 to perform a switchover as well as take a look at weighted routing in Route 53.
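A weighted-routing switchover can be expressed as a Route 53 change batch. The fragment below shifts 10% of traffic to the green environment (domain names, record values, and weights are illustrative):

```json
{
  "Comment": "Shift 10% of traffic to the green environment",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "blue",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "blue-env.us-east-1.elb.amazonaws.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "CNAME",
        "SetIdentifier": "green",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "green-env.us-east-1.elb.amazonaws.com" }]
      }
    }
  ]
}
```

Adjusting the two weights over time (90/10, 50/50, 0/100) completes the cutover gradually; a low TTL keeps clients from caching the old answer for long.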
Swap the ASG behind the ELB
This lesson focuses on how to perform blue/green deployments using Auto Scaling groups in the AWS Management Console. In this lesson, we will not use Route 53 to facilitate a blue/green deployment. Instead, we will attach a second Auto Scaling group (Green) to the Elastic Load Balancer and gradually increase it to the desired size. We will then decommission the instances in the original Auto Scaling group. For more on Elastic Load Balancers, see: https://aws.amazon.com/elasticloadbalancing/ https://aws.amazon.com/elasticloadbalancing/faqs/?nc=sn&loc=6
Update ASG Launch Configurations
In this lesson, we’ll learn how to use Auto Scaling group launch configurations to perform blue/green deployments from the AWS Management Console. When we can’t or don’t want to use Route 53 to manage traffic flow to the blue and green environments, we can instead create a second launch configuration (Green) and attach it to our Auto Scaling group. We then double the size of the Auto Scaling group, and the new instances will be created using the Green launch configuration. Once these instances have been successfully deployed, we can decommission the old (Blue) instances.
A/B Testing Overview
This lesson explains the concept of A/B testing. Although A/B testing is not the same as blue/green deployments, they are similar. Like blue/green deployments, A/B testing is used to compare two different versions of an application in order to find out which is more effective at engaging users. We can manage two versions of an application and the traffic flow to each using Route 53 and weighted routing. In this lesson, we’ll take a look at some of these techniques using the AWS Management Console, Route 53, and CloudFront.
QUIZ: AWS DevOps Blue/Green Deployments
AWS DevOps Scenarios
The Scenario Solver
Scenario Solver Introduction
This lesson introduces the Scenario Solver, a learning tool that provides scenarios with partial graphical solutions and enables you to use your knowledge of AWS DevOps to come up with solutions.
CloudWatch Scenarios
In this lesson, we’ll take a look at two CloudWatch scenarios in the Scenario Solver. This activity will test your understanding of CloudWatch and the CloudWatch tools that interact with other AWS services and challenge you to create a well-designed logging and alarm solution.
Auto Scaling Scenarios
This lesson uses the Scenario Solver to test your understanding of Auto Scaling groups. You’ll use various Auto Scaling tools and techniques to solve the scenarios presented.
Elastic Beanstalk Scenarios
This lesson will test your ability to perform deployments using Elastic Beanstalk. To solve the presented scenarios, you’ll need to understand the tools and techniques used to perform deployments with Elastic Beanstalk.
CloudFormation Scenarios
This lesson will test your ability to perform deployments with CloudFormation. To solve the presented scenarios, you’ll need a solid grasp of the tools and techniques available for performing deployments in CloudFormation. Update: AWS now recommends that you use a CreationPolicy attribute instead of wait conditions. For more details, see: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-waitcondition.html
EC2 Scenarios
This lesson will test your ability to perform deployments using EC2. To solve the presented scenarios, you’ll need a solid understanding of the tools and techniques available to you when working with EC2. Update: AWS Secrets Manager is now the preferred service for storing database credentials, API keys, and other secrets. For more details on AWS Secrets Manager, see: https://aws.amazon.com/secrets-manager/
OpsWorks Scenarios
This lesson will test your ability to perform deployments with OpsWorks. Understanding the fundamentals of OpsWorks, including the construction and hierarchy of stacks, is an important skill set for the DevOps engineer.
Deployment Pipelines with AWS Developer Tools
Introduction to Deployment Pipelines
This lesson provides an introduction to Deployment Pipelines. In this section of the course, Deployment Pipelines will be examined and constructed using AWS Developer Tools, including AWS CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. In addition, Git and GitHub will be examined with a specific eye on how these tools interact with the AWS Developer Tools suite. The focus of this module is creating Deployment Pipelines to simplify and automate deployments. You will find all source code for this section in the Git repository here: https://github.com/natonic/DeploymentPipeline
AWS Developer Tools Suite
In this lesson, the student will look at the AWS Developer Tools suite. The lesson reviews each of the service offerings used throughout this section of the course: AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline. Additionally, this lesson provides an overview of the other services in Developer Tools: AWS Cloud9, AWS CodeStar, and AWS X-Ray.
Introduction to GitHub
This lesson provides an introduction to GitHub, a web-based hosting service for version control using Git. Git and GitHub can be used with AWS Deployment Pipelines with or without AWS CodeCommit.
This lesson provides a walkthrough of GitHub. The student will learn to use GitHub by walking through most of the primary operations it provides.
Introduction to CodeCommit
In this lesson, the student will be introduced to AWS CodeCommit. CodeCommit will be examined with respect to its interaction with GitHub. Additionally, the student will learn about CodeCommit as part of a Deployment Pipeline.
This lesson provides a walkthrough on using CodeCommit. This includes the initial configuration of CodeCommit, including setting up an appropriate IAM user and permissions for CodeCommit. The student will also be shown how to create HTTPS Git credentials for CodeCommit. The student will then create a repository in CodeCommit and, much like in the GitHub lesson, perform several of the key operations in CodeCommit.
Introduction to CodeBuild
This lesson provides an introduction to AWS CodeBuild. The student will be introduced to the features of CodeBuild and the platforms on which it can be used, and will begin preparing for a demonstration of CodeBuild in the next lesson.
This is an in-depth lesson on CodeBuild. The student will be introduced to the primary artifacts necessary to perform a build using CodeBuild, including the all-important buildspec.yml file. This lesson covers all of the configuration and sets the stage for the CodeBuild demonstration walkthrough in the next lesson.
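As a rough illustration of what that file looks like (the runtime, commands, and directory names below are hypothetical, not from the course project), a buildspec.yml declares build phases and output artifacts:

```yaml
# Hypothetical minimal buildspec.yml for a Node.js project. CodeBuild
# reads the phase names (install, pre_build, build, post_build) and the
# artifacts section to drive the build.
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - npm install
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - echo Build completed
artifacts:
  files:
    - '**/*'
  base-directory: dist   # assumes the build writes output to dist/
```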
Performing an Application Build with CodeBuild
This lesson provides a walkthrough on performing an application build with CodeBuild. In the previous two lessons, the student was ramped up on CodeBuild and configuring CodeBuild in preparation for a build. In this lesson, the student will be walked through using the CodeBuild Management Console to create a build project and finally to run the build project.
Introduction to CodeDeploy
This lesson provides an introduction to AWS CodeDeploy. The student will be introduced to the capabilities of CodeDeploy and the appspec.yml file, which is instrumental in performing deployments with CodeDeploy. This lesson provides a detailed look at the inner workings of the CodeDeploy service.
CodeDeploy Pre-requisite Configuration
This lesson details the security prerequisites necessary before performing deployments with CodeDeploy. This includes provisioning an IAM user, installing the AWS CLI, creating a service role for CodeDeploy, and creating an IAM instance profile for the EC2 instances being used during deployment.
CodeDeploy Walkthrough
This lesson walks the student through a deployment using CodeDeploy. The lesson takes an in-depth look at the appspec.yml file, which is required for doing deployments with CodeDeploy, and then concludes with the start of the CodeDeploy walkthrough by configuring an Auto Scaling Group for use with CodeDeploy.
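As a sketch of the file's role (the destination path and script names here are hypothetical), an appspec.yml for an EC2/on-premises deployment maps the revision's files onto the instance and wires lifecycle hooks to scripts bundled with the revision:

```yaml
# Hypothetical appspec.yml for an EC2/on-premises deployment.
version: 0.0
os: linux
files:
  - source: /                    # everything in the revision bundle...
    destination: /var/www/html   # ...lands in the web root
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
```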
CodeDeploy Walkthrough Part 2
This lesson completes the deployment using CodeDeploy and verifies the successful deployment. After creation of an Auto Scaling Group in the previous lesson, CodeDeploy is used to deploy to the instance in the Auto Scaling Group. The Auto Scaling Group is then scaled up to verify that any new instances trigger a new deployment to the Auto Scaling Group.
Introduction to CodePipeline
This lesson provides an introduction to AWS CodePipeline. Everything in this section of the course relates to the end product, which is a Deployment Pipeline using CodePipeline. So this lesson introduces the student to CodePipeline and ties everything together. The student will be introduced to the features of CodePipeline in preparation for a walkthrough in the next lesson.
In this lesson, the student will be walked through using CodePipeline to orchestrate an application deployment. This is the final lesson in the section and everything prior to this lesson builds up to this final implementation of AWS CodePipeline.
Deployments With AWS Lambda
Introduction to Lambda
In this lesson, the student will be introduced to AWS Lambda. This includes a definition of what Lambda is, an overview of its uses and how it works, as well as an introduction to AWS Lambda from the AWS Management Console.
How Lambda Works
This lesson provides the student with an in-depth look at how Lambda works. Using the AWS Management Console and an AWS-provided example, the student can see the inner workings of AWS Lambda. This includes event sources and the events that trigger a Lambda function. Additionally, the items the developer needs to provide are covered, such as the code for the Lambda function, the memory allocation, and the execution timeout.
Simple Lambda Walkthrough
This lesson provides the student with a walkthrough on configuring and deploying a Lambda function. A Lambda function is created from the AWS Management Console and all of its components are examined. The Lambda function, which is an AWS sample function, is then deployed and run several times to examine the runtime operation of the function. Amazon CloudWatch, which records runtime information for Lambda, is also examined. Update: Lambda functions can now run for up to 15 minutes. For the latest Lambda limits, see: https://docs.aws.amazon.com/lambda/latest/dg/limits.html
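The handler contract at the heart of that walkthrough can be sketched in Python; the function name, event fields, and return shape below are illustrative (not the AWS sample itself), and the local invocation mimics a console test invoke:

```python
# Minimal sketch of a Python Lambda handler: an event dict in, a
# JSON-serializable result out.
import json

def lambda_handler(event, context):
    # Lambda passes the trigger payload as `event`; `context` carries
    # runtime metadata (request ID, remaining time) and can be None locally.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation with a hand-built event, mimicking a test event
# configured in the Lambda console.
result = lambda_handler({"name": "DevOps"}, None)
print(result["body"])  # -> {"message": "hello DevOps"}
```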
Simple Lambda Walkthrough With An SQS Event Source
This lesson performs another AWS Lambda walkthrough. This example is a bit more complex than the previous one and continues a gentle ramp-up on Lambda for the student. The walkthrough entails configuring Amazon SQS as the Lambda event source and starts by setting up a Lambda execution role for SQS. An SQS queue is then created and configured as the event source. To test the configuration, messages are sent to the queue, which triggers the Lambda function.
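A sketch of the receiving side: with SQS as the event source, Lambda delivers a batch of messages under `event["Records"]`, each carrying the raw message in its `body` field. The hand-built event below mirrors that documented shape (no queue is involved, and the upper-casing is just placeholder processing):

```python
# Sketch of a handler for an SQS-triggered Lambda function.
def lambda_handler(event, context):
    processed = []
    for record in event["Records"]:
        # Each record's body is the raw message that was sent to the queue.
        processed.append(record["body"].upper())
    return processed

# Hand-built event resembling what SQS delivers to Lambda.
fake_event = {
    "Records": [
        {"messageId": "1", "body": "first message"},
        {"messageId": "2", "body": "second message"},
    ]
}
print(lambda_handler(fake_event, None))  # -> ['FIRST MESSAGE', 'SECOND MESSAGE']
```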
Lambda Deployments With CloudFormation
This lesson provides a walkthrough on using CloudFormation to create a Lambda function which can be used to retrieve AMI IDs in real time for EC2 deployments. A thorough investigation of the CloudFormation template includes a discussion of the custom resource, the Lambda function, and the Lambda execution role, all created within the CloudFormation template. A CloudFormation stack is then built from the template, and verification is performed to ensure that an EC2 instance was built from an AMI ID retrieved in real time.
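The wiring between the pieces can be sketched as a template fragment built in Python and emitted as JSON. The logical names and returned attribute below are hypothetical, and the Lambda function and execution role resources are elided: the point is that the custom resource invokes the function via ServiceToken, and the instance reads the looked-up AMI ID with Fn::GetAtt:

```python
# Sketch of the key pieces of a custom-resource template, as a Python
# dict that json.dumps turns into a valid JSON CloudFormation body.
import json

template = {
    "Resources": {
        "AMIInfo": {
            "Type": "Custom::AMIInfo",
            "Properties": {
                # ServiceToken points CloudFormation at the (elided)
                # Lambda function that looks up the latest AMI ID.
                "ServiceToken": {"Fn::GetAtt": ["AMILookupFunction", "Arn"]},
                "Region": {"Ref": "AWS::Region"},
            },
        },
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Reads the "Id" attribute the Lambda function returned
                # in its custom-resource response Data.
                "ImageId": {"Fn::GetAtt": ["AMIInfo", "Id"]},
                "InstanceType": "t2.micro",
            },
        },
    },
}
print(json.dumps(template, indent=2))
```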
This lesson provides a walkthrough on how to version Lambda functions. Additionally, Lambda Aliases are introduced along with the role Aliases play in versioning and deploying Lambda functions.
API Gateway Deployments
Introduction to API Gateway
This lesson introduces the student to AWS API Gateway. It answers the question "What is API Gateway?" and digs a bit deeper into what API Gateway offers for deploying APIs. The introductory knowledge presented in this lesson will set the student up to begin using API Gateway as they progress through this section of the course.
How API Gateway Works
This lesson goes into detail about how API Gateway works. Using an AWS example, the student can walk through using an API to call a back-end data store. Additionally, the inner workings of API Gateway during this process will be examined, and a deployment of an API will be performed.
REST API vs WebSocket API
This lesson highlights the differences between REST APIs and WebSocket APIs, then takes a deeper dive into WebSocket APIs.
API Gateway Private Endpoints
This lesson introduces the student to API Gateway private endpoints. The use case for private endpoints is allowing only services within your VPC to access your API through API Gateway. The lesson walks through configuring a VPC endpoint within a private subnet. An API in API Gateway is also configured, and then finally a Lambda function is invoked from within the private subnet to communicate with the API.
API Gateway With Lambda
This lesson walks the student through configuring API Gateway as a front-end for invoking a Lambda function. The lesson goes into detail on the specific interactions that take place from the client to API Gateway to ultimately invoking the Lambda function.
Deploying And Versioning Your APIs
This lesson will use a Lambda function fronted by an API in API Gateway to demonstrate different options for deployments. By using versioning and aliases, the student will learn to invoke different versions of a Lambda function from API Gateway. Aliases are an efficient means of quickly changing the version to which an API points. Together, these methods ease deployments, greatly reduce the chance of erroneous deployments, and simplify rollback if errors occur.
AWS Secrets Manager
What is Secrets Manager and How Does It Work
In this lesson, the student will learn about AWS Secrets Manager and how it can be used to secure credentials such as database passwords, access keys, and API keys. Secrets Manager is an AWS service which can be used to store your secrets in a secure and encrypted manner, allowing things such as database access credentials to be removed from application code and stored securely. The lesson concludes with a walkthrough on configuring and storing secrets in AWS Secrets Manager.
Creating and Storing Secrets
AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs. This lesson uses the AWS Management Console to walk the student through creating and storing secrets in AWS Secrets Manager.
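On the application side, a secret is typically stored as a JSON string, and GetSecretValue returns it in the `SecretString` field of the response. The sketch below parses a hand-built response shaped like boto3's output (the secret name and key names are hypothetical; no AWS call is made):

```python
# Sketch of consuming a secret retrieved from Secrets Manager instead of
# hard-coding credentials in application code.
import json

def extract_credentials(get_secret_value_response):
    """Parse the SecretString field of a GetSecretValue-style response."""
    secret = json.loads(get_secret_value_response["SecretString"])
    return secret["username"], secret["password"]

# Hand-built response mirroring the shape of
# client("secretsmanager").get_secret_value(SecretId=...) output.
fake_response = {
    "Name": "prod/app/db",  # hypothetical secret name
    "SecretString": json.dumps({"username": "appuser", "password": "s3cret"}),
}
print(extract_credentials(fake_response))  # -> ('appuser', 's3cret')
```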
You can configure AWS Secrets Manager to automatically rotate the secret for a secured service or database. Secrets Manager natively knows how to rotate secrets for supported Amazon RDS databases. However, Secrets Manager can also enable you to rotate secrets for other databases or third-party services. Because each service or database can have a unique way of configuring its secrets, Secrets Manager uses a Lambda function that you can customize to implement the service-specific details of how to rotate a secret for whatever database or service you choose. This lesson walks the student through setting up the rotation of secrets from the AWS Management Console.
How to Prepare for the Exam
This lesson goes over the steps you can take to ensure your success on the exam. We’ll outline a full strategy for preparing for the exam, including study tips, key terms you should know, how and when to schedule your exam, and helpful test-taking strategies.
What's Next After Certification?
This lesson discusses your options for continued learning after you've completed the AWS Certified DevOps Professional course.
In this video, Linux Academy Founder and CEO Anthony James explains the steps you can take to get recognized after successfully passing your certification exam. Anthony will explain how to connect with him on LinkedIn and Twitter so that he can endorse you for the skills you have mastered by getting your certification.
AWS Certified DevOps Engineer - Professional Practice Exam