Google Cloud Certified Professional Cloud Architect
April 4th, 2019
Team Lead Google Cloud in Content
Google Cloud Platform is one of the fastest-growing cloud service platforms offered today, letting you run your applications and data workflows at 'Google-sized' scale.
The Google Cloud Certified Professional Cloud Architect certification is one of the most highly desired IT certifications out today. It is also one of the most challenging exams offered by any cloud vendor. Passing this exam will take many hours of study, hands-on experience, and an understanding of a very wide range of GCP topics.
Luckily, we're here to help you out! This course is designed to be your best single resource to prepare for and pass the exam to become a certified Google Cloud Architect.
So let's get started!
Interactive Diagram: https://interactive.linuxacademy.com/diagrams/MasterBuildersGuide.html
Welcome to our course on preparing for the Google Cloud Certified Professional Cloud Architect certification. This introductory lesson will go over what to expect, as well as the prerequisite knowledge required to be successful with this certification. The courses and learning paths listed as required or recommended prerequisites are below:
Courses
Google Cloud Essentials
Google Cloud Security Essentials
Google Cloud Associate Cloud Engineer
Google Cloud Kubernetes Engine Deep Dive
Google Cloud App Engine Deep Dive
Learning Paths
Junior DevOps Learning Path
Junior Google Cloud Engineer Learning Path
Using the Master Builder's Guide to Google Cloud - Interactive Diagram
This lesson is going to cover the interactive diagram we will be using throughout this course, which is also available for you to use as a study reference on your own. The direct link is below and also available from the main syllabus page. https://interactive.linuxacademy.com/diagrams/MasterBuildersGuide.html
Purpose of this Course
In this lesson, we will discuss what exactly the role of a Google Cloud Architect is in the context of the role of a traditional architect.
Architect Exam and Course Overview
In this lesson, we will take a look at how the Google Cloud Architect exam is organized, and how we will approach preparing for the exam in this course.
Core Management Services
Cloud Resource Manager
In this lesson, we will preview the management services we will discuss for the next few sections and why they are important to the role of a cloud architect.
In this lesson, we will cover how Google Cloud's resources are organized from a top level view and how permissions are inherited from other layers in the hierarchy.
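As a quick command-line sketch of exploring the hierarchy (placeholder values such as (ORG_ID) are illustrative, not from the lesson):

```shell
# List the organizations your account can see
gcloud organizations list

# List folders under an organization
gcloud resource-manager folders list --organization=(ORG_ID)

# List projects, then inspect one (output includes its parent in the hierarchy)
gcloud projects list
gcloud projects describe (PROJECT_ID)
```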
Resource Hierarchy Hands On
In this lesson, we will take a brief tour of the Google Cloud organization layers, taking a look at the organization, folder, and project nodes.
Let's now talk about and demonstrate labels, which are key-value pairs you attach to your resources for organizational purposes.
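As a hedged example of working with labels from the command line (the instance and label names here are hypothetical):

```shell
# Attach labels to an existing instance
gcloud compute instances add-labels (INSTANCE_NAME) --zone (ZONE) \
  --labels=environment=dev,team=research

# Filter a resource listing by label
gcloud compute instances list --filter="labels.environment=dev"
```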
Quotas are an important 'soft brake' to help prevent spikes in usage and cost. We will learn more about them in this lesson.
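A brief sketch of how you might view current quotas from the command line (not from the lesson itself):

```shell
# View project-wide quotas and current usage
gcloud compute project-info describe --project (PROJECT_ID)

# View per-region quotas (CPUs, addresses, disks, etc.)
gcloud compute regions describe (REGION)
```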
Cloud IAM Overview
The ability to manage access to GCP resources for users and applications is a critical skill that needs to be mastered. We are going to start our deep dive into IAM by discussing the main pillars of IAM and start working through the various layers of understanding.
IAM Hands-On Part 1
It's now time to take a hands-on tour of working with IAM on Google Cloud. In this first part of a two part process, we will merely view different IAM settings and layers, which we will follow up by actually making some changes. From the lesson, the command to retrieve an IAM policy in YAML format is below.
gcloud projects get-iam-policy (PROJECT_ID) > (filename).yaml
IAM Hands-On Part 2
We are going to continue working with IAM by actually making some changes. We are going to add roles at both the project and organization layers and show how to do the same via the command line. Commands used in this video are below:
Retrieve IAM policy and download in YAML format:
gcloud projects get-iam-policy (PROJECT_ID) > (filename).yaml
Update IAM policy from updated file:
gcloud projects set-iam-policy (PROJECT_ID) (filename).yaml
Add a single binding without downloading the file:
gcloud projects add-iam-policy-binding (PROJECT_ID) --member user:(USER_EMAIL) --role roles/editor
Service accounts are a special type of member account that are used by applications and servers instead of people. They can also be both a member and a resource at the same time. We will take a closer look at service accounts before we go into a hands-on demonstration.
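As an illustrative sketch of the typical service account workflow (the names used here are placeholders, not from the lesson):

```shell
# Create a service account
gcloud iam service-accounts create (SA_NAME) --display-name "(DISPLAY_NAME)"

# Grant the service account a role on the project
gcloud projects add-iam-policy-binding (PROJECT_ID) \
  --member serviceAccount:(SA_NAME)@(PROJECT_ID).iam.gserviceaccount.com \
  --role roles/storage.objectViewer

# Create a key file for use outside GCP (guard this file carefully)
gcloud iam service-accounts keys create key.json \
  --iam-account (SA_NAME)@(PROJECT_ID).iam.gserviceaccount.com
```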
Service Accounts Hands On
After a conceptual overview, let's now go through a hands-on demonstration of working with service accounts in the context of using them on Compute Engine.
IAM Best Practices
Let's close out this section by discussing best practices for IAM. In general, you want to grant the smallest scope of rights necessary, which is known as the 'principle of least privilege'.
The ability to manage access to billing roles and work with billing data is yet another important skill for a Cloud Architect. We will take a look at this role before going into a hands-on demonstration.
Hands On - Billing and IAM
In this lesson, we will demonstrate how to work with billing IAM roles at both the organization and billing account level. We will explore how to navigate billing options in the web console, create billing exports, and view a sample dataset of billing data in BigQuery. The queries we used for this demo can be found below for your convenience:
Find all charges that were more than 3 dollars:
SELECT product, resource_type, start_time, end_time, cost, project_id, project_name, project_labels_key, currency, currency_conversion_rate, usage_amount, usage_unit
FROM `cloud-training-prod-bucket.arch_infra.billing_data`
WHERE (cost > 3)
Find which product had the highest total number of records:
SELECT product, COUNT(*)
FROM `cloud-training-prod-bucket.arch_infra.billing_data`
GROUP BY product
LIMIT 200
Which product most frequently cost more than a dollar:
SELECT product, cost, COUNT(*)
FROM `cloud-training-prod-bucket.arch_infra.billing_data`
WHERE (cost > 1)
GROUP BY cost, product
LIMIT 200
Monitoring with Stackdriver
The Stackdriver suite of products allows you to more effectively log, monitor, and troubleshoot everything that is happening in both your GCP and AWS environments. We are going to start this section off by briefly discussing the entire suite before we go into more in-depth discussions.
Stackdriver Logging Concepts
Stackdriver Logging is used to centrally collect all log events that take place in your GCP, AWS, and other supported environments. Let's take a look at some of the need-to-know concepts of Stackdriver Logging before we go into a hands-on demonstration.
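For reference, a hedged sketch of reading logs and creating an export sink from the command line (the filter and names are illustrative):

```shell
# Read recent error-level entries from Compute Engine instances
gcloud logging read "resource.type=gce_instance AND severity>=ERROR" --limit 10

# Export matching entries to a Cloud Storage bucket
gcloud logging sinks create (SINK_NAME) storage.googleapis.com/(BUCKET) \
  --log-filter='severity>=WARNING'
```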
Hands On - Stackdriver Logging
Let's now take a hands-on tour of different parts of Stackdriver Logging. We will work with different types, create an export sink, and much more.
Stackdriver Monitoring Concepts
We are now going to explore the Stackdriver Monitoring product on Google Cloud, which provides easy to use native monitoring tools for both GCP and AWS, as well as a number of other features such as alerts and uptime checks on applications. Let's go through a brief conceptual lesson before we dive into a hands-on demo.
Hands On - Stackdriver Monitoring
Let's now go through a hands-on demo on Stackdriver Monitoring. In this lesson we will cover how to set up your monitoring workspace, create custom dashboards, configure uptime checks and alerts, and much more.
Trace, Error Reporting, and Debug Concepts
Stackdriver Trace, Debug, and Error Reporting are used in conjunction with each other to find performance bottlenecks in an application, and to troubleshoot and fix errors. We will take a conceptual view of each service (along with the Profiler product in beta) before we go into a hands-on demo.
Hands On - Stackdriver Trace, Error Reporting, and Debug
We are now going to go through a hands-on demonstration of working with Stackdriver Trace, Error Reporting, and Debug to trace latency and troubleshoot a broken application. If you want to deploy the same application to follow along, the GitHub repo and steps to deploy the application are below. All commands need to be run in Cloud Shell.
git clone https://github.com/GoogleCloudPlatformTraining/cp100-bookshelf
cd ~/cp100-bookshelf/app-engine
pip install -r requirements.txt -t lib
gcloud app deploy
Core Building Blocks
Google Cloud Storage
Cloud Storage Concepts
Cloud Storage is a key component of many design architectures, being able to hold anything and everything as well as migrate data between services. We will take a quick crash course in the basic concepts of Google Cloud Storage (GCS) in this lesson before we dive into more advanced topics.
Using the gsutil Command
Becoming comfortable with the gsutil command is yet another skill a GCP Cloud Architect needs. We will cover the basics in this lesson. References for commands are listed below:
Bucket syntax:
gs://(BUCKET_NAME)/(OBJECT_NAME)
gsutil Command Structure:
gsutil (command) (options) (target)
gsutil (command) (options) (source) (target)
Example: Copy a file to a storage bucket:
gsutil cp file1.txt gs://mybucket
List bucket directory:
gsutil ls -a gs://mybucket
Top-level command options (before the command):
gsutil -m cp -r gs://(BUCKET) (target)
References and help:
GCP documentation: https://cloud.google.com/storage/docs/gsutil
Command line help:
gsutil help (command)
gsutil help cp
gsutil help options
Cloud Storage Security Concepts
Cloud Storage security is a critically important topic. When not taken seriously, poor Cloud Storage security practices can lead to unauthorized public data access, which places confidential customer and company information at risk. In this lesson, we will discuss GCS security concepts, which will be followed by a hands-on demonstration.
Hands On - Cloud Storage Security
We are now going to go hands-on with the security concepts we discussed in the previous lesson. Reference from the hands-on demo is listed below:
Assign IAM roles to a bucket:
gsutil iam ch user:(user_email):(role1,role2) gs://(BUCKET)
Remove IAM roles from a bucket:
gsutil iam ch -d user:(user_email):(role1,role2) gs://(BUCKET)
Remove all roles from a bucket for a given user:
gsutil iam ch -d user:(user_email) gs://(BUCKET)
Assign ACL roles to buckets and objects (O = owner, R = read, W = write):
gsutil acl ch -u (user_email):(O/R/W) gs://(BUCKET)
Delete all ACLs for a user:
gsutil acl ch -d (user_email) gs://(BUCKET)
Signed URLs:
1. Create a service account with a key
2. Upload the key to Cloud Shell (or add it to your current CLI environment)
3. Generate the signed URL:
gsutil signurl -d (time_period, e.g. 10m) (keyfile.json) gs://(BUCKET)/(object)
Object Versioning and Lifecycle Management
The ability to create and retain multiple versions of an object and protect it from deletion helps to keep your objects safe and available. Lifecycle Management policies let you automatically manage the buildup of objects over time to help save on space and costs. We will discuss both in this conceptual lesson before we go into a hands-on demonstration of each concept in the next one.
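As a sketch of what a lifecycle policy file might look like (the ages and storage classes here are arbitrary examples): objects older than 30 days move to Nearline, and non-current (archived) versions older than 365 days are deleted.

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 30}
      },
      {
        "action": {"type": "Delete"},
        "condition": {"age": 365, "isLive": false}
      }
    ]
  }
}
```

A file like this would be applied to a bucket with the gsutil lifecycle set command.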
Hands On - Object Versioning
In this lesson, we will go hands-on working with both object versioning and lifecycle management. Commands from the interactive diagram are listed below for further reference.
Check current versioning policy:
gsutil versioning get gs://<BUCKET>
Enable Object Versioning:
gsutil versioning set on gs://<BUCKET>
Check full object details in bucket:
gsutil ls -a gs://<BUCKET>
Download current lifecycle policy to local machine to edit:
gsutil lifecycle get gs://<BUCKET> > filename.json
Set new lifecycle policy after making above edits:
gsutil lifecycle set filename.json gs://<BUCKET>
gsutil Command Line A-Z
We are now going to take a tour of various gsutil commands that we will explore in the course of performing a variety of tasks on Cloud Storage. Becoming comfortable with gsutil commands is an essential skill for a Google Cloud Architect. Commands used in this lesson are also listed below.
Remove a bucket:
gsutil rm -r gs://<BUCKET>
Create a new bucket:
gsutil mb -l <location> -c <class> gs://<BUCKET>
Copy local files to bucket (use -m for parallel threading):
gsutil -m cp -r <files/directory> gs://<BUCKET>
Check versioning policy:
gsutil versioning get gs://<BUCKET>
Enable versioning:
gsutil versioning set on gs://<BUCKET>
View bucket folder contents:
gsutil ls gs://<BUCKET>/<folder>
View all subfolder contents:
gsutil ls -r gs://<BUCKET>
Change storage class in an existing bucket (disable versioning first; add -m for parallel threading):
gsutil versioning set off gs://<BUCKET>
gsutil -m rewrite -r -s NEARLINE gs://<BUCKET>/*
Give public read access to an object via ACL:
gsutil acl ch -u AllUsers:R gs://<BUCKET>/<object>
Revoke public access:
gsutil acl ch -d AllUsers gs://<BUCKET>/<object>
Delete bucket:
gsutil rm -r gs://<BUCKET>
Managed Databases on Google Cloud Platform
Managed Databases Overview
In this section we are going to discuss the managed database options on Google Cloud and the exam perspective on these topics. We are going to start with a very high-level overview covering what differences exist among the different database options on GCP.
Managed Databases on Google Cloud
In this lesson we will take a closer look at the individual managed databases we are concerned about for the exam. Links and commands used in this lesson are shared below. Storage options at a glance: https://cloud.google.com/storage-options/ MS SQL Server - Configure High Availability: https://cloud.google.com/compute/docs/instances/sql-server/configure-availability Upload custom index .yaml config file to Cloud Datastore: gcloud datastore create-indexes
Cloud SQL Closer Look
Let's take a closer look at some management considerations for the Cloud SQL service to improve performance and availability.
We are now going to take a closer look at BigQuery and the different management concepts that you will need to become familiar with in the role of a cloud architect. Resources referenced in this lesson are below:
BigQuery IAM Roles: https://cloud.google.com/bigquery/docs/access-control
View BigQuery Jobs/Query history using command line:
bq ls -j -a (myproject)
Create a partitioned table with an expiration time period:
bq mk --time_partitioning_type=DAY --time_partitioning_expiration=259200 [DATASET].[TABLE]
Virtual Private Clouds (VPCs) are simply a virtual version of a physical network. In this lesson we will cover the primary concepts of working with a VPC before we go into more detailed hands-on. Link for Subnetting Fundamentals course (if needed) is below: https://linuxacademy.com/linux/training/course/name/subnetting-fundamentals
We are now going to expand the concepts we discussed in the VPC concepts lesson by exploring the "ins-and-outs" of both the default and custom VPC environments.
We are now going to take both a conceptual and hands-on look at firewalls, which will be a very important exam topic. Firewalls are the component that manage what network traffic is either allowed or denied when either coming in, or going out from your network resources.
Hands-On - Firewall Rules
We are going to continue discussing firewalls by going through a hands-on demo on creating both a wide-scope and narrow-scope rule on a demonstration network. NOTE: After this lesson was created, Google made a change on their end that removed subnets as a source filter option. To filter by subnets, you now need to enter the IP range of the subnet. The hands on lab at the end of this section addresses this change.
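For reference, a hedged example of creating rules similar to the ones in this demo (the names, tags, and IP ranges are illustrative):

```shell
# Wide-scope rule: allow HTTP from anywhere to instances tagged http-server
gcloud compute firewall-rules create allow-http \
  --network (NETWORK) --direction INGRESS --allow tcp:80 \
  --source-ranges 0.0.0.0/0 --target-tags http-server

# Narrow-scope rule: allow SSH only from a specific subnet's IP range
gcloud compute firewall-rules create allow-ssh-internal \
  --network (NETWORK) --direction INGRESS --allow tcp:22 \
  --source-ranges 10.0.1.0/24
```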
Shared VPC Concepts
In these next two lessons, we will discuss shared VPCs, which allow you to share a VPC network across multiple projects, letting you host resources on a single network while still dividing billing and access across multiple projects.
Hands On - Shared VPC
Let's now go hands-on and create a shared VPC network that will be used in a host project, and set up a service project to host resources on it.
Interconnecting Networks (Hybrid Networking)
The Power of the Network
We are now going to discuss how to connect your VPC network to other networks via different interconnect options. We are going to start by briefly discussing what makes Google Cloud's network, and their approach to global networking, different.
Connecting your Network to Google
In this lesson, we will explore the methods available to connect your network to Google, which include Cloud Interconnect, Cloud VPN, and Peering. Each one has unique pros and cons as well as when you would want to use it depending on your requirements.
Google Cloud VPN allows you to connect your Google Cloud VPC network to an external network over a secure IPSec connection. We will discuss the concepts for this service in this lesson, which we will follow up with a couple hands-on demonstrations.
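As a rough command-line outline of a classic (static-route) Cloud VPN setup, with placeholder values throughout:

```shell
# Create the VPN gateway and reserve a static external IP for it
gcloud compute target-vpn-gateways create vpn-gw-1 --network (NETWORK) --region (REGION)
gcloud compute addresses create vpn-ip-1 --region (REGION)

# Forward ESP, IKE (UDP 500), and NAT-T (UDP 4500) traffic to the gateway
gcloud compute forwarding-rules create fr-esp --region (REGION) \
  --address (VPN_IP) --ip-protocol ESP --target-vpn-gateway vpn-gw-1
gcloud compute forwarding-rules create fr-udp500 --region (REGION) \
  --address (VPN_IP) --ip-protocol UDP --ports 500 --target-vpn-gateway vpn-gw-1
gcloud compute forwarding-rules create fr-udp4500 --region (REGION) \
  --address (VPN_IP) --ip-protocol UDP --ports 4500 --target-vpn-gateway vpn-gw-1

# Create the tunnel to the peer network, then a static route through it
gcloud compute vpn-tunnels create tunnel-1 --region (REGION) \
  --peer-address (PEER_IP) --shared-secret (SECRET) --ike-version 2 \
  --target-vpn-gateway vpn-gw-1 \
  --local-traffic-selector 0.0.0.0/0 --remote-traffic-selector 0.0.0.0/0
gcloud compute routes create route-to-peer --network (NETWORK) \
  --destination-range (PEER_SUBNET_CIDR) --next-hop-vpn-tunnel tunnel-1 \
  --next-hop-vpn-tunnel-region (REGION)
```

With static routing, a route like the last command is needed for every peer subnet; the Cloud Router lessons show how dynamic routing removes that burden.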
Hands On - Cloud VPN
We are now going to go through a hands-on demonstration of connecting two VPC networks using the Cloud VPN service using static routes. Static routing requires you to manually add every subnet in a peer network that you are connecting to. We will follow this lesson up by doing the same thing, except using dynamic routing with Cloud Router so we can see the difference.
Hands On - Cloud Router
We are now going to demonstrate how dynamic routing using Cloud Router paired with Cloud VPN makes the routing process much easier.
Google Cloud DNS
We are going to take a quick look at the Google Cloud DNS before we finish this section. It does not have a significant presence in the Architect exam, however it does belong in a discussion about interconnecting networks, so we want to be sure to cover it.
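A brief illustrative sketch of managing a zone and a record with gcloud (the domain and zone name are placeholders):

```shell
# Create a public managed zone
gcloud dns managed-zones create example-zone \
  --dns-name="example.com." --description="Example zone"

# Add an A record using a record-set transaction
gcloud dns record-sets transaction start --zone example-zone
gcloud dns record-sets transaction add --zone example-zone \
  --name="www.example.com." --type=A --ttl=300 "(IP_ADDRESS)"
gcloud dns record-sets transaction execute --zone example-zone
```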
Compute Engine - Virtual Machines
Compute Engine Deep Dive
In this section, we are going to take a deep dive in learning about different core concepts with Compute Engine, which will be applicable in many topics throughout this course.
Disks are handled differently on Compute Engine compared to a traditional physical version, because they are network attached, replicated across many physical disks, and are EXTREMELY flexible and powerful. We will discuss persistent disks and other storage options in this lesson before we go into a hands-on demonstration afterwards. File Server Options using Compute Engine: https://cloud.google.com/solutions/filers-on-compute-engine
Hands On - Disks
We are now going to go hands-on working with disks on Compute Engine. This is going to be part 1 of 2 hands-on demonstrations. Command line reference from the lesson is below:
Create disk:
gcloud compute disks create <DISK_NAME> --type=<DISK_TYPE> --size=<SIZE> --zone=<ZONE>
Resize disk:
gcloud compute disks resize <disk_name> --size=<size> --zone=<zone>
Attach disk:
gcloud compute instances attach-disk <instance> --disk=<disk_name> --zone=<zone>
Hands On - Attaching and Resizing Disks
In this second hands-on demonstration using disks on Compute Engine, we will pick up where we left off by initializing the attached disks in their respective operating systems, as well as initializing the grown boot disk on each instance. Command line examples used in this lesson for the Linux instance are below:
Format and mount a new disk
View available disks:
sudo lsblk
Format attached disk:
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
Create mount directory:
sudo mkdir -p /mnt/disks/disk2
Mount disk:
sudo mount -o discard,defaults /dev/sdb /mnt/disks/disk2
Set read/write permissions:
sudo chmod a+w /mnt/disks/
Resize an existing Linux disk
Identify the disk to resize:
sudo lsblk
Resize (grow) the partition:
sudo growpart /dev/sda 1
Extend the file system to use the added space:
sudo resize2fs /dev/sda1
Verify the file system is resized:
df -h
We will now move on to discussing images, which are used to create widely accessible boot sources that you can customize to your liking. We'll have a conceptual lesson followed by a hands-on demonstration.
Hands On - Custom Images
We are now going to go hands-on in working with images. We will create, deprecate, view, use, and delete images using both the web console and command line where applicable. Commands used in this demo are listed below for reference:
Create image via command line:
gcloud compute images create <image_name> --source-disk <disk_name> --source-disk-zone <zone> --family <image_family>
View image family info:
gcloud compute images describe-from-family <image_family>
Deprecate/set active image version:
gcloud compute images deprecate <image_name> --state <STATE>
Delete image:
gcloud compute images delete <image_name>
Let's continue our discussion on Compute Engine by discussing and demonstrating Snapshots, which are a powerful backup method for instances and disks.
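For reference, a hedged sketch of the snapshot workflow (the disk and snapshot names are placeholders):

```shell
# Snapshot an existing persistent disk
gcloud compute disks snapshot (DISK_NAME) --snapshot-names (SNAPSHOT_NAME) --zone (ZONE)

# Restore by creating a new disk from the snapshot (can be in a different zone)
gcloud compute disks create (NEW_DISK_NAME) --source-snapshot (SNAPSHOT_NAME) --zone (ZONE)
```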
Startup and Shutdown Scripts
In this lesson, we will cover the concepts and perform a hands-on demonstration of working with scripts, which are very useful for large scale automation.
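As an illustrative example (not from the lesson), a startup script can be supplied inline or from a local file at instance creation time:

```shell
# Inline startup script that installs Apache on boot
gcloud compute instances create (INSTANCE_NAME) --zone (ZONE) \
  --metadata startup-script='#! /bin/bash
apt-get update
apt-get install -y apache2'

# Or reference a local script file instead
gcloud compute instances create (INSTANCE_NAME) --zone (ZONE) \
  --metadata-from-file startup-script=startup.sh
```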
We will now discuss and demonstrate using preemptible VMs to save costs, with the tradeoff being the short-lived nature of the individual virtual machines. References for the real-life use cases shown in this lesson are below:
MIT professor used preemptible VMs for a massive compute job:
https://cloud.google.com/blog/products/gcp/220000-cores-and-counting-mit-math-professor-breaks-record-for-largest-ever-compute-engine-job
Using preemptible VMs to save costs:
https://medium.com/@duhroach/saving-money-with-preemptive-vms-8acdcf9960c9
Elastic Cloud Infrastructure: Scaling and Automation
Load Balancing and Instance Groups
Force Multipliers - Automation and Scaling
Force multipliers on Google Cloud are what enable you to scale and grow your application at a global level. We will go over a 'big picture' view of these services in this lesson.
We are going to continue our discussion on force multipliers on Google Cloud by now discussing load balancers. There are 5 load balancer options available on Google Cloud, and you need to be familiar with each one. Google documentation for load balancer reference is below: https://cloud.google.com/load-balancing/docs/load-balancing-overview
Instance Groups and Autoscaling
We will continue our discussion on force multipliers by covering instance groups, which are very often paired with load balancers to offer multi-regional, scalable computing.
Hands On - Managed Instance Group
We are now going to go through a three-part hands-on demonstration: creating an instance group for a web app, setting up a load balancer, and updating and triggering autoscaling on our application. This first lesson covers setting up the first version of the managed instance group. For reference, the health check IP ranges that you need to allow in your firewall rules, as mentioned in the lesson, are listed below.
Health check IP ranges you must allow in firewall rules:
130.211.0.0/22
35.191.0.0/16
Hands On - Load Balancer
In this second part of a 3 part demo, we will demonstrate creating an HTTP load balancer to pair with our previous instance group.
Hands On - Autoscaler and Instance Group Updates
We are going to pick up where we left off in the previous lesson; now we are going to update our instance group to a new version of our website, as well as stress test it to view autoscaling in action. Apache Bench reference for stress testing websites is below.
First, install Apache on your version of Linux. For a Debian distro, you'd type:
sudo apt install apache2
Once Apache is installed, the following command will send 1 million requests to your front end, 1,000 at a time (note that the slash at the end of the address is required):
ab -n 1000000 -c 1000 http://(YOUR_FRONTEND_IP)/
Google Cloud CDN
Cloud CDN Concepts
In this very short section, we will bridge off our lessons on load balancers and discuss the Google Cloud Content Delivery Network, which caches data closer to your users for improved performance.
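As a minimal sketch, Cloud CDN is enabled per backend service on an HTTP(S) load balancer (the backend service name is a placeholder):

```shell
# Turn on Cloud CDN for an existing global backend service
gcloud compute backend-services update (BACKEND_SERVICE) --enable-cdn --global
```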
Cloud Deployment Manager
Cloud Deployment Manager Concepts
In this lesson we are going to continue with the theme of discussing force multipliers by going over Cloud Deployment Manager, which allows you to use Infrastructure as Code to create repeatable, reliable deployments. We will start with a conceptual lesson, to be followed up by a brief hands-on demonstration.
Hands On - Cloud Deployment Manager
Let's now go hands-on with a couple of different Deployment Manager scenarios, using both single config files and templates for a more modular approach. The command line references we used in this lesson are listed below.
Create a deployment:
gcloud deployment-manager deployments create (deployment_name) --config (config_file.yaml)
Delete a deployment (web console or command line):
gcloud deployment-manager deployments delete (deployment_name)
Preview a configuration without actually deploying it: same as a regular create, but add --preview as an option.
Deploy a previewed deployment:
gcloud deployment-manager deployments update (deployment_name)
Elastic Cloud Infrastructure: Managed Compute Services
Compute Services Overview
Where Should I Run My Code?
Before we get into exploring management of managed compute services on Google Cloud, we are going to take a big picture view of the differences between the compute services available, and ask the question "Where should I run my code?"
App Engine Overview
We are now going to take a look at the management and administrative tools available in App Engine which we will need to be familiar with for the exam. We will follow this up with a hands-on demonstration of the tools we talk about in this lesson. If you are not familiar with App Engine, I highly recommend going through the App Engine Deep Dive course, as this course will assume you already have some familiarity with the service https://linuxacademy.com/cp/modules/view/id/167
Hands On - App Engine Versions and Memcache
In this hands-on demo, we will work with an App Engine application, look at Memcache service levels, and deploy multiple versions of an application while splitting traffic between them. Command line reference from this lesson is listed below:
To deploy the application used in this lesson, in Cloud Shell, type the following to clone the GitHub repository, browse to the directory, install requirements, and deploy your application to your desired region:
git clone https://github.com/GoogleCloudPlatformTraining/cp100-bookshelf
cd ~/cp100-bookshelf/app-engine
pip install -r requirements.txt -t lib
gcloud app deploy
Deploy an application without promoting it to the 'live' version:
gcloud app deploy --no-promote
Before we dive headfirst into Kubernetes Engine, let's first take a big picture view of the overall process and services involved from packing containers, to storing them, to deploying and running them on Kubernetes Engine.
GKE Administration Concepts
Let's now take a closer look at the administrative and management concepts, terminology, and best practices surrounding Kubernetes Engine. We will follow this up with a hands on demonstration of packaging a Docker container and deploying to GKE. As mentioned in the lesson, below are the highly recommended prerequisite lessons and learning paths for gaining a further understanding of Kubernetes and Google Kubernetes Engine. Google Kubernetes Engine Deep Dive https://linuxacademy.com/google-cloud-platform/training/course/name/google-kubernetes-engine-deep-dive Getting Started with Kubernetes Learning Path https://linuxacademy.com/cp/learningpaths/preview/id/3460
GKE Commands and Hands-On
Let's jump into a hands-on demonstration of creating a web app on Kubernetes Engine. The commands used in this lesson are copied below for your own reference:
Set default region/zone:
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
Enable APIs:
gcloud services enable container.googleapis.com
gcloud services enable containerregistry.googleapis.com
Clone the GitHub repo for the sample application and browse to the directory:
git clone https://github.com/linuxacademy/content-gc-essentials
cd content-gc-essentials/gke-lab-01
Package the code into a Docker container, tagged as version 1 ("build it"). Note: the period at the end is required to build from the current directory:
docker build -t gcr.io/$DEVSHELL_PROJECT_ID/hello-la:v1 .
Check the status of images to ensure success:
docker images
Authenticate gcloud as a Docker credential helper:
gcloud auth configure-docker
Push the Docker container into Container Registry:
docker push gcr.io/$DEVSHELL_PROJECT_ID/hello-la:v1
Create a Kubernetes Engine cluster (nodes) named 'hello-cluster':
gcloud container clusters create hello-cluster --num-nodes=2
Authenticate kubectl to point to the cluster we just made (already done for the last created cluster):
gcloud container clusters get-credentials hello-cluster
Deploy your app ("deploy and run it"), listening on port 80:
kubectl run hello-la --image=gcr.io/$DEVSHELL_PROJECT_ID/hello-la:v1 --port 80
Check out our pods on the nodes:
kubectl get pods
Create a load balancer and expose the application to the Internet on port 80:
kubectl expose deployment hello-la --type=LoadBalancer --port 80 --target-port 80
Find our load balancer's frontend IP address:
kubectl get service
Scale up the deployment by adding a static number of replicas (pods):
kubectl scale deployment hello-la --replicas=3
Or autoscale the application instead with a Horizontal Pod Autoscaler:
kubectl autoscale deployment hello-la --max 6 --min 4 --cpu-percent 50
Statically resize the node pool/cluster (if more than one pool per cluster, specify the pool with --node-pool (pool_name)):
gcloud container clusters resize hello-cluster --size 3
Or enable autoscaling for the cluster instead:
gcloud container clusters update hello-cluster --enable-autoscaling --min-nodes 2 --max-nodes 8
Make changes to the source code, then build the Docker image as version 2:
docker build -t gcr.io/$DEVSHELL_PROJECT_ID/hello-la:v2 .
Push to Container Registry, also as version 2:
docker push gcr.io/$DEVSHELL_PROJECT_ID/hello-la:v2
Update our website by applying a rolling update to the deployment with the new image:
kubectl set image deployment/hello-la hello-la=gcr.io/$DEVSHELL_PROJECT_ID/hello-la:v2
Get log info (logs are written to pods and, by default, also to Stackdriver Logging). View the log on a pod:
kubectl logs (POD_ID)
Other commands/scenarios: upgrade the version of Kubernetes on a cluster:
gcloud container clusters upgrade (cluster_name)
Big Data, Machine Learning, and Data Lifecycle
Big Data and Machine Learning Services
We are now going to obtain a conceptual understanding of the different big data and machine learning services that Google Cloud uses to create a front-to-back lifecycle of data management. We will cover each puzzle piece individually in this lesson, and we'll take a look at the 'big picture' of how they fit together after this.
Now that we've looked at additional pieces of the big data 'puzzle', let's now take a bigger picture view of the entire process of collecting, storing, processing, analyzing and using data via the data lifecycle.
Case Study Overview
We are now going to take a detailed look at the case studies you'll see on the exam. These case study questions will comprise about 30-40% of your total questions, so we will want to be well acquainted with them.
This lesson will explore the first case study, Mountkirk Games. Mountkirk Games is a mobile game developer with serious scaling concerns. We will go into more detail here. A link to the case study is below for your reference. https://cloud.google.com/certification/guides/cloud-architect/casestudy-mountkirkgames-rev2
We'll now go to our second case study, which is Dress4Win, an online fashion based social network that needs help deploying a proof of concept migration to Google Cloud. Link for case study is below. https://cloud.google.com/certification/guides/cloud-architect/casestudy-dress4win-rev2
Time for our final case study lesson, focusing on TerramEarth, which is a heavy equipment manufacturer that uses sensors to optimize their vehicles and predict breakdowns. Link to case study is below. https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth-rev2
Architecting Google Cloud Solutions
Planning Your Cloud Transition
Making the Case for the Cloud and GCP
What does your senior leadership care about when you are making the case to move to the cloud and GCP? We'll explore how to speak to those concerns in this lesson.
This lesson will cover many of the cost optimization features built into Google Cloud that we have previously discussed, as well as some new ones.
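One such feature is the sustained use discount, which automatically lowers the effective price of a VM the longer it runs during the month. The sketch below uses the tiered rates Google has documented for N1 machine types (100% of the base rate for the first quarter of the month, then 80%, 60%, and 40% for each subsequent quarter); treat the exact numbers as illustrative, since pricing details change over time:

```python
def effective_cost_fraction(month_fraction: float) -> float:
    """Fraction of the full on-demand price actually billed, given the
    fraction of the month (0.0-1.0) an N1 instance was running."""
    # (tier ceiling, rate applied to usage that falls within that tier)
    tiers = [(0.25, 1.0), (0.50, 0.8), (0.75, 0.6), (1.00, 0.4)]
    billed, prev = 0.0, 0.0
    for ceiling, rate in tiers:
        usage_in_tier = max(0.0, min(month_fraction, ceiling) - prev)
        billed += usage_in_tier * rate
        prev = ceiling
    return billed / month_fraction if month_fraction else 0.0

# Running a VM for the full month yields an effective 30% discount:
print(round(effective_cost_fraction(1.0), 2))   # 0.7
print(round(effective_cost_fraction(0.25), 2))  # 1.0
```

The key takeaway for the exam is that this discount is applied automatically, with no upfront commitment, unlike committed use discounts.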
Architecting Cloud Applications
Let's now talk about the key principles of good app design when creating cloud native applications.
Migrating to Google Cloud
Planning a Successful Cloud Migration
In this new section, we will cover best practices for migrating your data and applications to Google Cloud. This first lesson will cover the five-step planning approach to ensure a successful cloud migration.
Storage Transfer Service
This lesson will cover the concepts of the Storage Transfer Service, as well as a hands-on demo. Storage Transfer Service transfers data from AWS S3, another GCS bucket, or a public HTTP/HTTPS source into a GCS bucket.
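Under the hood, a transfer is defined as a transfer job. A hedged sketch of a transferJobs request body for a scheduled S3-to-GCS transfer follows; the bucket names, project ID, and schedule are placeholders, and you should consult the Storage Transfer Service REST reference for the exact, current schema:

```json
{
  "description": "Nightly S3-to-GCS sync (illustrative)",
  "status": "ENABLED",
  "projectId": "my-project-id",
  "transferSpec": {
    "awsS3DataSource": {
      "bucketName": "my-source-s3-bucket",
      "awsAccessKey": {
        "accessKeyId": "AWS_ACCESS_KEY_ID",
        "secretAccessKey": "AWS_SECRET_ACCESS_KEY"
      }
    },
    "gcsDataSink": { "bucketName": "my-destination-gcs-bucket" },
    "transferOptions": { "deleteObjectsFromSourceAfterTransfer": false }
  },
  "schedule": {
    "scheduleStartDate": { "year": 2019, "month": 4, "day": 5 },
    "startTimeOfDay": { "hours": 2, "minutes": 0 }
  }
}
```

Note that the AWS credentials are supplied by you, since GCP needs permission to read from the S3 source bucket.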
Data Migration Best Practices
In this lesson we will go over a number of data migration tips and best practices to keep in mind when transferring data to Cloud Storage. We will also cover the GCP Transfer Appliance.
This lesson will cover best practices for migrating servers (i.e. your applications) over to GCP.
Resilient Cloud Solution Infrastructure
Disaster Recovery Concepts
You need to have a plan in case something goes "boom". This lesson will cover key concepts to consider when planning for disaster, such as how long you can afford for your application to be down before it is up and running again, and the difference between a warm and a hot failover. Disaster Recovery Planning Guide: https://cloud.google.com/solutions/dr-scenarios-planning-guide
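Two terms worth internalizing here are RTO (recovery time objective, how long you can afford to be down) and RPO (recovery point objective, how much recent data you can afford to lose). A small illustrative check, under the simplifying assumption that periodic backups are the only recovery mechanism:

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """If backups run every N hours, the worst-case data loss is N hours,
    so the backup interval must not exceed the RPO."""
    return backup_interval_hours <= rpo_hours

# An RPO of 4 hours requires backups at least every 4 hours:
print(meets_rpo(backup_interval_hours=6, rpo_hours=4))   # False
print(meets_rpo(backup_interval_hours=1, rpo_hours=4))   # True
```

Tighter RTO/RPO targets generally mean more expensive DR designs (warm or hot standbys rather than cold restores from backup), which is the core trade-off the planning guide walks through.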
Backup and Recovery Methods in GCP
In this lesson, we will review the backup and recovery methods of different compute and storage services that we have covered that may be relevant on the exam.
Security and Compliance
Security Methods in GCP
In this lesson we will cover multiple security-related topics, many of which will be review by now, in a full security-themed section of the course.
Network Design for Security and Isolation
This lesson will cover the multiple layers of security and access control methods that are available for VPC networks.
Legal Compliance and Audits
This lesson will cover concepts related to meeting compliance requirements and saving data for potential audits. The GCP compliance page reference in the lesson is also below: https://cloud.google.com/security/compliance/
Software Development Lifecycle Concepts
The software development lifecycle (SDLC) is a key concept in modern software deployment that you need to be familiar with. We will discuss it along with the CI/CD (Continuous Integration/Continuous Deployment) model.
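In GCP, the CI portion of such a pipeline is commonly implemented with Cloud Build. A minimal, illustrative cloudbuild.yaml that builds a container image and pushes it to Container Registry; the image name echoes the hello-la example from earlier in the course, and $PROJECT_ID and $SHORT_SHA are substitutions Cloud Build provides automatically:

```yaml
steps:
# Build the container image from the Dockerfile in the repo root.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/hello-la:$SHORT_SHA', '.']
# Images listed here are pushed to Container Registry when the build succeeds.
images:
- 'gcr.io/$PROJECT_ID/hello-la:$SHORT_SHA'
```

Triggering this build on every commit (for example, via a Cloud Build trigger on a source repository) is what turns it from a one-off build into continuous integration.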
Testing Your Application for Resiliency
Now that you are ready to deploy your cloud-based application, we need to test it to make sure that (1) it works and (2) it is resilient to unexpected failures.
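One resiliency pattern worth exercising in such tests is retry with exponential backoff, which Google recommends for handling transient API failures. A minimal sketch; flaky_call is a stand-in for any operation that can fail transiently, not a real GCP API:

```python
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.01):
    """Retry fn with exponentially growing delays between attempts."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            time.sleep(base_delay * (2 ** attempt))

# A stand-in for a transiently failing operation: fails twice, then succeeds.
attempts = {"count": 0}
def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_backoff(flaky_call))  # ok
```

A resiliency test then deliberately injects failures (killing instances, dropping connections) and verifies the application recovers the way this retry loop does.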
Getting Ready for the Exam
Additional Study Resources
In this lesson, we will cover additional study resources that are a great supplementary resource for learning more about Google Cloud Platform. Links to mentioned resources are below: Codelabs https://codelabs.developers.google.com/ Filter by 'Cloud' Use GCP Playgrounds on Linux Academy for practice Google Cloud Solutions Architecture Reference https://gcp.solutions/ GCP in 4 words or less https://github.com/gregsramblings/google-cloud-4-words Official Google Cloud Blog https://cloud.google.com/blog/
Preparing for the Exam
Congratulations on making it to the end of the course! Let's talk about how to prepare for AND pass the Google Cloud Architect exam. Link for Google's official practice exam, referenced in this lesson, is below. https://cloud.google.com/certification/practice-exam/cloud-architect
Google Cloud Certified Professional Cloud Architect - Practice Exam