
Google Cloud Network Management

Course

Intro Video


Matthew Ulasien

Team Lead, Google Cloud Content

Length

07:50:55

Difficulty

Advanced

Videos

31

Hands-on Labs

4

Course Details

This course is the second in a multi-course track preparing you for the role of a GCP Network Engineer. The Network Management course builds on the fundamentals covered in the Network Concepts course, expanding on them by covering how to grow your network with Shared VPCs and VPC Network Peering, using 'force multipliers' in load balancers and instance groups, managing networked Kubernetes Engine clusters, and managing Cloud DNS.

Let's get started!

Syllabus

Getting Started

Introduction

Course Introduction

00:01:57

Lesson Description:

Welcome to the Google Cloud Network Management course!

Using the Interactive Diagram

00:01:42

Lesson Description:

In this lesson, we cover the interactive diagram we will be using throughout this series of courses, which you will also have access to as an additional study tool.

Growing Your Network

Shared VPC

Shared VPC Overview

00:09:00

Lesson Description:

This lesson covers the primary concepts and terminology regarding Shared VPCs, followed by a hands-on demonstration.
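As a rough sketch of the workflow this lesson demonstrates (project IDs below are placeholders, not from the lesson), a Shared VPC is set up by designating a host project and attaching service projects to it:

```shell
# Designate the host project for Shared VPC (placeholder project IDs).
gcloud compute shared-vpc enable host-project-id

# Attach a service project to the host project.
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project host-project-id

# List the service projects attached to the host.
gcloud compute shared-vpc list-associated-resources host-project-id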

Hands On - Shared VPC

00:09:18

Lesson Description:

This hands-on demonstration covers the process of assigning IAM roles and then creating, managing, and using a Shared VPC network.
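For reference, the IAM step generally looks like the following sketch: service-project users need the Compute Network User role on the host project (or on specific subnets) before they can deploy into the shared network. The project ID and member below are placeholders:

```shell
# Grant a service-project member permission to use the host project's
# shared network (placeholder project ID and member).
gcloud projects add-iam-policy-binding host-project-id \
    --member="user:dev@example.com" \
    --role="roles/compute.networkUser"
```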

Shared VPC Scenarios

00:05:58

Lesson Description:

This lesson covers some of the common use case scenarios in which a Shared VPC would be utilized. Be aware that some of these scenarios may appear on the exam.

VPC Network Peering

VPC Network Peering Overview

00:10:23

Lesson Description:

We will now go through a conceptual overview of VPC Network Peering, followed by a hands-on demonstration. Pay special attention to the use case scenarios, as you may see them on the exam.

Hands On - VPC Network Peering

00:14:56

Lesson Description:

This lesson goes through a hands-on demonstration of creating two VPC Network Peering connections, shows how overlapping subnets can cause a connection to fail, and then tests connectivity between our peered networks. To set up the same environment used in the lesson, copy/paste the command below to download and run the bash script. Each of the necessary commands is also listed below for your own reference. Download and run the script to create the hands-on lesson environment:

gsutil cp gs://la-gcloud-course-resources/network-engineer/network-peering-handson-setup-script.sh . && bash network-peering-handson-setup-script.sh
The full command list from the above script file is below for additional reference. Delete the default firewall rules and default VPC:
gcloud compute firewall-rules delete default-allow-icmp default-allow-internal default-allow-rdp default-allow-ssh --quiet

gcloud compute networks delete default --quiet
Create our three VPC networks, with subnets (some conflicting on purpose) and firewall rules to allow port 22 (SSH) access to all resources:
gcloud compute networks create network-1 --subnet-mode=custom

gcloud compute networks subnets create subnet-1a --network=network-1 --region=us-central1 --range=10.0.1.0/24

gcloud compute networks subnets create subnet-1b --network=network-1 --region=us-central1 --range=10.1.1.0/24

gcloud compute networks create network-2 --subnet-mode=custom

gcloud compute networks subnets create subnet-2a --network=network-2 --region=us-central1 --range=10.0.2.0/24

gcloud compute networks subnets create subnet-2b --network=network-2 --region=us-central1 --range=10.1.2.0/24

gcloud compute networks subnets create conflicting-subnet-1 --network=network-2 --region=us-central1 --range=10.0.1.0/24

gcloud compute networks create network-3 --subnet-mode=custom

gcloud compute networks subnets create subnet-3a --network=network-3 --region=us-central1 --range=10.0.3.0/24

gcloud compute networks subnets create subnet-3b --network=network-3 --region=us-central1 --range=10.1.3.0/24

gcloud compute networks subnets create conflict-with-network-2-subnet --network=network-3 --region=us-central1 --range=10.0.2.0/24

gcloud compute firewall-rules create ssh-allow-network-1 --direction=INGRESS --priority=1000 --network=network-1 --action=ALLOW --rules=tcp:22 --source-ranges=0.0.0.0/0

gcloud compute firewall-rules create ssh-allow-network-2 --direction=INGRESS --priority=1000 --network=network-2 --action=ALLOW --rules=tcp:22 --source-ranges=0.0.0.0/0

gcloud compute firewall-rules create ssh-allow-network-3 --direction=INGRESS --priority=1000 --network=network-3 --action=ALLOW --rules=tcp:22 --source-ranges=0.0.0.0/0
Create a Compute Engine instance in each VPC network that we will use for testing:
gcloud compute instances create instance-1 --zone=us-central1-a --machine-type=f1-micro --subnet=subnet-1a

gcloud compute instances create instance-2 --zone=us-central1-a --machine-type=f1-micro --subnet=subnet-2a

gcloud compute instances create instance-3 --zone=us-central1-a --machine-type=f1-micro --subnet=subnet-3a
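The peering connections themselves are created during the lesson; as a hedged sketch (the peering names here are placeholders), note that a peering must be created from both sides before it becomes active:

```shell
# Peer network-1 with network-2; each direction is a separate command.
gcloud compute networks peerings create peer-1-to-2 \
    --network=network-1 \
    --peer-network=network-2

gcloud compute networks peerings create peer-2-to-1 \
    --network=network-2 \
    --peer-network=network-1
```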

Shared VPC vs. VPC Network Peering

00:02:56

Lesson Description:

This lesson will cover the key differences between a Shared VPC and Network Peering that you need to be aware of.

Hands-on Labs are real, live environments that put you in a real scenario to practice what you have learned, without any extra charge or separate account to manage.

00:30:00

Load Balancing and Managed Instance Groups

Networking Force Multipliers

00:03:35

Lesson Description:

It is time to jump into a new topic, in which we are going to explore the networking "force multipliers" of instance groups and load balancers on Google Cloud. This first lesson will be a preview of what we mean by "force multipliers," and how they allow you to easily scale your application and make it highly available on a global scale.

Load Balancing Overview

00:14:08

Lesson Description:

This lesson provides a detailed overview of all the load balancer options on Google Cloud, including the differences between all five choices.

Managed Instance Groups Overview

00:14:15

Lesson Description:

We will continue our conceptual overview of networking force multipliers by covering instance groups, with a special focus on managed instance groups.

Load Balancer Backend Considerations

00:03:46

Lesson Description:

So far we've discussed load balancers and instance groups. Now, let's discuss how to pair them together.

HTTP(S) Load Balancing

00:05:19

Lesson Description:

This lesson takes a closer look at HTTP/S Load Balancing before we use it in our hands-on demonstrations.

Hands On - HTTP Load Balancing and Managed Instance Groups (Part 1)

00:16:51

Lesson Description:

This first in a series of hands-on demonstrations will create:
- Firewall rules to allow HTTP and health check/load balancer access to your managed instance group
- A health check for our instances
- Two instance templates that will be used to deploy a "version 1" and "version 2" of our application
- A single instance group, initially locked to five machines (autoscaling off) for this lesson; we will change this later
- An update of the instance group from version 1 to version 2 using the rolling update feature

The command line cross-reference for all actions performed in this lesson is below. Create firewall rules allowing HTTP and load balancer/health check access:

gcloud compute firewall-rules create http-allow --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server

gcloud compute firewall-rules create health-check-allow --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp --source-ranges=130.211.0.0/22,35.191.0.0/16 --target-tags=http-server
Create version 1 template:
gcloud compute instance-templates create template-v1 --machine-type=f1-micro --metadata=startup-script-url=gs://la-gcloud-course-resources/network-engineer/mig-startup-v1.sh --tags=http-server --boot-disk-device-name=template-v1
Create version 2 template:
gcloud compute instance-templates create template-v2 --machine-type=f1-micro --metadata=startup-script-url=gs://la-gcloud-course-resources/network-engineer/mig-startup-v2.sh --tags=http-server --boot-disk-device-name=template-v2
Create health check and single instance group from version 1 template:
gcloud compute health-checks create tcp "health-check" --timeout "5" --check-interval "10" --unhealthy-threshold "3" --healthy-threshold "2" --port "80"

gcloud beta compute instance-groups managed create us-central1-group --base-instance-name=us-central1-group --template=template-v1 --size=5 --zones=us-central1-b,us-central1-c,us-central1-f --instance-redistribution-type=PROACTIVE --health-check=health-check --initial-delay=300

Hands On - HTTP Load Balancing and Managed Instance Groups (Part 2)

00:13:22

Lesson Description:

This lesson picks up where the last one left off. We will:
- Convert our first instance group into an autoscaling group
- Create a second instance group in a different region
- Create an HTTP Load Balancer
- Explore URL map rules

The command line cross-reference for all actions performed in this lesson is below. Convert our us-central1-group into an autoscaling group:

gcloud beta compute instance-groups managed set-autoscaling "us-central1-group" --region "us-central1" --cool-down-period "60" --max-num-replicas "4" --min-num-replicas "2" --target-cpu-utilization "0.6"
Create a second instance group in the europe-north1 region, and enable autoscaling:
gcloud beta compute instance-groups managed create europe-north1-group --base-instance-name=europe-north1-group --template=template-v1 --size=1 --zones=europe-north1-a,europe-north1-b,europe-north1-c --instance-redistribution-type=PROACTIVE --health-check=health-check --initial-delay=300

gcloud beta compute instance-groups managed set-autoscaling "europe-north1-group" --region "europe-north1" --cool-down-period "60" --max-num-replicas "4" --min-num-replicas "2" --target-cpu-utilization "0.6"
The steps to create a load balancer from the command line are significantly more involved than in the web console. All steps are listed below. Set named ports for the instance groups:
gcloud compute instance-groups managed set-named-ports europe-north1-group \
    --named-ports http:80 \
    --region europe-north1

gcloud compute instance-groups managed set-named-ports us-central1-group \
    --named-ports http:80 \
    --region us-central1
Create backend service and add backends:
gcloud compute backend-services create http-backend \
    --protocol HTTP \
    --health-checks health-check \
    --global

gcloud compute backend-services add-backend http-backend \
    --balancing-mode=RATE \
    --max-rate-per-instance=50 \
    --capacity-scaler=1 \
    --instance-group=europe-north1-group \
    --instance-group-region=europe-north1 \
    --global

gcloud compute backend-services add-backend http-backend \
    --balancing-mode=RATE \
    --max-rate-per-instance=50 \
    --capacity-scaler=1 \
    --instance-group=us-central1-group \
    --instance-group-region=us-central1 \
    --global
Create load balancer URL map and frontend:
gcloud compute url-maps create http-lb \
    --default-service http-backend

gcloud compute target-http-proxies create http-lb-proxy \
    --url-map=http-lb

gcloud compute forwarding-rules create http-frontend \
    --global \
    --target-http-proxy=http-lb-proxy \
    --ports=80

Hands On - HTTP Load Balancing and Managed Instance Groups (Part 3)

00:10:44

Lesson Description:

This final part of our series of hands-on lessons with load balancers and instance groups will have us:
- Create a stress-test instance
- Send a small trickle of traffic to our application, and view backend distribution metrics
- Send a flood of traffic to overwhelm our application, and view how our load balancers compensate

Create the stress-test instance:

gcloud compute instances create stress-test --zone=us-east1-b --machine-type=n1-standard-1 --subnet=default --metadata=startup-script-url=gs://la-gcloud-course-resources/install-apache-script.sh --boot-disk-device-name=stress-test
Log in to the instance via SSH, and enter the following commands to send a small amount of traffic to our application and then send a flood of traffic to overwhelm it:
ab -n 1000000 -c 3 http://(your-frontend-ip)/

ab -n 1000000 -c 1000 http://(your-frontend-ip)/

Hands On - Cloud Armor

00:09:29

Lesson Description:

This lesson will go through a hands-on demonstration of using Cloud Armor to prevent our stress-test instance from overloading our load-balanced application. If you need to recreate the load-balanced application environment from the previous hands-on lessons, copy/paste the below command in your Cloud Shell to recreate the entire environment. You will need to wait about five minutes after the script completes for your load balancer to fully initialize so that your frontend address brings you to the correct site:

gsutil cp gs://la-gcloud-course-resources/network-engineer/lb-mig-environment-setup.sh . && bash lb-mig-environment-setup.sh

SSL/TCP Proxy Load Balancing

00:04:10

Lesson Description:

This lesson takes a closer look at both the SSL and TCP Proxy load balancer options on GCP.

Network and Internal Load Balancing

00:07:01

Lesson Description:

This lesson covers the last of our load balancer options in network and internal load balancers. We will go through a conceptual overview as well as a brief demo.
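To give a sense of the demo's shape, here is a hedged sketch of a basic network (target pool) load balancer; the pool, rule, and instance names below are placeholders, not from the lesson:

```shell
# Create a regional target pool and register backend instances.
gcloud compute target-pools create web-pool --region=us-central1
gcloud compute target-pools add-instances web-pool \
    --instances=instance-1,instance-2 \
    --instances-zone=us-central1-a

# A regional forwarding rule on port 80 completes the network LB.
gcloud compute forwarding-rules create web-rule \
    --region=us-central1 \
    --ports=80 \
    --target-pool=web-pool
```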

Hands-on Lab

01:00:00

Hands-on Lab

00:45:00

Cloud CDN

Cloud CDN Overview

00:12:50

Lesson Description:

We are going to go through a conceptual understanding of the Cloud CDN (Content Delivery Network) service, followed by a hands-on demonstration.
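A key point the overview builds toward: Cloud CDN is enabled per backend service of an HTTP(S) load balancer. As a sketch, using the http-backend service from the earlier load balancing lessons (assumed to already exist):

```shell
# Turn on Cloud CDN for an existing global backend service.
gcloud compute backend-services update http-backend \
    --enable-cdn \
    --global
```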

Cloud CDN Hands On

00:12:46

Lesson Description:

This lesson goes through a hands-on demonstration of enabling and configuring the Cloud CDN service to improve performance for an HTTP load-balanced application serving traffic from a backend instance group that is very far from our location. The commands used in this lesson are listed below for your reference. Run the below command in Cloud Shell to download and execute a script to build out the same environment used in the demo:

gsutil cp gs://la-gcloud-course-resources/network-engineer/cdn/cdn-setup-script.sh . && source cdn-setup-script.sh
After you are connected to your testing-instance via SSH, the below commands will send a curl command to both the index.html and page-2.html pages and measure the response time to each. Be sure to substitute your unique frontend IP address for the site:
for i in {1..15}; do curl -s -w "%{time_total}\n" -o /dev/null http://(your-frontend-ip)/index.html; done

for i in {1..15}; do curl -s -w "%{time_total}\n" -o /dev/null http://(your-frontend-ip)/page-2.html; done

Hands-on Lab

00:45:00

Network Management

Kubernetes Engine

GKE Networking Concepts

00:14:47

Lesson Description:

This big-picture section will start a series of fairly in-depth lessons covering networking on Google Kubernetes Engine. We will go over the basic structure of how nodes, pods, and services are divided, how they interact, and why this is relevant from a networking perspective (especially when it comes to making sure you have enough IP addresses to go around).

Planning for Growth of Your GKE Cluster

00:13:34

Lesson Description:

This lesson covers what you need to know to properly plan for the growth of your GKE cluster. We will cover IP addressing for nodes, pods, and services, and how they are tightly linked together (from an IP allocation perspective).
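One lever the lesson's IP-allocation discussion suggests: in a VPC-native cluster, pod density per node drives how large a pod CIDR each node reserves, so lowering it stretches your ranges further. A hedged sketch (cluster name and zone are placeholders):

```shell
# Lowering --max-pods-per-node shrinks the per-node pod range a
# VPC-native (--enable-ip-alias) cluster reserves per node.
gcloud container clusters create small-pods-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --max-pods-per-node 32
```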

Hands On - GKE IP Addressing

00:14:35

Lesson Description:

This lesson covers both the default network configuration for a GKE cluster, as well as creating our own custom secondary IP ranges for pods and services. For reference, the commands to create your own custom network with secondary ranges, along with a custom GKE cluster, are below. Create custom VPC network with subnet and secondary IP ranges:

gcloud compute networks create custom-network --subnet-mode=custom

gcloud compute networks subnets create subnet-a --network=custom-network --region=us-central1 --range=192.168.25.0/24 --secondary-range=pod-range=10.2.0.0/16,service-range=10.3.0.0/16 --enable-private-ip-google-access
Create a GKE cluster using the pod and service IP ranges specified above:
gcloud beta container clusters create custom-range-cluster --zone "us-central1-a" --enable-ip-alias --network "projects/playground-s-11-96af95/global/networks/custom-network" --subnetwork "projects/playground-s-11-96af95/regions/us-central1/subnetworks/subnet-a" --cluster-secondary-range-name "pod-range" --services-secondary-range-name "service-range" 

Hands On - GKE IP Load Balancing (with Ingress Objects)

00:07:25

Lesson Description:

In this second part of our hands-on demo, we are going to take the custom range cluster from the previous lesson and deploy two sets of pods, pair them with two services, and use an Ingress object to create an HTTP load balancer. For your reference, the commands to recreate the custom VPC network and GKE cluster are below, as well as the commands used to create and deploy our pods, services, and ingress. Create custom VPC network with subnet and secondary IP ranges:

gcloud compute networks create custom-network --subnet-mode=custom

gcloud compute networks subnets create subnet-a --network=custom-network --region=us-central1 --range=192.168.25.0/24 --secondary-range=pod-range=10.2.0.0/16,service-range=10.3.0.0/16 --enable-private-ip-google-access
Create GKE cluster using the pod and service IP ranges specified above:
gcloud beta container clusters create custom-range-cluster --zone "us-central1-a" --enable-ip-alias --network "projects/playground-s-11-96af95/global/networks/custom-network" --subnetwork "projects/playground-s-11-96af95/regions/us-central1/subnetworks/subnet-a" --cluster-secondary-range-name "pod-range" --services-secondary-range-name "service-range" 
Get the kubectl credentials for our cluster so we can run commands against it:
gcloud container clusters get-credentials custom-range-cluster --zone us-central1-a
Create two hello world pod deployments and two services to expose both deployments, along with the ingress object. These will be YAML file contents that are "catted" out:
cat > hello-world-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: world
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
        department: world
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50000"
EOF

cat > hello-world-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    greeting: hello
    department: world
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000
EOF

cat > hello-kubernetes-deployment.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: kubernetes
  replicas: 3
  template:
    metadata:
      labels:
        greeting: hello
        department: kubernetes
    spec:
      containers:
      - name: hello-again
        image: "gcr.io/google-samples/node-hello:1.0"
        env:
        - name: "PORT"
          value: "8080"
EOF

cat > hello-kubernetes-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort
  selector:
    greeting: hello
    department: kubernetes
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
EOF


cat > my-ingress.yaml << EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 60000
      - path: /kube
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
EOF
Create and expose deployments:
kubectl apply -f hello-world-deployment.yaml

kubectl apply -f hello-world-service.yaml

kubectl apply -f hello-kubernetes-deployment.yaml

kubectl apply -f hello-kubernetes-service.yaml
Create the ingress, which will set up the HTTP load balancer:
kubectl apply -f my-ingress.yaml
View ingress:
kubectl get ingress my-ingress --output yaml

Private GKE Clusters and Network Policies

00:07:40

Lesson Description:

This lesson covers what exactly a GKE private cluster is and, more importantly, what is required to set one up and connect to it. Pay very careful attention to the interactions with the GKE master, which is not in your original project. Network policies restrict intra-cluster communications. By default, all pods and services in a cluster can communicate with each other. Here, we cover how network policies restrict internal communications when needed.
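As a hedged sketch of what a network policy looks like (the role: frontend label is hypothetical, and enforcement assumes the cluster was created with network policy enabled, e.g. via --enable-network-policy), this manifest would restrict ingress to the hello-world pods so that only pods labeled role: frontend can reach them:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  # Applies to the hello-world pods from the earlier lessons.
  podSelector:
    matchLabels:
      greeting: hello
      department: world
  ingress:
  - from:
    # Only pods carrying this (hypothetical) label may connect.
    - podSelector:
        matchLabels:
          role: frontend
```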

Hands On - Private GKE Clusters

00:15:33

Lesson Description:

This lesson provides a hands-on demonstration of setting up, exploring, and using a GKE private cluster. The commands used in this lesson are cross-referenced below. Enable the GKE API if not already done:

gcloud services enable container.googleapis.com
Create a private cluster:
gcloud container clusters create private-cluster \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --no-enable-basic-auth \
    --no-issue-client-certificate
On each Compute Engine instance used for testing, authenticate using your lab user account:
gcloud init
Find the external IP address for your Cloud Shell session:
dig +short myip.opendns.com @resolver1.opendns.com
Update your cluster to add master authorized network IP addresses/ranges. Remember to use a /32 CIDR notation for singular IP addresses:
gcloud container clusters update private-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks (first-ip-range),(second-ip-address)

Cloud DNS

Cloud DNS Overview

00:11:15

Lesson Description:

Cloud DNS is a managed DNS zone that can scale to millions of records and handle millions of queries per second. We will cover Cloud DNS in the next few lessons.
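As a preview of the mechanics these lessons cover, here is a hedged sketch of creating a public managed zone and adding an A record via a record-set transaction; the zone name, domain, and IP address below are placeholders:

```shell
# Create a public managed zone (placeholder domain).
gcloud dns managed-zones create example-zone \
    --dns-name="example.com." \
    --description="Example public zone"

# Add an A record through a transaction: start, add, execute.
gcloud dns record-sets transaction start --zone=example-zone
gcloud dns record-sets transaction add 203.0.113.10 \
    --name=www.example.com. --ttl=300 --type=A --zone=example-zone
gcloud dns record-sets transaction execute --zone=example-zone
```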

DNSSEC Overview

00:05:23

Lesson Description:

DNSSEC establishes a chain of trust between your DNS host and domain registrar. We cover the facts you need to know about DNSSEC on Cloud DNS in this lesson.
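On Cloud DNS, DNSSEC is toggled per managed zone. A sketch, assuming the example-zone name is a placeholder:

```shell
# Enable DNSSEC signing for a managed zone, then confirm its state.
gcloud dns managed-zones update example-zone --dnssec-state on
gcloud dns managed-zones describe example-zone \
    --format="value(dnssecConfig.state)"
```

Note that signing the zone is only half the chain of trust; the DS record still has to be published at your domain registrar, which the lesson covers.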

Cloud DNS Hands On

00:15:20

Lesson Description:

We will now go through a hands-on demonstration of creating a public Cloud DNS zone, importing records from a live domain into the zone, switching name server records, and enabling/disabling DNSSEC. The command to import DNS records in BIND format is below for reference:

gcloud dns record-sets import --zone=(zone-name) --zone-file-format (path-to-import-file)

Conclusion

Course Conclusion and Next Steps

00:00:41

Lesson Description:

The next course in this series is our Google Cloud Hybrid Networking course.
