Microservice Applications in Kubernetes
Linux Training Architect II in Content
Welcome to the Microservice Applications in Kubernetes course! This course is designed to give you hands-on experience with installing and administering a complex microservice application in a Kubernetes cluster. The course will begin with a conceptual overview of Kubernetes and then move on to installing a functioning Kubernetes cluster. Once the cluster has been bootstrapped, we will learn how to install a simple microservice and then install a more complex microservice application that we will be working with throughout the remainder of the course. Using this microservice application, we will work through the various components and features of Kubernetes, like service discovery and autoscaling. Working through this course will give you hands-on experience with the microservice architecture and show you why this method has become one of the most sought-after in the IT industry.
Download the Interactive Diagram here: https://interactive.linuxacademy.com/diagrams/LearnMicroServicesByDoing.html
Welcome to the Learn Microservices by Doing course. This video is a brief overview of the course content and the topics that we will cover.
About the Author
Learn a little more about the training architect, Matthew Pearson.
Course Tools and Resources
In addition to the video lessons, several other tools and resources are provided to aid you in your studies. In this video, we will discuss the tools and resources that are available.
Kubernetes has several components that work together to make installing and managing applications incredibly straightforward. In this lesson, we will go over a high-level view of Kubernetes and the various components that drive this incredible technology. Interactive Diagram: https://linuxacademy.com/cp/guides/download/refsheets/guides/refsheets/learn-microservices-by-doing-interactive-diagram_1548699939.pdf
Configuring a Kubernetes Cluster
Configure Cloud Servers
The new Cloud Playground at Linux Academy is a fantastic tool for spinning up cloud servers and practicing all the things you are learning in your courses. In this lesson, we will go over some of the new features provided by Cloud Playground and configure our cloud servers for this course. **Update: Now that Cloud Playground is out of beta, some of the options have changed.
Docker Installation and Overview
Docker is a containerization technology that allows you to package and run applications in loosely isolated environments called containers. For our Kubernetes cluster, we will be using Docker as our container runtime. In this lesson, we will go through a brief overview of Docker and then install Docker CE 18.06.1 to our cluster nodes.
Add GPG key:
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add Docker repository:
# sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# sudo apt-get update
# sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu
Installing Kubeadm, Kubelet, and Kubectl
**Important: Due to an issue introduced by a recent security update, the Kubernetes binaries need to use version 1.12.7-00 instead of 1.12.2-00. Please see the installation instructions below.
Kubeadm, kubelet, and kubectl are important components for our implementation of Kubernetes. Kubeadm provides a streamlined way to bootstrap a kubernetes cluster. Kubelet is the agent that runs on the cluster nodes and performs various actions on the components of the cluster. Then we have kubectl, which is the command line utility for interacting and managing the cluster. In this lesson, we will discuss each of these components and install them on our cluster nodes.
Installation Instructions:
Add the GPG key:
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Add the Kubernetes repository:
# cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
Resynchronize the package index:
# sudo apt-get update
Install the required packages:
# sudo apt-get install -y kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
Prevent the packages from being automatically updated:
# sudo apt-mark hold kubelet kubeadm kubectl
Bootstrap the Kubernetes Cluster
The kubeadm command was developed to provide best practices for initializing a Kubernetes cluster and for joining nodes to the cluster. In this lesson, we will be using the kubeadm command to bootstrap our Kubernetes cluster and then join our cloud servers to the cluster.
Initialize the Cluster on the Master:
# sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Set up kubeconfig for a Local User on the Master
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join Nodes to the Cluster
# sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash
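The $token and $hash placeholders in the join command come from the output of kubeadm init on the master. If that output is lost, the full join command can be regenerated with kubeadm token create --print-join-command, or the CA certificate hash can be derived by hand. A minimal sketch of the latter — the function name ca_hash is ours, but the openssl pipeline is the one documented for kubeadm's discovery-token-ca-cert-hash:

```shell
# ca_hash is a hypothetical helper name; the openssl pipeline below is the
# standard way to derive the --discovery-token-ca-cert-hash value from the
# cluster CA certificate, which kubeadm writes to /etc/kubernetes/pki/ca.crt.
ca_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master:
#   ca_hash /etc/kubernetes/pki/ca.crt
# Or simply regenerate the whole join command:
#   sudo kubeadm token create --print-join-command
```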
Configure Cluster Network with Flannel
In Kubernetes, the communication between pods occurs on the cluster network. To set up the cluster network, install the network add-on after bootstrapping the cluster. In this lesson, we will prepare the cluster nodes for the cluster network and then install the Flannel network add-on.
Install the Flannel Network Add-on
(on all nodes) Add net.bridge.bridge-nf-call-iptables=1 to sysctl.conf:
# echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
(on all nodes) Apply the change made to sysctl.conf:
# sudo sysctl -p
(on Master) Use kubectl to install Flannel using the YAML template:
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
Working with Microservices
What are Microservices?
Microservice architecture has become a very appealing model that many companies are trying to adopt. In this video, we will try to answer the question “What are microservices?”, and talk about some benefits of microservices architecture.
Installing a Single Application to the Cluster
The Kubernetes command line utility, kubectl, provides a streamlined way to interact with the cluster and manage applications. In this lesson, we will discuss namespaces and pods, and then install an application to our Kubernetes cluster.
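To make namespaces and pods concrete, here is a minimal sketch of the kind of manifest involved: a single-container pod assigned to a dedicated namespace. The names (demo, nginx-pod) and the nginx image are illustrative, not taken from the lesson:

```shell
# Write a minimal pod manifest to a file (names and image are illustrative).
cat << 'EOF' > nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: demo
spec:
  containers:
  - name: nginx
    image: nginx:1.15
EOF

# On the master, the manifest would then be applied with:
#   kubectl create namespace demo
#   kubectl apply -f nginx-pod.yaml
#   kubectl get pods -n demo
```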
Installing a Microservice Application to the Cluster
In this lesson, we will install a complex microservice application provided by WeaveWorks. This microservice application will be our main subject of study as we continue the course. We will work with this application as we walk through some of the main features provided by Kubernetes.
Install Git and Clone the Microservice Repository
Install Git:
# sudo apt-get install git
Switch to the user's home directory:
# cd ~
Clone the Microservice repo:
# git clone https://github.com/linuxacademy/microservices-demo
Install the Microservice Application to the Cluster
Create a namespace for the application:
# kubectl create namespace sock-shop
Install the microservice application under the sock-shop namespace:
# kubectl -n sock-shop create -f complete-demo.yaml
List the pods for the newly created application:
# kubectl get pods -n sock-shop
Note: Using -w allows you to view the pods as they start up in real time:
# kubectl get pods -n sock-shop -w
The Kubernetes API is the main gateway for interacting with the Kubernetes cluster. In this lesson, we will discuss the API server and interacting with it to gain information about our cluster.
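kubectl is ultimately just a client of this REST API: every namespaced resource lives at a predictable URL path. A small sketch of that mapping — the helper name api_path is ours, but the /api/v1 path layout is the core API's own:

```shell
# api_path is a hypothetical helper that builds the REST path kubectl uses
# for a namespaced core-API resource (the /api/v1 layout is Kubernetes' own).
api_path() {
  # $1 = namespace, $2 = resource (plural), $3 = optional object name
  echo "/api/v1/namespaces/$1/$2${3:+/$3}"
}

api_path sock-shop pods                  # → /api/v1/namespaces/sock-shop/pods
api_path sock-shop services front-end    # → /api/v1/namespaces/sock-shop/services/front-end
```

With kubectl proxy running on the master, these paths can be queried directly, e.g. curl http://localhost:8001/api/v1/namespaces/sock-shop/pods.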
Service discovery is the process of automatically detecting available services and how to connect to them. In this lesson, we will go over the different methods of service discovery in Kubernetes and also work with the cluster DNS server.
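The cluster DNS server gives every service a predictable name of the form service.namespace.svc.cluster.local (assuming the default cluster.local domain). A minimal sketch — the helper name svc_dns is ours, and front-end/sock-shop are used as example values:

```shell
# svc_dns is a hypothetical helper; the name format is the one the cluster
# DNS server actually serves, assuming the default "cluster.local" domain.
svc_dns() {
  # $1 = service name, $2 = namespace
  echo "$1.$2.svc.cluster.local"
}

svc_dns front-end sock-shop   # → front-end.sock-shop.svc.cluster.local

# From inside any pod in the cluster, the name resolves via cluster DNS:
#   nslookup front-end.sock-shop.svc.cluster.local
```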
Kubernetes uses replication to create multiple instances of an application across the cluster. In this video, we will discuss replication and talk about the differences between replication controllers, replica sets, and deployments.
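The differences come down to how the desired replica count is declared and managed; a deployment is the current recommended way, and it manages a replica set for you. A minimal sketch of a deployment that keeps three copies of a pod running (names and image are illustrative):

```shell
# Write a minimal Deployment manifest (names and image are illustrative).
cat << 'EOF' > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3          # the cluster keeps three pod copies running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
EOF

# On the master:
#   kubectl apply -f nginx-deployment.yaml
#   kubectl get replicasets   # the deployment created a replica set for you
```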
In reference to Kubernetes, Ingress is an API object that manages external access to services in the cluster. In this lesson, we will discuss Ingress and work with a NodePort service that will provide access to the cluster.
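A NodePort service opens the same port on every node in the cluster and forwards traffic to the service's pods. A minimal sketch of such a service for the sock-shop front end — the names and ports here are illustrative, and nodePort values must fall in the default 30000-32767 range:

```shell
# Write a minimal NodePort service manifest (names/ports are illustrative).
cat << 'EOF' > front-end-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: sock-shop
spec:
  type: NodePort
  selector:
    name: front-end
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 8079  # container port on the pods
    nodePort: 30001   # port opened on every node (default range 30000-32767)
EOF

# On the master:
#   kubectl apply -f front-end-nodeport.yaml
# The application is then reachable at http://<any_node_ip>:30001
```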
The ability to scale the individual components of a microservice application is one of the great features that Kubernetes offers. In this lesson, we walk through manually and automatically scaling microservices in our cluster.
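Both manual scaling and autoscaling come down to a replica count; the horizontal pod autoscaler computes its target as ceil(currentReplicas × currentMetric / targetMetric). A sketch of that arithmetic, plus the kubectl commands involved — the helper name hpa_desired and the deployment name front-end are illustrative:

```shell
# hpa_desired is a hypothetical helper mirroring the HPA formula:
#   desired = ceil(current_replicas * current_utilization / target_utilization)
hpa_desired() {
  echo $(( ($1 * $2 + $3 - 1) / $3 ))   # integer ceiling division
}

hpa_desired 3 90 60   # → 5  (3 * 90/60 = 4.5, rounded up)

# Manual scaling (deployment name is illustrative):
#   kubectl -n sock-shop scale deployment front-end --replicas=4
# Automatic scaling with a horizontal pod autoscaler:
#   kubectl -n sock-shop autoscale deployment front-end --min=2 --max=6 --cpu-percent=80
```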
Self-healing is the ability of the Kubernetes cluster to recover from service or node failure automatically. In this video, we discuss and show the benefit of using replication for your microservices and the way that the Kubernetes cluster can automatically recover from a service failure.
Congratulations on completing the course! In this video, we will go over some of the other available content at Linux Academy and how to search for it.