There are many ways to build a Kubernetes cluster. One of them is using a tool called kubeadm, the official tool that provides a best-practice “fast path” for creating your first Kubernetes cluster. Since it makes getting up and running so easy, I thought I would put together this quick guide to installing a Kubernetes cluster using kubeadm!
Spin up Servers Instantly
Linux Academy cloud servers are the best way to build your cluster because if you mess up, it’s easy to delete a server and re-create it in a matter of minutes. So, let’s go over to our Linux Academy Cloud Playground and spin up some servers. From the Linux Academy navigation bar, select Cloud Server, and then select Playground.
Let’s spin up three servers, all the same type, with the Ubuntu 16.04 Xenial LTS distribution and in the medium size. As part of your account access, you have a total of nine units, so creating three medium servers will be just enough to hit that capacity. A three-node cluster will be sufficient for most practice purposes and will allow you to create all the resources you want.
We’ll add a label for each server, which doesn’t change anything about the machine itself; it’s just a name that helps you organize your cloud servers so you can tell them apart. Especially when you are creating servers of the same distribution and size, this will help us tell the difference between our master server and our worker nodes. Let’s enter the name “Kube Master” for one, and call the other two “Kube Worker 0” and “Kube Worker 1”.
Log in and Start Building
Once the servers show a status of “ready”, let’s go ahead and get logged in. Click on the server to expand and show the details of each server. Click on the page icon next to the temporary password and click on Terminal to open the terminal in a new browser tab. Enter the username and paste in the temporary password provided.
Let’s get logged in to all three servers at once. This will make it easier to perform the commands on each server by just switching over to the next browser tab and pasting in the commands. The first thing we’re going to do on each server is add our gpg key for the docker repository. Docker is the container runtime we’re going to use.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Now, on all three servers, let’s add the Docker repository to our apt sources.
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
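As an aside, the $(lsb_release -cs) substitution in that command fills in your distribution’s codename, which on Ubuntu 16.04 is xenial. Here’s a quick illustration with the codename hard-coded, just to show what apt ends up recording:

```shell
# Illustration only: lsb_release -cs prints the distribution codename,
# which is "xenial" on Ubuntu 16.04; it is hard-coded here for the demo.
codename=xenial
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu ${codename} stable"
```

This is also why the Kubernetes repository we add next is named kubernetes-xenial.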
Still working on all three servers, add the GPG key for Kubernetes, and then add the Kubernetes repository.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
Now that we’ve got our repositories locked in, let’s update our packages.
sudo apt-get update
Install Docker, kubelet, kubeadm, and kubectl
Now, let’s install Docker, kubelet, kubeadm, and kubectl on all three of our Linux Academy servers.
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.3-00 kubeadm=1.13.3-00 kubectl=1.13.3-00
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
The kubelet is the node agent that will run all the pods for us, including the kube-system pods. Kubeadm is a tool for bootstrapping multi-node Kubernetes clusters, and kubectl is the command-line tool for interacting with Kubernetes. We've installed specific versions and marked them to hold so that Kubernetes and Docker don't automatically update and become incompatible; you can confirm the pins at any time by running apt-mark showhold.
Okay, now that we’ve installed Docker, kubelet, kubeadm, and kubectl, the following commands will be run on the master server only. Keep the other two server terminals open, but we won't be using them until we join them to the cluster at the end.
So, go ahead and get to the terminal that represents your master server. If you need to, go back to the Cloud Playground screen and check which one you've labeled. PRO TIP: If it's the first server you've created, it will end in "1c".
Initialize the Cluster
Let's go ahead and initialize the cluster with kubeadm by running this command (it may take a few minutes).
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
When it's finished, you’ll notice it gives us output that is very helpful about what steps we need to do next. So we’ll follow the instructions here: make the .kube directory, copy the kube config into it, and change ownership of the config. Finally, we’ll copy the kubeadm join command for use on the workers.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
PRO TIP: If the join command is longer than one line, paste it into a new file and take out the line break, or zoom all the way out in your browser until it shows on one line before copying.
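If removing the line break by hand is fiddly, the shell can do it for you. Here's a small sketch; join.txt is a hypothetical file you've pasted the wrapped command into, and the server address, token, and hash below are placeholders, not real values.

```shell
# Sketch: join.txt stands in for a file containing the pasted kubeadm join
# command, wrapped onto two lines with a backslash continuation.
# The server address, token, and hash are placeholder values.
printf 'kubeadm join 172.31.0.10:6443 --token abc123.example \\\n    --discovery-token-ca-cert-hash sha256:0000\n' > join.txt

# Deleting backslashes and newlines collapses the command onto one line.
tr -d '\\\n' < join.txt
```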
Before we paste it into the two worker nodes, let’s apply our flannel CNI plugin. Flannel is the network overlay that lets pods on different nodes communicate with each other.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
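Incidentally, this manifest is the reason we passed --pod-network-cidr=10.244.0.0/16 to kubeadm init: flannel's ConfigMap defaults its pod network to that range. The relevant fragment of kube-flannel.yml looks roughly like this:

```yaml
# Excerpt (approximate) from flannel's ConfigMap in kube-flannel.yml.
# The Network value must match the --pod-network-cidr given to kubeadm init.
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```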
Once we've installed flannel, we can paste the kubeadm join command (make sure it's all on one line) into our two worker nodes and run it with sudo. If you've lost the command, or the token has expired, you can print a fresh one on the master with sudo kubeadm token create --print-join-command.
sudo kubeadm join [your unique string from the kubeadm init command]
Now that the nodes are successfully joined, switch back to the master and run our first kubectl command to view our cluster node status.
kubectl get nodes
We’ve successfully set up our three-node cluster, so let's start exploring it. Maybe run some pods? Here are a few fun exercises to get you started.
Run a busybox Pod
busybox is a small container image that includes utilities like nslookup, which we can use to look up the default hostnames in our Kubernetes cluster.
As of version 1.13, CoreDNS has replaced kube-dns as the default cluster DNS server. CoreDNS is written in Go and can be used in a multitude of environments because of its flexibility. It integrates with Kubernetes via the kubernetes plugin, or directly with etcd via the etcd plugin.
CoreDNS is running as a deployment with two replicas. You can see this by running kubectl get deployments -n kube-system.
kube-dns is running as a service (yes, it's still called kube-dns in version 1.13). This is done so there is greater interoperability with workloads that relied on the legacy kube-dns service name to resolve addresses internal to the cluster. You can see this by running kubectl get services -n kube-system.
In the old kube-dns, several containers were used within a single pod: kubedns, dnsmasq, and sidecar. The kubedns container watched the Kubernetes API and served DNS records based on the Kubernetes DNS specification, dnsmasq provided caching and stub domain support, and sidecar provided metrics and health checks.
This setup led to a few issues over time. For one, security vulnerabilities in dnsmasq have required security-patch releases of Kubernetes in the past. Additionally, because dnsmasq handled the stub domains but kubedns handled the External Services, you could not use a stub domain with an External Service, which was very limiting to that functionality.
All of these functions are done in a single container in CoreDNS, which is running a process written in Go. The different plugins that are enabled replicate (and enhance) the functionality found in kube-dns.
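If you're curious which plugins are enabled, the chain is declared in a Corefile held in the coredns ConfigMap in the kube-system namespace (kubectl get configmap coredns -n kube-system -o yaml will show it). The default Corefile around this version of Kubernetes looks roughly like this:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

Each line is a plugin: the kubernetes plugin serves cluster records, cache replaces dnsmasq's caching role, and prometheus replaces the sidecar's metrics.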
Here's a busybox pod to use for your DNS lookups, which you can create with just a single command.
kubectl run busybox --image=busybox:1.28.4 --generator=run-pod/v1 --command -- sleep 99999
After you have created the busybox pod, run the following:
kubectl exec busybox -- cat /etc/resolv.conf
kubectl exec busybox -- nslookup kubernetes
kubectl exec busybox -- nslookup kube-dns.kube-system.svc.cluster.local
If you'd like to discover more ways to use Kubernetes, check out all of our great Kubernetes courses, including:
- Kubernetes Quick Start
- Kubernetes Essentials
- Getting Started with Kubernetes
- Certified Kubernetes Administrator
- Monitoring Kubernetes with Prometheus
- Essential Container Concepts
Learn more about Kubernetes here.