
MicroK8s: Up and Running in Azure

Posted on April 10, 2020 by Chad Crowell

By now you’ve heard the stories that Microsoft loves Linux and fully embraces the open-source world. Microsoft, under Satya Nadella, has taken a new stance on open source, which to some is a complete flip-flop from the Microsoft of the past. The “New Microsoft” is one of the top open-source contributors on GitHub and has open-sourced the following projects:

  • PowerShell
  • Visual Studio Code
  • Edge
  • Windows Terminal
  • Calculator
  • TypeScript
  • .NET Core
  • Windows Subsystem for Linux (WSL)

and many more.

Not to mention, in October of 2018, Microsoft joined the Open Invention Network (OIN) and made 60,000 patents available to the open-source community. You can read more about that here.

Canonical

Among the relationships Microsoft has formed with Red Hat, Oracle, and SUSE since its mobile-first, cloud-first initiative in 2014, one of its closest is with Canonical. Canonical fully supports the Ubuntu distribution running in Azure. So much so that they even have their own department at the Redmond campus. You can contact them via email: linux@microsoft.com.

You started to get a feel for this cultural shift at the most recent WSLConf, which we talked about in episode 344 of the Linux Unplugged podcast. With such an integrated and unified presence in the Azure ecosystem, Canonical provides a perfect opportunity to demonstrate how easy it is to get up and running with MicroK8s in Azure.

MicroK8s

MicroK8s is a CNCF-certified upstream Kubernetes deployment that runs entirely on your workstation or edge device. The only requirement is a Linux machine, and since the snap package manager comes pre-installed on the official Ubuntu 18.04 LTS image in Azure, getting up and running is a cinch!

Your Toolbox

By the end of this tutorial, you will have a Kubernetes cluster running on Azure VMs. Why MicroK8s? Because it’s arguably the fastest way to get Kubernetes up and running, the footprint is small, it’s reliable, and it’s FREE!

To follow along, you must have access to an Azure account. If you don’t have an Azure account, NO PROBLEM! Use one of our cloud sandboxes! Cloud sandboxes are included with your Linux Academy subscription.

All of the following commands should be run in Azure Cloud Shell. Access this shell by going to http://shell.azure.com from any browser and logging into your Azure account. 

Let’s Get Up and Running!

We’ll start by deploying two virtual machines to our Azure account, which we’ll use as our two MicroK8s nodes. To do this, we’ll create a parameter file and an Azure Resource Manager (ARM) template file.

 

First, create some variables to set the username and password for the two virtual machines (I chose to generate a secure password from 16 random bytes, but you can create a custom password if you wish):

~$ adminUsername="azureuser"

~$ adminPassword=$(openssl rand -base64 16)
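A quick sanity check on this approach: 16 random bytes base64-encode to a 24-character string, which comfortably clears Azure’s 12-character minimum for VM admin passwords. A minimal sketch:

```shell
# Generate a password the same way as above
pw=$(openssl rand -base64 16)

# 16 bytes -> 24 base64 characters, well above Azure's 12-character minimum
echo "length: ${#pw}"
```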

 

Now, insert those variables into a file named ‘parameters.json’ which will be used in our deployment:

~$ cat <<EOF > parameters.json
{
 "adminUsername": { "value": "$adminUsername" },
 "adminPassword": { "value": "$adminPassword" }
}
EOF
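Because the heredoc expands the shell variables in place, it’s worth confirming the result is valid JSON before deploying. A quick check (assuming python3 is available, as it is by default in Cloud Shell):

```shell
# Rebuild parameters.json exactly as in the tutorial, then validate it
adminUsername="azureuser"
adminPassword=$(openssl rand -base64 16)

cat <<EOF > parameters.json
{
 "adminUsername": { "value": "$adminUsername" },
 "adminPassword": { "value": "$adminPassword" }
}
EOF

# Fails loudly if variable expansion broke the JSON; base64 output
# contains no quote characters, so this should always pass
python3 -m json.tool parameters.json > /dev/null && echo "parameters.json OK"
```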

Next, download the ARM template file and rename it "template.json":

~$ wget https://gist.githubusercontent.com/chadmcrowell/441058e5fd9379b64b7c875b521564f5/raw/0db25244f66f99af3bf24f812cb537635d15f295/ubuntu-microk8s.json -O template.json

 

Create the resource group and deploy the ARM template (if you’re using a cloud sandbox, set the “resourceGroupName” variable to the existing resource group name and skip the resource group creation):

~$ resourceGroupName="vmdeploy-rg"

~$ location="westus"

~$ deploymentName="vm-deploy"

 

~$ az group create -n $resourceGroupName -l $location # skip this step if you are using cloud sandbox

 

Now that we’re all set up, let’s start the deployment of the VMs, with the “az deployment group create” command, passing in the variables we just set as well as the template file and the parameters file:

~$ az deployment group create \
 -g $resourceGroupName \
 -n $deploymentName \
 --template-file template.json \
 --parameters parameters.json

 

Once the deployment is complete (mine took about 5 minutes), you can retrieve both of the public IP addresses of the VMs with the following command:

~$ az vm list-ip-addresses -g vmdeploy-rg | grep ipAddress
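If you prefer structured output over grep, the Azure CLI’s JMESPath `--query` flag can return just the addresses. A sketch (requires an authenticated az session):

```shell
# Return only the public IP addresses of the VMs, one per line
az vm list-ip-addresses -g vmdeploy-rg \
  --query "[].virtualMachine.network.publicIpAddresses[].ipAddress" \
  -o tsv
```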

 

SSH into the first node using the following command:

~$ ssh azureuser@<public-ip-of-node1>

 

Once you’re logged in, add your user to the "microk8s" group and take ownership of the ".kube" directory so that you can issue commands to the cluster:

~$ sudo usermod -a -G microk8s $USER

~$ sudo chown -f -R $USER ~/.kube

~$ su - $USER

 

You can check if MicroK8s is ready by using this command:

~$ microk8s status --wait-ready

 

Once MicroK8s is up and running, generate the join command that you’ll use to join node 2 to the cluster.

(Copy the output to a notepad – it should look like this: “microk8s join 10.0.0.5:25000/DIlwTwTyNskZhfyZyzZUqkskAvGfZoNL” ):

~$ microk8s add-node

 

PRO TIP: Instead of typing ‘microk8s.kubectl’ for each command, shorten it to just ‘kubectl’ by creating a snap alias:

~$ sudo snap alias microk8s.kubectl kubectl

 

Now that MicroK8s is running on the first node, SSH into the second node and perform the same steps, followed by the "microk8s join" command:

~$ ssh azureuser@<public-ip-of-node2>

~$ sudo usermod -a -G microk8s $USER

~$ sudo chown -f -R $USER ~/.kube

~$ su - $USER

~$ microk8s status --wait-ready

~$ sudo snap alias microk8s.kubectl kubectl

~$ microk8s join 10.0.0.5:25000/DIlwTwTyNskZhfyZyzZUqkskAvGfZoNL

After You’ve Run the “microk8s join” Command

You will no longer run the kubectl commands from node 2. Go ahead and log back into node 1, which is your “master” node.

~$ ssh azureuser@<public-ip-of-node1> 

 

Once you’ve logged into the master node, run this command to see the two nodes up and running:

~$ kubectl get nodes

 

Congratulations! You’ve set up a two-node Kubernetes cluster using MicroK8s! Wasn’t that easy?

Let’s keep going and create a deployment inside of our cluster!

Optional: Add-ons and Cluster Info

MicroK8s is a bare-bones Kubernetes setup, so it doesn’t include DNS management or storage for the pods in your cluster. However, it’s easy to enable these “add-ons” in one simple command:

~$ microk8s.enable dns storage
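To confirm the add-ons came up, you can watch the pods they create in the kube-system namespace (CoreDNS for dns, the hostpath provisioner for storage):

```shell
# The dns and storage add-ons run as pods in kube-system;
# wait until they all show STATUS Running
microk8s.kubectl get pods -n kube-system
```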

 

NOTE: There are many pre-configured add-ons (e.g., the Kubernetes dashboard and Prometheus) in MicroK8s that can be enabled in this manner. Check out the full list here.

 

View the Kubernetes API server URL with the following command:

~$ kubectl cluster-info

 

PRO TIP: You can talk to the API through a proxy by running the "kubectl proxy" command. This proxies HTTP connections to the API server while taking care of authentication. Try it out! Run "kubectl proxy" (it will say "Starting to serve on 127.0.0.1:8001" and won’t give you the prompt back), then open another browser tab (https://shell.azure.com) and curl the API server URL:

~$ curl 127.0.0.1:8001
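The proxy serves the full API surface, so you can curl specific paths as well; for example, while "kubectl proxy" is still running in the other tab:

```shell
# Ask the API server for its version info through the proxy
curl 127.0.0.1:8001/version

# List namespaces via the core API group
curl 127.0.0.1:8001/api/v1/namespaces
```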

Deploy an Application on MicroK8s

Let’s see how to run an application inside of our brand new MicroK8s cluster. We start by creating a deployment with two pods:

~$ kubectl create deployment microbot --image=dontrebootme/microbot:v1

~$ kubectl scale deployment microbot --replicas=2

 

To make our application accessible from the outside world, we need to create a service:

~$ kubectl expose deployment microbot --type=NodePort --port=80 --name=microbot-service

 

Now, let’s see all the Kubernetes resources we’ve created thus far:

~$ kubectl get all --all-namespaces

 

NOTE: You’ll see that "microbot-service" is a NodePort-type service. This means that our application is available on a port on the host machine. The ports are listed there as well. Port 80 is the port on the pod, and the other number (it was 30243 for me, but it will be a different number for you) is the port on the node. You can read more about NodePort-type services here.
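Rather than reading the node port out of the table, you can ask for it directly with kubectl’s jsonpath output (using the service name created above):

```shell
# Print just the node port Kubernetes allocated for the service
kubectl get service microbot-service \
  -o jsonpath='{.spec.ports[0].nodePort}'
```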

 

Since our service is now exposed on a port on the node, we can open up that port so that we can access the service from the internet. To open a port to a VM in Azure, we use Network Security Group (NSG) rules (make sure to change the "--destination-port-ranges" value to the node port for your service):

~$ az network nsg rule create \
 -g vmdeploy-rg \
 --nsg-name microk8s-nsg \
 -n k8s-service \
 --priority 500 \
 --source-address-prefixes "*" \
 --destination-port-ranges 30243 \
 --access Allow \
 --protocol Tcp \
 --description "Allow service port 30243 for microk8s"
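To double-check that the rule landed with the port you intended, you can read it back (a sketch; requires an authenticated az session):

```shell
# Read the rule back to confirm the port and Allow action
az network nsg rule show \
  -g vmdeploy-rg \
  --nsg-name microk8s-nsg \
  -n k8s-service \
  -o table
```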

 

Now that the port is open to the VM, you can go ahead and access it from a browser. Type this into a new browser tab (make sure to change the IP address to the public IP address of node 2 and the port to the node port for your service):

http://138.91.157.207:30243/

 

Here’s what you should see: the microbot web page served from your container.

Summary

In this short tutorial, we spun up a two-node Kubernetes cluster, created a deployment, and exposed that deployment via a service to access our web application from the internet. If you’re like me, you’re excited about the possibilities of running many different containerized applications in MicroK8s.

What’s next? How about deploying a microservices application called “Stan’s Robot Shop,” a sample shopping web app?

Check out the “Kubernetes Essentials” course on Linux Academy.
