Setting Up a Frontend Load Balancer for the Kubernetes API

Hands-On Lab

 


Will Boyd

DevOps Team Lead in Content

Length: 01:00:00

Difficulty: Intermediate

One way to manage access to the Kubernetes API across distributed control nodes is to use a load balancer. This activity will guide you through the process of setting up an Nginx load balancer to manage traffic to the Kubernetes API across multiple nodes. You’ll learn more about the relationship between the Kubernetes API and the different Kubernetes components, such as kubelet and kubectl. After completing this activity, you will have a basic understanding of how to load balance Kubernetes API traffic.

What are Hands-On Labs?

Hands-On Labs are scenario-based learning environments where learners can practice without consequences. There is no risk of compromising a real system and no money spent on expensive software. Practice real-world skills without the real-world risk, no assembly required.


Introduction

Our team is working on setting up a new Kubernetes cluster. Two Kubernetes controllers have been configured, but the team wants to use a load balancer for managing traffic to the Kubernetes API. We have the task of setting up an Nginx load balancer that will balance traffic across the Kubernetes API services running on the two Kubernetes controllers.

Let's get logged into the server that will be our load balancer using the credentials provided on the lab page.

Install Nginx on the load balancer server.

We can install Nginx, and set it to fire up when the system boots, like this:

sudo apt-get install -y nginx
sudo systemctl enable nginx

Then we check to make sure it's running before we move on:

sudo systemctl status nginx
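
If you'd rather have a one-line check, systemctl is-active prints just the service state, which should be active at this point:

sudo systemctl is-active nginx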

Configure Nginx to balance Kubernetes API traffic across the two controllers.

To get Nginx acting as a load balancer, first create a directory to hold the stream configuration, then open the main Nginx config file:

sudo mkdir -p /etc/nginx/tcpconf.d
sudo vi /etc/nginx/nginx.conf

Add the following line at the bottom of nginx.conf. It has to sit at the top level of the file, outside the http block, because the stream configuration we're about to include can't be nested inside http:

include /etc/nginx/tcpconf.d/*;

Create a config file that sets up Kubernetes API load balancing, using this combination of cat and tee commands:

cat << EOF | sudo tee /etc/nginx/tcpconf.d/kubernetes.conf
stream {
    # The pool of Kubernetes API servers to balance across
    upstream kubernetes {
        server <CONTROLLER 0 PRIVATE IP>:6443;
        server <CONTROLLER 1 PRIVATE IP>:6443;
    }

    # Accept API traffic on ports 6443 and 443 and forward it to the pool
    server {
        listen 6443;
        listen 443;
        proxy_pass kubernetes;
    }
}
EOF

Be sure to edit the file (sudo vi /etc/nginx/tcpconf.d/kubernetes.conf) afterwards to replace the placeholders with the correct controller IP addresses. They're back on the Linux Academy lab page with the login info.
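
Before reloading, it's worth asking Nginx to validate the new configuration. The -t flag checks the syntax without affecting the running service:

sudo nginx -t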

Reload the Nginx configuration:

sudo nginx -s reload
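
As an optional sanity check, you can confirm that Nginx is now listening on both ports. On most distributions the ss utility will show the TCP listening sockets (-t for TCP, -l for listening, -n for numeric ports, -p for the owning process):

sudo ss -tlnp | grep nginx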

You can verify that everything is working by making a request to the Kubernetes API through the load balancer:

curl -k https://localhost:6443/version

This request should return some Kubernetes version data in JSON format.
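
Since our stream configuration also listens on port 443, the same request should succeed over the default HTTPS port:

curl -k https://localhost/version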

Conclusion

That was fairly painless, wasn't it? And quick. We took an existing server, installed Nginx on it, and turned it into a load balancer for the Kubernetes API in pretty short order. Congratulations!