Is Kubernetes Serverless?

Posted on February 20, 2019 by Chad Crowell

The Kubernetes Trend

If you take a look at where the IT industry is going, you will start to see a trend: a layer of complexity added to the relationship between applications and infrastructure. No longer can you draw a straight line from the application to the machine it runs on. Developers have been trying to get away from having to manage infrastructure for years. It’s no fun having to provision, manage, and patch a multitude of disparate servers, new and old.

As a result, that layer of abstraction between the application and the underlying infrastructure has led to the invention of a number of technologies, one of them being Kubernetes. Not only can we ensure our application is going to run on a consistent Docker container image, no matter what environment, but we don’t have to manage the containers and keep track of where and how many are running at all times.

The term “serverless” — the ability to run code without having to choose what to run it on — shares that same overall goal, which is to eliminate the need for a developer of an application to manage infrastructure, and have that taken care of by another service or component. You could derive from this that Kubernetes is serverless, but that’s not quite right. The next step for Kubernetes is, however, serverless, and here, we’ll talk about why.

What is Kubernetes?

Kubernetes is just as important for the operations team as it is for the developers. In the past, at the end of every release cycle, developers would package up their entire application and throw it over the wall to the operations team. The operations team would then have to deploy it to one or more servers and manage it throughout its lifetime. When servers failed, the ops team would manually migrate the application to a set of “healthier” servers. And as applications transform from monoliths to microservices, managing each and every piece of the application only gets harder. Kubernetes addresses both problems, easing the management of the backend infrastructure throughout the application’s life.

Kubernetes makes the management side of infrastructure easier, but it in no way manages itself. As the application scales up, down, in, or out, the infrastructure still needs to change, be replaced, or be upgraded. What Kubernetes does do is bring developers closer to the end user and to the issues the operations team faces while maintaining the application. By streamlining the deployment process, developers can ship new features and improve the user experience much faster.

The Why

The Kubernetes API is designed to interact with nodes, which are physical or virtual machines. One benefit of using Kubernetes is that it monitors the health of these nodes and can alert you when they are unhealthy. When Kubernetes schedules a pod, it first ensures the node has adequate CPU and memory to run the pod; those CPU and memory reservations are then enforced on the node through Linux control groups (cgroups).
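As a rough sketch of what the scheduler checks, a pod can declare the CPU and memory it needs in its manifest. The pod and container names below are illustrative, not from the article:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical pod name
spec:
  containers:
    - name: web
      image: nginx:1.15     # any container image works here
      resources:
        requests:           # the scheduler only places the pod on a node
          cpu: "250m"       # with at least this much CPU and memory free
          memory: "128Mi"
        limits:             # enforced on the node via cgroups
          cpu: "500m"
          memory: "256Mi"
```

If no node has the requested capacity available, the pod stays in the Pending state until one does.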

For Kubernetes to truly become serverless, it has to shed the assumption that a physical or virtual machine must back every node that the Kubernetes API interacts with. For this to happen, two things are needed. The first is a container runtime that behaves like a pod, isolating processes in separate namespaces. The second is an add-on or plugin to the Kubernetes API itself that treats such a serverless instance as just another pod.

The good news is that adoption is in full swing. Major public cloud providers, including AWS and Azure, now offer serverless container instances. For example, Azure Container Instances runs your Docker containers without you provisioning any virtual machines; after configuring a few settings, you can access the running application through an Azure-provisioned fully qualified domain name (FQDN). AWS Fargate does essentially the same thing.

The virtual kubelet project has been making progress on bridging these serverless container instances and the Kubernetes API: it registers itself with the cluster as a node and forwards pod operations to a backing provider such as a container-instance service. With that integration in place, the dependency on “nodes” in the Kubernetes environment may eventually disappear.
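Because the virtual node has no real machine behind it, the virtual kubelet taints it so that ordinary pods are not scheduled there by accident; a pod opts in by tolerating that taint. A minimal sketch, assuming the `virtual-kubelet.io/provider` taint key and node labels used by the virtual kubelet project (check your provider’s documentation for the exact values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: serverless-demo        # hypothetical pod name
spec:
  containers:
    - name: web
      image: nginx:1.15
  # Target the node registered by the virtual kubelet.
  nodeSelector:
    type: virtual-kubelet
  # The virtual node is tainted so regular workloads avoid it;
  # this pod explicitly tolerates that taint.
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists
```

From the cluster’s point of view, this pod is scheduled like any other, but it actually runs on a serverless container instance rather than on a machine you manage.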

If you’d like to learn more about Kubernetes, check out the many different Kubernetes courses and hands-on labs at Linux Academy.
