
One of the questions I get asked quite often by people who are just starting out, or are simply not used to the “new” way things are done in IT, is, “What is the cloud?” This, I think, is something you get many different answers to depending on who you ask. I like to think of it this way: The cloud is a grouping of resources (compute, storage, network) that are available to be used in a manner that makes them both highly available and scalable, either up or down, as needed. If I have an issue with a resource, I need to be able to replace that resource quickly — and this is where containers come in. They are lightweight, can be started quickly, and allow us to focus a container on a single job. Containers must also be replaceable. If I have a DB container, for instance, nothing about it can be “special,” so that when it is replaced I lose no operational capability.

How does this relate to Kubernetes, and how does it allow us to meet the goal of being “cloudy”?

One of the things Kubernetes provides is Services. By registering in DNS, Services decouple IP addresses from Pods and allow us to separate what is going on behind the scenes from what is seen by the consumers of those services. No matter where a container runs behind the Service, consumers always point at the Service. This provides failover: when a Pod dies, another Pod can serve the traffic while a replacement is built.
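As a minimal sketch of this, here is a Service manifest that routes traffic by label rather than by IP address (the names `web-svc` and `app: web` are placeholders, not from any specific deployment):

```yaml
# A Service gives a set of Pods a stable DNS name and virtual IP.
# Traffic sent to "web-svc" is load-balanced across every Pod whose
# labels match the selector, no matter which node the Pod runs on.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # route to Pods labeled app=web
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # port the container actually listens on
```

Inside the cluster, clients simply resolve `web-svc` (or the fully qualified `web-svc.<namespace>.svc.cluster.local`) in DNS; when a Pod is replaced, the Service automatically routes around it.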

Speaking of Pods, these are the smallest deployable unit in Kubernetes and can hold one or more containers. I like to think of a Pod as a virtual machine host in a traditional data center: machines with a similar purpose are placed there for lowest latency, but if that host goes down, we fail over to another host to serve traffic while the host is replaced. Pods work on the same principle. We place like containers together, and we fail over to like Pods in the event of an emergency.
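To make that concrete, here is a sketch of a Pod holding two containers that share a network namespace (the names and images here are illustrative placeholders):

```yaml
# A Pod groups tightly coupled containers. They share an IP address
# and can talk to each other over localhost, much like processes
# co-located on a single host.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web            # the label a Service selector would match
spec:
  containers:
    - name: app
      image: nginx:1.25         # placeholder main container
      ports:
        - containerPort: 8080
    - name: log-shipper
      image: busybox:1.36       # placeholder sidecar with a related job
      command: ["sh", "-c", "tail -f /dev/null"]
```

In practice you rarely create bare Pods like this; a controller such as a Deployment creates them for you so that a failed Pod is replaced automatically.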

I am a huge fan of continuous integration as well as continuous deployment. Deployments in Kubernetes allow us to version what we build and to “roll out” new versions with zero downtime. We can also roll back and track the deployments so that we can audit our changes. The ability to interact with the control plane via an API means we can integrate with our favorite CI/CD server solution so that we can make a complete pipeline from code to container to deployment in an automated fashion.
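A Deployment that performs those zero-downtime rollouts might look like the following sketch (again, the name `web` and the image tag are placeholders):

```yaml
# A Deployment keeps a set of replica Pods running and replaces them
# incrementally on updates: old Pods keep serving until their
# replacements are up, so a rollout causes no downtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down at a time during a rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: nginx:1.25   # change this tag to roll out a new version
          ports:
            - containerPort: 8080
```

Changing the image tag and re-applying the manifest triggers a rolling update; `kubectl rollout history deployment/web` lets you audit revisions, and `kubectl rollout undo deployment/web` rolls back to the previous one.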

Moving forward, it is my opinion that containers are here to stay and that management techniques for massive container farms will be one of the most important skills an administrator or architect can have. Right now, Docker and Kubernetes are the primary drivers in this space, and if you want to be fluent in cloud, you need to know these technologies. To get started, check out my Kubernetes Quick Start course. I will show you how to set up a cluster, as well as demonstrate each of the technologies we’ve talked about here: Services, DNS, and Deployments. It’s the perfect time to start your journey to becoming a container expert, and Kubernetes is a great skill to add to your skill set, so let’s get started!

