
In the first part of this blog, we covered use cases for both on-premises and publicly-hosted Kubernetes deployments. In this part, we’ll be covering use cases in a multi-cloud public deployment as well as one in a hybrid deployment.
There can be many factors that cause a company to choose one public cloud provider over another, such as price, features, or location. To justify using more than one provider, each needs to offer either similar functionality, so the same application can run on both, or some unique feature that makes splitting your application across providers worth the added dependency. Because Kubernetes has been adopted by many public cloud providers, we're going to base our use case on the ability to deploy the same application on multiple providers, with the main reason for this strategy being access to multiple unique locations, allowing us to minimize response times and increase availability. Our use case is a media streaming application that needs to scale quickly in response to customer demand, keep latency as low as possible, and return the stored content to the consumer. Using a product such as Akamai Global Traffic Management, we're going to use DNS-based load balancing to send each customer to the best site for them and, from there, let them use one of the identical applications deployed on Kubernetes.
So, as shown in the diagram below, the customer would first make a DNS request, which would be answered by Akamai and then seamlessly directed to the best location for the application based on the configuration of the service. Next, the request would go through a locally deployed load balancer and be sent to the customer-facing part of the application in an application node cluster. Once the customer requests their desired media, it is selected from the CDN and returned directly to the customer.
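To make the flow above concrete, here is a minimal sketch of the kind of manifest that could be applied unchanged to each provider's cluster. The names, image, and ports are illustrative assumptions, not taken from an actual deployment:

```yaml
# stream-app.yaml -- illustrative; applied identically to every provider's cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stream-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stream-app
  template:
    metadata:
      labels:
        app: stream-app
    spec:
      containers:
      - name: stream-app
        image: registry.example.com/stream-app:1.0.0  # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: stream-app
spec:
  type: LoadBalancer  # each cloud provisions its own local load balancer
  selector:
    app: stream-app
  ports:
  - port: 80
    targetPort: 8080
```

A DNS-based traffic manager such as Akamai GTM would then be pointed at the load balancer address each provider assigns, and its DNS answers steer customers to the best site.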

Now, the final scenario and use case we are going to discuss is a hybrid model, where there is an on-premises version of the application along with a public cloud deployment that allows the application to auto-scale quickly should traffic and load increase. The auto-scaled deployment should also remove servers once the need for extra capacity passes, saving the company money.
For our use case, we are going to use another media streaming service. In this case, our CDN is hosted on-premises, along with the primary deployment of our application, which runs on a fixed number of node clusters we can host there. We will again use a service such as Akamai, configured so that when usage reaches a certain threshold it sends traffic to a public cloud. The public cloud is configured with a single node cluster available as a warm standby, so it can receive that traffic automatically and, as traffic increases, auto-scale as needed. Configuration-wise, the application is identical to the on-premises clusters and talks back to the on-premises CDN, where it retrieves the media and streams it back to the customer.
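One way to get the auto-scaling (and scale-back) behavior described above is to run a HorizontalPodAutoscaler against the warm-standby Deployment in the public cloud. This is a sketch only: the Deployment name and thresholds are illustrative assumptions.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: stream-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stream-app   # hypothetical Deployment name
  minReplicas: 1       # a single warm-standby replica when idle
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out when average CPU passes 70%
```

When the burst of traffic passes, the autoscaler shrinks the cluster back toward `minReplicas`, which is what saves the company money.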

The last use cases I'm going to discuss can actually be used in any of these hosting scenarios: seamless deployments and rollbacks of code. One of the advantages of Kubernetes is that you can use it to deploy pods running a new version of your application by using kubectl and your manifest file, along with a tag for the version.
For a seamless deployment, you would have pods running both versions until, finally, all pods are on the new version. This rolling deployment allows your application to stay up and responsive to your customers, provided there are no dependencies, such as DB changes, that might require downtime.
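A sketch of what that rolling behavior looks like in a Deployment spec; the surge and unavailability numbers are illustrative, and `stream-app` is a hypothetical name carried over for the example:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # bring up one extra pod on the new version at a time
      maxUnavailable: 0   # never drop below the desired replica count
```

Rolling out a new tag is then `kubectl set image deployment/stream-app stream-app=registry.example.com/stream-app:1.1.0`, and `kubectl rollout undo deployment/stream-app` steps back to the previous revision if something goes wrong.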
And, finally, for an A/B-type deployment, you would bring up new pods running the newer version and, once they are up and running, simply change your load balancer to point to the new pods or shut down the older pods. Where this could be considered the better option is if you need to roll back – you would simply change the load balancer back or change which pods are up and running.
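One way to sketch this pattern in Kubernetes itself: run the two versions as separate Deployments and switch the Service selector between them. The `version` labels and names here are illustrative assumptions.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: stream-app
spec:
  selector:
    app: stream-app
    version: v1   # change to v2 to cut over, or back to v1 to roll back
  ports:
  - port: 80
    targetPort: 8080
```

The cutover (and the rollback) then becomes a single selector change, for example `kubectl patch service stream-app -p '{"spec":{"selector":{"app":"stream-app","version":"v2"}}}'`, with the old pods left running until you are confident enough to remove them.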
As you can see, the use cases for Kubernetes vary, and it can be used in multiple ways based upon the needs of both the business and the application. The best part is that Kubernetes can make your application portable, scalable, and highly available if designed properly.