
Top Ten Ways to Not Sink the Kubernetes Ship

Posted on March 28, 2019 by John Marx

As organizations increase their use of Kubernetes in public and hybrid cloud architectures, it is important not to let a maiden voyage end in a Titanic-style catastrophe. To ensure ongoing security, site reliability engineers must work hand-in-hand with the CISO's (Chief Information Security Officer) office to implement Kubernetes security. The following are ten useful considerations when deploying Kubernetes clusters.

Server Hardening

Kubernetes nodes rely on underlying virtual servers for both the master control plane and the worker nodes. It is important to use security tooling such as OpenSCAP, an open source implementation of the Security Content Automation Protocol (SCAP), to harden virtual machine images prior to their deployment in virtual private clouds. If the operating system is not hardened, its vulnerabilities may be exploited by rogue container workloads. Additionally, reducing the server's attack surface is key to preserving the isolation advantages of clusters and containers.
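As a rough sketch of that hardening step (assuming the SCAP Security Guide content is installed; the profile ID and data-stream path below are placeholders that vary by distribution and version):

```sh
# Evaluate a VM image or host against a hardening profile with OpenSCAP.
# Profile IDs and content paths differ per distribution; adjust as needed.
sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_cis \
  --results /tmp/oscap-results.xml \
  --report /tmp/oscap-report.html \
  /usr/share/xml/scap/ssg/content/ssg-ubuntu1804-ds.xml
```

The resulting report lists failed rules that can be remediated before the image is promoted into the virtual private cloud.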

Cluster Hardening

In addition to server hardening, cluster hardening is equally important. Kubernetes out of the box does not exactly default to a secure, enterprise-grade cluster. Fortunately, Aqua Security provides kube-bench, a utility for the automated scanning of clusters against the Kubernetes benchmark published by CIS (Center for Internet Security). Running kube-bench against a cluster prior to production deployment allows engineers to configure Kubernetes settings and adjust manifests to harden the cluster itself.
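One way to run it, sketched here with an illustrative manifest path (check the kube-bench repository for the current job manifest and supported benchmark versions), is as a one-off Kubernetes Job:

```sh
# Run Aqua's kube-bench inside the cluster and read its CIS findings.
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs job/kube-bench
kubectl delete job kube-bench
```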

Use the Right Installer

With over forty installation methods to choose from, it is difficult to know today which installers will emerge as future standards. Currently, when deploying in a hybrid cloud setting, it is important to have an installer capable of launching clusters on both on-premises and off-premises infrastructure. Kubernetes Operations (kops) is an enterprise-grade installer that produces a production-grade cluster and provides the configurability needed to customize the implementation. Employing an installer ensures a consistent, repeatable install, so quality can be sustained.
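A minimal kops sketch, with the cluster name, state store, and zones as placeholders:

```sh
# Create a small cluster with kops; all names and zones are examples.
export KOPS_STATE_STORE=s3://example-kops-state   # assumed S3 bucket for kops state
kops create cluster \
  --name=k8s.example.com \
  --zones=us-east-1a,us-east-1b \
  --node-count=3 \
  --yes
kops validate cluster
```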

Establish Immutable Clusters

The immutable server is a virtualization best practice. Constantly changing environments still require consistency, and changes made by hand at the server level are often missed by the repeatable automation that large-scale environments depend on. Installers help by allowing clusters to be launched and configured from bastion hosts or jump servers outside of the node clusters themselves. If the Kubernetes installer does not afford all of the administration that is required, a configuration management tool such as Ansible, Chef, Puppet, or Salt is a must. Any organization adopting Kubernetes is presumably doing so to allow for scale, and scale requires strict automation and standardization.
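As an illustrative sketch of that approach (Ansible chosen arbitrarily from the tools above; the inventory group and the specific tasks are assumptions), baseline node configuration can live in a playbook run from the bastion rather than in hand edits on servers:

```yaml
# Illustrative Ansible playbook: node configuration as code, not hand edits.
- name: Apply baseline node configuration
  hosts: k8s_nodes              # assumed inventory group of cluster nodes
  become: true
  tasks:
    - name: Ensure auditd is installed
      ansible.builtin.package:
        name: auditd
        state: present
    - name: Disallow root SSH logins
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PermitRootLogin'
        line: 'PermitRootLogin no'
      notify: restart sshd
  handlers:
    - name: restart sshd
      ansible.builtin.service:
        name: sshd
        state: restarted
```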

Consider the Cluster Ephemeral

At the time of this writing, Kubernetes is at version 1.14. A one-dot-anything is not a mature product, and Kubernetes is no different. Recent vulnerabilities in the runc container runtime and CVE-2018-1002105, a privilege-escalation flaw in the Kubernetes API server itself, require quick upgrades of the cluster components. The kops installer can specify which version of Kubernetes to implement, and this form of rolling cluster upgrade is critical when remediating vulnerabilities. Much in the way we consider container workloads to be ephemeral, we need to view our clusters that way as well. In-place upgrades may be difficult, especially in hardened environments. With cloud virtualization and advanced networking, vacating a cluster after the initialization of a new one is both feasible and pragmatic. Hygiene, meaning running current, patched versions of the Kubernetes components, is of key importance, and by launching fresh clusters and vacating old ones this can be done with minimal outage.
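With kops, a version bump and rolling replacement of nodes looks roughly like this (the cluster name and target version are placeholders):

```sh
# Move a kops cluster to a patched Kubernetes version, node by node.
kops edit cluster k8s.example.com                   # set spec.kubernetesVersion to the patched release
kops update cluster k8s.example.com --yes           # push the new configuration
kops rolling-update cluster k8s.example.com --yes   # replace nodes with updated ones
```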

Rotate Your Certificates

When Kubernetes is first launched, a number of TLS (Transport Layer Security) certificates, whether self-signed or issued by a certificate authority, are put in place within the cluster for North-South traffic from the Internet to the API server and for East-West peer-to-peer traffic between Kubernetes components on a virtual private network or cloud (VPN/VPC). These certificates guard critical sessions between endpoints such as the kubelet on a worker node and the API server on the master node. But like any secret, TLS certificates reside on disk and may be compromised by intruders. The automated installers are capable of rotating TLS credentials on an ongoing basis at regular intervals to prevent unwanted intrusion and the vulnerability caused by stale certificates.
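Installer-driven rotation can be complemented by the kubelet's own rotation settings; a minimal sketch of a KubeletConfiguration enabling them (serving-certificate bootstrapping still requires the resulting CSRs to be approved):

```yaml
# Kubelet configuration snippet enabling automatic certificate rotation.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
rotateCertificates: true     # rotate the kubelet client certificate before expiry
serverTLSBootstrap: true     # request kubelet serving certificates via the CSR API
```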

Implement RBAC

Role Based Access Control (RBAC) has become a standard part of the Kubernetes Authentication-Authorization-Admission security paradigm. Defining roles that grant specific permissions and then binding those roles to specific service accounts is a versatile and well-governed means of controlling access to cluster resources. Role bindings can also be constrained to individual namespaces, which helps lock down the kube-system and default namespaces in production deployments.
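A minimal example, with the namespace and service account names as placeholders:

```yaml
# A namespaced Role granting read-only access to pods, bound to one service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: app-team
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: app-team
subjects:
  - kind: ServiceAccount
    name: app-sa              # placeholder application service account
    namespace: app-team
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```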

Implement Pod Security Policy

Pod Security Policies are a fairly new capability within Kubernetes that allow roles and cluster roles to carry pod security requirements that govern deployments. Pod Security Policies are enforced through an admission controller, so pods that do not meet a bound policy are rejected before they are ever scheduled. A typical example would be a policy that restricts container workloads from running in privileged mode. A large number of controls are provided, enabling restrictions on host networking and ports, mounted volume types, and many other aspects of pod replicas and deployed services.
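A sketch of such a policy, using the policy/v1beta1 API current at the time of writing (the policy only takes effect once the PodSecurityPolicy admission controller is enabled and use of the policy is granted to the relevant service accounts via RBAC):

```yaml
# A restrictive PodSecurityPolicy: no privileged pods, no host namespaces,
# non-root users, and a limited set of volume types.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```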

Manage Secrets

Kubernetes provides the Secret resource to manage credentials required by container workloads and services. By creating a Secret, you may store usernames, passwords, security tokens, and other credentials in the etcd datastore in base64-encoded or encrypted form. These secrets may then be passed to pods at deployment time by referencing them in the pod spec. They can be exposed to container applications as environment variables set at pod instantiation or written to a specified mounted volume for the application to read once running. Keeping credentials in Secrets prevents repository-based assets from containing them and allows them to be rotated at regular intervals. Application services that need to access other services, such as databases, may then use these secrets for East-West peer-to-peer sessions within microservices environments.
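A minimal sketch of both halves, with the credential values and image name as placeholders:

```yaml
# Store database credentials in a Secret and surface one key to a pod
# as an environment variable at instantiation time.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: app_user          # placeholder values; rotate regularly
  password: change-me
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest   # placeholder image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```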

Implement Network Policies

Kubernetes nodes may be penetrated just as any server can. Popular, easy-to-use network overlays for Kubernetes such as flannel do not allow for the implementation of network policies (at the time of this writing). With over a dozen overlays to choose from, there is wide variation in their features. Calico is a popular overlay that does provide for the implementation of network policies. These policies, like any firewall, are an invaluable means of limiting attack vectors both inside and outside the VPC. It is important to remember that just because an endpoint is behind a firewall does not mean it is not vulnerable. Attackers use weak internal networks to land and expand once a node server has been breached. Implemented with prudence, network policies prevent attackers from exploiting a server breach beyond its initial foothold. Penetration tests, using resources such as OWASP (Open Web Application Security Project) testing guidance and Aqua Security's kube-hunter, are great ways to harden networks at the time of deployment and throughout their use.
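A common pattern, sketched here with illustrative namespace, labels, and port, is to deny all ingress by default and then allow only the flows an application actually needs:

```yaml
# Default-deny ingress for the namespace, then permit only API-tier pods
# to reach database pods on the database port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app-team
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: app-team
spec:
  podSelector:
    matchLabels:
      tier: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: api
      ports:
        - protocol: TCP
          port: 5432
```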

These suggestions and others are covered in Linux Academy’s Kubernetes Security Course.
