
Exposing Services in Kubernetes

Hands-On Lab

 

Will Boyd

DevOps Team Lead in Content

Length: 01:00:00

Difficulty: Intermediate

With deployments, you can create a dynamically managed set of replica pods. But this introduces the need for an equally dynamic way to access them. Services provide a layer of abstraction over pods and other entities, allowing dynamic, highly available access to the necessary components of your applications. In this lab, you will have the opportunity to work with services by creating them on top of existing deployments.


The scenario

Our company has just deployed two components of a web application to a Kubernetes cluster, using deployments with multiple replicas. We need a way to provide dynamic network access to these replicas so that there will be uninterrupted access to the components whenever replicas are created, removed, or replaced. One deployment is called auth-deployment, an authentication provider that needs to be accessible from outside the cluster. The other is called data-deployment, and it is a component designed to be accessed only by other pods within the cluster.

The team wants us to create two services to expose these two components. We'll examine the two deployments, and create two services that meet the following criteria:

auth-svc

  • The service name is auth-svc.
  • The service exposes the pod replicas managed by the deployment named auth-deployment.
  • The service listens on port 8080 and its targetPort matches the port exposed by the pods.
  • The service type is NodePort.

data-svc

  • The service name is data-svc.
  • The service exposes the pod replicas managed by the deployment named data-deployment.
  • The service listens on port 8080 and its targetPort matches the port exposed by the pods.
  • The service type is ClusterIP.

Note: All work should be done in the default namespace.

Get logged in

Use the credentials and server IP in the hands-on lab overview page to log in with SSH.

Survey the landscape

Just to see what's already been deployed, and what we're dealing with, run this:

[user@host]$ kubectl get deploy

We should see auth-deployment and data-deployment listed, with some details about each.

Create the auth-svc service

Examine the auth-deployment. Take note of the labels specified in the pod template (app), as well as the containerPort exposed by the containers:

[user@host]$ kubectl get deployment auth-deployment -o yaml
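If you'd rather pull out just those two fields instead of scanning the full YAML, jsonpath can do it. A quick sketch, assuming the pod template has a single container with a single port:

```shell
# Pod template labels (look for the "app" label)
kubectl get deployment auth-deployment \
  -o jsonpath='{.spec.template.metadata.labels}{"\n"}'

# containerPort of the first (and only) container
kubectl get deployment auth-deployment \
  -o jsonpath='{.spec.template.spec.containers[0].ports[0].containerPort}{"\n"}'
```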

Create a service descriptor file (using whichever text editor you like) called auth-svc.yml:

apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  type: NodePort
  selector:
    app: auth
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80

The selector should match the app label, and the targetPort should match the containerPort, from the YAML output of the previous command.

Create the service in the cluster:

[user@host]$ kubectl apply -f auth-svc.yml
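As an aside, kubectl can also generate an equivalent service imperatively with kubectl expose. A sketch, assuming the same 8080-to-80 port mapping we found above:

```shell
# One-liner alternative to writing the YAML by hand; the service's
# selector is derived from the deployment's selector automatically
kubectl expose deployment auth-deployment \
  --name=auth-svc \
  --type=NodePort \
  --port=8080 \
  --target-port=80
```

The declarative file is still preferable for anything you want to keep in version control, but expose is handy for quick experiments.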

Create the data-svc service

Like we did with the auth-deployment, let's examine some of the data-deployment details. Again, note the labels specified in the pod template, as well as the containerPort exposed by the containers:

[user@host]$ kubectl get deployment data-deployment -o yaml

Create a service descriptor file (again, using whichever text editor you like) called data-svc.yml:

apiVersion: v1
kind: Service
metadata:
  name: data-svc
spec:
  type: ClusterIP
  selector:
    app: data
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80

The only real differences between this and the auth-svc.yml file we created are the name:, type:, and app: values. Once the file is ready, create the service in the cluster:

[user@host]$ kubectl apply -f data-svc.yml
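Since data-svc is ClusterIP-only, it isn't reachable from outside the cluster, but we can verify it from the inside with a throwaway pod. A sketch; the busybox image is an assumption, and the response body depends on what the data pods actually serve:

```shell
# Spin up a temporary pod, hit the service by DNS name, then clean up.
# --rm deletes the pod when the command exits.
kubectl run tmp-test --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://data-svc:8080
```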

Now, to check on things, we can run this:

[user@host]$ kubectl get svc

We should see both auth-svc and data-svc running. But to make sure the service is mapping to the pods correctly, we'll run:

[user@host]$ kubectl get ep auth-svc
[user@host]$ kubectl get pods

The first command will show us that auth-svc has two endpoints, and the second will show us the two auth pods backing them (their IPs should match the endpoint addresses).

We can run those two commands again for data-svc and see similar results, except that three pods are involved rather than two.
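Because auth-svc is a NodePort service, it should also answer from outside the cluster. The assigned node port is allocated dynamically, so look it up first. A sketch; substitute one of your nodes' IP addresses for the placeholder:

```shell
# Discover the dynamically assigned node port for auth-svc
NODE_PORT=$(kubectl get svc auth-svc \
  -o jsonpath='{.spec.ports[0].nodePort}')

# Hit the service from outside the cluster (replace <node-ip>)
curl http://<node-ip>:${NODE_PORT}
```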

Conclusion

We needed a setup that gives our web application's components uninterrupted access as pods are created, removed, or replaced. With both services in place, everything is up and running. Congratulations!