
Linux Academy Red Hat Certified Specialist in OpenShift Administration (ex280) Prep Course

Course

Intro Video

Photo of Treva Williams

Treva Williams

Training Architect

Length

09:26:02

Difficulty

Advanced

Videos

25

Hands-on Labs

7

Course Details

This course is specifically designed to prepare students for the Red Hat Certified Specialist in OpenShift Administration (ex280 v3.9) exam, but is also suitable for experienced Linux administrators seeking to learn OpenShift cluster administration.

Syllabus

Introduction

About the Instructors

About the Instructors

00:01:17

Lesson Description:

Meet Treva Williams, the Training Architect for the ex280 Prep Course! In this video, we introduce the topics that this course will cover, as well as who will benefit the most from taking this course.

About the Course

Course Overview

00:01:32

Lesson Description:

This course is intermediate-to-advanced and is specifically designed to aid you in preparing for the Red Hat ex280 exam, but can also be used to learn basic OpenShift administration. The course starts with instructions for setting up a three-node OpenShift cluster using OKD, the upstream version of OpenShift, for those who wish to follow along on their own workstations; this cluster will be used to practice the OpenShift administration tasks included in the exam objectives.

About the Exam

About the Red Hat Certified Specialist in OpenShift Administration

00:02:17

Lesson Description:

Red Hat Certified Specialist in OpenShift Administration exam objectives:

OpenShift Container Platform general configuration and management:
- Understand and use the command line and web console.
- Create and delete projects.
- Import, export, and configure Kubernetes resources.
- Configure persistent registry storage.
- Examine resources and cluster status.
- View logs.
- Troubleshoot common problems.

Docker image management:
- Understand and work with image registries.
- List images.
- Load images from archive files.
- Use image tags.
- Pull and push images.

Users and policies management:
- Create and delete users.
- Modify user passwords.
- Modify user and group permissions.

Application creation and management:
- Provision persistent application storage.
- Deploy applications using Source-to-Image (S2I).
- Use Git to configure applications.
- Edit and import application templates.
- Assemble an application from existing components.
- Deploy multi-container applications.
- Create containerized services.
- Create and edit external routes.
- Secure routes using TLS certificates.

Monitoring and tuning:
- Install and configure metrics.
- Limit resource usage.
- Scale applications to meet increased demand.
- Control pod placement across cluster nodes.

Register for the exam by purchasing an exam license on the Red Hat website.

OpenShift Architecture

Introduction to OpenShift

00:04:52

Lesson Description:

OpenShift Container Platform, abbreviated OCP, is a platform as a service (PaaS) offering from Red Hat that brings together Docker and Kubernetes, and provides an API to manage these services. OpenShift has a microservices-based architecture of smaller, decoupled units that work together, running on top of a Kubernetes cluster.

Architectural Overview

00:05:08

Lesson Description:

The standard OpenShift cluster is composed of at least one master node (with 2 or more masters in an HA environment) and one or more nodes. Kubernetes manages containerized applications across a set of containers or hosts, and provides deployment mechanisms, application scaling, and maintenance, while Docker packages, instantiates, and runs containerized applications. OpenShift is a layered system designed to expose underlying Docker-formatted container images and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer.

Workstation Setup

Instructor Config

About My Setup

00:02:41

Lesson Description:

My setup:
- 3 Intel i5 NUCs (1 master, 1 infra node, 1 compute node)
- 32 GB RAM
- 250 - 500 GB SSDs
- 4 CPU cores
- CentOS 7.5 with custom partitioning

System requirements:
- 3-5 KVM guests (1 master, 2 nodes, optional registry and NFS storage node)

Minimum resources (production):
- 16 GB RAM
- 4 (v)CPUs
- 40 GB disk
- RHEL/CentOS x86_64 7.3 or higher

Registry requirements:
- 16 GB RAM
- 2 vCPUs
- 15 GB disk + 15 GB unallocated space for the Docker storage backend
- CentOS/RHEL 7.3-7.4 x86_64

Minimum resources (dev/PoC):
- 4 GB RAM
- 40 GB disk
- 2 vCPU cores
- CentOS/RHEL 7.3 x86_64 or higher

Registry requirements:
- 4 GB RAM
- 2 (v)CPUs
- 15 GB disk + at least 2 GB unallocated space for Docker LVM storage
- RHEL/CentOS x86_64 7.3 or higher

Or just use the hands-on labs ;)

Using KVM

Installing KVM Components

00:02:56

Lesson Description:

First, verify your CPU supports virtualization:

grep -E '(vmx|svm)' /proc/cpuinfo
We should see either vmx or svm in the output; otherwise, the CPU does not support virtualization. Install KVM and associated packages:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils
Enable and start libvirtd:
systemctl enable --now libvirtd
Verify that the KVM kernel module is loaded:
$ lsmod | grep kvm
kvm_intel             162153  0
kvm                   525409  1 kvm_intel
If you're running CentOS/RHEL 7 minimal, virt-manager may not start unless the X Window System package group is installed:
yum install "@X Window System" xorg-x11-xauth xorg-x11-fonts-* xorg-x11-utils -y
If not running as root (which you shouldn't be), add your $USER to the libvirt and kvm groups:
sudo usermod -aG libvirt,kvm $USER
Start the virt-manager GUI:
sudo virt-manager

The ShadowMan Network

00:02:47

Lesson Description:

All KVM guests to be used as OpenShift nodes will need to be connected to the same network, which can be achieved by creating a bridge in KVM:
- Start up virt-manager.
- Go into Menu --> Connection details --> Virtual networks.
- Click + to add a network.
- Give it the name shadowman.
- Set the IP range.
- Enable DHCPv4.
- Click Next.
- Skip the IPv6 details.
- Select Forward to physical network.
- For the destination, select your network interface (eth0, eno1, wlps0, etc.).
- Set Mode to NAT.
- Optional: set a domain name (or leave it as shadowman).
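If you prefer the CLI to the virt-manager GUI, a roughly equivalent NAT network can be defined with virsh. This is only a sketch: the 192.168.100.0/24 address range, the virbr1 bridge name, and the shadowman.xml filename are assumptions, so adjust them to fit your own environment:

cat > shadowman.xml <<'EOF'
<network>
  <name>shadowman</name>
  <!-- NAT forwarding to the physical network -->
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <domain name='shadowman'/>
  <!-- assumed address range; pick one that does not collide with your LAN -->
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.254'/>
    </dhcp>
  </ip>
</network>
EOF
sudo virsh net-define shadowman.xml
sudo virsh net-autostart shadowman
sudo virsh net-start shadowman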

Build the OpenShift Master

00:07:35

Lesson Description:

Create the OpenShift master in KVM with at least 4 GB of RAM and a 20 GB disk. Use custom partitioning to leave space available for Docker thinpool storage.
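A virt-install command for the master might look like the sketch below, mirroring the node commands in the next lesson. The disk size, ISO path, bridge name, and vCPU count are assumptions; size them to match your own hardware and leave unallocated space for the Docker thinpool:

sudo virt-install --name=master \
  --file=/var/lib/libvirt/images/master.qcow2 --file-size=40 --graphics spice \
  --vcpus=2 --ram=4096 \
  --cdrom=/path/to/Downloads/CentOS-7-x86_64-Minimal-1810.iso \
  --network bridge=virbrN --os-type=linux --os-variant=centos7.0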

OpenShift Compute Nodes

00:07:06

Lesson Description:

In addition to the OpenShift master, you will need a compute and infra node to replicate the environment used for the course. The nodes can be configured using the virt-manager GUI or with the virt-install command. Create the compute node:

sudo virt-install --name=compute \
  --file=/var/lib/libvirt/images/compute.qcow2 --file-size=40 --graphics spice \
  --vcpus=1 --ram=4096 \
  --cdrom=/path/to/Downloads/CentOS-7-x86_64-Minimal-1810.iso \
  --network bridge=virbrN --os-type=linux --os-variant=centos7.0
Create the infra node:
sudo virt-install --name=infra \
  --file=/var/lib/libvirt/images/infra.qcow2 --file-size=40 --graphics spice \
  --vcpus=1 --ram=4096 \
  --cdrom=/path/to/Downloads/CentOS-7-x86_64-Minimal-1810.iso \
  --network bridge=virbrN --os-type=linux --os-variant=centos7.0

Host Preparation

Base Packages and Docker Configuration

00:09:01

Lesson Description:

Once you have created your KVM nodes, you will need to install base packages on all nodes before moving any further. You can quickly and easily run updates and install needed dependencies with the basepkgs.sh bash script. Download the basepkgs.sh script from the content-openshift-ex280/release-3.9 branch onto all nodes:

curl -o basepkgs.sh \
  https://raw.githubusercontent.com/linuxacademy/content-openshift-ex280/release-3.9/scripts/basepkgs.sh
Execute the script as root:
bash basepkgs.sh

DNS Resolution for OpenShift

00:06:29

Lesson Description:

OpenShift requires wildcard DNS resolution in order to resolve OpenShift routes. This can be configured either with DNSMasq, a lightweight internal DNS resolver, or by using a public wildcard DNS resolver like xip.io or nip.io. In this video, I will show you how to configure your OpenShift nodes with both solutions.
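As a rough sketch of the dnsmasq approach, a single wildcard line resolves every route hostname under an apps subdomain to the node running the router. The domain, IP address, and drop-in filename below are assumptions; substitute your own:

cat > /etc/dnsmasq.d/openshift.conf <<'EOF'
# resolve *.apps.example.com (assumed subdomain) to the infra node running the router (assumed IP)
address=/apps.example.com/192.168.100.21
EOF
systemctl restart dnsmasq

With the public-resolver option, no local configuration is needed; a default subdomain such as apps.192.168.100.21.nip.io already resolves to that address.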

Upstream Setup

Install OKD with openshift-ansible

00:11:26

Lesson Description:

Both the Enterprise and Upstream versions of OpenShift 3.x are installed using a series of Ansible playbooks included in the openshift-ansible package. In this video, we will walk through the process of building an inventory specifically for OpenShift, then installing a cluster using prerequisites.yml and deploy_cluster.yml.
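As a very small sketch of what that looks like, the inventory below is heavily abbreviated and uses placeholder hostnames; a real inventory for this course carries more variables, and the playbook path assumes openshift-ansible was installed from the RPM rather than cloned from Git:

# inventory.ini (abbreviated sketch, hostnames are placeholders)
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=origin
openshift_release=v3.9

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com openshift_node_labels="{'region': 'master'}"
infra.example.com openshift_node_labels="{'region': 'infra'}"
compute.example.com openshift_node_labels="{'region': 'primary'}"

# run the prerequisites playbook, then deploy the cluster
ansible-playbook -i inventory.ini /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i inventory.ini /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml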

OpenShift Administration

Users, Projects, & Roles

Managing Users

00:08:00

Lesson Description:

In this video, we discuss managing users, projects, and roles.
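For reference, a few of the commands involved, sketched against an HTPasswd identity provider; the username, password, and htpasswd file path are assumptions tied to that particular setup:

# add (or update) a user in the htpasswd file used by the identity provider
htpasswd -b /etc/origin/master/htpasswd alice redhat123
# grant a cluster-wide role to the user
oc adm policy add-cluster-role-to-user cluster-admin alice
# remove the user object from OpenShift
oc delete user alice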

Managing Projects

00:03:19

Lesson Description:

Managing projects in OpenShift is very straightforward. In this video, we will practice creating a new linuxacademy project, then add an admin user.
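A minimal sketch of that workflow; the project name mirrors the lesson, while the user alice is a placeholder assumed to already exist:

# create the project and switch into it
oc new-project linuxacademy --description="Linux Academy practice project"
# give the user the admin role on the project
oc adm policy add-role-to-user admin alice -n linuxacademy
# delete the project when finished
oc delete project linuxacademy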

Managing Resource Quotas

00:07:39

Lesson Description:

View a list of resources that can be managed with quotas in the OpenShift documentation.
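As a quick sketch of applying a quota to a project, the quota name and limits below are arbitrary examples:

# create a quota limiting pods, CPU requests, and memory requests in the current project
oc create quota practice-quota --hard=pods=10,requests.cpu=2,requests.memory=2Gi
# review usage against the quota
oc describe quota practice-quota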

Hands-on Lab

01:00:00

Hands-on Lab

01:00:00

Application Management

Creating OpenShift Apps

00:06:08

Lesson Description:

OpenShift applications are, at their most basic level, formed from Linux container technology. These containers, mostly based on Docker-formatted container images, are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. An image is a binary that includes all of the requirements for running a single container, as well as metadata describing its needs and capabilities.

Containers are deployed together in something called a pod, which is one or more containers deployed together on one host and the smallest compute unit that can be defined, deployed, and managed. Pods have a lifecycle: they are defined, they are assigned to run on a node, and they run until their container(s) exit or they are removed. Depending on policy and exit code, pods may be removed after exiting, or may be retained in order to enable access to the logs of their containers. OpenShift treats pods as largely immutable, so most changes cannot be made to a pod definition while it is still running. Instead, OpenShift implements changes by terminating the existing pod, then recreating it with the modified configuration, base image(s), or both.

OpenShift apps can be created from the web console or the CLI from several different components, including but not limited to:
- Container images
- YAML or JSON templates
- Source/binary code
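A few hedged oc new-app examples, one per approach; the image name, template filename, parameter, and application names are placeholders:

# from an existing container image
oc new-app --docker-image=docker.io/library/nginx --name=web
# from a YAML or JSON template, passing a parameter
oc new-app -f my-template.yaml -p PARAM1=value
# from source code, using S2I
oc new-app https://github.com/sclorg/django-ex --name=django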

Application Scaling

00:05:32

Lesson Description:

Most app developers create with the hope of reaching as many people as possible. That increased demand creates higher load on the app, which is where scaling comes in. Fortunately, OpenShift applications can be quickly and easily scaled up or down, both via the web console and by using the Origin client (oc), without the added burden of creating or reconfiguring load balancers or other complex networking settings. OpenShift apps are scaled horizontally: new pods are added to an application, and network requests are served using HAProxy load-balancing algorithms. When using oc, the oc scale command can be run against an app's deployment, deploymentconfig (dc), replicationcontroller (rc), statefulset, or replicaset.
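For example, a small sketch in which the deploymentconfig name and replica counts are placeholders:

# manually scale a deploymentconfig to three replicas
oc scale dc/myapp --replicas=3
# or let the horizontal pod autoscaler adjust replicas based on CPU usage (requires cluster metrics)
oc autoscale dc/myapp --min=1 --max=5 --cpu-percent=80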

Securing OpenShift Routes

00:06:23

Lesson Description:

OpenShift Container Platform uses routes to expose services at a hostname so that external users can access the service in a browser, usually configured with a DNS wildcard entry pointing to the OCP node that is running the router. Routes can be either unsecured or secured. Secure routes provide the ability to use several types of TLS termination to serve SSL certificates to the visitor.

Router plug-ins assume they can bind to host ports 80 (HTTP) and 443 (HTTPS) by default. This means that routers must be placed on nodes where those ports are not otherwise in use, or the router must be manually configured to listen on other unoccupied ports via OCP environment variables. Like most OCP services, routes can be configured and managed using YAML templates.

Supported traffic protocols:
- HTTP
- HTTPS
- WebSocket
- TLS with SNI

Each route consists of a name (limited to 63 characters) that by default follows the format routename-namespace.defaultsubdomain, a service selector, and an optional security configuration. Create a standard route by exposing the service attached to a pod:

# oc get svc
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
longhorns   ClusterIP   172.30.16.95   <none>        8080/TCP   15s
# oc expose svc/longhorns
route "longhorns" exposed

About S2I

00:06:56

Lesson Description:

OpenShift is S2I-enabled and can use S2I as one of its build mechanisms (in addition to building Docker images from Dockerfiles, and "custom" builds). OpenShift runs the S2I process inside a special Pod, called a Build Pod, and thus, builds are subject to quotas, limits, resource scheduling, and other aspects of OpenShift. Create a Django app using S2I:

oc new-app \
    centos/python-35-centos7~https://github.com/sclorg/django-ex
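Once the build pod starts, you can follow its progress; a quick sketch, assuming the buildconfig takes the name django-ex that oc new-app derives from the repository:

# follow the S2I build logs
oc logs -f bc/django-ex
# check the resulting pods once the deployment finishes
oc get pods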

Managing Persistent Storage

00:10:35

Lesson Description:

OpenShift Container Platform leverages the Kubernetes persistent volume (PV) framework to allow administrators to provision persistent storage for a cluster. Using persistent volume claims (PVCs), developers can request PV resources without having specific knowledge of the underlying storage infrastructure.
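As a small sketch of the developer side, a claim can be created and mounted into an application in one step; the deploymentconfig name, sizes, and mount path are placeholders, and a matching PV or dynamic provisioner must exist for the claim to bind:

# create a PVC and mount it into an existing deploymentconfig
oc set volume dc/myapp --add --name=app-data --type=persistentVolumeClaim \
    --claim-name=app-data --claim-size=1Gi --mount-path=/opt/app-root/data
# confirm the claim was bound
oc get pvc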

Hands-on Lab

01:00:00

Hands-on Lab

01:00:00

Hands-on Lab

01:00:00

The Container Registry

The Docker Registry

00:08:59

Lesson Description:

A container registry is a service for storing and retrieving Docker-formatted container images. A registry contains a collection of one or more image repositories, and each image repository contains one or more tagged images. Docker provides its own registry, the Docker Hub. You can also use private or third-party registries. OpenShift Container Platform can also supply its own internal registry for managing custom container images. The OpenShift Container Registry provides users with a built-in location for application builds to push resulting images, and adds the ability to provision new, on-demand image repositories. The OpenShift Container Registry can also be deployed as a stand-alone component, either on-premises or in the cloud, acting solely as a container registry without the build and deployment integration that comes with an integrated OCR.
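For a sense of how the integrated registry is used from the CLI, a hedged sketch: the hostname shown is the default internal service address in 3.x clusters, and the project and image names are placeholders:

# log in to the integrated registry with your OpenShift token
docker login -u $(oc whoami) -p $(oc whoami -t) docker-registry.default.svc:5000
# tag and push a local image into a project's image repository
docker tag myimage:latest docker-registry.default.svc:5000/myproject/myimage:latest
docker push docker-registry.default.svc:5000/myproject/myimage:latest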

Creating ImageStreams

00:05:46

Lesson Description:

The OpenShift Container Registry provides users with a built-in location for application builds to push resulting images, which are tracked using ImageStreams, and adds the ability to provision new, on-demand image repositories.
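Two common ways to work with image streams, sketched with placeholder names:

# import an external image and create an image stream that tracks it
oc import-image nginx --from=docker.io/library/nginx:latest --confirm
# review the image stream and its tags
oc describe is/nginx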

Hands-on Lab

01:00:00

OpenShift Cluster Metrics

Cluster Metrics with Hawkular

00:09:10

Lesson Description:

Let’s talk about Metrics in OpenShift. As an OpenShift Container Platform administrator, you can view a cluster’s metrics from all containers and components using the Hawkular Metrics engine plugin. These metrics are also used by horizontal pod autoscalers in order to determine when and how to scale. Hawkular Metrics can be installed either by adding the openshift_metrics_install_metrics=True variable to the cluster inventory or by declaring variables directly on the CLI, like so:

 ansible-playbook [-i /path/to/inventory] <OPENSHIFT_ANSIBLE_DIR>/playbooks/openshift-metrics/config.yml \
   -e openshift_metrics_install_metrics=True \
   -e openshift_metrics_hawkular_hostname=hawkular-metrics.example.com \
   -e openshift_metrics_cassandra_storage_type=pv
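After the playbook completes, the metrics components run in the openshift-infra project on 3.x clusters; a quick, hedged way to check that they are up and that metrics are flowing:

# verify the Hawkular, Heapster, and Cassandra pods are running
oc get pods -n openshift-infra
# once metrics are collected, usage can be inspected from the CLI
oc adm top node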

Hands-on Lab

01:00:00

Closing

Comment, Rate, Subscribe

Join the Community!

00:02:15

Lesson Description:

Congratulations on completing the OpenShift Administration course, and thanks for dropping by! Be sure to use all of the resources you have available as a Linux Academy student, and don't forget to rate the course! Your ratings are taken very seriously. We want to hear from you!
