
OpenShift on OpenStack


Intro Video


Treva Williams

Training Architect









Course Details

In this course, you will learn how to combine OpenStack, the premier Infrastructure-as-a-Service (IaaS) platform, which gives administrators a beautifully simple way to manage virtual machines, software-defined networks, block storage, object storage, and more, with the OpenShift Platform-as-a-Service (PaaS) software.


Course Introduction

Introduction to Linux Academy


Lesson Description:

Greetings and welcome to Linux Academy! Let me show you around a bit before we get started.

About the Authors


Lesson Description:

Curious about the voice behind the screen? Take a look at this quick introduction from technical evangelist Treva Williams, along with CCO (Chief Cuddle Officer) Sir Hairold B. Goggington III.

About the Course


Lesson Description:

In this course, you can expect to learn and practice the basics of both OpenStack and OpenShift administration, as well as learn how to deploy an OpenShift cluster on OpenStack infrastructure. After completing the course, you will have a working knowledge of both OpenStack and OpenShift administration.

OpenStack Overview

OpenStack Overview


Lesson Description:

OpenStack is an open source cloud platform that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed by a dashboard that gives administrators control while empowering users to provision resources through a web interface. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an Application Programming Interface (API) that facilitates this integration.

Core Projects


Lesson Description:

In this video, we will quickly review five OpenStack projects that are considered core services:

Keystone (authentication and authorization)
Nova (compute)
Neutron (Networking-as-a-Service)
Glance (image service)
Swift (object storage)

Feel free to use these Lucidcharts diagrams to follow along with each part of the lecture.

KVM Preconfiguration

Installing and Configuring KVM


Lesson Description:

Installation and configuration of KVM:

Install the KVM packages:
yum install virt-manager qemu-kvm qemu-img libvirt libvirt-python python-virtinst libvirt-client
Or install all of the virtualization package groups:
[root@localhost]# yum grouplist hidden   # lists the virtualization package groups
[root@localhost]# yum groupinstall "Virtualization Client"
[root@localhost]# yum groupinstall "Virtualization Tools"
[root@localhost]# yum groupinstall "Virtualization Platform"
Enable libvirtd to start at boot and start it now:
systemctl enable --now libvirtd
Start the virt-manager console:
sudo virt-manager
Before creating your VMs, it's a good idea to define a persistent libvirt network on your host machine that will connect to the provider network (see the Linux bridges and libvirt network docs). Define the provider network in provider.xml:
<network>
  <name>provider</name>
  <forward dev='$YOUR_NET_ID' mode='nat'>
    <interface dev='$YOUR_NET_ID'/>
  </forward>
  <bridge name='br-provider' stp='on' delay='0'/>
  <domain name='provider'/>
  <ip address='' netmask=''/>
</network>
Example network:
<network>
  <name>shadowman</name>
  <forward dev='enp4s0' mode='nat'>
    <port start='1024' end='65535'/>
    <interface dev='enp4s0'/>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:8e:ba:f3'/>
  <domain name='shadowman'/>
  <ip address='' netmask=''>
    <dhcp>
      <range start='' end=''/>
    </dhcp>
  </ip>
</network>
Define the persistent provider network from the XML file:
$ virsh net-define --file=/path/to/provider.xml
List networks to show status:
$ sudo virsh net-list --all
Start the network:
$ sudo virsh net-start provider
Set the provider network to persistent start-up:
$ sudo virsh net-autostart --network provider

Controller and Compute Nodes


Lesson Description:

In this video, we will go through the steps for creating the OpenStack controller and compute nodes in KVM, using both the virt-manager GUI and the virt-install command-line client.

Controller node

List supported OS variants:

$ osinfo-query os
Create a controller node on the shadowman network using a preconfigured kickstart:
$ sudo virt-install --name=controller --memory=4096 --vcpus=2 --disk size=40 \
    --location=/path/to/Downloads/CentOS-7-x86_64-Minimal-1611.iso \
    --os-variant=centos7.0 --initrd-inject=/path/to/centos-kickstart.cfg \
    --network network=default --network bridge=virbr1 \
    --extra-args="ks=file:/centos-kickstart.cfg console=tty0 console=ttyS0,115200n8"
Update the system with yum:
yum -y update
Disable NetworkManager and firewalld (if installed):
systemctl disable --now NetworkManager firewalld
Compute node

Create the compute node:
$ sudo virt-install --name=compute1 --memory=4096 --vcpus=2 --disk size=40 \
    --location=/path/to/Downloads/CentOS-7-x86_64-Minimal-1611.iso \
    --os-variant=centos7.0 --initrd-inject=/path/to/centos-kickstart.cfg \
    --network network=default --network bridge=virbr1 \
    --extra-args="ks=file:/centos-kickstart.cfg console=tty0 console=ttyS0,115200n8"
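The centos-kickstart.cfg injected above is not shown in the lesson; a minimal kickstart for an unattended CentOS 7 install might look like the following sketch (every value here is illustrative, not taken from the course files):

```
# Minimal, illustrative kickstart for an unattended CentOS 7 install
install
text
lang en_US.UTF-8
keyboard us
timezone UTC --utc
rootpw --plaintext openstack
network --bootproto=dhcp --device=eth0 --activate
bootloader --location=mbr --append="console=tty0 console=ttyS0,115200n8"
zerombr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end
```

The console arguments in the bootloader line match the --extra-args passed to virt-install, so the install runs on the serial console.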

Installing OpenStack

Installing OpenStack with PackStack


Lesson Description:

In this video, we will deploy an all-in-one OpenStack Queens node on CentOS 7.5 using the RDO PackStack utility.

Install the CentOS OpenStack Queens repository:

$ yum -y install centos-release-openstack-queens
Disable NetworkManager and firewalld:
$ systemctl disable --now NetworkManager firewalld
Install Packstack:
$ yum -y install openstack-packstack
Generate answers.txt answer file:
$ packstack --gen-answer-file=/root/answers.txt
Populate answers.txt with the following:
CONFIG_HEAT_INSTALL=y   # Optional, but highly recommended if you will be using the O-o-O playbooks
# Optional - configure Cinder storage as large as your system resources will allow
# Enable Neutron LBaaS for the OpenShift-on-OpenStack-123 playbooks
# IMPORTANT - $HOST_NETWORK should match your home network interface (ex: eth0)
# CIDR network address for the floating IP subnet. This should be the same subnet as your home network
Install OpenStack using PackStack:
$ packstack --answer-file=/root/answers.txt
Once installation completes, you can visit the Horizon dashboard by entering http://$CORE_IP in a web browser window.
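Because a Packstack answer file is plain key=value text, the edits above can also be scripted rather than made by hand. A minimal sketch in shell; the set_answer helper is hypothetical, and only CONFIG_HEAT_INSTALL appears verbatim in the lesson:

```shell
# set_answer KEY VALUE FILE -- hypothetical helper that rewrites (or appends)
# a key=value line in a packstack answer file.
set_answer() {
  local key=$1 value=$2 file=$3
  if grep -q "^${key}=" "$file"; then
    # key exists: rewrite its value in place
    sed -i "s|^${key}=.*|${key}=${value}|" "$file"
  else
    # key missing: append it
    echo "${key}=${value}" >> "$file"
  fi
}

# Example: enable Heat in a scratch copy of the answer file.
cp /root/answers.txt /tmp/answers.txt 2>/dev/null \
  || printf 'CONFIG_HEAT_INSTALL=n\n' > /tmp/answers.txt
set_answer CONFIG_HEAT_INSTALL y /tmp/answers.txt
grep '^CONFIG_HEAT_INSTALL=' /tmp/answers.txt
```

Re-running packstack with --answer-file then picks up the changed values.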

Hands-on Labs are live environments that put you in a real scenario to practice what you have learned, with no extra charge or separate account to manage.


OpenStack Basics

OpenStack User Management


Lesson Description:

When an incoming API request arrives, OpenStack Keystone confirms that it comes from an authorized user via credentials (username, password, auth URL). Once identity is confirmed, a token is provided. Tokens are strings of ASCII characters representing a successful authentication request. The token includes a list of the user's projects/tenants and roles, and users can submit tokens instead of re-authenticating on each request. Token expiration time, validity, and lifespan are customizable.

Project management

Projects, or tenants, are containers used to group or isolate resources. A project must be specified to make requests to OpenStack services, and it may map to a customer, account, or organization. Users can be directly assigned to a particular project.

List current projects:

$ openstack project list
Create a new project:
openstack project create linuxacademy
User management

An OpenStack user is a digital representation of a person, system, or service that uses OpenStack services and has associated information such as a username, password, and (optional) email address.

List current users:
$ openstack user list
Create a new user:
openstack user create --project linuxacademy --password openstack --email --enable student
Role management

A role includes a set of rights and privileges that specifies what operations a user is permitted to perform in a given project they are part of.

List current roles:
$ openstack role list
Create a new user role:
$ openstack role create openshift
Add existing roles (_member_, ResellerAdmin, heat-user) to student:
$ openstack role add --project linuxacademy --user student $ROLE
Remove a role:
$ openstack role remove --project linuxacademy --user student openshift

Building a Network in Neutron


Lesson Description:

OpenStack Neutron allows users to create and attach interface devices managed by other OpenStack services to virtual networks. Plugins can be implemented to accommodate different networking equipment and software, providing flexibility to OpenStack architecture and deployment. View the Neutron architecture diagram. To create a public Neutron network in your environment, use the following steps.

Source the keystonerc_admin file (found in the home directory of your packstack user):

$ source keystonerc_admin
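For reference, the keystonerc_admin file Packstack generates looks roughly like this sketch (the password and IP address are placeholders, not values from the course):

```shell
# Sketch of ~/keystonerc_admin as generated by Packstack (values are placeholders)
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='secret'                    # placeholder; Packstack randomizes this
export OS_AUTH_URL=http://192.0.2.10:5000/v3   # placeholder controller IP
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
```

Sourcing the file exports these variables so the openstack client can authenticate without flags.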
Create a network:
$ openstack network create --share --external \
    --provider-network-type flat --provider-physical-network extnet public
Create a subnet attached to the public network created in the previous step. Optionally, you can set the floating IP range with the --allocation-pool start=$beginning_ip,end=$ending_ip flag. The --dns-nameserver for your network can be any public DNS provider you prefer, or just your network gateway:
$ openstack subnet create --network public --subnet-range \
    --allocation-pool start=,end= --dns-nameserver --gateway public_subnet
Create a router:
$ openstack router create router1
Set the public network as your router gateway to enable external networking:
$ openstack router set --external-gateway public router1

Create a Virtual Machine


Lesson Description:

To create a VM, do the following.

Download the cirros image and upload it to Glance:
$ curl | openstack image create cirros --public --container-format=bare --disk-format=qcow2

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 12.6M  100 12.6M    0     0  2651k      0  0:00:04  0:00:04 --:--:-- 2651k
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | f8ab98ff5e73ebab884d80c9dc9c7290                     |
| container_format | bare                                                 |
| created_at       | 2018-07-20T16:37:43Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/a74a5c33-9859-4ab8-9e47-07eb84febb9e/file |
| id               | a74a5c33-9859-4ab8-9e47-07eb84febb9e                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros                                               |
| owner            | 9395862f7bdf4f79a6cbb4bd074cf6d9                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13267968                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2018-07-20T16:37:45Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
- (Optional): Create an SSH key for passwordless login to instances:
openstack keypair create --public-key .ssh/ mykey
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | 98:fc:e0:7b:58:7c:64:5f:05:0f:b9:25:d5:e8:6f:59 |
| name        | mykey                                           |
| user_id     | ce788de55cfe4b899746afa98849825f                |
+-------------+-------------------------------------------------+
- Print the flavor list:
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name      |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1  | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2  | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4  | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5  | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+----+-----------+-------+------+-----------+-------+-----------+
If lost, print out the UUIDs of networks to find the UUID of the `internal` network:
openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID                                   | Name     | Subnets                              |
+--------------------------------------+----------+--------------------------------------+
| 354be29c-796d-45d9-ba6c-f085453c110a | public   | 2583cdbb-7419-45ae-9d38-ecaefbcb0e2a |
| d21c5fde-0f9a-46af-9287-b554acca3538 | internal | f344543a-45e6-4ba4-b2af-50fc0adef2ac |
+--------------------------------------+----------+--------------------------------------+
- Create a new server connected to the `internal` network:
$ openstack server create myserver --image cirros --flavor m1.tiny --key-name mykey --nic net-id=d21c5fde-0f9a-46af-9287-b554acca3538 --wait
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                                   |
| OS-EXT-AZ:availability_zone | nova                                                     |
| OS-EXT-STS:power_state      | Running                                                  |
| OS-EXT-STS:task_state       | None                                                     |
| OS-EXT-STS:vm_state         | active                                                   |
| OS-SRV-USG:launched_at      | 2018-07-20T17:33:02.000000                               |
| OS-SRV-USG:terminated_at    | None                                                     |
| accessIPv4                  |                                                          |
| accessIPv6                  |                                                          |
| addresses                   | internal=                                                |
| adminPass                   | TM6MpbjfGXwM                                             |
| config_drive                |                                                          |
| created                     | 2018-07-20T17:32:54Z                                     |
| flavor                      | m1.tiny (1)                                              |
| hostId                      | d378c27466f0b48520ad7e274cd2fafe0bbe482bbcc4ee933ce19124 |
| id                          | 2a653a24-a2b7-4b05-8ee2-362c12192993                     |
| image                       | cirros (a74a5c33-9859-4ab8-9e47-07eb84febb9e)            |
| key_name                    | mykey                                                    |
| name                        | myserver                                                 |
| progress                    | 0                                                        |
| project_id                  | 6d941d7739884429a4c4d0857d77b8f3                         |
| properties                  |                                                          |
| security_groups             | name='default'                                           |
| status                      | ACTIVE                                                   |
| updated                     | 2018-07-20T17:33:02Z                                     |
| user_id                     | ce788de55cfe4b899746afa98849825f                         |
| volumes_attached            |                                                          |
+-----------------------------+----------------------------------------------------------+
- Create a floating IP from `public` network:
openstack floating ip create public
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| created_at          | 2018-07-20T17:37:57Z                 |
| description         |                                      |
| fixed_ip_address    | None                                 |
| floating_ip_address |                                      |
| floating_network_id | 354be29c-796d-45d9-ba6c-f085453c110a |
| id                  | cef8429c-2efe-47a3-b68b-270a527ac923 |
| name                |                                      |
| port_id             | None                                 |
| project_id          | 6d941d7739884429a4c4d0857d77b8f3     |
| qos_policy_id       | None                                 |
| revision_number     | 0                                    |
| router_id           | None                                 |
| status              | DOWN                                 |
| subnet_id           | None                                 |
| updated_at          | 2018-07-20T17:37:57Z                 |
+---------------------+--------------------------------------+
- Attach a floating IP from the `public` network to `myserver`:
openstack server add floating ip myserver <floating-ip>
This command produces no output.

Adding a Compute Node


Lesson Description:

To add a compute node to your OpenStack cloud, edit the answers.txt file generated during your first installation in the following sections:

# This should be the IP address of the compute node created during the "Controller and Compute Nodes" lesson
# This will create a second `br-ex` bridge on our compute node to enable access to virtual machines

Private Neutron Networks


Lesson Description:

Private networks in Neutron are used for internal networking on a per-tenant basis. Private networks are created using a similar, but less involved, process than public networks. To create a private network:

Create the main network:

$ openstack network create pinenet
Create a subnet:
$ openstack subnet create pinenet_sub \
    --network pinenet --subnet-range
Add the newly created subnet to a router:
$ openstack router add subnet router1 pinenet_sub





OpenShift Overview

About OpenShift Container Platform


Lesson Description:

OpenShift Container Platform is an enterprise Platform-as-a-Service (PaaS) offering from Red Hat that brings together Docker and Kubernetes and provides an API to manage these services. OpenShift has a microservices-based architecture of smaller, decoupled units that work together, running on top of a Kubernetes cluster. While Docker provides the abstraction for packaging and creating Linux-based, lightweight container images, Kubernetes provides the cluster management and orchestrates containers on multiple hosts. OpenShift Container Platform adds:

Source code management, builds, and deployments for developers
Managing and promoting images at scale as they flow through your system
Application management at scale
Team and user tracking for organizing a large developer organization

OpenShift architecture diagram

About OpenShift Origin


Lesson Description:

OpenShift Origin - now known as OKD - is the upstream version of Red Hat OpenShift & is the basis of OpenShift Container Platform, OpenShift Online, & OpenShift Dedicated.

The Future of OpenShift on OpenStack


Lesson Description:

There are some exciting things on the horizon for OpenShift integration with OpenStack coming from Red Hat engineering & product teams that, probably by the time this video is published, will already be available for use. Check out this OpenShift Commons Briefing video on YouTube for more information about what's on the horizon for OpenShift on OpenStack.

The OpenShift Community


Lesson Description:

Now that you're familiar with OpenShift, join the community! Find information on how to become a Commons member on the OpenShift Commons website, & watch previous OpenShift Commons Briefings on the official OpenShift YouTube channel. There are several awesome SIGs (special interest groups) for you to join, including one specifically for OpenShift on OpenStack! You can also speak directly with OpenShift engineers via IRC on Freenode in the following channels: #openshift-dev, #openshift, and #minishift.

Installing OpenShift

About OpenShift-on-OpenStack Playbooks


Lesson Description:

With much gratitude to Red Hat Principal Solutions Architect Keith Tenzer, we will be using a modified version of his OpenShift-on-OpenStack-123 Ansible playbooks to deploy our very own proof-of-concept OpenShift environment on OpenStack. The OpenShift-on-OpenStack playbooks will build a cluster with 1 master & 3 nodes (infra, node0, node1) running OpenShift Origin v3.9 using both Ansible & OpenStack Heat.

Deploying OpenShift - Lecture


Lesson Description:

Before jumping into deployment, let's quickly review the commands we will be running to launch our OpenShift environment.

Deploying OpenShift - Prerequisites


Lesson Description:

Before installing OpenShift, we will use playbooks in the content-openshift-on-openstack GitHub repo to preconfigure our virtual machines.

Prerequisites

Clone the origin39 branch of the openshift-on-openstack-123 repo:

$ git clone -b origin39
Copy sample-vars.yml to vars.yml & make your customizations:
$ cd openshift-on-openstack-123
cp sample-vars.yml vars.yml
Source your keystonerc_* file:
$ source ~/keystonerc_admin
Run the deploy-openstack-infra.yml playbook to create the cluster:
$ ansible-playbook deploy-openstack-infra.yml \
    -e @vars.yml
Log in to the bastion and run the prepare-openshift.yml playbook:
$ ansible-playbook prepare-openshift.yml -e @vars.yml
Log out of the bastion, then reboot all nodes (to address a firewalld bug):
$ for i in master0 node0 infra0; do openstack server reboot $i; done
Before proceeding, verify that NetworkManager & firewalld are installed & running (this step has been added to the openshift-on-openstack playbooks & can be safely skipped):
$ ansible -i ~/openshift-inventory nodes -m yum -a "name=NetworkManager state=latest"
$ ansible -i ~/openshift-inventory nodes -m service -a "name=NetworkManager state=started"

Deploying OpenShift - OpenShift-Ansible Playbooks


Lesson Description:

After building & configuring your OpenStack virtual machines with the content-openshift-on-openstack playbooks, you are now ready to install OpenShift Origin v3.9 on OpenStack using the openshift-ansible playbooks.

Install OpenShift

Run the prerequisites.yml playbook:

[centos@bastion ~]$ ansible-playbook -i /home/centos/openshift-inventory \
    -vv /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
Run the deploy_cluster.yml playbook to install OpenShift:
[centos@bastion ~]$ ansible-playbook -i /home/centos/openshift-inventory \
    -vv /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
Verify the installation. Once complete, you can reach the OpenShift web console through the public IP attached to the load balancer created by Ansible; however, you will not be able to log in until authentication has been configured. In the meantime, you can log in to master0 from the bastion and run oc commands to verify the installation:
[centos@master0 ~]$ oc get nodes
NAME      STATUS    ROLES     AGE       VERSION
infra0    Ready     <none>    27m       v1.9.1+a0ce1bc657
master0   Ready     master    27m       v1.9.1+a0ce1bc657
node0     Ready     compute   27m       v1.9.1+a0ce1bc657

OpenShift Administration

OpenShift Authentication - HTPasswd


Lesson Description:

OpenShift supports several different authentication types:

DenyAll (all logins denied by default)
AllowAll (all logins permitted by default)
HTPasswd (users & passwords managed by a flat file stored on the master node)
Keystone (users shared from an OpenStack Keystone database)
LDAP (users validated against an LDAP server)
GitHub
GitLab
Google
OpenID
Basic Authentication

In this video, we will walk through the steps required to configure HTPasswd authentication, using httpd-tools to create users.
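For reference, an HTPasswd identity provider in /etc/origin/master/master-config.yaml uses the same identityProviders structure as the other provider types; a sketch, where the provider name and flat-file path are assumptions:

```yaml
oauthConfig:
  identityProviders:
  - name: htpasswd_auth        # assumed provider name
    challenge: true
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: HTPasswdPasswordIdentityProvider
      file: /etc/origin/master/users.htpasswd   # assumed path to the flat file
```

The flat file itself is managed with the htpasswd utility from httpd-tools.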

OpenShift Authentication with OpenStack Keystone


Lesson Description:

Now that we have experimented with HTPasswd authentication, we are going to set up a more enterprise-friendly authentication method by integrating our OpenStack infrastructure with our OpenShift cluster via OpenStack Keystone. To enable Keystone v3 integration in OpenShift, log in to the master0 node & edit the /etc/origin/master/master-config.yaml file.

Under oauthConfig, change the provider name: from deny_all to keystone_auth. This name will be used to identify Keystone users in the OpenShift cluster:

    assetPublicURL: https://master:8443/console/
    grantConfig:
      method: auto
    identityProviders:
    - challenge: true
      login: true
      mappingMethod: claim
      name: keystone_auth
Change the provider kind: from DenyAllPasswordIdentityProvider to KeystonePasswordIdentityProvider:
      name: keystone_auth
      provider:
        apiVersion: v1
        kind: KeystonePasswordIdentityProvider
Directly below kind:, add a new domainName: section & enter your OpenStack domain name (if following the course lab settings, this will be default):
        kind: KeystonePasswordIdentityProvider
        domainName: default
Directly below domainName:, create a section named url: & add the hostname or IP address & port number of your OpenStack Keystone server:
        kind: KeystonePasswordIdentityProvider
        domainName: default
        url: http://controller:5000
Save, then close the file. Restart the origin-master-api.service & origin-master-controllers.service services to propagate the changes through your OpenShift cluster:
$ sudo systemctl restart origin-master-api.service origin-master-controllers.service
You should now be able to access your OpenShift cluster using OpenStack credentials.
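Put together, the identity provider block built in the steps above looks like this:

```yaml
oauthConfig:
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: keystone_auth
    provider:
      apiVersion: v1
      kind: KeystonePasswordIdentityProvider
      domainName: default
      url: http://controller:5000
```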

Users, Projects, & Roles


Lesson Description:

Roles are collections of policy rules, which are sets of permitted verbs that can be performed on a set of resources. OpenShift (OKD) includes a set of default cluster roles that can be bound to users and groups cluster-wide or locally. There are upwards of 128 different roles available in an OpenShift cluster, but we're going to focus on the following default roles for this course:

admin: The project manager. If used in a local binding, an admin user can view and modify any resource in the project, except for creating new roles and enforcing quotas. If the cluster-admin user wants to allow an admin user to modify roles, the cluster-admin user must create a project-scoped Policy object using a JSON file.
basic-user: A user who can get basic information about projects and users.
cluster-admin: A user who can perform any action in any project. When granted to a user within a local policy, that user has full control over quotas, role creation, and every action on any resource within the project.
cluster-status: A user who can get basic cluster status information.
edit: A user who can modify most objects in a project, but does not have the power to view or modify roles or bindings.
self-provisioner: A user who can create their own projects.
view: A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings.

A full list of OpenShift v3.9 cluster roles can be found in the content-openshift-on-openstack repository under the /openshift-rbac directory. While a user is an entity making requests to an OpenShift cluster, users can be organized into one or more groups made up from a set of users. Groups are useful for managing many users at one time, such as for authorization policies, or for granting permissions to multiple users at once.

Create a new group for OpenStack users:

$ oc adm groups new packstack admin student 
View groups:
$ oc get groups
Grant the cluster-admin role to the OpenStack admin user:
$ oc adm policy add-role-to-user cluster-admin admin
Grant the cluster-admin role to the OpenStack student user:
$ oc adm policy add-cluster-role-to-user cluster-admin student
Install the oc client on the bastion:
$ sudo yum -y install origin-clients
Log in to the cluster from the bastion as one of your OpenStack users:
$ oc login \
    https://<your OpenShift cluster hostname/LB hostname>:8443 \
    -u <user> -p <password>

OpenShift Pods & Services


Lesson Description:

OpenShift leverages the Kubernetes concept of a pod, defined as "one or more containers deployed together on one host, & the smallest compute unit that can be defined, deployed, & managed." Pods are the rough equivalent of a machine instance (physical or virtual) for a container: each pod is allocated its own internal IP address and therefore owns its entire port space, and containers within a pod can share local storage and networking. OpenShift treats pods as largely immutable; changes cannot be made to a pod definition while it is running. OpenShift implements changes by terminating an existing pod and recreating it with a modified configuration, base image(s), or both. Pods are also treated as expendable and do not maintain state when recreated. Therefore, pods should usually be managed by higher-level controllers rather than directly by users.

apiVersion: v1
kind: Pod
metadata:
  name: mysql (1)
  labels: (2)
    name: mysql
spec:
  containers: (3)
    - resources:
        limits:
          cpu: 0.5
      image: openshift/mysql-55-centos7 (5)
      name: mysql
      env: (4)
        - name: MYSQL_ROOT_PASSWORD
          value: yourpassword
        - name: MYSQL_USER
          value: wp_user
        - name: MYSQL_PASSWORD
          value: wp_pass
        - name: MYSQL_DATABASE
          value: wp_db
      ports: (6)
        - containerPort: 3306
          name: mysql
      volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql/data
  volumes: (7)
    - name: mysql-persistent-storage
      persistentVolumeClaim:
        claimName: claim-mysql
1. Pods must have a unique name within their namespace. A pod definition may specify the basis of a name with the generateName attribute, and random characters will be added automatically to generate a unique name.
2. Pods can be "tagged" with one or more labels, which can then be used to select and manage groups of pods in a single operation. The labels are stored in key/value format in the metadata hash. In the example, our pod is simply labeled as mysql.
3. containers specifies an array of container definitions.
4. Environment variables (env:) can be specified to pass necessary values to each container.
5. Each container in the pod is instantiated from its own Docker-formatted container image.
6. The container can bind to ports, which will be made available on the pod's IP.
7. The pod defines storage volumes that are available to its container(s) to use. In this case, it provides a persistent volume named mysql-persistent-storage, with a specific mountPath: in the container at /var/lib/mysql/data.

Persistent Volumes on OpenStack


Lesson Description:

Another fun way to integrate your OpenShift cluster with the underlying OpenStack infrastructure is through Cinder, OpenStack's extremely popular & resilient block-storage-as-a-service solution. In this video, we will review OpenShift storage types & a straightforward method for integrating Cinder with our cluster. Storage in an OpenShift cluster is managed with PersistentVolumes & PersistentVolumeClaims. A PersistentVolume object is a storage resource in an OpenShift Container Platform cluster, provisioned by creating PersistentVolume objects from sources such as GCE Persistent Disk, AWS Elastic Block Store (EBS), and NFS mounts. Storage is made available by laying claim to the resource: you request storage with a PersistentVolumeClaim object, and the claim is paired with a volume that generally matches your request. A PersistentVolumeClaim is used by a pod as a volume. OpenShift Container Platform finds the claim with the given name in the same namespace as the pod, then uses the claim to find the corresponding volume to mount.
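The claim side of this pairing is small; a minimal PersistentVolumeClaim matching the claim-mysql name used in the earlier pod example might look like this sketch (the access mode and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-mysql
spec:
  accessModes:
    - ReadWriteOnce      # single-node read-write, typical for block volumes like Cinder
  resources:
    requests:
      storage: 1Gi       # illustrative size
```

Once bound, any pod in the same namespace can reference the claim by name under volumes:.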







Day Two Operations

Host Backups


Lesson Description:

Creating an environment-wide backup involves copying important data to assist with restoration in the case of crashed instances or corrupted data. After backups have been created, they can be restored onto a newly installed version of the relevant component. In OpenShift, backups can be performed at the cluster level to save state to separate storage. The full state of an environment backup includes:

Cluster data files
etcd data on each master
API objects
Registry storage
Volume storage

Node backups are optional: by design, anything special on a node is replicated across nodes in case of failover, and nodes typically do not contain data that is necessary to run an environment. If a node is configured with something that is necessary for the environment to remain operational, then creating a backup is recommended. To manually back up a master:

master0 ~]# MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
master0 ~]# mkdir -p ${MYBACKUPDIR}/etc/sysconfig
master0 ~]# cp -aR /etc/origin ${MYBACKUPDIR}/etc
master0 ~]# cp -aR /etc/sysconfig/origin-* ${MYBACKUPDIR}/etc/sysconfig/
master0 ~]# mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
master0 ~]# cp -aR /etc/sysconfig/{iptables,docker-*} ${MYBACKUPDIR}/etc/sysconfig/
master0 ~]# cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
master0 ~]# sudo cp -aR /etc/pki/ca-trust/source/anchors/* ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
To manually backup a node:
node0 ~]# MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
node0 ~]# mkdir -p ${MYBACKUPDIR}/etc/sysconfig
node0 ~]# mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
node0 ~]# cp -aR /etc/sysconfig/{iptables,docker-*} ${MYBACKUPDIR}/etc/sysconfig/
node0 ~]# cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
node0 ~]# cp -aR /etc/pki/ca-trust/source/anchors/* ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
You can also use the UNSUPPORTED bash script provided in the openshift-ansible-contrib GitHub repository to create both master backups and project-level backups.
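The manual steps above can be collected into one reusable function. This is a sketch, not part of the course materials: the src_root and backup_root parameters are additions for illustration (so the copy logic can be exercised against a scratch directory); on a real host you would run it as root with src_root=/ and backup_root=/backup.

```shell
# Sketch: the manual master/node backup steps wrapped in a function.
# Usage on a real host (as root): backup_host / /backup
backup_host() {
    local src_root="$1" backup_root="$2"
    local dir="${backup_root}/$(hostname)/$(date +%Y%m%d)"
    mkdir -p "${dir}/etc/sysconfig" "${dir}/etc/pki/ca-trust/source/anchors"
    # Copy each path only if it exists, so the same function works on
    # masters (which have /etc/origin) and on plain nodes.
    local p
    for p in "${src_root}/etc/origin" "${src_root}/etc/cni" "${src_root}"/etc/dnsmasq*; do
        if [ -e "$p" ]; then cp -aR "$p" "${dir}/etc/"; fi
    done
    for p in "${src_root}"/etc/sysconfig/origin-* \
             "${src_root}/etc/sysconfig/iptables" \
             "${src_root}"/etc/sysconfig/docker-*; do
        if [ -e "$p" ]; then cp -aR "$p" "${dir}/etc/sysconfig/"; fi
    done
    for p in "${src_root}"/etc/pki/ca-trust/source/anchors/*; do
        if [ -e "$p" ]; then cp -aR "$p" "${dir}/etc/pki/ca-trust/source/anchors/"; fi
    done
    # Print the backup location so callers can record it.
    echo "${dir}"
}
```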

Adding an OpenShift Node


Lesson Description:

New hosts (either nodes or masters) can be added to your OpenShift cluster using the scaleup.yml playbook for advanced installations. This playbook queries the master, generates and distributes new certificates for the new hosts, and then runs the configuration playbooks on only the new hosts. First, you will need to create a new OpenStack virtual machine connected to the Neutron network osp-network and the OpenStack security group openshift-cluster-hosts-*-node_security_group-*, both created by the content-openshift-on-openstack playbooks.

Host Preparation

Print out the security groups and copy the UUID of the node security group, named openshift-cluster-hosts-*-node_security_group-* (the random infix will differ in your environment):

$ openstack security group list
Print out the network list and copy the UUID of osp-network:
$ openstack network list
Create a new VM using the openstack CLI:
$ openstack server create node1 --image centos7 --flavor m1.medium --key-name openshift --security-group $SECGROUP_NAME --nic net-id=$OCP_NET_UUID --file /home/centos/ --wait
Create a new Cinder volume using the CLI:
$ openstack volume create node1-dockervol --size 5
Attach the volume to the node1 instance:
$ openstack server add volume node1 node1-dockervol
Execute the script on node1 to install base dependencies and configure Docker storage:
$ sudo bash
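For reference, the Docker storage configuration that such a preparation script typically writes looks like the sketch below. The device name /dev/vdb and volume group name docker-vg are assumptions for illustration; check the actual device name with lsblk after attaching the Cinder volume.

```ini
# /etc/sysconfig/docker-storage-setup (illustrative example)
# DEVS: the attached Cinder volume as seen inside the guest (assumed /dev/vdb)
DEVS=/dev/vdb
# VG: the volume group docker-storage-setup will create (assumed name)
VG=docker-vg
```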
OpenShift Inventory

On the bastion, add new_nodes to the inventory file under [OSEv3:children]. To add masters, also add new_masters:
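For example, the children group in the inventory would then read as follows (a sketch; include new_masters only when you are adding masters):

```ini
[OSEv3:children]
masters
etcd
nodes
new_masters
new_nodes
```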
Next, create a [new_nodes] host group section to specify host information for the new hosts. Format this section as you would an existing host group. If adding masters, you will need both a [new_masters] and a [new_nodes] section:
# host group for masters
[masters]
master0.ocp3.lab openshift_hostname=master0 server_type="master"

# host group for etcd
[etcd]
master0.ocp3.lab openshift_hostname=master0 server_type="etcd"

# host group for nodes, includes region info
[nodes]
master0.ocp3.lab openshift_hostname=master0 openshift_schedulable=True
infra0.ocp3.lab openshift_hostname=infra0 openshift_schedulable=True openshift_node_labels="{'region': 'infra', 'zone': 'test', 'fluentd': 'true'}"
node0.ocp3.lab openshift_hostname=node0 openshift_schedulable=True openshift_node_labels="{'region': 'primary', 'zone': 'test', 'fluentd': 'true'}"

# host group for new nodes
[new_nodes]
node1.ocp3.lab openshift_hostname=node1 openshift_schedulable=True openshift_node_labels="{'region': 'primary', 'zone': 'test', 'fluentd': 'true'}"
Node Installation

To deploy new nodes, run the openshift-node/scaleup.yml playbook:
$ ansible-playbook [-i /path/to/file] playbooks/openshift-node/scaleup.yml
To deploy new masters, run the openshift-master/scaleup.yml playbook:
$ ansible-playbook [-i /path/to/file] playbooks/openshift-master/scaleup.yml
Once deployment completes, verify your installation, then move any hosts defined under new_<host type> to their appropriate section in the inventory file.
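One quick verification is to confirm that every node reports Ready in oc get nodes. The helper below is an illustrative sketch, not from the course: it reads the command's output on stdin so the logic can be tried without a live cluster. On the master you would run `oc get nodes --no-headers | check_nodes_ready`.

```shell
# Sketch: flag any node whose STATUS column contains NotReady.
# Reads `oc get nodes --no-headers` output on stdin.
check_nodes_ready() {
    local bad
    # Column 2 of `oc get nodes` is the node STATUS.
    bad=$(awk '$2 ~ /NotReady/ {print $1}')
    if [ -n "$bad" ]; then
        echo "Not ready: $bad" >&2
        return 1
    fi
    echo "All nodes Ready"
}
```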

Evacuating an OpenShift Node


Lesson Description:

Evacuating pods allows you to migrate all or selected pods from a given node or nodes, but a node must first be marked unschedulable before its pods can be evacuated. Only pods backed by a replication controller can be evacuated; the replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default.

To perform a node evacuation, first mark the decommissioned node as unschedulable using the OpenShift client:

$ oc adm manage-node $NODE_NAME --schedulable=false
Once the node is marked as SchedulingDisabled, evacuate its pods to other non-infra, non-master nodes using the OpenShift client:
$ oc adm drain $NODE_NAME
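The two commands above can be wrapped to handle several nodes at once. This is an illustrative sketch: the DRY_RUN switch is an addition of this example, not an oc feature, and lets you preview the exact commands before running them against a cluster.

```shell
# Sketch: cordon then drain each named node.
# With DRY_RUN=1 the oc commands are printed instead of executed.
evacuate_nodes() {
    local runner="" node
    if [ "${DRY_RUN:-0}" = "1" ]; then runner="echo"; fi
    for node in "$@"; do
        # Mark the node unschedulable, then evacuate its pods.
        $runner oc adm manage-node "$node" --schedulable=false
        $runner oc adm drain "$node"
    done
}
```

Example preview: `DRY_RUN=1 evacuate_nodes node0.ocp3.lab` prints the two oc commands without touching the cluster.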

Host Restoration


Lesson Description:

Restore an OpenShift cluster and its components by recreating cluster elements, including nodes and applications, from separate storage:

- Reinstall OpenShift using the same method as the previous installation (use the same inventory file).
- Run any custom post-installation scripts.
- Restore the node(s).

Restoring Masters

Export the backup directory:

# MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
Rename new config files created during installation:
# cp /etc/origin/master/master-config.yaml /etc/origin/master/master-config.yaml.old
Copy configuration files from backup dir to /etc/origin/master/:
# cp /backup/$(hostname)/$(date +%Y%m%d)/etc/origin/master/master-config.yaml /etc/origin/master/master-config.yaml
Restart the origin-master-api and origin-master-controllers services:
# systemctl restart origin-master-api origin-master-controllers
Restoring Nodes

Export the backup directory:
# MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
Rename the config files created during installation:
# cp /etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml.old
Copy configuration files from backup to /etc/origin/node/:
# cp /backup/$(hostname)/$(date +%Y%m%d)/etc/origin/node/node-config.yaml /etc/origin/node/node-config.yaml
Restart origin-node service:
# systemctl restart origin-node.service
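The config-restore steps above (for nodes, and analogously for masters) can be sketched as one helper. The function name and its parameterized paths are assumptions of this example so the copy logic can be tested against scratch files rather than /etc/origin.

```shell
# Sketch: restore a config file from the backup tree, keeping the
# installer-generated copy as <file>.old for reference.
# Example on a node (as root):
#   restore_config "/backup/$(hostname)/$(date +%Y%m%d)/etc/origin/node/node-config.yaml" \
#                  /etc/origin/node/node-config.yaml
restore_config() {
    local backup_file="$1" live_file="$2"
    # Preserve the currently installed file before overwriting it.
    if [ -f "$live_file" ]; then cp -a "$live_file" "${live_file}.old"; fi
    cp -a "$backup_file" "$live_file"
}
```

After restoring, restart the matching service (origin-node on a node; origin-master-api and origin-master-controllers on a master).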



What Next?

Course Review


Lesson Description:

Congratulations everyone for reaching the end of the OpenShift on OpenStack course! I hope you had as much fun watching the videos and completing the labs as I did putting them together for you. Now that you’ve reached the end, I hope you continue your OpenShift on OpenStack adventures. Thanks for hanging out with me, and I hope to see you again in my next course.

Join the Community!


Lesson Description:

While you're out making new friends in the OpenStack and OpenShift communities, don't forget that Linux Academy also has a community just waiting for you to join! You can create questions, search for previous answers, and even post your own how-to guides on the Linux Academy Community, and you can chat with other students and instructors on the Linux Academy Community Slack. I hope to see you there!
