We deploy the real environment; you take the scenario-based labs on us. Self-paced, from anywhere, at any time.
This lab will present you with a single CentOS 6 'sysvinit' system. We will talk about how to work with sysvinit runlevels and how to change them.
NOTE: Do NOT reboot your lab server during this process to test any setting. Doing so will END the lab!
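For reference, on a sysvinit system such as CentOS 6 the default runlevel lives in /etc/inittab, and the current runlevel can be inspected and changed without a reboot. A minimal sketch using the standard sysvinit tools (the runlevel number chosen here is only an example):

```
# Show the previous and current runlevel
runlevel

# Switch to runlevel 3 (full multi-user, no GUI) without rebooting
telinit 3

# The default runlevel line in /etc/inittab; change the number
# to set a different default for the next boot:
id:3:initdefault:
```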
This lab will present you with a single CentOS 7 'systemd' system. We will talk about how to work with systemd boot targets, how they map to the older sysvinit runlevels, and how to change them.
NOTE: Do NOT reboot your lab server during this process to test any setting. Doing so will END the lab!
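On the CentOS 7 system, systemd replaces runlevels with boot targets. The mapping between the two is fixed, and can be sketched as a small helper function (the function name is ours; the target names are standard systemd):

```shell
#!/bin/sh
# Map a legacy sysvinit runlevel number to the systemd target that replaces it.
# (Helper name is illustrative; the runlevel-to-target mapping is standard.)
runlevel_to_target() {
  case "$1" in
    0) echo "poweroff.target" ;;
    1) echo "rescue.target" ;;
    2|3|4) echo "multi-user.target" ;;
    5) echo "graphical.target" ;;
    6) echo "reboot.target" ;;
    *) echo "unknown"; return 1 ;;
  esac
}

runlevel_to_target 3   # multi-user.target
runlevel_to_target 5   # graphical.target

# On the lab server itself you would query and set the default target with:
#   systemctl get-default
#   systemctl set-default multi-user.target
```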
In this lab, you will follow along with the accompanying video to practice using a number of commands that allow you to filter text streams to standard output.
NOTE: You will be provided a number of files to use during these demonstrations, downloadable through GitHub (the first video will show you how).
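As a taste of the kind of filtering covered, here is a minimal, self-contained pipeline (the sample data is made up for illustration; the lab's GitHub files differ):

```shell
# Create a small sample file (a stand-in for the lab's GitHub files)
printf 'carrot\napple\nbanana\napple\n' > fruits.txt

# Sort the stream, then collapse duplicates and count occurrences
sort fruits.txt | uniq -c

# Produce a sorted, duplicate-free list of names
sort -u fruits.txt
```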
In this lab, we will explore the various tools and options for creating filesystems of various types on partitioned disk devices. You will be presented with a single CentOS 6 image containing an additional three disk devices, 20 GB in size, to follow along with the video examples or downloadable study guide.
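The same mkfs workflow can also be rehearsed without spare disks by using a file-backed image; a minimal sketch (the filename and size are arbitrary, and the format/mount steps are shown as comments because they need root and the filesystem tools):

```shell
# Create a 50 MB file to stand in for a disk device
dd if=/dev/zero of=disk.img bs=1M count=50 status=none

# With root privileges you would then format and mount it, e.g.:
#   mkfs.ext4 disk.img
#   mkdir -p /mnt/demo && mount -o loop disk.img /mnt/demo

ls -l disk.img
```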
In this lab, you will be presented with a single CentOS 6 image containing an additional three disks that can be partitioned and formatted for use during the video instruction or downloadable guide. You will configure disk usage quotas on one or more partitions as part of this lab.
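For orientation, enabling user and group quotas on a partition starts with mount options in /etc/fstab; a sketch of the relevant entry (the device name and mount point are placeholders):

```
# /etc/fstab entry with quota mount options (device/mount point are examples)
/dev/xvdb1  /data  ext4  defaults,usrquota,grpquota  0 2

# After remounting, quotas are initialized, set, and reported with:
#   quotacheck -cug /data
#   edquota -u someuser
#   repquota /data
```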
This Hands On Lab will show you how to set up a remote VNC connection to your Debian (Ubuntu, Cinnamon or Plain Debian) or RPM based (CentOS, Red Hat) Linux distributions. The two methods differ in the server configuration, although securing the connection locally and remotely are the same. Where differences exist, they will be clearly noted.
This Hands On Lab will show you how to set up Windows Server 2008 shares so that they are available to a Linux client. Additionally, we will demonstrate how to set up Ubuntu Linux as a file share server for Windows clients. We will talk about how to secure those shares in both directions by setting up account access and using credentials files to secure the password exchange between our systems. Finally, we will discuss all the services that have to be installed and running on both Windows and Linux for both operating systems to function as either file servers or clients.
You will be provided with a single Linux Academy Ubuntu 16.04 LTS image. This will allow you to:
1. Install Docker and start the service
2. Add the tools and utilities to build the image
3. Download the associated files from Github
4. Create the Docker image
5. Deploy and Test the Docker container from that image
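Steps 4 and 5 revolve around a Dockerfile; a minimal sketch of what one might look like (the base image, package, and file names here are illustrative, not the lab's actual files):

```
# Illustrative Dockerfile (names are examples, not the lab's real files)
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
COPY index.html /var/www/html/
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]
```

From the directory containing the Dockerfile, the image would then be built and tested with something like `docker build -t mylab .` followed by `docker run -d -p 80:80 mylab`.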
In this lab, we will be walking through the installation and configuration of Apache 2 web server to enable answering secure HTTP connections. We will demonstrate how to generate a SSL Certificate key to be used to exchange with a Certificate Authority to obtain a valid SSL Certificate. Additionally, we will use that key to generate our own Self-Signed certificate and then install and enable that certificate in our default SSL vhost. Finally, we will verify locally as well as using our internet browser from our client machine that the certificate is available and is presented correctly when queried over HTTPS.
In this lab, we put into practice some of the Docker commands that we have learned. Using what we know, we will instantiate a Docker container running MySQL and then use a typical client to connect and verify functionality, then end the lab by stopping and deleting the container instance.
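The flow described above boils down to a handful of docker commands; a sketch (the image tag, password, and container name are placeholder values):

```
# Start a MySQL container (name and password are example values)
docker run -d --name lab-mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

# Verify with a throwaway client container linked to the server
docker run -it --rm --link lab-mysql:mysql mysql:5.7 \
    mysql -h mysql -u root -psecret -e 'SHOW DATABASES;'

# End the lab: stop and delete the container instance
docker stop lab-mysql && docker rm lab-mysql
```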
This lab will be covering something that is most commonly associated with Linux, the LAMP Stack. In this lab, we will learn how to install and configure each of the components in our application stack and make them available for use. Once we verify the appropriate services are listening, we will install and configure PHPMyAdmin on our server. This web application will allow us to demonstrate not only that all the pieces of our stack are talking to each other, but allow us to talk about securing access to our precious data.
In this lab, we will put into practice some of the Docker commands that we have learned. Using what we know, we will instantiate a Docker container running MySQL and a Docker container running WordPress. We will then link these containers together, before verifying functionality and ending the lab by stopping and deleting the container instances.
In this lab, we will put into practice some of the Docker commands that we have learned. Using what we know, we will instantiate a Docker container running Apache and use a mounted host directory as the html folder.
We will use a typical client to connect and verify functionality, then make a change to the contents of the data folder; the change should be reflected in the running container. We will end the lab by stopping and deleting the container instance.
In this lab, we will put into practice some of the Dockerfile commands that we have learned. Using what we know, we will create a Dockerfile that uses a base CentOS image, install Apache on this image, then extract the contents of a tar file that we created for Apache to serve. We will then build our image and run it, before logging in to the running Docker container and checking that it does what is required. We will then use a web client on the host to ensure it works.
In this lab, we will put into practice some of the Dockerfile commands that we have learned. Using what we know, we will create a Dockerfile that uses a base Nginx image from hub.docker.com. We will use the ENV, COPY, ADD commands.
We will then build out the image and run it. We will then log in to the running Docker instance and check that it does what is required. We will also use a web client on the host to ensure it works.
In this lab, we will create a private repository and a self-signed certificate for the repository. We will run the repository, get an image from hub.docker.com, and tag this image for use in the private repository. We will then push the image to the private repository. To check the image is there, we will pull the image back down and instantiate a container from it.
In this lab, we will put into practice some of the Docker commands that we have learned. Using what we know, we will download an image, back it up, then restore it. We will then instantiate a container from this image, make changes to the contents of the container, back up the running container, and restore it again.
This lab is going to cover something you have likely heard about relatively recently. We are going to take a look at the "Heartbleed" bug and how it may affect servers that you have to manage. We will discuss how to assess your server under a specific but common scenario, and then how to recompile the vulnerable packages so as to mitigate the vulnerability and your system's exposure to it.
Dealing with legacy services that are less secure is a constant challenge in the enterprise. Understanding how to best provision that service as needed while maintaining some level of security will go a long way to mitigating risks in your environment.
Nagios Core Engine Compilation and Installation with Plugins

Enterprise server and service monitoring has become more and more important as virtualization becomes the norm. This lab will show you how to download and compile both the Nagios Core monitoring engine as well as the common plugins that allow service and performance monitoring. We will then set up a web server that will display those statistics in a dashboard for easy viewing.
In this lab, we identify and report on high CPU/memory utilization and kill or adjust process priorities.
In order to do that, we need to generate a process that will cause high CPU utilization. Before you begin the scenario below, be sure to do the following. Note that this is also covered in the video and lab guide.
1. Create a file with the following command:
dd if=/dev/urandom of=testfile count=20 bs=1024k
2. Create a file called cputest.sh with the following code:
#!/bin/bash
i=0
while [ 1 ]
do
    i=`expr $i + 1`
    echo "Iteration: $i"
done
3. Make the script executable:
chmod 755 cputest.sh
4. Run the script:
./cputest.sh
In this lab, you will regain access to a locked system by resetting the root password using the console of an inaccessible virtual machine.
Using your VNC client, connect to the IP address provided in the "SSH" section on port 5901. The password for the connection is "vncpass" without the quotes.
This lab covers how to find and install a new package onto a server, how to check that a package installed correctly, and how to add and delete packages from a server. For this lab, we will be installing the Apache and Audacity packages, as well as deleting and then re-creating the EPEL repository.
name=Extra Packages for Enterprise Linux 7 - $basearch
Repository Key File Command:
wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 https://getfedora.org/static/352C64E5.txt
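Pulling the pieces together, the repository definition lives in a .repo file under /etc/yum.repos.d/; a sketch of a minimal entry (the mirrorlist URL shown follows the well-known EPEL metalink layout, but verify it against current Fedora documentation):

```
# /etc/yum.repos.d/epel.repo (minimal illustrative entry)
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
```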
This lab will provision a Red Hat Enterprise Linux instance that will have a network interface, but that interface will be unconfigured and will not be able to reach any public network. You will use the appropriate network management utilities to configure both the network and DNS information in order for it to do so. You must connect to this instance using a VNC client. Connect to the public SSH IP address on port 5901. The password is "vncpass".
Using the provisioned firewall service on your Red Hat Enterprise server, you will be allowing an HTTP web server to be accessed from external systems by applying one or more rules to the appropriate security zone(s). You must connect to this instance using a VNC client. Connect to the public SSH IP address on port 5901. The password is "vncpass".
Security is a commonly overlooked topic of conversation when discussing Linux servers. Although it is not uncommon to talk about hardening the server itself, client access to the server can sometimes be an afterthought. Securing your Nginx web server's content can be accomplished simply by using an encrypted web session. This is accomplished by installing an SSL certificate in Nginx and then activating the configuration. We will talk about the security around generating SSL keys and how installing a self-signed certificate can be a means to that end (and keep in mind, a certificate from a third-party issuer is installed the same way).
In this lab, we are going to walk through the basics of creating physical volumes and volume groups, and using the logical volume manager to create logical volumes. At the end of this lab, you should have an understanding of LVM management on a Red Hat Enterprise Linux server.
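The basic LVM workflow follows a fixed sequence; a sketch using placeholder device and volume names (run as root on the lab server, not elsewhere):

```
# Initialize a disk as an LVM physical volume (device name is an example)
pvcreate /dev/xvdb

# Group one or more physical volumes into a volume group
vgcreate vg_data /dev/xvdb

# Carve a 5 GB logical volume out of the group
lvcreate -L 5G -n lv_data vg_data

# Format and mount it like any other block device
mkfs.ext4 /dev/vg_data/lv_data
mkdir -p /data && mount /dev/vg_data/lv_data /data
```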
In this lab, you will create a new external shared network, then create a new project network with subnets and attach it to the new external network, using both the command line and the dashboard. You will delete the network on the command line before moving to the dashboard; note that the external network will need to be deleted by an admin.
Learn how to start using the FirewallD tool within Red Hat Enterprise 7. Install and configure firewall rules, test and verify how the rules work between the two servers provided and your remote machine.
Before beginning this scenario, please perform the following steps once your system is provisioned:
A. Log into Lab Server 1 and change the password for the linuxacademy user.
B. Log into Lab Server 2 and change the password for the linuxacademy user.
Note: All tasks are to be performed as the root user or via sudo.
In this lab, we are going to learn to connect a Red Hat 7 machine to an AD LDAP domain.
NOTE: You will need to refer to the following information during your scenario:
Domain FQDN: ad.linuxacademy.com
Domain Administrator: Administrator
Domain Admin Password: LinuxAcademy123!
SSO Username: test
SSO User password: LinuxAcademy123
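Joining the Red Hat 7 machine to the AD domain is typically done with realmd and sssd; a sketch using the lab's domain details (the package list follows standard RHEL 7 practice; verify it against your course guide):

```
# Install the join tooling (standard RHEL 7 packages)
yum install -y realmd sssd oddjob oddjob-mkhomedir adcli samba-common-tools

# Discover and join the lab domain as the Administrator account
realm discover ad.linuxacademy.com
realm join ad.linuxacademy.com -U Administrator

# Verify that domain users resolve
id test@ad.linuxacademy.com
```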
This lab introduces you to the concepts of DNS/BIND services from a client and server perspective. By the end of this document, you will have built an Ubuntu 14.04 LTS BIND Server and created a client configuration to test any zones or alternate DNS entries for the domain you set up. We then verify that DNS responds via standard DNS client tools as well as a console-based web browser.
Use OpenLDAP and autofs to create a single sign-on solution. When completed, you can log in to your Linux server using SSO from the OpenLDAP server. Upon logging in via SSO, the user's home directory is also automatically mounted on the Linux system.
This lab will introduce you to the concepts of NFS v4 services from a client and server perspective. By the end of this document, you will have built an Ubuntu 13.10 NFS Server, created a client configuration, and tested your server configuration and any shared directories for the network client you choose to set up.
This lab is designed to follow along with the RHCE course and the section for configuring a server to forward email to a central mail server. This will start a Red Hat 7 system with the appropriate outgoing ports available for forwarding to SMTP or Secure SMTP servers.
In this lab, you will be presented with a Red Hat server configured to allow you to practice the concepts reviewed during the RHCE course videos of the same name for bonding network links. Remember to watch the video on the network configuration.
You will only be presented with TWO network adapters due to configuration limitations. This will cause the final step for bringing the adapters up in this environment to fail. This is expected and normal, but the steps will be exactly the same on a live system with more adapters.
If you have your own lab environment, you can follow along with the videos using a VM with a minimum of 3 network adapters.
Create and configure a Kerberos server, client, and test user. Register the client with the KDC server, and authenticate the test user. Due to Kerberos's FQDN requirement, use the following information to update the /etc/hosts file on each server:

Server 1
======
FQDN: kdc-server.mylabserver.com
PRIVATE IP: 10.0.0.100

Server 2
======
FQDN: kerb-client.mylabserver.com
PRIVATE IP: 10.0.0.101
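The /etc/hosts additions described above would look like this on both servers (the short aliases after each FQDN are an optional convenience we added):

```
# Append to /etc/hosts on each server
10.0.0.100  kdc-server.mylabserver.com   kdc-server
10.0.0.101  kerb-client.mylabserver.com  kerb-client
```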
This lab provides users with three servers, which should be used as follows:
- Server 1: KDC Kerberos Server (you will see this listed)
- Server 2: NFS Server and Kerberos Client (you will see this listed)
- Server 3: NFS Client and Kerberos Client (it will be available, but will NOT be listed)

We will set up a KDC server, configure a client for authentication, configure a simple NFS share for an NFS client, and then protect that share with Kerberos by adding the appropriate principals to the KDC server, mounting the share, and writing to the drive.

NOTE: Due to a configuration limitation, your third server will not show once the lab is started. However, it will be up and available on internal IP 10.0.0.102 (whereas the two showing will be 10.0.0.100 and 10.0.0.101). In order to connect, use the same credentials but log in first to one of the first two servers and then SSH to 10.0.0.102 from there.
Configure iSCSI targets and initiators; learn how to mount the targets consistently. This scenario will present you with two servers that are intended to allow you to create an iSCSI target and iSCSI initiator:
- Server 1: iSCSI target - a 1 GB partition provided to create a BLOCK iSCSI device
- Server 2: iSCSI initiator - configure to mount the created remote iSCSI target
NOTE: You will be provided with TWO Lab Servers for these scenarios. You will only see the public and private IPs for ONE of them. You must use the displayed public IP to connect to BOTH servers. Server one will be displayed; in order to SSH to Server two, you must connect first to the public IP on Server one and then SSH from there to the private IP for Server two (see the example in the videos and guide).
NOTE: In labs where you are provided with TWO Lab Servers for these scenarios, you will only see the public and private IPs for ONE of them. You must use the displayed public IP to connect to BOTH servers. Server one will be displayed; in order to SSH to server two, you must connect first to the public IP on server one and then SSH from there to the private IP for server two (see the example in the videos and guide). Again, this will ONLY apply to scenarios where you have two lab servers involved (which you will be told in your Lab Guide document as well as the first scenario video).
Use ansible-vault to manage a vault-encrypted file. Perform the following tasks, which build on the tasks performed in Lab 1. If you are using a different or new server, first perform the tasks in Lab 1.
Edit an encrypted file.
Rekey an encrypted file.
View an encrypted file.
Decrypt an encrypted file.
Encrypt the file again.
Use the file in a playbook again.
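Each task above maps directly onto an ansible-vault subcommand; a sketch (the filenames are placeholders):

```
ansible-vault edit secrets.yml       # edit an encrypted file in place
ansible-vault rekey secrets.yml      # change the vault password
ansible-vault view secrets.yml       # view contents without decrypting on disk
ansible-vault decrypt secrets.yml    # decrypt the file
ansible-vault encrypt secrets.yml    # encrypt it again
ansible-playbook site.yml --ask-vault-pass   # use the file in a playbook
```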
Create a playbook that creates a new user on the server. Check to ensure that the user has been created. Then copy a file you have created to that user's home folder and change the permissions of the file to 0755. Check to ensure it has been created there. The file is called program1.sh and should be in the /opt/ folder.
Create a playbook that uploads a file (which you create empty) to the target server with 0755 permissions. It should then add a line to the file containing the Ansible hostname: use the lineinfile module to look for the string hostname2= and, if it is not present, add it to the file.
Check to see if the file was created and has the content.
Create a template with a variable for server hostname. Create a playbook that installs apache and uses a template to create the /var/www/html/index.html file on the target server. The hostname must be in the index.html file. ADD multiple variables to the template and export them to the index.html
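A sketch of what the template and playbook might look like (the file names and variable names here are our own choices, not prescribed by the lab):

```
# templates/index.html.j2 (illustrative)
<h1>Served by {{ ansible_hostname }}</h1>
<p>Environment: {{ app_env }} - Owner: {{ app_owner }}</p>

# playbook.yml (illustrative)
- hosts: web
  become: yes
  vars:
    app_env: lab
    app_owner: linuxacademy
  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present
    - name: Render index.html from the template
      template:
        src: index.html.j2
        dest: /var/www/html/index.html
```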
In this lab, we install the yum security plugin, then use various yum commands to check for different levels of security updates. We also update the software on the servers.
Use the following commands, and list the output to a file. You may use any method to get the information in a file.
yum install yum-plugin-security
yum updateinfo
yum updateinfo list
yum updateinfo list --sec-severity=Critical
yum updateinfo list --sec-severity=Moderate
Rsync is a common method for backing up and/or synchronizing filesystems across multiple systems or servers. It literally stands for "remote synchronization". It uses algorithms that minimize the amount of data copied from job to job by only moving new files or new portions of files (changes only).