
Red Hat Certified Specialist in Security (EX415) Practice Exam

Hands-On Lab

 


Length

02:00:00

Difficulty

Intermediate

This practice test is designed to assess your readiness to take the Red Hat Certified Specialist in Security (EX415) exam. This test covers securing Red Hat servers in a production environment, as well as the many objectives listed in the official Red Hat curriculum. This test is 4 hours long, just like the real EX415 exam.

What are Hands-On Labs?

Hands-On Labs are scenario-based learning environments where learners can practice without consequences. There's no risk of compromising a system and no money wasted on expensive downloads. Practice real-world skills without the real-world risk, no assembly required.

Red Hat Certified Specialist in Security (EX415) Practice Exam

Introduction

This practice test is designed to assess student readiness to take the Red Hat Certified Specialist in Security (EX415) exam. It covers multiple facets of securing Red Hat servers in a production environment, along with the many objectives listed in the official Red Hat curriculum. This test is 4 hours in length, just like the official exam.

Solution

  1. Begin by logging in to the lab servers using the credentials provided on the hands-on lab page:

    • Log in to Host1, Host2, Host3, and Control1 using SSH, then become the root user:
    ssh cloud_user@PUBLIC_IP_ADDRESS
    sudo su
    • Log in to Control1 using VNC:
    • For Mac users:
      • Open Finder
      • Press Command+K on your keyboard to bring up the Connect to Server window
      • Alternatively, expand Go in the menu at the top of the screen and click Connect to Server
      • In the Connect to Server window, connect to vnc://<IP_ADDRESS>:5901, making sure to replace <IP_ADDRESS> with the IP address you are provided on the hands-on lab page
    • Windows users will need to install an application like VNC Viewer to connect.

1. On Host2, set up auditing for low disk space alerts to email root when the available disk space reaches 100 MB. Also, restrict audit logs to consume no more than 100 MB of disk space, and limit the number of audit buffers to 2560.

On Host2:

  1. Edit the /etc/audit/auditd.conf file:

    nano /etc/audit/auditd.conf
  2. Set the following (10 log files of 10 MB each caps the audit logs at 100 MB total):

    • space_left = 100
    • space_left_action = email
    • max_log_file = 10
    • num_logs = 10

    Save and exit the file by pressing Ctrl+X, Y to save, and press Enter to use the same file name.

  3. Edit the file /etc/audit/rules.d/audit.rules:

    nano /etc/audit/rules.d/audit.rules
  4. Include the following:

    # Limit audit buffers
    -b 2560

    Save and exit the file by pressing Ctrl+X, enter Y to save, and press Enter to use the same file name.

2. On Host2, configure the audit rules to meet STIG compliance, then make sure all the audit changes are put into effect.

  1. Make a backup of the current audit rules using the following command:

    cp /etc/audit/rules.d/audit.rules /etc/audit/rules.d/audit.rules-backup

    Copy the STIG audit rules into the audit.rules file with the following command:

    sudo su
    cd /usr/share/doc/audit-2.8.4/rules
    cat 10-base-config.rules 30-stig.rules 99-finalize.rules >> /etc/audit/rules.d/audit.rules

    If prompted to overwrite the file, enter y.

    The 10-base-config.rules file we copied in also includes its own buffer size setting, which would override the 2560 value we set earlier. We need to remove it:

    nano /etc/audit/rules.d/audit.rules

    Remove the line:

    -b 8192

    Save and exit the file by pressing Ctrl+X, enter Y to save, and press Enter to use the same file name.

  2. Restart the auditd service:

    service auditd restart  
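
    > Note: As an optional check (not required by the objective), the loaded settings can be confirmed once auditd has restarted:

    # Confirm the backlog limit reflects the -b 2560 buffer setting
    auditctl -s
    # List the loaded rules; the STIG watches should appear
    auditctl -l | head -n 20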

3. On Host2, create an audit report for all executed events in the logs. Name the report host2-audit-report.txt and save it to the cloud_user's home directory.

  1. Create an audit report for all executed events:

    aureport -x > /home/cloud_user/host2-audit-report.txt
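
    > Note: Optionally, a quick look at the file confirms the report was written:

    head /home/cloud_user/host2-audit-report.txt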

4. On Control1, create a custom OpenSCAP policy that checks that the Telnet and FTP servers are removed and that firewalld is installed and running. Name the customized policy control1_custom.xml in the cloud_user's home directory.

On our Control1 VNC session:

  1. VNC to Control1 (instructions are at the top of this guide)
  2. Open SCAP Workbench:
    • Applications > System Tools > SCAP Workbench
  3. Select RHEL7 next to Select content to load.
  4. Click the Customize button next to Profile.
  5. Provide a New Profile ID of xccdf_org.ssgproject.content_profile_C2S_control1.
  6. In the customizing window:
    1. Click the Deselect All button at the top.
    2. Under Services > Obsolete Services > Telnet, check the box next to Uninstall telnet-server Package.
    3. Under Services > FTP Server > Disable vsftpd if Possible, check the box next to Uninstall vsftpd Package.
    4. Under System Settings > Network Configurations and Firewalls > firewalld > Inspect and Activate Default firewalld Rules, check the box next to Verify firewalld Enabled and Install firewalld.
  7. Click the OK button at the bottom of the customization window.
  8. Now, in the SCAP Workbench window, click File > Save Customization Only and name the customization control1_custom.xml.
  9. Leave SCAP Workbench open; we will use it to run the scan in the next objective.

5. On Control1, use SCAP Workbench to scan Control1 (Local Machine) using the newly created control1_custom profile. Then create a report of the scan results named control1_scan_report.html.

  1. From within the SCAP Workbench window, select Local Machine as the target, then click the Scan button at the bottom to start a scan using the custom profile.
  2. Once the scan is finished, click the Close button in the Diagnostics window.
  3. Click the Save Results button at the bottom, and select HTML Report.
  4. Enter control1_scan_report.html as the name of the report, and click Save.
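
    > Note: For reference only, a roughly equivalent scan and report can be produced from the command line with oscap, assuming the scap-security-guide content is installed at its default path and control1_custom.xml is in the current directory (the datastream filename may differ on your system):

    # Evaluate the tailored profile against the local machine and write an HTML report
    oscap xccdf eval \
      --profile xccdf_org.ssgproject.content_profile_C2S_control1 \
      --tailoring-file control1_custom.xml \
      --report control1_scan_report.html \
      /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml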

6. On Control1, generate an SSH key for the ansible user, then copy that key to Host2 in order to use Ansible later on.

On our Control1 SSH session:

  1. To create a keypair for the ansible user on the Control1 host, run the following:

    • Become the ansible user:

      sudo su - ansible
    • Generate the SSH keypair (press Enter to accept all defaults):

      ssh-keygen
    • Copy the public key to Host2 (accept the host key if prompted, authenticate as ansible user):

      ssh-copy-id Host2
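
    > Note: As a quick optional check, a remote command should now run on Host2 without a password prompt:

    ssh Host2 hostname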

7. On Control1, SCAP Workbench was used to create an Ansible playbook to remediate Host2 issues. Add Host2 to an inventory file in the ansible user's home directory, then download and run the remediate.yml playbook against Host2.

  1. On Control1, create an inventory file in the ansible user's home directory and add Host2 to it:

    nano inventory
    [Host2]
    X.X.X.X   (Private IP Address of Host2)

    Save and close the file.

  2. Download the remediate.yml playbook by running:

    wget -P /home/ansible/ https://raw.githubusercontent.com/linuxacademy/content-security-redhat-ex415/master/remediate.yml
  3. Run the remediate.yml playbook against Host2:

    ansible-playbook -i inventory remediate.yml
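
    > Note: If the playbook cannot connect, an ad hoc ping is a quick way to confirm Ansible can reach Host2 over SSH as the ansible user:

    ansible -i inventory Host2 -m ping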

8. On Host3, set up AIDE to monitor the /accounting directory using the DIR settings group and monitor the /applications/payroll directory for all access events. Configure AIDE to run a check every morning at 1 AM.

On Host3:

  1. Install AIDE:

    yum install -y aide
  2. Initialize AIDE:

    /usr/sbin/aide --init

    > Note: This will take about 5 minutes to complete.

  3. Copy initialized database to production:

    cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
  4. Define directories to monitor:

    nano /etc/aide.conf
    APP_ACCESS = a
    
    /accounting     DIR
    /applications/payroll   APP_ACCESS  

    > Note: These lines should be added directly above the /boot/   CONTENT_EX line.

    Save and close the file.

  5. Create a cron job in /etc/crontab to run aide --check at 1 AM daily (the system crontab requires a user field, so the entry runs as root):

    nano /etc/crontab
    0 1 * * * root /usr/sbin/aide --check

    Save and close the file.

  6. Now we need to update the AIDE database since we made changes to what was monitored:

    /usr/sbin/aide --update

    > Note: This will take about 5 minutes to complete.

    cp /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz  

    Enter y to overwrite the file.
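
    > Note: Optionally, an immediate manual run confirms the new rules and refreshed database work before the 1 AM cron job fires (this also takes a few minutes):

    /usr/sbin/aide --check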

9. On Host1, only permit SSH access for root from host Control1; be sure root SSH access is enabled globally as well. Also permit user cloud_user SSH access from anywhere. Ensure these changes take effect immediately.

On Host1:

  1. The first step is to permit root logins by removing the comment in front of the line #PermitRootLogin yes in the /etc/ssh/sshd_config file.

    Edit the sshd_config file:

    nano /etc/ssh/sshd_config

    Remove the # before the following line:

    PermitRootLogin yes
  2. Secondly, we need to add root@control1 and cloud_user to the AllowUsers line in the /etc/ssh/sshd_config file.

    Add the following line to the file:

    AllowUsers root@control1 cloud_user

    Save and close the file.

  3. Now we need to restart the sshd service so the changes we made will take effect:

    systemctl restart sshd  
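
    > Note: Optionally, the configuration can be double-checked; sshd -t prints nothing when the syntax is valid, and sshd -T shows the effective settings:

    sshd -t
    sshd -T | grep -iE 'permitrootlogin|allowusers'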

10. On Host1, install USBGuard and configure it to allow devices with the name "Yubikey-Waddle" or serial number "1337h4x0r". Configure it to block all devices that don't match these rules. USBGuard will need to run at boot.

  1. Install USBGuard

    yum install -y usbguard
  2. Start the USBGuard service

    systemctl start usbguard.service
  3. Generate a base policy for USBGuard

    usbguard generate-policy > /etc/usbguard/rules.conf
  4. Restart the USBGuard service after creating the base policy

    systemctl restart usbguard.service
  5. Enable the USBGuard service to start at boot

    systemctl enable usbguard.service
  6. Create a local file named rules.conf and add two allow lines

    nano rules.conf

    Enter these two lines:

    allow name "Yubikey-Waddle"
    allow serial "1337h4x0r"

    Save and close the file.

  7. Commit the USBGuard rule changes by running the following command

    install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf
  8. Edit the /etc/usbguard/usbguard-daemon.conf file

    nano /etc/usbguard/usbguard-daemon.conf

    Set the ImplicitPolicyTarget to block:

    ImplicitPolicyTarget=block

    Save and close the file.

  9. Restart the USBGuard service

    systemctl restart usbguard.service
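
    > Note: As an optional check, USBGuard can report the rules it is enforcing and the current allow/block state of attached devices:

    usbguard list-rules
    usbguard list-devices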

11. On Host1, ensure the Helpdesk group has permissions to edit USBGuard rules.

  1. Update USBGuard to permit the Helpdesk group to make changes to USBGuard

    nano /etc/usbguard/usbguard-daemon.conf

    Change the IPCAllowedGroups line to read:

    IPCAllowedGroups=Helpdesk

    Save and close the file.

  2. Restart the USBGuard service

    systemctl restart usbguard.service

12. On Host3, install PAM and configure an account lockout policy to lock accounts out for 15 minutes after 3 failed login attempts. Do not include root in the account lockout policy.

On Host3:

  1. To install PAM, run the following command:

    sudo yum install -y pam-devel
  2. Edit the /etc/pam.d/password-auth file:

    vi /etc/pam.d/password-auth
  3. Add the following as the second uncommented line in the file:

    auth        required      pam_faillock.so preauth silent audit deny=3 unlock_time=900
  4. Add the following as the fourth uncommented line in the file:

    auth        [default=die] pam_faillock.so authfail audit deny=3 unlock_time=900
  5. Next, add the following as the first line in the account section:

    account     required      pam_faillock.so 
  6. Save and close the /etc/pam.d/password-auth file.

  7. Edit the /etc/pam.d/system-auth file:

    vi /etc/pam.d/system-auth
  8. Add the following as the second uncommented line in the file:

    auth        required      pam_faillock.so preauth silent audit deny=3 unlock_time=900
  9. Add the following as the fourth uncommented line in the file:

    auth        [default=die] pam_faillock.so authfail audit deny=3 unlock_time=900
  10. Next, add the following as the first line in the account section:

    account     required      pam_faillock.so 
  11. Save and close the /etc/pam.d/system-auth file.
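
    > Note: Optionally, the lockout policy can be verified with the faillock tool; cloud_user below is just an example account:

    # Show recorded authentication failures for a user
    faillock --user cloud_user
    # Clear the lockout if an account gets locked during testing
    faillock --user cloud_user --reset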

13. On Host3, create a password complexity policy that requires all new passwords to be at least 14 characters in length, contain at least 4 different character classes, and have at least 4 numbers in it.

  1. To create the password requirements in the policy, we need to edit the /etc/security/pwquality.conf file and include the following:

    Edit the pwquality.conf file:

    nano /etc/security/pwquality.conf

    Change the following lines (be sure to uncomment any necessary lines):

    minlen = 14  
    minclass = 4  
    dcredit = -4    

    Save and close the file.

  2. In order to put the new policy into effect, we need to add the following line to the /etc/pam.d/passwd file:

    Edit the passwd file:

    nano /etc/pam.d/passwd

    Add the line:

    password    required    pam_pwquality.so retry=3
    • This line should be inserted as the first password line in the file; in the default configuration, that makes it the third uncommented line.

    Save and close the file.
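
    > Note: Optionally, a candidate password can be tested against the policy with pwscore (shipped with libpwquality); the sample passwords below are arbitrary:

    # Fails: too short and too few digits
    echo 'Short1!' | pwscore
    # Passes: 14+ characters, 4 character classes, at least 4 digits
    echo 'Winter2024Snow!9' | pwscore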

14. On Host3, ensure users dschrute and mscott have full sudo access.

  1. Add the following lines to the /etc/sudoers file via visudo:

    visudo

    Add the following directly below the line root ALL=(ALL) ALL:

    dschrute     ALL=(ALL)       ALL
    mscott       ALL=(ALL)       ALL

    Save and close the file.
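
    > Note: Optionally, the granted privileges can be listed per user:

    sudo -l -U dschrute
    sudo -l -U mscott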

15. On Host1, create a new volume 100 MB in size named data_lv, which is to be part of the luks_vg volume group.

On Host1:

  1. View a list of available volume groups:

    vgs
  2. Create a new logical volume:

    lvcreate -L 100M -n data_lv luks_vg
  3. Verify that the new logical volume was created:

    lvs

16. On Host1, encrypt the new data_lv volume with LUKS, then format it using ext4 and mount it to the /data directory. Lastly, write a test file to the /data directory named test.txt.

  1. Encrypt the volume:

    cryptsetup luksFormat /dev/mapper/luks_vg-data_lv
    • Type YES at the prompt.
    • Enter the passphrase Pinehead1! at the next two prompts.
  2. Check for TYPE=crypto_LUKS in the output of this command:

    blkid | grep data
  3. Open the volume:

    cryptsetup luksOpen /dev/mapper/luks_vg-data_lv data_lv
    • Enter the passphrase Pinehead1! at the prompt.
  4. Check for data_lv in the output of this command:

    ls /dev/mapper
  5. Run the following command to overwrite all of the storage on the new volume:

    shred -v -n1 /dev/mapper/data_lv
  6. Next, format the new volume using ext4 with the following command:

    mkfs.ext4 /dev/mapper/data_lv
  7. Next, mount the volume to /data:

    mount /dev/mapper/data_lv /data
  8. Check for lost+found in the output of this command:

    ls /data
  9. Check the status of the new encrypted volume:

    cryptsetup -v status data_lv
  10. Create the test file:

    touch /data/test.txt
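
    > Note: As an optional final check, the LUKS header and the mounted filesystem can be inspected:

    # One key slot should show as ENABLED
    cryptsetup luksDump /dev/mapper/luks_vg-data_lv
    # Confirm the mount and the test file
    df -h /data
    ls -l /data/test.txt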

17. On Host2, change the LUKS passphrase for the patient_lv volume to "Itscoldinthesnow!32". The original passphrase is "Pinehead1!". No data must be lost during this process.

On Host2:

  1. We first need to identify which device backs the patient_lv volume. When a LUKS-encrypted logical volume is opened, the backing device shown in its status output includes the volume group name.

    Run the following command, and look for device in the output:

    cryptsetup -v status patient_lv
  2. Run the following command to change the passphrase:

    cryptsetup luksChangeKey /dev/mapper/luks_vg-patient_lv
    • Enter the original passphrase (Pinehead1!) at the prompt.
    • Enter the new passphrase (Itscoldinthesnow!32) at the prompt.
    • Re-enter the new passphrase (Itscoldinthesnow!32) to confirm.
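
    > Note: Optionally, the new passphrase can be verified without mapping the device by using the --test-passphrase option; enter Itscoldinthesnow!32 when prompted:

    cryptsetup luksOpen --test-passphrase /dev/mapper/luks_vg-patient_lv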

18. In preparation for deploying NBDE, set up Control1 as an NBDE Tang server.

On our Control1 SSH session, ensure we are the root user:

sudo su
  1. On Control1, install Tang:

    yum install -y tang
  2. Configure Tang to run at boot:

    systemctl enable tangd.socket --now
  3. Verify that two Tang keys were created:

    ls /var/db/tang

    There should be two files in that directory with the file extension .jwk.

  4. Lastly, copy the IP address of Control1 to your clipboard (we'll need it later).

    ip addr

    > Note: We are looking for the inet line in the eth0 section.
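
    > Note: Optionally, you can confirm Tang is answering on its default port 80 socket; it should return a JSON advertisement:

    curl http://localhost/adv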

19. On Host3, encrypt the /dev/xvdg disk using the NBDE Tang keys on Control1. Then, ensure the NBDE keys are set to retrieve automatically at boot.

On Host3:

  1. First, install the necessary Clevis packages:

    yum install -y clevis clevis-luks clevis-dracut
  2. Next, encrypt the /dev/xvdg disk with the Tang key from Control1:

    clevis bind luks -d /dev/xvdg tang '{"url":"http://CONTROL1_IP"}'

    > Note: Be sure to replace CONTROL1_IP with the IP address we copied in the previous objective.

    • Y to trust the keys
    • y to initialize
    • Use Pinehead1! as the existing LUKS passphrase
  3. Verify that the key was entered into the LUKS header of /dev/xvdg:

    luksmeta show -d /dev/xvdg
  4. Verify that slot 1 is active and there is a key value next to it.

  5. Lastly, rebuild the initramfs so the Tang key is retrieved automatically at boot (this will take about 2 minutes to complete):

    dracut -f

20. On Host1, ensure SELinux is put into enforcing mode and the host boots into enforcing mode.

On Host1:

  1. Check the SELinux state

    getenforce

    This will show that it is in disabled mode, so we need to change it to permissive mode in /etc/selinux/config.

  2. Edit /etc/selinux/config and change SELinux to be in permissive mode:

    nano /etc/selinux/config
    SELINUX=permissive

    Save and close the file.

  3. Reboot the host

    shutdown -r now

    Log in to the server once it has finished rebooting:

    ssh cloud_user@PUBLIC_IP_ADDRESS

    Become the root user:

    sudo su
  4. Check the SELinux state

    getenforce
    • This will show that it is in permissive mode, so we need to change it to enforcing mode
  5. Put SELinux into enforcing mode

    setenforce 1
  6. Check to make sure SELinux is now in enforcing mode

    getenforce

    We can see our change worked and SELinux is now in enforcing mode.

  7. Ensure SELinux boots into enforcing mode

    Edit the SELinux configuration file:

    nano /etc/selinux/config
    SELINUX=enforcing

    Save the changes.
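
    > Note: Optionally, sestatus can confirm both the current mode and the mode configured for boot:

    sestatus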

21. On Host1, configure SELinux confined users by mapping Linux user jhalpert to SELinux user user_u and Linux user pbeesly to SELinux user staff_u.

  1. Map Linux user jhalpert to SELinux user user_u:

    semanage login -a -s user_u jhalpert
  2. Map Linux user pbeesly to SELinux user staff_u:

    semanage login -a -s staff_u pbeesly  
  3. Check the user mappings:

    semanage login -l  
    • We can see our Linux users successfully mapped to the assigned SELinux users.

Conclusion

Congratulations — you've completed this hands-on lab!