
Ansible Archiving

Hands-On Lab

 


Length: 00:30:00

Difficulty: Intermediate

Being able to perform routine system tasks, such as archives and backups, is a key skill for success with Ansible, and it is now an objective on the Red Hat Certified Ansible Specialist exam. In this hands-on lab, we will use a playbook with the archive module to archive a folder, then transfer the resulting archives from several servers back to the Ansible control server for backup purposes. Note: Ansible has been set up and configured for use on the control server and two nodes, to save time during the hands-on lab.



Introduction

In this Hands-On Lab we will use a playbook with the archive module for backup purposes.

Note: Ansible has been set up and configured for your use.

Solution

To access the lab environment, log in to the control node as cloud_user, using the IP address and login credentials provided on the hands-on lab page.

Sign onto the Ansible Control Node Server as cloud_user and Change to the ansible User. Test to Ensure Ansible Has Been Set up for You

Sign in to the server called Ansible Control Node as cloud_user, and change to the ansible user via the su - ansible command.

Test that Ansible is working via an ad-hoc command.

A possible command to use is the following:

ansible all -m ping
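
If all of the nodes answer, we're good to go. For a more compact readout, the ad-hoc command also accepts a -o (one-line) flag; this is an optional variation, not a required lab step:

ansible all -m ping -o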

Create an Ansible Playbook Called backup-logs.yml. The Playbook Should Create a Compressed Backup of the /var/log Folder on Each of the Nodes

Before we do anything, let's have a look at our hosts file:

cat /etc/ansible/hosts

In there, we'll see a couple of hosts (down near the bottom of the file):

[dbsystems]
db1
db2
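
Since these are the hosts our playbook will target, we can double-check that Ansible resolves the group as expected (an optional sanity check):

ansible dbsystems --list-hosts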

The Ansible playbook should create a backup of the /var/log folder in the /home/ansible folder and call the compressed copy of the log files logs.tar.gz. To make this happen, we've got to create a YAML file (we'll call it backup-logs.yml). Here's that file:

---
- hosts: dbsystems
  user: ansible
  become: yes
  gather_facts: no
  tasks:

  - name: Compress directory /var/log/ into /home/ansible/logs.tar.gz
    archive:
      path: /var/log
      dest: /home/ansible/logs.tar.gz
      owner: ansible
      group: ansible
      format: gz
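
Before running the playbook, it's worth letting Ansible validate the YAML first (an optional check, not a required lab step):

ansible-playbook backup-logs.yml --syntax-check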

Now let's run that playbook and see what happens:

ansible-playbook backup-logs.yml

If everything looks good, we can log in to one of the nodes and check it:

ssh db1

Run a quick ll, and we'll see a logs.tar.gz sitting in the home directory. Go ahead and get out of db1 (with the exit command), and we'll be back on the control1 machine.
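
As an alternative to logging in to each node, we can confirm the archive exists on both nodes at once with an ad-hoc command (purely a convenience, not part of the lab steps):

ansible dbsystems -m shell -a "ls -l /home/ansible/logs.tar.gz"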

Using Another Ansible Module, Copy the Compressed Log Files to the Control Node. Ensure the Name Is Unique and Related to Which Node It Has Come From

So, now that we're sure the files we need do in fact exist on the nodes, we need to bring them back to control1. We'll do that with the fetch module, which pulls a copy of the compressed log file from each node back to the control node.

We've got to go out and grab the remote logs. Let's add this to backup-logs.yml:

  - name: Fetch the log files to the local filesystem
    fetch:
      src: /home/ansible/logs.tar.gz
      dest: logbackup-{{ inventory_hostname }}.tar.gz
      flat: yes

The whole file should look like this when we're done:

---
- hosts: dbsystems
  user: ansible
  become: yes
  gather_facts: no
  tasks:

  - name: Compress directory /var/log/ into /home/ansible/logs.tar.gz
    archive:
      path: /var/log
      dest: /home/ansible/logs.tar.gz
      owner: ansible
      group: ansible
      format: gz

  - name: Fetch the log files to the local filesystem
    fetch:
      src: /home/ansible/logs.tar.gz
      dest: logbackup-{{ inventory_hostname }}.tar.gz
      flat: yes

Now when we run ansible-playbook backup-logs.yml and then run a quick ll, we should see tar.gz files sitting there that came from both db nodes.
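
Because flat: yes is set, fetch writes each file straight into the current directory rather than creating per-host subdirectories (dest/hostname/path), and the {{ inventory_hostname }} variable keeps the file names unique. A quick glob confirms one archive per host:

ls -l logbackup-*.tar.gz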

Uncompress One of the Files That Has Been Downloaded and Make Sure It Is a Copy of the Files That Are in the /var/log Folder on the Node

To really check what we got, let's extract the contents of one of the archives copied to the control node and make sure it's a valid copy of the files as they exist on the node:

tar -xvf logbackup-db1.tar.gz

Once that's done extracting, run ll log (log being the directory that was extracted from the tar.gz file), and we should see the same thing we'd see when running ll /var/log on the relevant node.
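
If we'd rather peek inside an archive without extracting it, tar can list its contents instead; here we check the archive from the other node (an optional extra check):

tar -tzf logbackup-db2.tar.gz | head

Every path listed should start with log/, matching the directory we just extracted.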

Conclusion

That's really all there is to it. We've created an Ansible playbook that archives the log directory on two different node servers and fetches the results back to the control node. Congratulations!