Scenario Based LXD/LXC Security
Greetings! Welcome to the Scenario-Based LXD/LXC Security course. This is the third course in a four-course learning path on securing containers. The first course taught how to prepare and harden an operating system so that it is as secure as possible. The second course was all about Docker containers and how to navigate different situations within the context of Docker security. This time, we move on to another popular container technology, Linux Containers (LXC), and get some in-depth insight into how to use it. LXD is a system container manager and an extension of LXC; today, one is rarely used without the other.
In this course, we go over a series of security best practices. Before we can do anything, we need to install LXC and initialize LXD. One useful security feature of the Linux kernel is seccomp, and we will discuss it within the context of LXC. Seccomp allows and denies system calls using blacklists and whitelists; for unprivileged containers, it adds another layer of security and is good practice. Another useful thing we can do with LXC/LXD containers and container groups is limit their resource consumption. We can place constraints on different parts of the system, such as CPU usage, RAM usage, the amount of network traffic going in and out, and so on. These restrictions matter because if a container starts overconsuming system resources for any reason and no alert system warns us, the result can be a very slow system, or the container can end up DoSing everything else running on it.
One practical example that we will go through is the process of configuring a container to be a tunnel through which we redirect traffic. It will be an SSH tunnel with private and public keys, and all traffic will go through it. This shields us, as the end user, from being exposed and encrypts our traffic as it travels between two points.
Another practical example that we will go through is the process of configuring and securing a container to run Apache Web Server for an application back end and front end. We will be creating three separate containers for this, one for the back end, one for the database, and one for the front end. We will need to configure proper communication between them and ensure that we end up using best security practices and strong encryption.
About the Author
This video introduces the course author who will be the guide for all the lectures.
About the Course
This is an important video because it goes over the knowledge and skills you can gain by completing the course. This course focuses on the security aspect of containers, and everything we do will be within that context. Do not skip this video; use it to make sure this is the right course for you.
How to Get Help
Here we provide instructions on how to get help from Linux Academy through its Community section. The Interact with Students, Slack, and Instructor/Site Support pages enable interaction with the instructor and other users who can help with any problems or questions.
Course Prerequisites
This lecture covers the prerequisites for getting the most out of this course. Students should have basic knowledge of Python and bash scripting, know how to use the VIM text editor, and be familiar with the Linux file system and the Linux command line. Basic knowledge of computer networks is also needed.
Text Editor Vim Basics (Optional)
This video covers a few basics of the VIM text editor. VIM stands for Vi IMproved, and it is an improved version of the older Vi editor, commonly used for editing files from the Linux shell. VIM will be used in several videos throughout this course, so it is beneficial to know how to navigate it. We go over the necessary basic commands: first we install VIM on our machines, then see how to open or create a file, enter insert mode, write to a file, save it, and quit.
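The workflow described above can be sketched as a short cheat sheet (the file name is just an example; use your distribution's package manager):

```shell
# Install Vim (Debian/Ubuntu shown; use "yum install -y vim" on CentOS)
sudo apt install -y vim

# Open (or create) a file:
#   vim notes.txt
#
# Inside Vim:
#   i       enter insert mode and start typing
#   Esc     return to normal mode
#   :w      write (save) the file
#   :q      quit
#   :wq     save and quit
#   :q!     quit without saving
echo "Vim cheat sheet ready"
```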
Job Market (Optional)
In this video, we provide an overview of the kinds of jobs to look out for with the knowledge gained from this course. We talk about the salary ranges for these jobs and review some websites to use to look for these positions.
In this video, we get started with LXC/LXD best practices. First, we install LXC; on Ubuntu 18.04 it is already installed by default, while on other distributions it must be installed manually. LXC should be run as a non-root user with lower privileges rather than elevated ones, and to enable a user to run LXC, we just add them to the appropriate group. After that, we initialize LXD using the default options.
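A minimal sketch of these steps, assuming Ubuntu 18.04 (package and group names may differ on other distributions, and these commands require a host with LXD available):

```shell
# Install LXD if it is not already present (preinstalled on Ubuntu 18.04)
sudo apt install -y lxd

# Let a non-root user talk to the LXD daemon by adding them to the lxd group
sudo usermod -aG lxd "$USER"
newgrp lxd                         # pick up the new group in this shell

# Initialize LXD with the default options, non-interactively
lxd init --auto

# Sanity check: launch a test container
lxc launch images:centos/7 test
lxc list
```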
Resource Limits Part 1 - CPU Cores
Here we talk about why putting constraints on the resource consumption of specific containers and container groups is important, and we demonstrate how to do it. If a container starts overconsuming the resources of the system it runs on and no alert system warns us, it will end up effectively DoSing everything else running on the system, or everything will slow to a crawl. We will implement some restrictions on a basic CentOS container, starting in this video with limiting a container's CPU usage.
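CPU core limits like those described can be set with LXD's `limits.cpu` configuration key; the container name here is an example, and the commands assume a running LXD host:

```shell
# Launch a basic CentOS container to experiment on
lxc launch images:centos/7 limited

# Restrict the container to 2 CPU cores (scheduler picks which ones)
lxc config set limited limits.cpu 2

# Or pin the container to specific cores, e.g. cores 1 and 3
lxc config set limited limits.cpu 1,3

# Confirm from inside the container how many cores it sees
lxc exec limited -- nproc
```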
Resource Limits Part 2 - CPU Time
Now we give containers an allowance of CPU time. We can allot a percentage of whatever computational power we have at our disposal, and we can assign different priorities to different containers, giving higher priority to the more crucial services. This is important for both security and convenience reasons; priorities go from 0 to 10. Another way to limit CPU is in terms of milliseconds: a container can be configured to receive a fixed slice of CPU time out of each scheduling period, for example 20ms out of every 200ms.
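These three styles of CPU time limit map onto LXD's `limits.cpu.allowance` and `limits.cpu.priority` keys (container name is an example; requires an LXD host):

```shell
# Give the container at most 50% of the available CPU time
lxc config set limited limits.cpu.allowance 50%

# Priority from 0 to 10; higher-priority containers win under contention
lxc config set limited limits.cpu.priority 10

# Hard time-slice limit: 20ms of CPU time out of every 200ms period
lxc config set limited limits.cpu.allowance 20ms/200ms

# Review the applied limits
lxc config show limited
```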
Resource Limits Part 3 - RAM
This time, we impose limits on RAM usage. We can express the limit in kilobytes, megabytes, gigabytes, and so on; here we limit our container to 256MB. We also talk about swap memory, which uses part of the disk as a reserve in case we run out of RAM. As with CPU limits, we can set priorities when limiting RAM, as well as a separate priority for swap. We also discuss hard and soft limits.
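The RAM and swap limits discussed here correspond to LXD's `limits.memory*` keys; a sketch, assuming the same example container on an LXD host:

```shell
# Cap the container's RAM at 256MB
lxc config set limited limits.memory 256MB

# Allow the container to fall back to swap when RAM runs out
lxc config set limited limits.memory.swap true

# How eagerly this container's memory is swapped out (0-10)
lxc config set limited limits.memory.swap.priority 5

# "hard" kills allocations over the limit; "soft" is best-effort
lxc config set limited limits.memory.enforce hard
```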
Seccomp - Whitelists and Blacklists
In this lecture, we discuss seccomp within the context of LXC. For unprivileged containers, it adds another layer of security and is good practice. Seccomp is used to allow and deny the system calls a container makes; by controlling them, we know exactly which system calls are being made in the container, which gives us a much deeper understanding of its behavior. With seccomp, we can create blacklists and whitelists within LXC: a blacklist names system calls to deny while allowing everything else, and a whitelist names the allowed calls while denying all others. In this video, we make some customizations to our seccomp profile.
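A blacklist policy in LXC's version-2 seccomp format might look like the sketch below; the denied syscalls and the profile path are examples, not the course's exact profile, and the config key is `lxc.seccomp.profile` on recent LXC versions (`lxc.seccomp` on older ones):

```shell
# Write an example seccomp blacklist: every syscall is allowed except the
# listed ones, which fail with errno 1 (EPERM)
cat > seccomp-blacklist.conf <<'EOF'
2
blacklist
[all]
mknod errno 1
umount2 errno 1
EOF

# Reference it from the container's LXC config, e.g.:
#   lxc.seccomp.profile = /var/lib/lxc/mycontainer/seccomp-blacklist.conf
cat seccomp-blacklist.conf
```

For a whitelist, the second line would read `whitelist` and the file would enumerate only the syscalls the container is allowed to make.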
Applications and Services
SSH Tunnel Part 1 - SOCKS5 Proxy, VPN, Container Port Forwarding
In this video, we configure a container to be a tunnel through which we redirect traffic. We create an SSH tunnel with private and public keys, and all traffic goes through it; this shields us from being exposed and encrypts our traffic as it travels between two points. Here we see how to properly expose a container to the outside world, create redirects, and establish connections from the outside, through the host, to the container. To demonstrate, we create a SOCKS5 proxy, which is practically a VPN in that traffic between us and that point is encrypted. We launch a container and call it SOCKS5-Proxy, create a new user in the container along with a home directory for it, and then install everything we need, such as openssh, net-tools, and vim. Next we configure SSH: we create an authorized_keys file, change the ownership of the .ssh directory to the tunnel user, and set the appropriate permissions. After that, we generate our key pair and add the public key to the authorized_keys file.
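The setup described above can be sketched roughly as follows; the username, packages, and key type are illustrative rather than the course's exact values, and the `lxc` commands require an LXD host:

```shell
# Launch the container and install what we need inside it
lxc launch images:centos/7 SOCKS5-Proxy
lxc exec SOCKS5-Proxy -- yum install -y openssh-server net-tools vim

# Create the tunnel user with a home directory and SSH key directory
lxc exec SOCKS5-Proxy -- useradd -m tunnel
lxc exec SOCKS5-Proxy -- mkdir -p /home/tunnel/.ssh
lxc exec SOCKS5-Proxy -- touch /home/tunnel/.ssh/authorized_keys
lxc exec SOCKS5-Proxy -- chown -R tunnel:tunnel /home/tunnel/.ssh
lxc exec SOCKS5-Proxy -- chmod 700 /home/tunnel/.ssh
lxc exec SOCKS5-Proxy -- chmod 600 /home/tunnel/.ssh/authorized_keys

# On the client: generate a key pair, then append the public key to
# /home/tunnel/.ssh/authorized_keys in the container
ssh-keygen -t ed25519 -f ~/.ssh/tunnel_key
```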
SSH Tunnel Part 2 - SOCKS5 Proxy, VPN, Container Port Forwarding
In this video, we create a redirect so that our container becomes visible to the outside world. We add a new proxy device to our SOCKS5-Proxy container so that the host listens for all incoming traffic on port 61613 and redirects it to the IP address of our container. When that is done, we can publish the container, and then attempt to connect to our server from our client.
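This kind of host-to-container redirect is what LXD proxy devices are for; a sketch (port 61613 comes from the course, while the device name and container SSH port are assumptions):

```shell
# Host listens on 61613 and forwards to SSH (port 22) inside the container
lxc config device add SOCKS5-Proxy sshtunnel proxy \
    listen=tcp:0.0.0.0:61613 \
    connect=tcp:127.0.0.1:22

# From the client, connect through the host's redirected port
ssh -i ~/.ssh/tunnel_key -p 61613 tunnel@<host-ip>
```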
SSH Tunnel Part 3 - SOCKS5 Proxy, VPN, Container Port Forwarding
In this video, we test our tunnel. Data should be forwarded from the client, through the tunnel, to the server; from the server to the container; and from the container onward to the rest of the Internet or wherever it is going. Before testing whether the tunnel works, we need to modify the network settings in our browser: pick manual proxy configuration, set the SOCKS Host, choose an unused local port, and select SOCKS v5.
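On the command line, the same test might look like this; local port 1080 is an example choice, and 61613 is the host-side redirect from the previous video:

```shell
# Open a dynamic (SOCKS5) forward through the tunnel; -N means no remote command
ssh -i ~/.ssh/tunnel_key -p 61613 -D 1080 -N tunnel@<host-ip> &

# Browser: manual proxy configuration, SOCKS Host 127.0.0.1, port 1080, SOCKS v5
# Or verify from the shell that traffic exits via the tunnel:
curl --socks5 127.0.0.1:1080 https://ifconfig.me
```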
Configure and Secure a Container to Run Apache Web Server for Application Front End Part 1 - Web Server Container Front End Setup and Configuration
Now that our back end is set, we move on to configuring the front-end part of our application. We set up another container from a CentOS image and call it Frontend. We won't have to redownload the image because we already have it stored locally. We install a web server to use as a reverse proxy in order to serve the front-end content of our application, then create a few directories for our website configuration. After installing VIM, we edit the mylabserver.com configuration file and apply the changes.
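A hypothetical version of that virtual host configuration is sketched below; the domain and file name mirror the video, but the document root and log paths are assumptions (on CentOS this file would typically live under /etc/httpd/conf.d/):

```shell
# Write an example Apache virtual host for the front end
cat > mylabserver.com.conf <<'EOF'
<VirtualHost *:80>
    ServerName mylabserver.com
    DocumentRoot /var/www/mylabserver.com/html
    ErrorLog /var/log/httpd/mylabserver.com-error.log
    CustomLog /var/log/httpd/mylabserver.com-access.log combined
</VirtualHost>
EOF
cat mylabserver.com.conf
```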
Configure and Secure a Container to Run Apache Web Server for Application Front End Part 2 - Web Server Container Front End Setup and Configuration
Our front end is still not visible from the outside. We can access it from the host machine, but the IP address of the Frontend container has no redirect pointing to it, even though it is running Apache, so we need to create proxy devices to redirect traffic. The first redirect we create is for port 80: any traffic arriving on port 80 of the host OS is redirected to port 80 on the container. We do the same for port 443. In the container, we edit the Apache configuration to include the virtual host configuration we created earlier. After that, we install some prerequisites, such as the EPEL repos and certbot, and then run certbot, which generates an SSL certificate. When everything is set, all that's left is to test that it works properly.
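A rough sketch of those steps, assuming a CentOS 7 container on an LXD host (device names and the certbot package name are assumptions):

```shell
# Redirect HTTP and HTTPS from the host to the Frontend container
lxc config device add Frontend http  proxy listen=tcp:0.0.0.0:80  connect=tcp:127.0.0.1:80
lxc config device add Frontend https proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443

# Inside the container: EPEL repos and certbot, then request a certificate
lxc exec Frontend -- yum install -y epel-release
lxc exec Frontend -- yum install -y certbot python2-certbot-apache
lxc exec Frontend -- certbot --apache -d mylabserver.com
```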
Configure and Secure a Container to Serve Application Back End via Apache Part 1 - Apache Web Server, Initial Setup Configuration
In the next few videos, we will learn how to configure and secure a container to run Apache Web Server for an application back end. In this video, we perform the initial web server container back end setup and configuration. We start with a clean server, so we have to set everything up again from the beginning, as we did in some previous videos. We create our container instance and call it Backend. After that, we install vim and Apache, create and edit some configuration files, and inform Apache of those configuration files.
Configure and Secure a Container to Serve Application Back End via Apache Part 2 - Web Server Container Back End Setup, Configuration, Nonstandard Ports, and SSL
Continuing from the last video, our task this time is to create redirects for our back end. Initially, we create a redirect on the standard ports because we don't want certbot to give us any problems; we will change the ports later. In essence, certbot can work with other ports straight away, but it is best to keep the defaults while obtaining the certificate and only afterwards change them to whatever we want. So we install certbot, run it, obtain the certificate, and finally change the ports.
Configure and Secure a Container to Serve Application Back End via Apache Part 3 - Install and Create a Database
In this video, we do a few more configuration steps, such as removing the mylabserver.com.conf configuration file from sites-enabled because we have little need for it in our situation. We then launch another container to serve as the database server. On this container, we install both the client and the server. When everything is set up, we start the database server so that we can create a database, create a user, grant that user privileges on the database, and specify who can access the server and the database, and how.
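The database container setup might look like the following sketch; the container name is an example, and MariaDB is assumed as the database (a common choice on CentOS, and the one named later in the course):

```shell
# A dedicated container for the database
lxc launch images:centos/7 Database

# Install both the MariaDB client and server, then start the service
lxc exec Database -- yum install -y mariadb mariadb-server
lxc exec Database -- systemctl enable --now mariadb

# Interactive hardening: set the root password, drop test DB, etc.
lxc exec Database -- mysql_secure_installation
```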
Configure and Secure a Container to Serve Application Back End via Apache Part 4 - Database User Creation and Access Rights
In this video, we deal with database user creation and access rights. The first thing we do is create a user; make sure not to use a weak password in a production environment, as passwords of 12 characters or more significantly increase security. We then allow the created user to connect from a specific host, namely the container where our back end is running. Next, we grant the user all privileges on our MyDB database and set the password for root. Lastly, we apply everything we have instructed the database to do and confirm that it all works as intended. We test the database and the connection, and we remove the port 61613 redirect because we no longer need it.
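In SQL terms, the sequence could be sketched like this; MyDB and the user chronic come from the course, while the back-end IP address and the passwords are placeholders you must replace:

```shell
# Run on the database container; -p prompts for the current root password
mysql -u root -p <<'EOF'
CREATE DATABASE MyDB;
-- user may connect only from the back-end container's IP (example address)
CREATE USER 'chronic'@'10.0.3.20' IDENTIFIED BY 'replace-with-12plus-chars';
GRANT ALL PRIVILEGES ON MyDB.* TO 'chronic'@'10.0.3.20';
-- set a strong root password as well
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('replace-with-strong-pw');
FLUSH PRIVILEGES;
EOF
```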
Configure and Secure a Container to Serve Application Back End via Apache Part 5 - Database, Firewall, and Access Rights
In this video, we focus on setting up a firewall and access rights. First, we install the MariaDB client, which we will use to connect to the existing database, and test the connection. After that, we configure a firewall on our MariaDB container: we install and enable the firewall, then add a permanent rule in the public zone that accepts connections from a specific IP address to port 3306 over TCP.
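With firewalld, the rule described would look roughly like this; the source address is an example for the back-end container's IP:

```shell
# Install and enable firewalld inside the database container
lxc exec Database -- yum install -y firewalld
lxc exec Database -- systemctl enable --now firewalld

# Permanent rich rule: accept TCP to 3306 only from the back end's IP
lxc exec Database -- firewall-cmd --permanent --zone=public --add-rich-rule='
  rule family="ipv4" source address="10.0.3.20" port port="3306" protocol="tcp" accept'
lxc exec Database -- firewall-cmd --reload
```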
Configure and Secure a Container to Serve Application Back End via Apache Part 6 - Get and Configure Flask App Rest API
Now we turn to the host. The requirement here is that only the front end should be able to reach the back end on certain ports; those ports are intended for the front end and not for any other users or IP addresses. The back end is served at port 65535, and we need to allow connections to it from a single host only. We do this using the ufw firewall interface, allowing all traffic coming from the IP address of our front end to anywhere, as long as the destination port is 65535. When we are finished with the firewall, we configure the reverse proxy and acquire our back-end code from GitHub.
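The ufw rule described translates to something like the sketch below, where the front-end IP is an example placeholder:

```shell
# Only the front end's IP may reach port 65535; port comes from the course
sudo ufw allow from 10.0.3.30 to any port 65535 proto tcp

# Turn the firewall on and review the rules
sudo ufw enable
sudo ufw status numbered
```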
Configure and Secure a Container to Serve Application Back End via Apache Part 7 - Get and Configure Flask App Rest API
In order to run the application we downloaded in the last video, some adjustments must be made. First, we download and install some prerequisites: Python 3.6 and the Python 3.6 development files. Then we install flask, flask_cors, mysql-connector, and the waitress server. When the installations are done, we edit several parts of our app.py file: we import serve from waitress, change some database credentials, specify an interpreter for the file, and make a few other modifications. When everything is set, we run our app.
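A sketch of the prerequisites and the waitress change; package names assume CentOS 7 with EPEL, `mysql-connector-python` is the usual PyPI name for the connector, and the app.py edits are shown as comments because the file's contents are the course's own:

```shell
# Python 3.6 plus development files (CentOS 7 / EPEL package names)
sudo yum install -y epel-release
sudo yum install -y python36 python36-devel

# Install the app's dependencies; ensurepip bootstraps pip if it is missing
python3.6 -m ensurepip --user
python3.6 -m pip install --user flask flask_cors mysql-connector-python waitress

# In app.py (sketch of the edits described in the video):
#   #!/usr/bin/env python3.6                 <- interpreter line
#   from waitress import serve               <- production WSGI server
#   serve(app, host="0.0.0.0", port=65535)   <- instead of app.run()
python3.6 app.py
```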
Configure and Secure a Container to Serve Application Back End via Apache Part 8 - Proxy Config, Troubleshooting, and Testing
Now that our app.py file is running, we need to set up a systemd service file for it, relinquishing control of the process to systemd. Systemd will control it from that point forward, so we can clearly see what is happening, and it will restart our application in case of failure or similar situations. We have already downloaded a file called our-program.service, and we need to modify its entries, transfer it to a new directory, and check that it is running properly.
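A hypothetical version of our-program.service is sketched below; the user, working directory, and ExecStart path are assumptions, not the course's exact values:

```shell
# Write an example systemd unit for the waitress-served Flask app
cat > our-program.service <<'EOF'
[Unit]
Description=Flask REST API served by waitress
After=network.target

[Service]
User=chronic
WorkingDirectory=/home/chronic/app
ExecStart=/usr/bin/python3.6 /home/chronic/app/app.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Then: copy it to /etc/systemd/system/, run "systemctl daemon-reload",
# and "systemctl enable --now our-program" to hand control to systemd
cat our-program.service
```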
Configure and Secure a Container to Run Apache Web Server for Application Front End Final - Wrapping Up Loose Ends
We have pretty much finished everything that needs to be done to configure a container to run Apache Web Server for our application; now we just need to wrap up a few loose ends. First, we enter our back-end container and log in to the database as the user chronic. We add one more feature to our application, called Message of the Day, which generates a random message; for this, we need to create a table called motd. That takes care of the back end. On the front end, we must install git, pull a few necessary files, and install php. All that is left is to visit our server's domain and see whether everything works properly.
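Creating the motd table might look like the sketch below; the column layout is an assumption, since the video defines its own schema:

```shell
# Run from the back-end container as the chronic user (prompts for password)
mysql -u chronic -p MyDB <<'EOF'
CREATE TABLE motd (
    id INT AUTO_INCREMENT PRIMARY KEY,
    message VARCHAR(255) NOT NULL
);
INSERT INTO motd (message) VALUES ('Welcome to the lab!');
EOF
```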
Here we go over what we have covered in the course and get a clear perspective on the knowledge it provides!