NGINX Web Server Deep Dive

Course

Intro Video

Photo of Keith Thompson

Keith Thompson

DevOps Training Architect II in Content

Length

10:19:18

Difficulty

Intermediate

Course Details

In this course, you will learn how to install, configure, and customize NGINX for a wide variety of uses. While following along with the lessons, you will also learn how to use the NGINX documentation to assist you as you work. By the end of the course, you will have configured NGINX as a web server, reverse proxy, cache, and load balancer, and you will have learned how to compile additional modules, tune for performance, and integrate with third-party tools like Let's Encrypt.

Syllabus

Course Introduction

Getting Started

Course Introduction

00:01:03

Lesson Description:

NGINX is an amazing web server, reverse proxy, and load balancer. This lesson gives an overview of the topics covered in this course, and what you should already know before you start.

About the Course Author

00:00:57

Lesson Description:

A little about me, Keith Thompson.

Course Features and Tools

00:03:39

Lesson Description:

It's important for you to understand the tools and resources available to you as a Linux Academy and Cloud Assessments student. This video will cover the course features used by this course and other tools you have at your disposal, such as flash cards and course schedules.

Introducing NGINX

What is NGINX?

00:02:56

Lesson Description:

Before we start learning about NGINX, it's important that we have an understanding of its history, goals, and use cases. Here is the slideshow from this lesson.

What is HTTP and How Does It Work?

00:05:21

Lesson Description:

HTTP is the language of the internet and one of the primary protocols that NGINX uses. In this lesson, we'll go over what HTTP is and take a look at a pure HTTP request since we normally don't see them. Here is the slideshow from this lesson.
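To make that concrete, here is roughly what a minimal HTTP/1.1 exchange looks like on the wire (the host name and response values below are placeholders rather than output captured for this lesson); once you have a server running, you can see something very similar by running curl -v against it:

GET / HTTP/1.1
Host: example.com
Connection: close

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 612

<!DOCTYPE html>
...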

NGINX vs. Apache

00:04:24

Lesson Description:

People often ask whether they should use NGINX or Apache as their web server. In this lesson, we'll look at the features of both of the web servers and discuss what makes them different. This lesson will hopefully reassure you that NGINX is the right choice. Here is the slideshow from this lesson.

Differences Between NGINX and NGINX+

00:04:22

Lesson Description:

Before we start using NGINX, we want to acknowledge NGINX Plus and go through some of the key differences. We won't be using NGINX Plus in this course, but it might serve your use case so knowing its features is useful. Here is the comparison page used in this video.

Installing and Running NGINX

Installing NGINX on CentOS 7

00:05:03

Lesson Description:

NOTE: Use at least a 2 unit server to avoid low memory errors. To start digging into NGINX, we first need a server. In this video, we’ll set up a CentOS 7 cloud server with Nginx. Documentation For This Video Official NGINX Installation Documentation systemd Adding the Official NGINX Repository For us to install NGINX using yum we will add the official NGINX repository by creating a file at /etc/yum.repos.d/nginx.repo: $ sudo vim /etc/yum.repos.d/nginx.repo /etc/yum.repos.d/nginx.repo [nginx] name=nginx repo baseurl=http://nginx.org/packages/centos/7/$basearch/ gpgcheck=0 enabled=1 This repo will give us access to the “stable” branch of NGINX to install. If you’d like to install the “mainline” branch of NGINX, change the “baseurl” to “http://nginx.org/packages/mainline/centos/7/$basearch/”. Before we can install, we need to run yum update. $ sudo yum update Installing NGINX Now that we have the repository, let’s install NGINX now: $ sudo yum install -y nginx Note: This command will install the “stable” version of NGINX, which might not be the newest version. Starting and Enabling NGINX Although we’ve installed NGINX, it is not currently running. We’re going to use systemd to run NGINX as a service and set it to start when our server boots up. $ sudo systemctl start nginx $ sudo systemctl enable nginx Next, we’ll stop and start the server from the Linux Academy cloud server window. Once the reboot has finished, we should be able to visit the server’s IP address in our browser and see the default landing page.

Installing NGINX on Ubuntu 16.04

00:04:41

Lesson Description:

NOTE: Use at least a 2 unit server to avoid low memory errors. To start digging into NGINX, we first need a server. In this video, we’ll set up an Ubuntu 16.04 cloud server with Nginx. Documentation For This Video Official NGINX Installation Documentation systemd Update the Server’s Applications Before we get too far, let’s update the software that’s already installed on our server (you can skip this if you’d rather not or know that nothing is out of date): $ sudo apt-get update $ sudo apt-get upgrade -y Installing NGINX For us to install NGINX, we’re going to need to add an apt repository for it. Thankfully, we can follow the official documentation for how to do this. The first thing that we need to do is download the signing key and add it using apt-key: $ curl -o nginx_signing.key http://nginx.org/keys/nginx_signing.key $ sudo apt-key add nginx_signing.key Now that we’ve added the signing key, we can set up the apt repository by appending the following lines to /etc/apt/sources.list: $ sudo vim /etc/apt/sources.list /etc/apt/sources.list (partial) ## Add official NGINX repository deb http://nginx.org/packages/ubuntu/ xenial nginx deb-src http://nginx.org/packages/ubuntu/ xenial nginx With that file updated, we can install NGINX using apt-get after we update: $ sudo apt-get update $ sudo apt-get install -y nginx Note: This command will install the “stable” version of NGINX, which might not be the newest version. Starting and Enabling NGINX Although we’ve installed NGINX, it is not currently running. We’re going to use systemd to run NGINX as a service and set it to start when our server boots up. $ sudo systemctl start nginx $ sudo systemctl enable nginx Next, we’ll stop and start the server from the Linux Academy cloud server window. Once the reboot has finished, we should be able to visit the server’s IP address in our browser and see the default landing page.

NGINX as a Web Server

Basic Web Server Configuration

Understanding the Default NGINX Configuration

00:10:34

Lesson Description:

Working with NGINX is all about configuration, and in this video, we’re going to learn what the default NGINX configuration is to see how it works. Note: the commands in this video are run as the root user. Documentation For This Video user directive worker_processes directive error_log directive pid directive events directive worker_connections directive include directive default_type directive http directive log_format directive sendfile directive access_log directive keepalive_timeout directive Default NGINX Configuration We didn’t need to set up any configuration for NGINX to serve up the default index page. That is because NGINX comes with a default configuration that can be found at /etc/nginx/nginx.conf. Most of what we’re going to be doing would require sudo, so let’s switch to being the root user before we begin: $ sudo su [root] $ cd /etc/nginx Let’s take a look at what’s in the default configuration file: /etc/nginx/nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } Let’s unpack this section by section to understand what’s going on. The first few lines of the file exist outside of any specific context: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; user directive - specifies which user the worker processes should run under. worker_processes directive - lets us specify the number of worker processes to use. error_log directive - sets the file to log errors and what level of messages to log. pid directive - defines the files that store the PID of the main process. The next section is our first “context” called events. events { worker_connections 1024; } A context in NGINX is expressed using curly braces ({ and }), and certain directives only work in very specific types of contexts. The events context exists to allow us to specify how a worker process should handle connections. In this case, the worker_connections specifies the maximum number of connections that a single worker can have open at one time. The http Context The remainder of the default /etc/nginx/nginx.conf file is a single http context. Here’s the entire section: http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } The first directive that we see is include, which lets us separate our configurations into different files and load them in where we need them. The mime.types file defines the content return types that should be used for various files that the server might render. The default_type directive states the content type to use if the content is not defined in the types file. The next two directives are all about logging. Looking at the log_format directive, we see a lot of tokens that start with a dollar sign ($). 
These tokens are variables that are available when a request is being processed, and this directive defines which of them should be used (and in what position) to log the request information. access_log defines the file to log request information to. The next directive, sendfile, is a little more involved, but it specifies whether or not to use the sendfile() system call when delivering content. keepalive_timeout sets how long an idle keep-alive connection to a client is held open before NGINX closes it. Lastly, the include directive is used one more time to load all of the .conf files in the /etc/nginx/conf.d/ directory. This is where we’ll be defining our virtual hosts as we work through this course.
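If you want to experiment with these top-level settings, a small variation might look like the following (this is illustrative only and not part of the lesson): worker_processes auto asks NGINX to start one worker per CPU core, and raising the error_log level to notice makes the log more verbose.

/etc/nginx/nginx.conf (partial, example only)
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

After editing, nginx -t followed by systemctl reload nginx picks up the change.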

Simple Virtual Host and Serving Static Content

00:18:56

Lesson Description:

In this lesson, we’re going to learn how NGINX virtual hosts work by building a default server configuration. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http module NGINX server directive NGINX listen directive NGINX server_name directive NGINX root directive Default NGINX Virtual Host Configuration We didn’t need to set up any configuration for NGINX to serve up the default index page. That is because NGINX comes with a default configuration that can be found at /etc/nginx/conf.d/default.conf. There isn’t a whole lot in this file, but we’re going to remove it and build it up from nothing. Most of what we’re going to be doing would require sudo, so let’s switch to being the root user before we begin: $ sudo su [root] $ cd /etc/nginx [root] $ rm conf.d/default.conf If we curl or browse to our IP address now, it still works even though we removed the configuration file. The configuration is read in when the server starts, and for configuration changes to be picked up, the service needs to be reloaded. Let’s reload NGINX’s configuration using systemctl and try again: [root] $ systemctl reload nginx [root] $ curl localhost curl: (7) Failed connect to localhost:80; Connection refused Now we can start building up our servers from scratch. Simplest Configuration for HTTP Server Here’s the simplest possible server that we can create that listens on port 80 and serves content from NGINX’s default HTML directory: /etc/nginx/conf.d/default.conf server { listen 80; root /usr/share/nginx/html; } This file is going to be included inside of the http context within /etc/nginx/nginx.conf, so we don’t need to define the http context ourselves. Configuring a virtual host with NGINX requires us to use the server directive. We’ll set the server up to listen on port 80 using the well-named listen directive. The root directive allows us to specify where we’re going to put our HTML to be loaded. Let’s stop here, for now, to see if our configuration is valid. The nginx binary provides a testing flag for us to test the configuration that would be loaded if we reloaded; let’s try that before reloading the server: [root] $ nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful It looks like our configuration is fine, so we can now reload the configuration and curl localhost again (output truncated): [root] $ systemctl reload nginx [root] $ curl localhost Welcome to nginx!
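To convince yourself that the root directive is doing the work, a quick experiment (not part of the lesson itself) is to drop a new file into that directory and request it; no reload is needed because we changed content, not configuration:

[root] $ echo "Hello from our new virtual host" > /usr/share/nginx/html/hello.html
[root] $ curl localhost/hello.html
Hello from our new virtual host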

Error Pages

00:05:41

Lesson Description:

We now have an incredibly simple HTTP server running, but when someone goes to a page that doesn’t exist or some other type of error occurs they don’t see anything useful. In this lesson, we’ll look at how we can customize our server’s HTTP error responses. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http module NGINX server directive NGINX error_page directive Current Error Pages The most common error that we’ll see with HTTP servers is going to be the 404 error when someone attempts to access a URL that doesn’t exist. We can see our current 404 error page by using curl or navigating to a fake URL: [root] $ curl localhost/fake.html 404 Not Found 404 Not Found nginx/1.12.2 This is the error page that NGINX generates for us, but it gives almost no useful information and we’ll probably want to set up a different error page. Thankfully, we can do this with error_page directive. Let’s specify that our server should use a 404.html file that we’ll add to /usr/share/nginx/html: /etc/nginx/conf.d/default.conf server { listen 80 default_server; server_name _; root /usr/share/nginx/html; error_page 404 /404.html; } Let’s add our custom 404 page and reload our configuration: [root] $ echo "404 Page Not Found" > /usr/share/nginx/html/404.html [root] $ systemctl reload nginx [root] $ curl localhost/fake.html 404 Page Not Found Now we can see our custom 404 page. Adding the 50x Error Pages Another common type of error is an internal server error (any response code above 500). We’ll add another error_page directive that uses more than one response code: /etc/nginx/conf.d/default.conf server { listen 80 default_server; server_name _; root /usr/share/nginx/html; error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } The 50x.html file already exists, and we needed only to specify the list of status codes that should cause it to be rendered.
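If you want to confirm that the custom page is returned with the right status code (and not a 200), curl's -i flag prints the response headers along with the body; the exact header set below is illustrative:

[root] $ curl -i localhost/fake.html
HTTP/1.1 404 Not Found
Server: nginx/1.12.2
Content-Type: text/html
...
404 Page Not Found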

Access Control with HTTP Basic Auth

00:07:23

Lesson Description:

In this lesson, we’re going to take a look at how we can require authentication for specific paths using HTTP Basic Auth. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http module NGINX server directive NGINX location directive NGINX auth_basic module Utilizing the Basic Auth Module HTTP basic authentication allows us to require a username and password before we allow a user to access a specific URL. To set up NGINX with HTTP basic auth, we’ll be using the auth_basic module. We can specify that specific location blocks utilize basic auth while others do not. Let’s create a new location block that will prevent a specific HTML file from being accessible: /etc/nginx/conf.d/default.conf server { listen 80 default_server; server_name _; root /usr/share/nginx/html; location = /admin.html { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; } error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } The location = /admin.html is how we specify what should happen if someone navigates to the /admin.html path and it matches exactly. From there, we’re able to specify a value for auth_basic using a string (any string will do), and we give the path to the file containing the username/password pairs that have access. Now we need to create the admin.html file that we can access, and if we test now it will be accessible: [root] $ echo "Admin Page" > /usr/share/nginx/html/admin.html Generating a Password File Our configuration is currently valid, but we don’t have a password file to work with yet. We’re going to install the httpd-tools to create our .htpasswd file: [root] $ yum install -y httpd-tools Note: On a Debian based system, you’ll install the apache2-utils package instead. With these tools installed we now have access to the htpasswd utility that can generate a file for us with encrypted passwords for our users. Let’s generate the file now: [root] $ htpasswd -c /etc/nginx/.htpasswd admin New password: Re-type new password: Adding password for user admin With this file created we’re now ready to reload our NGINX configuration and test our the authentication in our browser. [root] $ systemctl reload nginx If you navigate to the /admin.html route in the browser now, you’ll be prompted for a password. Using HTTP Basic Auth from cURL If we want to access the admin page without a browser, then we can pass in the Authorization header in the request using curl. Here’s what the request looks like without the header: [root] $ curl localhost/admin.html 401 Authorization Required 401 Authorization Required nginx/1.12.2 If we use the -u flag for curl, then we can pass in the username and password divided by a :: [root] $ curl -u admin:password localhost/admin.html Admin Page Note: the example user here is admin with the password being password.
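If you later need to grant access to more users, htpasswd can append to the same file; just be sure to omit the -c flag, since -c recreates the file and would wipe the existing admin entry. The deploy user below is only an example name:

[root] $ htpasswd /etc/nginx/.htpasswd deploy
New password:
Re-type new password:
Adding password for user deploy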

Basic NGINX Security

Generating Self-Signed Certificates

00:02:52

Lesson Description:

We’re going to take a break momentarily from configuring our server to look at how to generate a self-signed certificate. Before we can start setting our server up to handle SSL/TLS/HTTPS connections, we need to have a certificate to use. Note: the commands in this video are run as the root user. Documentation For This Video The openssl documentation Generating a Self-Signed Certificate The task of generating a certificate gets a stand-alone lesson because the openssl utility can be a little complicated to use, so we’re going to go through it slowly. We’ll make a directory to put our certificate’s keys in and then generate those keys. Here’s the command: [root ] $ mkdir /etc/nginx/ssl [root ] $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/private.key -out /etc/nginx/ssl/public.pem Generating a 2048 bit RSA private key ........................................+++ ....+++ writing new private key to '/etc/nginx/ssl/private.key' ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [XX]:. State or Province Name (full name) []: Locality Name (eg, city) [Default City]: Organization Name (eg, company) [Default Company Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, your name or your server's hostname) []: Email Address []: Let’s break down what’s going on in this command: [root] $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/private.key -out /etc/nginx/ssl/public.pem req - We’re making a certificate request to OpenSSL -x509 - Specifying the structure that our certificate should have. Conforms to the X.509 standard -nodes - Do not encrypt the output key -days 365 - Set the key to be valid for 365 days -newkey rsa:2048 - Generate an RSA key that is 2048 bits in size -keyout /etc/nginx/ssl/private.key - File to write the private key to -out /etc/nginx/ssl/public.pem - Output file for public portion of key With the public and private keys generated, we’re now able to move on and add SSL/TLS/HTTPS support to our web server.
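If you ever want to double-check what you generated, openssl can print the certificate's subject and validity window back to you (the output will vary based on the answers you gave to the prompts):

[root] $ openssl x509 -in /etc/nginx/ssl/public.pem -noout -subject -dates
subject= /C=XX/L=Default City/O=Default Company Ltd
notBefore=...
notAfter=...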

Configuring the Host for SSL/TLS/HTTPS

00:05:01

Lesson Description:

It is time to add support for TLS/SSL/HTTPS to our server. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http module NGINX server directive NGINX ssl module Configuring HTTPS Server When working with NGINX, there are a lot of options that you can utilize when setting up SSL/TLS for your server. We’re going to start simple and build in more of the configuration options. Let’s set our existing server to also handle HTTPS connections: /etc/nginx/conf.d/default.conf server { listen 80 default_server; listen 443 ssl; server_name _; root /usr/share/nginx/html; ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; location = /admin.html { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; } error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } This is the simplest way that we can create a server to work with SSL/TLS. Our server is now listening on two different ports (80 and 443) with port 443 specified as an ssl connection. From here, we needed to tell our server where it could find the SSL certificate to use. If we navigate to https://OUR_IP_ADDRESS now, we’ll be met with an error in the browser telling us that the connection is not secure, but if we select “Advanced”, there should be a way to tell the browser to proceed. That warning comes from the certificate being self-signed, but it doesn’t change the fact that this connection is over HTTPS. To view the site using curl we’ll need to pass the -k flag to specify that we should accept insecure certificates. [root] $ curl -k https://localhost Welcome to nginx! ...
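As a side note, the ssl module exposes more directives than just the certificate paths. A common hardening step (not covered in this lesson, and the compiled-in defaults may already be acceptable for you) is to pin the protocols and ciphers you are willing to negotiate; these lines would sit alongside the ssl_certificate directives inside the server block:

/etc/nginx/conf.d/default.conf (partial, example only)
ssl_protocols TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;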

NGINX Rewrites

Cleaning Up URLs

00:13:44

Lesson Description:

It’s fairly common to want to tidy up URLs (whether for the look, or security reasons). In this lesson, we’ll take a look at how we can render the proper page without the file extension being sent to us. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http module NGINX server directive NGINX try_files directive NGINX rewrite directive Removing HTML File Extension For our server, we don’t want our URLs to include the .html at the end of our files. To remove this, we’re going to utilize the rewrite and try_files directives. We’ll divide this into two steps: Convert URLs using a rewrite rule. Going to /admin.html should redirect to /admin. Use try_files to ensure that we’re rendering the proper page or rendering a 404. Let’s write our rewrite rules before we make any other changes to see what happens: /etc/nginx/conf.d/default.conf server { listen 80 default_server; listen 443 ssl; server_name _; root /usr/share/nginx/html; ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect; rewrite ^/(.*)/$ /$1 redirect; location = /admin.html { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; } error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } The rewrite lines follow this pattern: rewrite REGEX REPLACEMENT [flag]; Let’s break down our first regex: ^(/.*)\.html(\?.*)?$. The starting ^ indicates that we’re going from the very beginning of the URI. We create a capture group using parentheses. The pattern /.* means starting with a / and including all characters after that. Next, we expect to find a .html (notice that we needed to escape the dot using a backslash: \.). Lastly, we create another capture group that gets an optional query string (anything after a ?) until the end of the URI (as indicated by the $ character). The capture groups will be usable in the REPLACEMENT portion of the directive using $1, $2, etc. Let’s reload the server: [root] $ systemctl reload nginx If we visit our server from a private window (we don’t want anything to be cached), we’ll run into an infinite redirect loop. This is happening because we’re even redirecting the 404.html page, so we can’t render that properly. To fix this redirect loop, we’ll need to use try_files. /etc/nginx/conf.d/default.conf server { listen 80 default_server; listen 443 ssl; server_name _; root /usr/share/nginx/html; ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect; rewrite ^/(.*)/$ /$1 redirect; location / { try_files $uri/index.html $uri.html $uri/ $uri =404; } location = /admin.html { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; try_files $uri/index.html $uri.html $uri/ $uri =404; } error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } We’ve added a try_files directive to both of our location blocks with slightly different rules. We want to be able to handle someone going to the index.html within potentially nested directories. The try_files call will look for the first file (or directory) that matches the request moving from left to right. If nothing is found, then the rightmost file or status code will be returned. If we reload the server configuration and try to navigate to the root URL or the status page, we will now see the content. Unfortunately, we appear not to be hitting our HTTP basic authentication anymore.
The reason for this is that the $uri is changed before we process the location blocks, and we’re never going to have a $uri that matches /admin.html. We need to change that location check to match against /admin instead. Here’s our final configuration file for this video: /etc/nginx/conf.d/default.conf server { listen 80 default_server; listen 443 ssl; server_name _; root /usr/share/nginx/html; ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect; rewrite ^/(.*)/$ /$1 redirect; location / { try_files $uri/index.html $uri.html $uri/ $uri =404; } location = /admin { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; try_files $uri/index.html $uri.html $uri/ $uri =404; } error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } Now when we reload and try to access the page from a fresh private browser, we’ll be prompted for our password.
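A quick way to watch the rewrite work without a browser is to ask curl for just the headers; with the redirect flag, NGINX answers with a 302 and a Location header pointing at the extension-less URL (the output below is approximate, and -k is needed because of our self-signed certificate):

[root] $ curl -kI https://localhost/admin.html
HTTP/1.1 302 Moved Temporarily
Server: nginx/1.12.2
Location: https://localhost/admin
...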

Redirecting All Traffic to HTTPS

00:07:28

Lesson Description:

HTTPS is incredibly important because it helps us secure the data that is being transmitted. It’s so important, in fact, that browsers are starting to label sites as “Not Secure” if the user is connected via HTTP. Let’s change our server configuration to redirect all traffic to go over HTTPS instead of HTTP. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http module NGINX server directive NGINX return directive NGINX $request_uri variable NGINX $host variable Separating HTTP and HTTPS Servers Before we can redirect all traffic to go over HTTPS, we’re going to separate out the HTTP and HTTPS handling for our server by creating two separate server blocks. One will listen on port 80 and not display any content. The other will hold our current configuration but only listen on port 443. Here’s what our configuration looks like with them divided out: /etc/nginx/conf.d/default.conf server { listen 80 default_server; server_name _; return 404; } server { listen 443 ssl default_server; server_name _; root /usr/share/nginx/html; ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect; rewrite ^/(.*)/$ /$1 redirect; location / { try_files $uri/index.html $uri.html $uri/ $uri =404; } location = /admin { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; try_files $uri/index.html $uri.html $uri/ $uri =404; } error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } After a server reload, if we visit our IP address with http://, we should receive a 404 error, but if we visit the same page with https:// we’ll see the content that we expect. Redirecting To a Completely Different URL Now that we no longer accept HTTP traffic, we want to start redirecting the user to the proper page via HTTPS. Thankfully, with what we have already done, we’re 95% of the way there; the rest we can handle using the return directive. Looking at how return works, we need to specify two things to redirect to a completely different URL: return STATUS_CODE URL; Since we’re doing a redirect, we’re going to use the 301 status code. As for the URL portion of the statement, we’re going to use variables that are present when processing the request to ensure that we go to the proper spot. To ensure that it goes to the proper server we’ll use the $host variable in our destination URL to pass along the domain name. Using $host will make this server redirect to the proper server after we have multiple HTTPS virtual hosts. We’ll also use the $request_uri variable, which represents the path portion of the request. The only other thing that we need to do is start the URL with https://.
Let’s change our return directive now: /etc/nginx/conf.d/default.conf server { listen 80 default_server; server_name _; return 301 https://$host$request_uri; } server { listen 443 ssl default_server; server_name _; root /usr/share/nginx/html; ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect; rewrite ^/(.*)/$ /$1 redirect; location / { try_files $uri/index.html $uri.html $uri/ $uri =404; } location = /admin { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; try_files $uri/index.html $uri.html $uri/ $uri =404; } error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } When we reload our server and visit the /admin route using http://, we should be redirected to the https:// version of the URL and prompted for our password.
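You can also verify the redirect from the command line; the Location header should point at the HTTPS version of whatever path you requested (the output shown here is approximate):

[root] $ curl -I http://localhost/admin
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.2
Location: https://localhost/admin
...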

NGINX Modules

Overview of NGINX Modules

00:05:31

Lesson Description:

As we’ve already seen, NGINX’s functionality is broken into modules based on the functionality (rewrites, SSL, auth_basic, etc.). In this video, we’ll investigate how NGINX’s module system works. Note: the commands in this video are run as the root user. Documentation For This Video The load_module directive Dynamic vs Built-in Modules Up until now, we’ve been using modules that are already built-in, but that doesn’t mean that all modules are built-in. As of NGINX 1.9.11, there are dynamic modules that can be loaded in when the configuration is loaded and don’t require the NGINX binary to be recompiled. This change allows NGINX to be more flexible, and the NGINX core package comes with some dynamic modules packaged up, that we haven’t loaded yet. To see the dynamic modules that are installed when we install NGINX, look in /usr/share/nginx/modules (The /etc/nginx/modules directory is a symbolic link to this path also). If we look there now, we won’t see anything: [root] $ ls -l /etc/nginx/modules/ [root] $ There’s nothing here because the official precompiled build that we’re using has many of the first-party, optional modules compiled into the binary itself. We can see the modules that are installed by looking at how our binary was configured using nginx -V: [root] $ nginx -V nginx version: nginx/1.12.2 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' The last line (the configure arguments one), provides a lot of information to us, but unfortunately, it’s hard to read. 
Let’s chain some commands together to get all of the --with-*_module lines: [root] $ nginx -V 2>&1 | tr -- - '\n' | grep _module http_addition_module http_auth_request_module http_dav_module http_flv_module http_gunzip_module http_gzip_static_module http_mp4_module http_random_index_module http_realip_module http_secure_link_module http_slice_module http_ssl_module http_stub_status_module http_sub_module http_v2_module mail_ssl_module stream_realip_module stream_ssl_module stream_ssl_preread_module This is a list of all of the additional modules that were compiled into the binary, and the directives defined in these modules are available to us without needing to load them into our configuration using the load_module directive. Now that we have a rough idea of the modules that are currently available, we’re ready to move on to adding functionality using dynamic modules.
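For reference, loading a dynamic module is a single top-of-file directive; the module name below is just a placeholder for whichever .so you end up building or installing (the next lesson loads a real one):

/etc/nginx/nginx.conf (partial, example only)
load_module modules/ngx_http_example_module.so;

load_module lines must appear in the main (top-level) context, before the events and http blocks.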

Adding Functionality to NGINX with Dynamic Modules

00:13:08

Lesson Description:

It is possible to utilize third-party modules with NGINX, and one of the most common that people use is ModSecurity. We'll be using this module later in the course, and it is the perfect candidate to look at how we can install a third-party module. Notes: The commands in this video are run as the root user. You'll need to run the following command after building the dynamic module to prevent a configuration error in more recent ModSecurity builds: [root] $ cp /opt/ModSecurity/unicode.mapping /etc/nginx/modsecurity/unicode.mapping Documentation For This Video ModSecurity ModSecurity-nginx NGINX dynamic modules load_module directive Compiling libModSecurity Before we can build the dynamic NGINX module for ModSecurity we need to have version 3 of ModSecurity (also known as libModSecurity) built. Since we're going to be compiling this package, we'll install git and some other developer tools to allow us to get the source code. Install prerequisites: [root] $ yum groupinstall 'Development tools' [root] $ yum install -y geoip-devel gperftools-devel libcurl-devel libxml2-devel libxslt-devel libgd-devel lmdb-devel openssl-devel pcre-devel perl-ExtUtils-Embed yajl-devel zlib-devel Clone ModSecurity, compile, and install (this will take a long time): [root] $ cd /opt [root] $ git clone --depth 1 -b v3/master https://github.com/SpiderLabs/ModSecurity [root] $ cd ModSecurity [root] $ git submodule init [root] $ git submodule update [root] $ ./build.sh [root] $ ./configure [root] $ make [root] $ make install Building Compatible Dynamic Module Now that we have the core library that we need installed, we can download the NGINX source code for our installed version of NGINX and use it to build a dynamic module using the ModSecurity-nginx project. Let's pull in the code for both of these projects: Download ModSecurity-nginx: [root] $ cd /opt [root] $ git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git Download and unpack NGINX source: [root] $ nginx -v nginx version: nginx/1.12.2 [root] $ wget http://nginx.org/download/nginx-1.12.2.tar.gz [root] $ tar zxvf nginx-1.12.2.tar.gz Configure and build dynamic module: [root] $ cd nginx-1.12.2 [root] $ ./configure --with-compat --add-dynamic-module=../ModSecurity-nginx [root] $ make modules [root] $ cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules/ We've successfully built the dynamic module for ModSecurity (harder than expected) and copied it into a directory near the rest of our NGINX configuration so that we can load it. Load ModSecurity Module In the previous lessons, we learned that load_module is used to load dynamic modules. Let's use it at the top of the /etc/nginx/nginx.conf file to load our new module: /etc/nginx/nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; # Load ModSecurity dynamic module load_module /etc/nginx/modules/ngx_http_modsecurity_module.so; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } Enabling ModSecurity with Recommended Rules We've loaded the module, but we haven't used it at all. Before we can use it, we'll need to create a rules file.
Thankfully, the ModSecurity source code that we downloaded contains a recommended configuration. Let's create a new directory local to our NGINX and copy that configuration over: [root] $ mkdir /etc/nginx/modsecurity [root] $ cp /opt/ModSecurity/modsecurity.conf-recommended /etc/nginx/modsecurity/modsecurity.conf [root] $ cp /opt/ModSecurity/unicode.mapping /etc/nginx/modsecurity/unicode.mapping We will change one line in this default configuration: we want the audit log to go to /var/log/nginx so that we don't run into issues with SELinux. /etc/nginx/modsecurity/modsecurity.conf (partial) SecAuditLog /var/log/nginx/modsec_audit.log Next, let's enable this in our primary server block (the HTTPS one): /etc/nginx/conf.d/default.conf server { listen 80 default_server; server_name _; return 301 https://$host$request_uri; } server { listen 443 ssl default_server; server_name _; root /usr/share/nginx/html; modsecurity on; modsecurity_rules_file /etc/nginx/modsecurity/modsecurity.conf; ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; rewrite ^(/.*)\.html(\?.*)?$ $1$2 redirect; rewrite ^/(.*)/$ /$1 redirect; location / { try_files $uri/index.html $uri.html $uri/ $uri =404; } location = /admin { auth_basic "Login Required"; auth_basic_user_file /etc/nginx/.htpasswd; try_files $uri/index.html $uri.html $uri/ $uri =404; } error_page 404 /404.html; error_page 500 501 502 503 504 /50x.html; } Finally, let's make sure that our configuration is valid using nginx -t and then reload the service. [root] $ nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful [root] $ systemctl reload nginx
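If you'd like to confirm that the engine is actually inspecting traffic, one common smoke test (not part of the lesson) is to switch SecRuleEngine from DetectionOnly to On in modsecurity.conf, append a throwaway rule, reload, and request a URL that trips it; the rule id and parameter name below are arbitrary:

/etc/nginx/modsecurity/modsecurity.conf (appended, example only)
SecRule ARGS:testparam "@contains test" "id:10001,phase:1,deny,status:403"

[root] $ systemctl reload nginx
[root] $ curl -ks "https://localhost/?testparam=test" -o /dev/null -w "%{http_code}\n"
403

Remember to remove the throwaway rule (and restore DetectionOnly if you prefer it) afterwards.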

Using NGINX as a Web Server

00:15:00

NGINX as a Reverse Proxy

Reverse Proxy

What is a Reverse Proxy?

00:01:16

Lesson Description:

Before we start looking at NGINX as a reverse-proxy, we're going to define what a reverse-proxy is. Defining this topic early ensures that we're speaking the same language as we learn about this feature of NGINX. Here is the slideshow from this lesson.

Preparing a Node.js Sample Application

00:10:13

Lesson Description:

Based on our definition of reverse-proxy, there are quite a few things that we could use NGINX for in front of our other servers. Thankfully, we can add NGINX features in layer-by-layer. Our first step is to have NGINX receive traffic and route that traffic to an application of ours. Note: the commands in this video are run as the root user. Documentation For This Video Node.js Sample Node.js application Getting the Node.js Application For this first application, we’re going to be using a multi-application Node.js system that we’ll need to pull from GitHub. The first thing that we need to do is install Node.js itself: [root] $ curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash - [root] $ yum install -y nodejs Once Node.js is installed, we’re going to clone our repository and run the multiple applications required: [root] $ mkdir /srv/www/ [root] $ cd /srv/www/ [root] $ git clone https://github.com/CloudAssessments/s3photoapp [root] $ cd s3photoapp [root] $ make install Before we can start our applications, we need to configure our AWS credentials. Configuring AWS Credentials We need to have access to S3 and DynamoDB, so we’ll create a user in AWS IAM that has the following permissions: AmazonS3FullAccess AmazonDynamoDBFullAccess Once we’ve created the user, we’ll need to utilize the ACCESS_KEY_ID and SECRET_ACCESS_KEY values in some of the services we’ll create. Running Node.js Services Before we can get back to configuring NGINX, we’re going to set up systemd services for all three of the Node.js applications. The photo-filter has the simplest configuration: /etc/systemd/system/photo-filter.service [Unit] Description=photo-filter Node.js service After=network.target [Service] Restart=always User=nobody Group=nobody Environment=NODE_ENV=production ExecStart=/bin/node /srv/www/s3photoapp/apps/photo-filter/server.js [Install] WantedBy=multi-user.target Both the photo-storage and web-client services need additional environment variables for interacting with AWS: /etc/systemd/system/photo-storage.service [Unit] Description=photo-storage Node.js service After=network.target [Service] Restart=always User=nobody Group=nobody Environment=NODE_ENV=production Environment=AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID Environment=AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY ExecStart=/bin/node /srv/www/s3photoapp/apps/photo-storage/server.js [Install] WantedBy=multi-user.target /etc/systemd/system/web-client.service [Unit] Description=S3 Photo App Node.js service After=network.target photo-filter.target photo-storage.target [Service] Restart=always User=nobody Group=nobody Environment=NODE_ENV=production Environment=AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID Environment=AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY ExecStart=/srv/www/s3photoapp/apps/web-client/bin/www [Install] WantedBy=multi-user.target Finally, let’s start and enable our services: [root] $ systemctl start photo-storage [root] $ systemctl enable photo-storage [root] $ systemctl start photo-filter [root] $ systemctl enable photo-filter [root] $ systemctl start web-client [root] $ systemctl enable web-client In the next lesson, we’ll investigate how to configure NGINX as a reverse-proxy in front of our Node.js applications.
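Before moving on, it's worth confirming that the services actually came up; the port below (3000) is what the web client is assumed to listen on, matching the proxy configuration in the next lesson:

[root] $ systemctl status web-client --no-pager
[root] $ curl -I localhost:3000

If either command shows errors, journalctl -u web-client is the quickest way to see why the process failed to start.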

Reverse Proxy with proxy_pass

00:22:29

Lesson Description:

Let’s set up NGINX as a reverse-proxy for our example Node.js application. Note: the commands in this video are run as the root user. Documentation For This Video Sample application http_proxy NGINX module proxy_pass NGINX directive Creating the photos.example.com Server We’re now ready to set up the host photos.example.com listening on port 80 to proxy to the application that is running locally on port 3000 (and not exposed publicly). Let’s create a new configuration file within /etc/nginx/conf.d: /etc/nginx/conf.d/photos.example.com.conf server { listen 80; server_name photos.example.com; location / { proxy_pass http://127.0.0.1:3000; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } If we reload our NGINX configuration and attempt to connect using curl we’ll, unfortunately, see an error: [root] $ systemctl reload nginx [root] $ curl --header "Host: photos.example.com" localhost 502 Bad Gateway 502 Bad Gateway nginx/1.12.2 Taking a look at the NGINX error log, we’ll see that there is another Permission Denied situation, which leads us to think that it’s SELinux. There are quite a few boolean values in SELinux that can be set regarding the httpd service (which it considers NGINX part of). If we list them out using getsebool -a | grep httpd we’ll find that there is one called httpd_can_network_connect that is set to off. We need to enable this feature for our server to be able to make network requests. [root] $ setsebool -P httpd_can_network_connect 1 Now if we curl again, we should see our page: [root] $ curl --header "Host: photos.example.com" localhost S3 Photo Storage Demo S3 Photo Storage Demo auto-generated table id: 3a665122-e918-464c-b13a-ad981c0df280 Upload new image Browse… Select a file to upload Submit Demo Service by Linux Academy and Cloud Assessments To see this in the browser, we’ll need to modify the /etc/hosts file on our workstation (that has a browser). An example line in the /etc/hosts file would look like this: 34.205.63.94 photos.example.com Note: you’ll need to substitute in your server’s public IP address for 34.205.63.94. In the network inspector, we can see that we’re serving up static content successfully, but we’re doing so through the Node.js application. NGINX is really good at serving that kind of content, so let’s add the following location block to our configuration to take that load off of the application server itself. /etc/nginx/conf.d/photos.example.com.conf (partial) location ~* \.(js|css|png|jpe?g|gif) { root /var/www/photos.example.com; } Let’s reload NGINX and refresh the page. If we look in the network inspector, we’ll see that we’re getting 403 errors from the JavaScript files that we’re trying to serve up. Once again, we’re running into issues with SELinux. This time it is because we’re trying to serve content from a directory that is symbolically linked into /var/www. We’ll need to add another rule to handle this for the public directories in our potential apps: [root] $ ln -s /srv/www/s3photoapp/apps/web-client/public /var/www/photos.example.com [root] $ semanage fcontext -a -t httpd_sys_content_t '/srv/www/.*/public(/.*)?' [root] $ restorecon -R /srv/www Reloading the screen, we see that our static assets are being served up properly. Handling Image Upload By default, the client_max_body_size is 1MB, which is smaller than many images that someone might want to upload.
Before our photos app will actually be useful, we’re going to need to customize this value in our NGINX configuration: /etc/nginx/conf.d/photos.example.com.conf server { listen 80; server_name photos.example.com; client_max_body_size 5m; location / { proxy_pass http://127.0.0.1:3000; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location ~* \.(js|css|png|jpe?g|gif) { root /var/www/photos.example.com; } } Now, we can reload the server and test uploading an image. [root] $ systemctl reload nginx

Setting Up The LEMP Stack

00:12:25

Lesson Description:

Using proxy_pass will work for any service that we can make run on a port, but sometimes we want to utilize a socket. NGINX knows how to proxy traffic for both the FastCGI and uWSGI wire protocols. Next, we’ll learn how to proxy to a PHP application (WordPress) using FastCGI, but we need to set up the rest of the LEMP (Linux, NGINX, MariaDB, PHP) stack first. Note: the commands in this video are run as the root user. Documentation For This Video PHP-FPM MariaDB WordPress Setting Up php-fpm Note: Depending on your OS and how you install php-fpm, these commands and packages will vary quite a bit. The PHP version that would get installed by default is version 5.5, but we want to use PHP 7 because it’s significantly faster. To install this easily using yum, we need to add another repository; the “Software Collections” repository. This command will add the repository: [root] $ yum install -y centos-release-scl [root] $ yum update Now we need to install the php-fpm package (fpm is “FastCGI Process Manager”). Let’s install what we need for PHP. [root] $ yum install -y rh-php71-php rh-php71-php-fpm rh-php71-php-mysqlnd From here, we’ll configure and run our PHP service by modifying the new configuration file for php-fpm at /etc/opt/rh/rh-php71/php-fpm.d/www.conf. Our changes ensure that the nginx user has access and that we use a socket instead of a port to avoid unnecessary networking in our proxy setup later: /etc/opt/rh/rh-php71/php-fpm.d/www.conf (only changed lines) ; Set user and group user = nginx group = nginx ; Set to listen on a socket instead of port. listen = /var/run/php-fpm.sock ; Set permissions on Unix socket. listen.owner = nginx listen.group = nginx listen.mode = 0660 Finally, let’s start the service for php-fpm: [root] $ systemctl start rh-php71-php-fpm [root] $ systemctl enable rh-php71-php-fpm Setting Up MariaDB Now that we have PHP running, we need to set up the database that our WordPress blog can work with. Step one will be creating a repository to install a newer version of MariaDB: /etc/yum.repos.d/mariadb.repo [mariadb] name = MariaDB baseurl = http://yum.mariadb.org/10.2/centos7-amd64 gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB gpgcheck=1 Now we’ll update and install the MariaDB client, server, and development libraries: [root] $ yum update [root] $ yum install -y MariaDB-server MariaDB-client MariaDB-devel MariaDB-shared [root] $ systemctl start mariadb [root] $ systemctl enable mariadb With the server running, let’s setup the necessary user’s and permissions: [root] $ mysql_secure_installation ... Go through the process to set the root password, answer `Y` to all other questions. [root] $ mysql -u root -p Enter password: Welcome to the MariaDB monitor. Commands end with ; or g. Your MariaDB connection id is 15 Server version: 10.2.13-MariaDB MariaDB Server Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. Type 'help;' or 'h' for help. Type 'c' to clear the current input statement. MariaDB [(none)]> CREATE DATABASE wordpress; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON wordpress.* TO wpuser@localhost IDENTIFIED BY "p@ssw0rd"; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> FLUSH PRIVILEGES; Query OK, 0 rows affected (0.01 sec) MariaDB [(none)]> q [root] $ Now we can use the following information when we start running WordPress: Database Name: wordpress Database User: wpuser Database Password: p@ssw0rd Installing WordPress Our final piece of setup is to install WordPress itself. 
We’ll download the latest WordPress source code from wordpress.org and place it in /var/www/blog.example.com: [root] $ wget -q -O - http://wordpress.org/latest.tar.gz | tar -xzf - --strip 1 -C /var/www/blog.example.com Our last change will be to set up our wp-config.php file by copying the sample and making customizations from there: [root] $ cp /var/www/blog.example.com/wp-config-sample.php /var/www/blog.example.com/wp-config.php [root] $ chown -R nginx:nginx /var/www/blog.example.com Now we can modify the file to have the values for our local database: /var/www/blog.example.com/wp-config.php (partial) define('DB_NAME', 'wordpress'); /** MySQL database username */ define('DB_USER', 'wpuser'); /** MySQL database password */ define('DB_PASSWORD', 'p@ssw0rd'); /** MySQL hostname */ define('DB_HOST', 'localhost'); There are also some salt values that we need to define, but we can have them generated by copying the output from here. With PHP-FPM, MariaDB, and WordPress installed, we’re ready to move on and configure NGINX to proxy FastCGI traffic.

FastCGI Proxy for PHP/WordPress with fastcgi_pass

00:09:58

Lesson Description:

Time to set up NGINX as a reverse-proxy for WordPress. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_proxy module NGINX http_fastcgi_module module NGINX fastcgi_pass directive Utilizing fastcgi_pass FastCGI is a binary protocol for sharing information from a web server to external programs. The CGI portion stands for “Common Gateway Interface”; CGI was the predecessor to FastCGI, which limits some of the overhead and is more performant. Understanding the protocol itself is beyond the scope of this course, but this protocol will allow us to pass information to and from PHP from NGINX using php-fpm. NGINX provides the http_fastcgi_module module for working with FastCGI that works very similarly to the http_proxy_module that we used for proxying to a generic application on a port. Let’s create a simple FastCGI configuration to proxy traffic to our WordPress site that is listening on a socket: /etc/nginx/conf.d/blog.example.com.conf server { listen 80; server_name blog.example.com; root /var/www/blog.example.com; index index.php; location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { fastcgi_index index.php; fastcgi_pass unix:/var/run/php-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } We don’t want to serve up an index.html, so we’re using the index directive to specify that we should only render index.php files if the request URI ends with a /. Our first location block is specific to working with WordPress: it falls back to /index.php with the query parameters appended in the event that we can’t find a file or directory matching $uri. The fastcgi_pass directive works a lot like the proxy_pass directive and is how we specify the destination we’re proxying traffic to. We’re using a socket instead of a port so that we can avoid the overhead of using the network stack for internal communication. FastCGI has numerous variables (or params) that can be defined using the NGINX variables that we have. Thankfully, NGINX provides a file with sensible defaults for most of these values that we can include from /etc/nginx/fastcgi_params. Here’s what this file looks like: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param REQUEST_SCHEME $scheme; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; For WordPress specifically, we need to modify the SCRIPT_FILENAME a little, so we do that after we include the shared configuration: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; Now, when we visit blog.example.com from our web browser, we’ll be prompted by the installation wizard to create the site and admin user. With this, we now have successfully deployed WordPress behind NGINX.
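As with the earlier virtual hosts, you can sanity-check the new site without touching DNS by sending the Host header yourself; before WordPress is installed, it typically answers with a redirect to its installer (the exact status line and Location value below are illustrative):

[root] $ nginx -t && systemctl reload nginx
[root] $ curl -I --header "Host: blog.example.com" localhost
HTTP/1.1 302 Found
Location: http://blog.example.com/wp-admin/install.php
...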

Deploying a uWSGI Application

00:06:56

Lesson Description:

In this lesson, we’ll set up an example application that utilizes the WSGI (Web Server Gateway Interface) as an alternative to FastCGI. After this setup, we’ll investigate how NGINX can proxy to WSGI applications. Note: the commands in this video are run as the root user. Documentation For This Video Python 3 Pip Django MariaDB uwsgi Example uWSGI Project Preparing Python 3 The example application uses Django 2, which doesn’t support Python 2, so we’ll need to make sure that we have Python 3 installed. We will install Python 3.6 and the Python 3.6 development headers using yum: [root] $ yum install python36 python36-devel After we’ve installed Python 3, we’ll need to install pip and pipenv to help us manage Python dependencies: [root] $ wget https://bootstrap.pypa.io/get-pip.py ... [root] $ python3.6 get-pip.py ... [root] $ pip3.6 install pipenv requests ... Once we have those two tools installed, we’re ready to move forward to get our application running. Downloading the Application The application is hosted on GitHub, and we’ll download it to the /srv/www/ directory like we did with the Node.js applications. [root] $ cd /srv/www [root] $ git clone https://github.com/linuxacademy/content-nginx-uwsgi [root] $ cd content-nginx-uwsgi Preparing the Database Before our application can run, we need to have a database to save our notes. Since we’ve already installed MariaDB on our server, we’ll be using that running server with a different user and database. Let’s create the user and database now: [root] $ mysql -u root -p Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 15 Server version: 10.2.13-MariaDB MariaDB Server Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> CREATE DATABASE django_notes; Query OK, 1 row affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON django_notes.* TO notes@localhost IDENTIFIED BY "p@ssw0rd"; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> FLUSH PRIVILEGES; Query OK, 0 rows affected (0.01 sec) MariaDB [(none)]> \q [root] $ Now we can use the following information when we start running our application: Database Name: django_notes Database User: notes Database Password: p@ssw0rd Running the Application For this Python application, we’ll need to set up a virtualenv to install our dependencies. By default, the package manager pipenv will place all of the dependencies in a .local directory for the user that runs the install, and we don’t want that. The Makefile has some customized lines that will ensure that the virtualenv and dependencies are installed local to the project so that we can avoid some issues later on. [root] $ make install PIPENV_VENV_IN_PROJECT=true pipenv install Creating a virtualenv for this project… Using /bin/python3.6m (3.6.3) to create virtualenv… Running virtualenv with interpreter /bin/python3.6m Using base prefix '/usr' New python executable in /srv/www/content-nginx-uwsgi/.venv/bin/python3.6m Also creating executable in /srv/www/content-nginx-uwsgi/.venv/bin/python Installing setuptools, pip, wheel...done. Virtualenv location: /srv/www/content-nginx-uwsgi/.venv Installing dependencies from Pipfile.lock (0d8c73)… 7/7 — 00:00:40 To activate this project's virtualenv, run the following: $ pipenv shell We need to set up the systemd service to run the application with uwsgi using a socket.
The make service task will generate and place the service file for us. This task will also create some directories for us for the socket, logs, and static assets. [root] $ make service ./systemd/create_service.sh Host: notes.example.com Database Name: django_notes Database User: notes Database Password: chown -R nginx:nginx . mkdir -p /run/uwsgi chown -R nginx:nginx /run/uwsgi mkdir -p /var/log/uwsgi touch /var/log/uwsgi/notes.log chown nginx:nginx /var/log/uwsgi/notes.log Before our service will run we need to run our migrations to ensure that all of the database tables exist. [root] $ export NOTES_HOST=notes.example.com [root] $ export NOTES_DB=django_notes [root] $ export NOTES_DB_USER=notes [root] $ export NOTES_DB_PASSWORD="p@ssw0rd" [root] $ make migrate PIPENV_VENV_IN_PROJECT=true pipenv run python manage.py migrate Operations to perform: Apply all migrations: auth, contenttypes, notepad, sessions Running migrations: Applying contenttypes.0001_initial... OK /srv/www/content-nginx-uwsgi/.venv/lib/python3.6/site-packages/django/db/backends/mysql/base.py:71: Warning: (1146, "Table 'mysql.column_stats' doesn't exist") return self.cursor.execute(query, args) Applying contenttypes.0002_remove_content_type_name... OK Applying auth.0001_initial... OK Applying auth.0002_alter_permission_name_max_length... OK Applying auth.0003_alter_user_email_max_length... OK Applying auth.0004_alter_user_username_opts... OK Applying auth.0005_alter_user_last_login_null... OK Applying auth.0006_require_contenttypes_0002... OK Applying auth.0007_alter_validators_add_error_messages... OK Applying auth.0008_alter_user_username_max_length... OK Applying auth.0009_alter_user_last_name_max_length... OK Applying notepad.0001_initial... OK Applying sessions.0001_initial... OK Finally, we can start and enable the service. [root] $ systemctl start notes.uwsgi [root] $ systemctl enable notes.uwsgi With the application and database setup, we’re able to move on to work with NGINX’s uwsgi interactions.
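Before wiring NGINX up to the application in the next lesson, it’s worth confirming that the service and its socket actually came up. A quick check might look like this (standard systemd and shell commands; the paths come from the output above):

[root] $ systemctl status notes.uwsgi
[root] $ ls -l /run/uwsgi/notes.sock
[root] $ tail -n 20 /var/log/uwsgi/notes.log

If the socket is missing or the service failed to start, the uwsgi log above is the first place to look.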

Proxying to uWSGI Python Web Application with uwsgi_pass

00:10:44

Lesson Description:

With our uWSGI (Django 2.0) application running, we’re ready to set NGINX to proxy traffic to it. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_uwsgi_module module NGINX uwsgi_pass directive Utilizing uwsgi_pass The "WSGI" portion of uwsgi stands for "Web Server Gateway Interface", and it was originally a Python-specific protocol. The "u" in uwsgi stands for "Universal". Just like FastCGI, uWSGI is another way to pass information from one program to another. Also like FastCGI, NGINX provides a module for proxying traffic directly to a running uWSGI application using the http_uwsgi_module. Let’s create a simple configuration utilizing this module now: /etc/nginx/conf.d/notes.example.com.conf server { listen 80; server_name notes.example.com; location /static { root /var/www/notes.example.com; } location / { include uwsgi_params; uwsgi_pass unix:/var/run/uwsgi/notes.sock; } } This should look familiar at this point. The creators of NGINX did a great job of making the proxying methods provided by NGINX all work in very similar ways. We’re serving the static assets from a directory within /var/www so that the Python process doesn’t need to worry about those at all. All traffic that isn’t for a static asset is being proxied to our application using uwsgi_pass. FastCGI has the fastcgi_params file, and uWSGI has the uwsgi_params file (both provided by NGINX by default). Here’s what the uwsgi_params file contains: /etc/nginx/uwsgi_params uwsgi_param QUERY_STRING $query_string; uwsgi_param REQUEST_METHOD $request_method; uwsgi_param CONTENT_TYPE $content_type; uwsgi_param CONTENT_LENGTH $content_length; uwsgi_param REQUEST_URI $request_uri; uwsgi_param PATH_INFO $document_uri; uwsgi_param DOCUMENT_ROOT $document_root; uwsgi_param SERVER_PROTOCOL $server_protocol; uwsgi_param REQUEST_SCHEME $scheme; uwsgi_param HTTPS $https if_not_empty; uwsgi_param REMOTE_ADDR $remote_addr; uwsgi_param REMOTE_PORT $remote_port; uwsgi_param SERVER_PORT $server_port; uwsgi_param SERVER_NAME $server_name; This file maps the NGINX variables to the string representation that uWSGI expects. Before this configuration will work, we’ll need to tell SELinux to allow NGINX to communicate with the socket. We’ve gotten to the point now where we’re going to set SELinux to be permissive for NGINX, make a request, and then generate a policy file from the audit log based on the issues we see. [root] $ cd /etc/nginx [root] $ semanage permissive -a httpd_t [root] $ systemctl reload nginx [root] $ curl --header "Host: notes.example.com" localhost ... [root] $ grep nginx /var/log/audit/audit.log | audit2allow -m nginx module nginx 1.0; require { type httpd_t; type initrc_t; class unix_stream_socket connectto; } #============= httpd_t ============== #!!!! The file '/run/uwsgi/notes.sock' is mislabeled on your system. #!!!! Fix with $ restorecon -R -v /run/uwsgi/notes.sock allow httpd_t initrc_t:unix_stream_socket connectto; [root] $ grep nginx /var/log/audit/audit.log | audit2allow -M nginx ******************** IMPORTANT *********************** To make this policy package active, execute: semodule -i nginx.pp This module will hold onto the rules for nginx, and if we need to, we can make modifications to it later. Let’s finish the process of removing the permissiveness on NGINX, loading the module, and enabling it. 
[root] $ semanage permissive -d httpd_t [root] $ semodule -i nginx.pp libsemanage.semanage_direct_install_info: nginx module will be disabled after install due to default enabled status. [root] $ semodule -e nginx [root] $ restorecon -Rv /var/run/uwsgi/ restorecon reset /run/uwsgi/notes.sock context system_u:object_r:var_run_t:s0->system_u:object_r:httpd_var_run_t:s0 Now, when we visit notes.example.com from our web browser (assuming we’ve updated our /etc/hosts), we’ll see a login page for our Markdown note taking application.
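If the browser shows a 502 instead of the login page, a few quick checks can tell us whether SELinux or the socket is the culprit. These are standard SELinux and shell commands, not specific to this project:

[root] $ ls -Z /run/uwsgi/notes.sock
[root] $ semodule -l | grep nginx
[root] $ curl --header "Host: notes.example.com" localhost

The socket should carry the httpd_var_run_t context after the restorecon above, our nginx policy module should show up as installed, and the curl request should return the application’s HTML rather than an NGINX error page.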

Simple Caching for Static Content

00:19:32

Lesson Description:

In this lesson, we’ll look at how we can make our reverse-proxy more performant by having it cache the static content that is being served. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_fastcgi module NGINX fastcgi_cache_path directive NGINX fastcgi_cache_key directive NGINX fastcgi_cache directive Creating a Cache Using fastcgi_cache_path When we make requests to an application, sometimes the content doesn’t change at all, but in our current reverse-proxy setup we would still pass the request over to the application. We can prevent our application servers from receiving this extra load by having NGINX cache responses and serve that data back when the request matches. NGINX has a powerful caching mechanism built into it that works almost exactly the same regardless of whether you’re using proxy_pass, fastcgi_pass, or uwsgi_pass to proxy to another server. Each of the proxy modules that we’ve looked at so far has equivalent caching directives. We’re going to investigate caching by looking at our WordPress application. Let’s add a simple cache by editing /etc/nginx/conf.d/blog.example.com.conf: /etc/nginx/conf.d/blog.example.com.conf fastcgi_cache_path /var/cache/nginx/blog levels=1:2 keys_zone=blog:10m max_size=1g inactive=60m; server { listen 80; server_name blog.example.com; root /var/www/blog.example.com; index index.php; fastcgi_cache_key $scheme$request_method$host$request_uri; location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { fastcgi_index index.php; fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_cache blog; fastcgi_cache_valid 60m; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } The first thing to notice is that we had to specify the properties and name of our cache using the fastcgi_cache_path directive. There are a lot of arguments to this directive, so let’s see what they each do: /var/cache/nginx/blog - specifies the directory to store the cached objects levels=1:2 - specifies the number of subdirectory levels used in the cache. This is the recommended setting, yet not the default keys_zone=blog:10m - Allows us to specify the name of the cache and the size of the memory space that will hold the keys and metadata information. This is the lookup table that lets NGINX quickly know whether a request is a cache hit or miss max_size=1g - Defines the maximum amount of storage we’re allowing NGINX to use for this cache. If not set, it will keep caching new items, limited only by available storage inactive=60m - Defines the maximum cache lifetime of an item if it’s not accessed again. The cache will be populated on the first hit, and then after that, it will have 60 minutes to receive another request or the item will be removed from the cache We’ve specified where to cache things, but for fastcgi and uwsgi caching we’ll need to also specify the cache key using the appropriate _cache_key directive. Our cache key is created from the request’s scheme, method (GET, POST, etc.), host, and request URI. To see the fruits of the caching, we’re best off opening our browser’s developer tools to look at the network response time and response headers (right-click the page, select "Inspect Element", then open the "Network" tab). We’ll reload the page after opening the network tab to see how long each request takes. This gives us the non-cached speed. Now we can reload NGINX before reloading the page to see the difference. 
[root] $ systemctl reload nginx Adding an Additional Cache Header It’s really helpful to know if our request was a cache hit or cache miss. We can display this in the browser’s response information by adding an additional header within our location block. NGINX provides an upstream_cache_status variable that indicates whether the current request is a cache hit or miss. We can set this value as a header using the add_header directive. Let’s add this now: /etc/nginx/conf.d/blog.example.com.conf fastcgi_cache_path /var/cache/nginx/blog levels=1:2 keys_zone=blog:10m max_size=1g inactive=60m; server { listen 80; server_name blog.example.com; root /var/www/blog.example.com; index index.php; fastcgi_cache_key $scheme$request_method$host$request_uri; location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { add_header X-Cache-Status $upstream_cache_status; fastcgi_index index.php; fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_cache blog; fastcgi_cache_valid 60m; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } Now, when we look at the response headers, we’ll see either "X-Cache-Status: MISS" or "X-Cache-Status: HIT". This makes debugging caching issues a lot easier. Not Caching Content Caching the content of our blog is great, but we don’t want to cache more dynamic content like the admin area of WordPress. Let’s see if what we’ve currently got will cache the admin dashboard pages by going to blog.example.com/wp-login.php. Our "X-Cache-Status" shows a "MISS" the first time, but if we reload the page it doesn’t change to a "HIT". Why not? NGINX caches responses that are valid, but the proxied server can prevent this from happening by returning a "Cache-Control" header that includes "no-cache". If we would rather not cache some content without relying on the proxied application, we can do so by creating a more specific location block that doesn’t use fastcgi_cache. There are also conditionals and variables that we can use to indicate whether we’d like to bypass the cache. Here’s an example of how we could bypass the cache if the request includes /wp-admin: /etc/nginx/conf.d/blog.example.com.conf fastcgi_cache_path /var/cache/nginx/blog levels=1:2 keys_zone=blog:10m max_size=1g inactive=60m; server { listen 80; server_name blog.example.com; root /var/www/blog.example.com; index index.php; fastcgi_cache_key $scheme$request_method$host$request_uri; set $skip_cache 0; if ($request_uri ~* "/wp-admin") { set $skip_cache 1; } location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { add_header X-Cache-Status $upstream_cache_status; fastcgi_index index.php; fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache blog; fastcgi_cache_valid 60m; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } We’re using set and an if context to manipulate a variable before routing the request. If $skip_cache is set to anything other than 0, we can use it with fastcgi_cache_bypass and fastcgi_no_cache to skip checking the cache and not cache the returned response. Purging the Cache To get rid of already cached content, we have a few options: Removing files from disk within our cache directory (/var/cache/nginx/blog in our case). Utilizing the fastcgi_cache_purge directive. The first option is self-explanatory: we would use rm -rf to delete the directories under /var/cache/nginx/blog. 
The second option is more interesting and provides us with a way to programmatically purge the cache. Unfortunately, the documentation shows a fastcgi_cache_purge directive, but it is only part of the commercial distribution, NGINX Plus.
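For the open source version, a couple of quick commands cover most day-to-day cache work: checking whether a request was a hit from the command line, and clearing the cache by hand (a rough sketch, using the cache path and header configured above):

[root] $ curl -s -o /dev/null -D - --header "Host: blog.example.com" localhost | grep X-Cache-Status
[root] $ rm -rf /var/cache/nginx/blog/*

Running the curl command twice should show a MISS followed by a HIT, and after removing the cached files the next request should go back to the application as a MISS.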

Microcaching for Non-Personalized Dynamic Content

00:08:32

Lesson Description:

In this lesson, we’ll look at an alternative caching strategy that is useful if you have an application with dynamic content that is receiving a lot of traffic. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_uwsgi module NGINX uwsgi_cache_valid directive NGINX uwsgi_cache_key directive NGINX uwsgi_cache directive What is “Microcaching”? If an application or website is updated very frequently, then caching becomes pretty difficult because you don’t know when to invalidate the cache. If that same application receives a lot of traffic, then routing all of that traffic to the application server can overwhelm the server quite quickly. NGINX is good at responding to a lot of traffic, and if the content of the website doesn’t always need to be updated in real-time, this is where “microcaching” comes into play. With microcaching, we’re going to cache the contents of requests for just a few seconds, and for the same content, we’ll only send one request every 10 seconds or so to the application server. Implementing Microcaching To demonstrate microcaching, we’re going to add it to our uWSGI application and then send a lot of traffic to it. Before we add the caching, let’s install boom and see how our application handles a large spike in traffic. [root] $ pip install boom With boom installed, we’ll send traffic to notes.example.com: [root] $ boom http://localhost/ --header Host:notes.example.com -c 30 -n 3000 Server Software: nginx/1.12.2 Running GET http://127.0.0.1:80/ Host: notes.example.com Running 3000 queries - concurrency 30 [================================================================>.] 99% Done -------- Results -------- Successful calls 3000 Total time 15.1118 s Average 0.1503 s Fastest 0.0444 s Slowest 0.1600 s Amplitude 0.1157 s Standard deviation 0.006783 RPS 198 BSI Pretty good -------- Status codes -------- Code 200 3000 times. -------- Legend -------- RPS: Request Per Second BSI: Boom Speed Index Sending 3000 total requests using 30 concurrent users took about 15 seconds. That performance is bad, really, but let’s modify the configuration for notes.example.com to see if NGINX can make a difference for us without aggressively caching content like we did for the blog: /etc/nginx/conf.d/notes.example.com.conf uwsgi_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=micro:10m max_size=1g; server { listen 80; server_name notes.example.com; uwsgi_cache_key $scheme$request_method$host$request_uri; location /static { root /var/www/notes.example.com; } location / { add_header X-Cache-Status $upstream_cache_status; include uwsgi_params; uwsgi_pass unix:/var/run/uwsgi/notes.sock; uwsgi_cache micro; uwsgi_cache_valid 10s; } } The main difference between this configuration and the cache we set for the blog is with this line: uwsgi_cache_valid 10s; This means we’re only caching content for 10 seconds. Since our original result took more than 10 seconds, we should be able to see a noticeable difference running the same load test. Let’s see what kind of difference the microcaching gives us: boom http://localhost/ --header Host:notes.example.com -c 30 -n 3000 Server Software: nginx/1.12.2 Running GET http://127.0.0.1:80/ Host: notes.example.com Running 3000 queries - concurrency 30 [================================================================>.] 
99% Done -------- Results -------- Successful calls 3000 Total time 6.4929 s Average 0.0595 s Fastest 0.0200 s Slowest 0.0808 s Amplitude 0.0608 s Standard deviation 0.004789 RPS 462 BSI Pretty good -------- Status codes -------- Code 200 3000 times. -------- Legend -------- RPS: Request Per Second BSI: Boom Speed Index Our Requests Per Second (RPS) went from 198 to 462! That’s a huge increase in performance without needing to worry about returning stale content for a long period of time like we potentially could with more aggressive caching. Knowing how long to cache information is a pretty hard problem to solve. If we have an application that doesn’t deploy very often or that doesn’t display that much dynamic data, then we can be aggressive with caching by setting a longer cache_valid time. Microcaching is a good alternative to get some more performance out of your application servers when you do have dynamic content that is allowed to be stale for a short period of time.
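To watch the 10-second window without a load-testing tool, a small loop that prints the cache status once per second works too (just an illustrative check; it relies on the add_header X-Cache-Status line already in the configuration above):

[root] $ for i in $(seq 1 15); do curl -s -o /dev/null -D - --header "Host: notes.example.com" localhost | grep X-Cache-Status; sleep 1; done

The first line should be a MISS, the next several should be HITs, and roughly 10 seconds in the status should change again (an EXPIRED or MISS) as the cached copy times out.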

Using NGINX as a Reverse Proxy and Content Cache

00:15:00

NGINX as a Load Balancer

Load Balancing

Load Balancing to Multiple Servers

00:07:48

Lesson Description:

In this lesson, we look at how we can utilize NGINX as a load balancer to distribute traffic between multiple instances of the same application. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_upstream module NGINX upstream directive NGINX server directive from the upstream module Running Multiple Instances To demonstrate load balancing, we need to run more than one instance of an application. We’re going to create some duplicate services for photos.example.com, where the separate web-clients all run on different ports. To do this, we’ll need to duplicate the web-client.service file a few times and add an additional PORT environment variable. [root] $ cp /etc/systemd/system/web-client{,2}.service [root] $ cp /etc/systemd/system/web-client{,3}.service Now we set the PORT value in the new files: /etc/systemd/system/web-client2.service [Unit] Description=S3 Photo App Node.js service After=network.target photo-filter.target photo-storage.target [Service] Restart=always User=nobody Group=nobody Environment=NODE_ENV=production Environment=AWS_ACCESS_KEY_ID=YOUR_AWS_KEY_ID Environment=AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_KEY Environment=PORT=3100 ExecStart=/srv/www/s3photoapp/apps/web-client/bin/www [Install] WantedBy=multi-user.target And for the third service: /etc/systemd/system/web-client3.service [Unit] Description=S3 Photo App Node.js service After=network.target photo-filter.target photo-storage.target [Service] Restart=always User=nobody Group=nobody Environment=NODE_ENV=production Environment=AWS_ACCESS_KEY_ID=YOUR_AWS_KEY_ID Environment=AWS_SECRET_ACCESS_KEY=YOUR_AWS_SECRET_KEY Environment=PORT=3101 ExecStart=/srv/www/s3photoapp/apps/web-client/bin/www [Install] WantedBy=multi-user.target Lastly, we need to start these services: [root] $ systemctl start web-client2 [root] $ systemctl start web-client3 Creating a Server Group To load balance the traffic between our three identical services with NGINX, we’ll need to use the http_upstream module and the upstream directive. This directive creates a new context where we configure all of the load balancing behavior. Let’s create our server group now: /etc/nginx/conf.d/photos.example.com.conf upstream photos { server 127.0.0.1:3000; server 127.0.0.1:3100; server 127.0.0.1:3101; } server { listen 80; server_name photos.example.com; client_max_body_size 5m; location / { proxy_pass http://photos; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location ~* \.(js|css|png|jpe?g|gif) { root /var/www/photos.example.com; } } Our upstream context is named photos and defines three servers. These servers use the server directive provided by the upstream module, and their definition is similar to what we would provide to proxy_pass except that we will leave off the http or https scheme. Besides setting up the upstream context, we also needed to change our proxy_pass to proxy traffic to the server group instead of a specific server, and we did this by replacing 127.0.0.1:3000 with the name of our server group photos. If we reload NGINX and make 4 or 5 requests to photos.example.com, we can check stdout from each service to see that they each received at least 1 request. Each will look something like this: [root] $ systemctl status web-client3 
● web-client3.service - S3 Photo App Node.js service Loaded: loaded (/etc/systemd/system/web-client3.service; disabled; vendor preset: disabled) Active: active (running) since Sun 2018-03-18 12:59:03 UTC; 15min ago Main PID: 1511 (node) CGroup: /system.slice/web-client3.service └─1511 node /srv/www/s3photoapp/apps/web-client/bin/www Mar 18 12:59:03 keiththomps3.mylabserver.com systemd[1]: Started S3 Photo App Node.js service. Mar 18 12:59:03 keiththomps3.mylabserver.com systemd[1]: Starting S3 Photo App Node.js service... Mar 18 12:59:07 keiththomps3.mylabserver.com www[1511]: Listening on port 3101 Mar 18 13:10:34 keiththomps3.mylabserver.com www[1511]: GET /favicon.ico 404 118.663 ms - 150 Mar 18 13:14:05 keiththomps3.mylabserver.com www[1511]: GET / 200 413.474 ms - 1745
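A quick way to watch the round-robin behavior from one terminal is to fire a handful of requests and then see how they were spread across the instances (assuming the services are logging each request to stdout as shown above):

[root] $ for i in $(seq 1 6); do curl -s -o /dev/null --header "Host: photos.example.com" localhost; done
[root] $ journalctl -u web-client -u web-client2 -u web-client3 --since "5 minutes ago" | grep "GET /"

With the default round-robin behavior, each of the three services should have logged two of the six requests.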

Examining Load Balancing Methods

00:09:43

Lesson Description:

In this lesson, we look at the different ways we can configure NGINX to load balance traffic. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_upstream module NGINX upstream directive NGINX server directive from the upstream module The Default Load Balancing Strategy: Round-Robin By default, the load balancer that we’ve already created for photos.example.com is routing traffic using a “round-robin” approach. This approach sends an equal number of requests to each server. If there are 4 requests made, then NGINX will cycle through the servers evenly until it runs out of requests, looking something like this: Request 1 -> Server 1 Request 2 -> Server 2 Request 3 -> Server 3 Request 4 -> Server 1 ... The other available load balancing methods are: hash - Specifies how a key should be built based on the request, mapping all requests with the same key to the same server ip_hash - Specifies a client-server grouping based on the client IP address least_conn - Specifies that requests should be routed to the server with the least number of active connections. This approach also takes into account server weights There is another load balancing method called least_time that routes requests to the server with the lowest average response time and least number of active connections, but this method is only available through NGINX Plus. Setting a Different Load Balancing Method Now that we know how each of the load balancing methods works, we should look at how to change them. To set the load balancing method for a specific upstream block, we should use the corresponding directive before we specify any other directive. There are some potential ordering issues that you can run into with the keepalive directive, so it’s best to set our load balancing method first. Here’s what it would look like if we set our photos.example.com server group to use the least_conn method: /etc/nginx/conf.d/photos.example.com.conf (partial) upstream photos { least_conn; server 127.0.0.1:3000; server 127.0.0.1:3100; server 127.0.0.1:3101; } ... The only usage exception is with the hash load balancing method. For that method, we need to provide an additional parameter to use as the key. Here’s an example that routes to various servers based on the $request_uri: /etc/nginx/conf.d/photos.example.com.conf (partial) upstream photos { hash $request_uri; server 127.0.0.1:3000; server 127.0.0.1:3100; server 127.0.0.1:3101; } ... Weights & Passive Health Checking Besides the load balancing method directives themselves, we can manipulate the load balancing that is done by adding more configuration to our upstream server directives. Let’s pretend that servers two and three are running on separate machines that we know to be underpowered. We’re going to modify our configuration to achieve the following: Prioritize sending traffic to server 1. Temporarily remove servers 2 and 3 from the pool if they suffer 3 failed requests in 20 seconds. If this occurs, we’ll have NGINX remove the server from the pool for 20 seconds. We’re using the round-robin approach again, and this is what our new configuration would look like: /etc/nginx/conf.d/photos.example.com.conf (partial) upstream photos { server 127.0.0.1:3000 weight=2; server 127.0.0.1:3100 max_fails=3 fail_timeout=20s; server 127.0.0.1:3101 max_fails=3 fail_timeout=20s; } The fail_timeout option sets both the span of time in which failures are counted and the duration that a server with errors should be marked as “unavailable”.
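These options can be combined in one upstream block. As a sketch (not taken from the lesson), here’s the same group using least_conn together with the weight and failure settings from above, plus a backup server that only receives traffic when the others are unavailable — backup is another parameter of the same server directive:

upstream photos {
    least_conn;
    server 127.0.0.1:3000 weight=2;
    server 127.0.0.1:3100 max_fails=3 fail_timeout=20s;
    server 127.0.0.1:3101 max_fails=3 fail_timeout=20s;
    server 127.0.0.1:3102 backup;
}

The 127.0.0.1:3102 instance is hypothetical here; it would need its own web-client service definition just like the ones created earlier in this section.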

Using NGINX as a Load Balancer

00:15:00

NGINX Logging

Logging

Configuring Logging

00:13:14

Lesson Description:

In this lesson, we’ll look into some of the options that we have when configuring logging with NGINX. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_log module NGINX log_format directive NGINX access_log directive NGINX error_log directive Configuring Access Logs The two main types of logging that we’ll worry about with NGINX are request logging (“access logging”) and error logging. Configuring access logs is more involved than configuring error logging, so let’s investigate that more by looking at what we already have configured in /etc/nginx/nginx.conf: /etc/nginx/nginx.conf (partial) log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; The log_format line here sets the name of the log format to main and then sets the string to be logged by combining many of the available NGINX variables. By itself, creating this format doesn’t do anything until it’s used as part of the access_log line in combination with the path to the log file. It’s fairly common to come across NGINX configuration with an access log that looks something like this: access_log /var/log/nginx/example.log combined; The combined format is a pre-defined log format type. Configuring Error Logs Error logging is simpler than access logging because there is no format to set, but we do need to understand the logging levels that exist: debug info notice warn error crit alert emerg Moving through that list from debug to emerg, the “severity” increases, and setting a given level also captures everything logged at the more severe levels. The error log that is set up by default for NGINX looks like this: /etc/nginx/nginx.conf (partial) error_log /var/log/nginx/error.log warn; With the warn level set, anything logged at a notice, info, or debug level will not be written to the log file. Utilizing Syslog It’s common to want to log using syslog instead of a file so that logs can be aggregated with a logging server or service like PaperTrail. NGINX supports utilizing syslog through the error_log and access_log directives that we’ve already looked at. Let’s change our blog and notes servers to log to a locally running syslog server by adding the following line to each file’s server block: access_log syslog:server=unix:/dev/log combined; We’re using the default combined format and logging to the default syslog socket at /dev/log. The server attribute could also be set to a remote address using a domain or IP address instead of a local unix: socket. Now we can reload our NGINX configuration, make a few requests, and see what is logged: [root] $ systemctl reload nginx [root] $ curl --header "Host: blog.example.com" localhost ... [root] $ curl --header "Host: notes.example.com" localhost ... Depending on the system that you’re running, the file that we read from will be a little different. On CentOS it will be /var/log/messages, but on Debian-based systems it will be /var/log/syslog. 
Here’s how we can check the last 2 logged lines in the file: [root] $ tail -n 2 /var/log/messages Mar 19 22:02:25 keiththomps3 journal: keiththomps3.mylabserver.com nginx: 127.0.0.1 - - [19/Mar/2018:22:02:25 +0000] "GET / HTTP/1.1" 200 53332 "-" "curl/7.29.0" Mar 19 22:02:33 keiththomps3 journal: keiththomps3.mylabserver.com nginx: 127.0.0.1 - - [19/Mar/2018:22:02:33 +0000] "GET / HTTP/1.1" 200 1863 "-" "curl/7.29.0" Custom Log Format The log messages that main provides are not very helpful because we don’t know which virtual host was requested. Let’s create our own custom log_format that includes this information, and use it for our virtual hosts. Since we want to use this in many files, we’ll add it to the top level http context within /etc/nginx/nginx.conf: /etc/nginx/nginx.conf (partial) log_format vhost '$host $remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; The format is the same as main except we added the $host value. Now the line that we used in our virtual hosts can be changed to this: access_log syslog:server=unix:/dev/log vhost; After a reload and a few more requests, we should be able to tell which server received each request: [root] $ systemctl reload nginx [root] $ curl --header "Host: notes.example.com" localhost 2>&1 [root] $ curl --header "Host: blog.example.com" localhost 2>&1 [root] $ tail -n 2 /var/log/messages Mar 19 22:10:18 keiththomps3 journal: keiththomps3.mylabserver.com nginx: notes.example.com 127.0.0.1 - - [19/Mar/2018:22:10:18 +0000] "GET / HTTP/1.1" 200 1863 "-" "curl/7.29.0" "-" Mar 19 22:10:24 keiththomps3 journal: keiththomps3.mylabserver.com nginx: blog.example.com 127.0.0.1 - - [19/Mar/2018:22:10:24 +0000] "GET / HTTP/1.1" 200 53324 "-" "curl/7.29.0" "-"
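If the logs need to go to a remote collector instead of the local socket, the same directives accept a network address along with optional parameters such as a tag and severity. This is just an illustrative sketch — replace the address with a real syslog server:

access_log syslog:server=192.168.1.50:514,tag=nginx,severity=info vhost;
error_log syslog:server=192.168.1.50:514 warn;

The messages are sent over UDP by default, so nothing about this configuration guarantees delivery the way a local file does.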

Configuring NGINX Logging

00:15:00

Advanced NGINX Security

Security

Improving SSL Configuration

00:15:14

Lesson Description:

Now that we have an understanding of the main features of NGINX, we’re going to dig a little deeper into some of the specific aspects of running NGINX. In this lesson, we’ll dig deeper into improving the SSL setup for our server that is using a self-signed certificate. Note: the commands in this video are run as the root user. Documentation For This Video NGINX ssl module Mozilla’s Server Side TLS Documentation Improving SSL Configuration with SSL Session Caching Up to this point, when we’ve worked with SSL in our servers, we’ve done the bare minimum by using ssl_certificate and ssl_certificate_key. There are a number of other settings that we can and should set to improve the overall configuration. These improvements are better for security, but can also improve our server performance by alleviating repetitive work. To begin, let’s add session caching so that fewer SSL handshakes need to be made. The initial handshake is the most taxing part of the process, and if we can avoid repeating it then we’ll improve our overall performance. Let’s make some changes to our /etc/nginx/conf.d/default.conf file to add SSL session caching. /etc/nginx/conf.d/default.conf (partial) ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; ssl_session_timeout 1d; ssl_session_cache shared:SSL:50m; ssl_session_tickets off; These three new directives do the following: ssl_session_timeout - sets how long a session can persist ssl_session_cache - sets the size of the cache used to store sessions. The shared value indicates that it can be used by all NGINX processes. One megabyte of space can hold about 4,000 sessions ssl_session_tickets - SSL Tickets are an idea where encrypted session information is stored on the client instead of the server. We’re opting to not use tickets Setting SSL Cipher and Protocols Depending on the clients that will be interacting with a service, the configuration of the TLS protocols and ciphers can be important. For our purposes, we’re expecting that our clients are using modern web browsers (at least IE 11, Firefox 27, Chrome 30, Safari 9, or Android 8). Knowing that we expect “modern” clients, we’ll set some values based on suggestions from Mozilla. /etc/nginx/conf.d/default.conf (partial) ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; ssl_session_timeout 1d; ssl_session_cache shared:SSL:50m; ssl_session_tickets off; ssl_protocols TLSv1.2; ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256'; ssl_prefer_server_ciphers on; Note: You can read about these suggestions and more about backward compatibility here. We know that all of our clients can work with TLS version 1.2, so we’re going to require it using ssl_protocols. Skipping over the ciphers for a moment, we’re setting ssl_prefer_server_ciphers so that we utilize the list of ciphers that we specify rather than using client ciphers. This configuration is so that our server is in control of the SSL/TLS experience, so we’re keeping as much as we can under the server’s umbrella. The ssl_ciphers list is quite long, the ordering is important, and we’re not (at least I’m not) cryptography experts. This list is taken directly from Mozilla’s recommendations. HSTS and OCSP Stapling We only have a few more things to change. 
The first is adding support for HSTS. HSTS is a way to have NGINX tell the client that it should never interact with our domain using HTTP, and we add this using a header. If we are not ready for our domain/IP address to only receive HTTPS traffic, then we should NOT add this header until we are: /etc/nginx/conf.d/default.conf (partial) ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; ssl_session_timeout 1d; ssl_session_cache shared:SSL:50m; ssl_session_tickets off; ssl_protocols TLSv1.2; ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256'; ssl_prefer_server_ciphers on; # 15768000 is roughly 6 months add_header Strict-Transport-Security max-age=15768000; Now any browser that receives a response from our server will refuse to send traffic over HTTP to our domain for the duration of max-age. Our last change relates to OCSP or "Online Certificate Status Protocol". Without OCSP stapling, the client will need to make a separate request to verify the validity of a certificate from the issuer, but with OCSP stapling, the server can make that request and cache it for a while to return that information with the initial handshake. Let’s add OCSP Stapling: /etc/nginx/conf.d/default.conf (partial) ssl_certificate /etc/nginx/ssl/public.pem; ssl_certificate_key /etc/nginx/ssl/private.key; ssl_session_timeout 1d; ssl_session_cache shared:SSL:50m; ssl_session_tickets off; ssl_protocols TLSv1.2; ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256'; ssl_prefer_server_ciphers on; # 15768000 is roughly 6 months add_header Strict-Transport-Security max-age=15768000; # OCSP Stapling ssl_stapling on; ssl_stapling_verify on; If we check our configuration using nginx -t, we’ll see that we have a warning related to OCSP stapling: nginx: [warn] "ssl_stapling" ignored, issuer certificate not found for certificate "/etc/nginx/ssl/public.pem" nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful Since we’re using a self-signed certificate, the stapling is ineffective. In a setting where we’re using a certificate from an actual certificate authority, we would also want to include the following directive with a downloaded root CA certificate: /etc/nginx/conf.d/default.conf (partial) ssl_trusted_certificate /path/to/ROOT_CA_CERT; When we discuss using Let’s Encrypt, we’ll generate a valid and verifiable certificate. At that point, we’ll be able to revisit OCSP stapling.
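After reloading NGINX, it’s worth confirming from the command line that only TLSv1.2 is accepted and that session reuse is working. openssl’s s_client can check both (standard openssl flags; the self-signed certificate will still produce a verification warning):

[root] $ openssl s_client -connect localhost:443 -tls1_1 < /dev/null
[root] $ openssl s_client -connect localhost:443 -tls1_2 < /dev/null
[root] $ openssl s_client -connect localhost:443 -reconnect < /dev/null

The first handshake should be rejected, the second should complete, and the -reconnect run should report reused sessions after the initial connection, which tells us the session cache is doing its job.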

Utilizing ModSecurity WAF

00:10:30

Lesson Description:

We’ve installed the ModSecurity module for NGINX, but we haven’t done much with it yet. In this lesson, we’ll pull in the OWASP ModSecurity Core Rule Set (CRS) and examine what this professionally sourced set of security rules provides for us. Note: the commands in this video are run as the root user. Documentation For This Video ModSecurity ModSecurity-nginx OWASP ModSecurity Core Rule Set Installing OWASP ModSecurity Core Rule Set The OWASP ModSecurity Core Rule Set is a collection of rules for ModSecurity that provide protection from many common attacks. These rules are created by the "Open Web Application Security Project" (OWASP), an organization that is entirely focused on software security. Security and cryptography are two topics that are best left to those that study them extensively, and with that in mind, we’re not going to create our own ModSecurity rule set. Our first step is to install the current version of the OWASP CRS using git: [root] $ cd /etc/nginx/modsecurity [root] $ git clone https://github.com/SpiderLabs/owasp-modsecurity-crs.git [root] $ cd owasp-modsecurity-crs This repository includes quite a lot, but most of what we’re interested in is going to be in the rules directory. Before moving forward, we want to copy the files that include .example in the name to versions without that suffix: [root] $ cp crs-setup.conf{.example,} [root] $ cp rules/REQUEST-900-EXCLUSION-RULES-BEFORE-CRS.conf{.example,} [root] $ cp rules/RESPONSE-999-EXCLUSION-RULES-AFTER-CRS.conf{.example,} As of right now, those two rule files only contain documentation explaining how they work, and we’re not going to put anything into them (but it’s a good idea to read them). The important thing to note here is that we named them so that they aren’t managed as part of the git repository, and if we put overrides in them, we can safely update the rule set using git pull later on without losing our customizations. With this modification made, we’re ready to create our full rules file for use with NGINX. Building and Using Our Full ModSecurity Rules File It’s best practice to create a ModSecurity configuration file that simply includes the parts that we want from both the ModSecurity default configuration file that we have and the specialized rule sets from OWASP. We need to create this file so that we have a single file that we can utilize with the modsecurity_rules_file directive. We’re going to put this file directly in the /etc/nginx/modsecurity directory, and we’ll call it modsecurity_includes.conf. The REQUEST and RESPONSE configuration files are numbered so that we know the order to include them in our own custom configuration, and we want to include the base configuration from the modsecurity.conf as the very first thing followed by the initial CRS configuration (crs-setup.conf). Let’s create this file now: /etc/nginx/modsecurity/modsecurity_includes.conf include modsecurity.conf include owasp-modsecurity-crs/crs-setup.conf To easily include the REQUEST and RESPONSE lines, let’s write a bash loop to append to this file: [root] $ for f in $(ls -1 owasp-modsecurity-crs/rules/ | grep -E "^(RESPONSE|REQUEST)-.*\.conf$"); do echo "include owasp-modsecurity-crs/rules/${f}" >> modsecurity_includes.conf; done With this file created, we can now use it with the modsecurity_rules_file directive, and if we want to make customizations we can comment out includes that we don’t need. 
Let’s add ModSecurity for our WordPress site now: /etc/nginx/conf.d/blog.example.com.conf (partial) root /var/www/blog.example.com; index index.php; modsecurity on; modsecurity_rules_file /etc/nginx/modsecurity/modsecurity_includes.conf; access_log syslog:server=unix:/dev/log vhost; Now we can validate the configuration and reload NGINX. Testing The WAF The last thing that we need to do is make sure that the WAF is actually doing something. We’re going to simulate a cross-site scripting attack to make sure that ModSecurity catches it. First, let’s start tailing the ModSecurity audit log: [root] $ tail -f /var/log/nginx/modsecurity_audit.log
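With the audit log tailing in one terminal, we can send a request that looks like a cross-site scripting attempt from another. This is just an illustrative payload rather than the exact request from the video; any obvious script injection in a query parameter should be enough to trip the CRS rules:

[root] $ curl --header "Host: blog.example.com" "http://localhost/?q=<script>alert(1)</script>"

Depending on whether SecRuleEngine is set to On or DetectionOnly in modsecurity.conf, the request will either be rejected outright or simply logged, but either way new entries should show up in the audit log.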

Use Case: Generating SSL Certificates using Let's Encrypt

00:09:59

Lesson Description:

Until now, we’ve used a self-signed certificate and example.com-based domains because there’s no requirement to own a domain to learn NGINX. In this lesson, we’ll go through setting up a real domain with an SSL certificate from Let’s Encrypt. If you own a domain, you can follow along. Note: the commands in this video are run as the root user. Documentation For This Video Let’s Encrypt Certbot Setting Up The DNS Entry In this lesson, I’ll be using a domain that I own called chord.tools. Here are the DNS records that I’ll be using (you can do something similar with your own domain): an A record for @ pointing to MY_PUBLIC_IP_ADDRESS, and a CNAME record for www pointing to chord.tools. Installing Certbot The generation of SSL certificates using Let’s Encrypt can be automated, and one of the best tools for doing so is certbot. With the epel-release already activated, we can install certbot-nginx, which will give us certbot and a plugin for interacting with NGINX: [root] $ yum install -y certbot-nginx The certbot NGINX plugin will allow it to make modifications to our NGINX configuration for us and restart the server, but to make sure that it modifies the proper file, we’ll create a simple configuration to start: /etc/nginx/conf.d/chord.tools.conf server { listen 80; server_name chord.tools www.chord.tools; location / { root /usr/share/nginx/html; index index.html; } } Generating Certificates There are a number of ways that we could use certbot, but for right now, we’re going to keep it simple. Here’s the command to have certbot generate certificates for our 2 domains: [root] $ certbot --nginx -d chord.tools -d www.chord.tools This will take us through a few different prompts, but after answering them all, we’ll have a new configuration that will even force SSL automatically. Here’s what the final configuration looks like: /etc/nginx/conf.d/chord.tools.conf server { server_name chord.tools www.chord.tools; location / { root /usr/share/nginx/html; index index.html; } listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/chord.tools/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/chord.tools/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { if ($host = www.chord.tools) { return 301 https://$host$request_uri; } # managed by Certbot if ($host = chord.tools) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; server_name chord.tools www.chord.tools; return 404; # managed by Certbot } Renewing Certificates Automatically Certificates generated by Let’s Encrypt are valid for 90 days, so we’ll need to renew them more often than if we paid a different certificate authority for a certificate. Thankfully, certbot once again has this covered. The certbot renew command will check which certificates on the server need to be renewed soon and fetch new ones. Behind the scenes, certbot has already created a certbot-renew systemd service and timer (found at /lib/systemd/system/certbot-renew.{service,timer}). We’re going to make a small modification to the configuration file that the service uses (located at /etc/sysconfig/certbot) to make sure that NGINX is reloaded after a certificate is renewed. 
Find the uncommented line containing POST_HOOK and set it to match this: /etc/sysconfig/certbot POST_HOOK="--post-hook 'systemctl reload nginx'" Finally, let's enable the service and timer: [root] $ systemctl start certbot-renew [root] $ systemctl enable certbot-renew [root] $ systemctl start certbot-renew.timer [root] $ systemctl enable certbot-renew.timer With this timer and service enabled, there will be a daily check to see if our certificate needs to be renewed. If the certificate does need to be renewed, certbot will fetch the new certificate(s) and reload the NGINX configuration automatically after the new certificates are downloaded.
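Before trusting the timer, we can exercise the renewal path without using up a real renewal. certbot ships a dry-run mode for exactly this, and systemd can confirm the timer is scheduled:

[root] $ certbot renew --dry-run
[root] $ systemctl list-timers | grep certbot

If the dry run completes without errors and the timer shows a next activation time, the automation is in place.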

Configuring NGINX for TLS/SSL

00:15:00

Tuning NGINX Performance

Performance

Content Compression and Decompression

00:11:16

Lesson Description:

Much of what NGINX has configured for us by default is great, but there are some settings that we can adjust to make the server perform even better. One of the adjustments that we can make is to have NGINX compress resources before sending them to the client. This modification isn’t free, as it does add some CPU overhead, but not enough to outweigh the benefits. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_gzip module NGINX http_gunzip module Configuring and Enabling GZIP We came across gzip when we first installed NGINX and looked at the default configuration, but it was commented out. The simplest way to enable compression for our responses is to uncomment that line in our nginx.conf file: /etc/nginx/nginx.conf (partial) gzip on; That change is simple enough, but there’s more that we can do to make this better by using some of the other directives from the gzip module. By default, we’re not going to compress the responses that we’re getting from proxied servers or any piece of content that isn’t HTML. Let’s make some modifications to compress more of what we know we’ll respond with: /etc/nginx/nginx.conf (partial) gzip on; gzip_disable msie6; gzip_proxied no-cache no-store private expired auth; gzip_types text/plain text/css application/x-javascript application/javascript text/xml application/xml application/xml+rss text/javascript image/x-icon image/bmp image/svg+xml; gzip_min_length 1024; gzip_vary on; Let’s break down these directives: gzip_disable - Ensure that we’re not sending compressed responses to older versions of Internet Explorer (hopefully no one is actually using these browsers) gzip_proxied - Specify that we only want to compress responses from proxied servers if we normally wouldn’t cache them gzip_types - The Content-Types that we will compress before sending the response gzip_min_length - Adjusting the minimum size of a file that we’ll compress, the default is 20 bytes, but we’re going to not compress resources that are less than one kilobyte gzip_vary - Adds a Vary: Accept-Encoding header. This tells intermediate caches (like CDNs) to treat the compressed and uncompressed version of the same resource as 2 separate entities What About Decompression We’re handling on-the-fly compression now, but if one of our proxied servers happens to send us a pre-compressed response then we probably want to decompress it for clients that can’t handle gzip. In this situation, we’ll use the gunzip module, and we’ll place this in the same area as our gzip directives: /etc/nginx/nginx.conf gzip on; gzip_disable msie6; gzip_proxied no-cache no-store private expired auth; gzip_types text/plain text/css application/x-javascript application/javascript text/xml application/xml application/xml+rss text/javascript image/x-icon image/bmp image/svg+xml; gzip_min_length 1024; gzip_vary on; gunzip on;
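After reloading NGINX, we can confirm that compression is actually happening by requesting a page with an Accept-Encoding header and looking for Content-Encoding in the response. A quick check against the blog virtual host from earlier lessons (assuming it’s still in place):

[root] $ systemctl reload nginx
[root] $ curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" --header "Host: blog.example.com" localhost | grep -i content-encoding

A "Content-Encoding: gzip" line means the response was compressed; dropping the Accept-Encoding header should make that line disappear, since NGINX only compresses for clients that advertise support.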

Workers & Connections

00:15:02

Lesson Description:

When fine-tuning NGINX performance, the two lowest level attributes that we can adjust are the workers and connections. In this lesson, we look into how we can find the proper values for both of these attributes. Note: the commands in this video are run as the root user. Documentation For This Video NGINX worker_processes directive NGINX worker_connections directive Adjusting Worker Processes By default, NGINX comes configured to only use one worker process, but is that good? It really depends on if the server only has 1 CPU core. To optimize our server, we should set the worker_processes equal to the number of CPU cores that our server has. Thankfully, we don’t need to modify the configuration for every different server that we deploy to because of the auto option. Let’s tell NGINX to automatically use the right number of worker processes. /etc/nginx/nginx.conf (partial) worker_processes auto; Adjusting Worker Connections Sadly, determining the worker connections value that we should use is not as easy as setting something to auto. Adjusting connection information happens in the events context of our nginx.conf file. The default is 512, but this number is almost guaranteed to be too low for modern day servers. Having too low of a limit is going to be bad because if we get a spike in traffic on a 4 core server we could only handle 2k clients even with the auto setting for the worker processes. NGINX was designed to handle 10k connections over a decade ago. With that in mind, let’s increase our connections pretty substantially: /etc/nginx/nginx.conf (partial) events { worker_connections 2048; } For a four core CPU, this would lead to 8192 simultaneous connections. If NGINX ever hits this limit, it will log an error in /var/log/nginx/error.log. Knowing that this will be logged, we should have some log monitoring in place so that we can see if we’ve set our limit too low. When running a server that is getting a lot of concurrent requests it is possible to run into operating system limits for file connections, and that is something that you’ll need to watch for. If we validate our configuration, we’ll see the following: [root] $ nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: [warn] 2048 worker_connections exceed open file resource limit: 1024 nginx: configuration file /etc/nginx/nginx.conf test is successful To view these limits we can use ulimit and run it as the NGINX user: [root] $ su -s /bin/sh nginx -c "ulimit -Hn" 4096 [root] $ su -s /bin/sh nginx -c "ulimit -Sn" 1024 The 1024 that we’re seeing is the “soft” limit. We stayed within the range of the “hard” limit so we can change the value that NGINX uses using the worker_rlimit_nofile directive near the top of our configuration: /etc/nginx/nginx.conf (partial) user nginx; worker_processes auto; worker_rlimit_nofile 4096; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; # Load ModSecurity dynamic module load_module /etc/nginx/modules/ngx_http_modsecurity_module.so; events { worker_connections 2048; } In the event that you run into a “Too Many Open Files” error at some point, you can adjust the hard and soft file limits for the nginx user and then change this value to match. Utilizing Keepalives We want to get the most that we can out of the connections that we open up. One way to accomplish this is to configure a keepalive value that allows a client to hold an open TCP connection for more than a single request. 
This same connection can then be used for multiple requests and doesn’t need to be closed and reopened. There are two spots in our configuration that we should consider using keepalives. Between the Client and NGINX. Between NGINX and an upstream server/group. The first type of keepalive that we’re going to discuss is between the web client and the NGINX server. These keepalives are enabled by default through the keepalive_timeout directive in the nginx.conf, and the keepalive_requests default value of 100. These values can be set for an entire http context or a specific server. There’s not a set in stone rule or approach to setting the keepalive_timeout for NGINX, but the default is used quite often. The keepalive_requests directive is interesting because it’s not the number of connections to keep open, but rather the number of requests a single keepalive connection can make. If you know that your clients are going to likely make more than 100 requests within your keepalive duration, then you should increase this number to match. Let’s look at how keepalives work for connections between NGINX and proxied servers. To tell an upstream to use keepalives, we’ll use the keepalive directive. Here’s what it looks like if we set our photos.example.com proxy to use keepalives: /etc/nginx/conf.d/photos.example.com.conf upstream photos { server 127.0.0.1:3000 weight=2; server 127.0.0.1:3100 max_fails=3 fail_timeout=20s; server 127.0.0.1:3101 max_fails=3 fail_timeout=20s; keepalive 32; } server { listen 80; server_name photos.example.com; client_max_body_size 5m; location / { proxy_pass http://photos; proxy_http_version 1.1; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location ~* .(js|css|png|jpe?g|gif) { root /var/www/photos.example.com; } } The number of connections used with the keepalive directive is the number to use with each of the worker processes. It’s important to keep this relatively low so that new connections can also be made. Keepalives are one of the areas where we do need to change different proxy settings depending on whether we’re using proxy_pass, fastcgi_pass, or uwsgi_pass. For HTTP and proxy_pass, we need to set the HTTP version to 1.1 and make sure that the Connection header is set to "upgrade" or an empty string "". These directives are already used in our photos configuration, but for a different server we would need to make sure that these lines existed: proxy_http_version 1.1; proxy_set_header Connection ""; For FastCGI and fastcgi_pass, we need to enable the fastcgi_keep_conn directive. The directive will look like this: fastcgi_keep_conn on; For uWSGI and uwsgi_pass, there actually isn’t the concept of a keepalive connection. uWSGI web services can also work with proxy_pass, so that is something to consider if you really want upstream keepalives.
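If we do eventually hit the open file limit, one common approach (an assumption on my part, not something covered in the lesson) is to raise the limit for the NGINX service through a systemd override, and then bump worker_rlimit_nofile to match:

[root] $ systemctl edit nginx
# In the editor that opens, add:
# [Service]
# LimitNOFILE=8192
[root] $ systemctl daemon-reload
[root] $ systemctl restart nginx

After the restart, the worker processes inherit the higher limit from systemd, and the worker_connections warning from nginx -t should go away once worker_rlimit_nofile is raised accordingly.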

HTTP/2

00:06:36

Lesson Description:

One of the ways that we can get more performance out of NGINX is by enabling HTTP/2 (H2). There are quite a few performance benefits to using HTTP/2, and it is HTTP 1.1 compatible, so there isn’t a downside to enabling it. Note: the commands in this video are run as the root user. Documentation For This Video NGINX http_v2 module Benefits of HTTP/2 HTTP/2 is an advancement in the HTTP protocol that provides (or will provide) the following benefits: Improved encryption compared to HTTP 1.1 Multiplexed Streams - a single TCP connection simultaneously requesting multiple resources HTTP/2 Server Push - send resources to the client before they’re requested (not in NGINX stable yet) Header compression - prevent sending duplicate header information from the server to the client All of these benefits can be summed up as improved performance. Enabling HTTP/2 Note: NGINX must have been compiled with --with-http_v2_module. The official binary should already have this. For us to enable HTTP/2 for our server, we first need to make sure that we’re using SSL. Once we’ve forced traffic over HTTPS instead of standard HTTP, we can enable HTTP/2 for the server by adding the http2 parameter to the listen directive for the server. Here’s what it would look like to enable HTTP/2 for our default server configuration, since we’re already forcing all traffic over HTTPS: /etc/nginx/conf.d/default.conf (partial) listen 443 ssl http2 default_server;
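Once the listen line is updated and NGINX has been reloaded, we can confirm that HTTP/2 is actually being negotiated. One quick option, assuming the local curl was built with HTTP/2 support:

[root] $ curl -skI --http2 https://localhost/ | head -n 1

If the first line of the response reads HTTP/2 rather than HTTP/1.1, the upgrade is working; the $server_protocol variable can also be added to a log_format to track which protocol clients are actually using.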

Use Case: PageSpeed by Google

00:08:59

Lesson Description:

Google is the technology company with the best reputation when it comes to speed and performance. In this use case, we’ll go over how to install the ngx_pagespeed module that Google created to make it easy to use the server side best practices that they’ve discovered over the years. Note: the commands in this video are run as the root user. Documentation For This Video Pagespeed by Google ngx_pagespeed documentation Compiling ngx_pagespeed Dynamic Module Similar to how we needed to build a separate module for ModSecurity, we’ll need to compile the ngx_pagespeed module. Unlike ModSecurity, there is an install script accessible to us that we can configure quite easily: [root] $ cd /opt [root] $ bash

NGINX Performance Tuning

00:15:00

Course Conclusion

Final Steps

What's Next?

00:01:10

Lesson Description:

Thank you for going through the NGINX Deep Dive. I hope that this course has been enjoyable for you. Be sure to stop by the community to let everyone know how your experience was and what you thought. If you have any feedback on pieces of content (good or bad), I ask that you please rate it because it helps me continually improve the courses that I create. In this video, I suggest that you continue your learning by digging into container-based technologies where NGINX can easily be deployed and used. Here are the courses that I mention: Docker Certified Associate Prep Course Certified Kubernetes Administrator