
NGINX

💡
nginx config generator tool → https://www.digitalocean.com/community/tools/nginx

Use cases of NGINX.

What Is Nginx? A Basic Look at What It Is and How It Works

https://kinsta.com/knowledgebase/what-is-nginx/

How To Change the Nginx Web Document Location on Ubuntu

https://www.tutorialspoint.com/how-to-change-the-nginx-web-document-location-on-ubuntu-16-04

By default, the Nginx web server serves files from /usr/share/nginx/html on the Linux file system. This location is usually changed based on the website's or the client's requirements.
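For example, a minimal sketch of pointing a server block at a custom document root (the /var/www/mysite path is just an illustration, not from the linked article):

server {
    listen 80;
    server_name example.com;

    # Serve files from a custom document root instead of the default
    root /var/www/mysite;
    index index.html;
}

After changing the root, test the configuration with sudo nginx -t and reload nginx.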

How To Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu

https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-16-04

💡
In /etc/nginx/nginx.conf, uncomment server_names_hash_bucket_size 64; when you configure multiple server blocks.

few screenshots
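A minimal sketch of a server block (virtual host) file in the spirit of that tutorial, assuming the site lives in /var/www/example.com/html and is served for example.com:

server {
    listen 80;
    listen [::]:80;

    root /var/www/example.com/html;
    index index.html index.htm;

    server_name example.com www.example.com;

    location / {
        # Return 404 if the requested file does not exist
        try_files $uri $uri/ =404;
    }
}

Save it under /etc/nginx/sites-available/, enable it with a symlink into /etc/nginx/sites-enabled/, then test and reload nginx.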

Create a Self-Signed SSL Certificate for Nginx in Ubuntu

https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-20-04-1
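The core steps boil down to generating a key and certificate pair with openssl and pointing nginx at them. A sketch, with illustrative paths in the style of that tutorial:

# Generate a self-signed certificate and private key valid for one year
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/ssl/private/nginx-selfsigned.key \
    -out /etc/ssl/certs/nginx-selfsigned.crt

# Then reference them in a server block listening on port 443
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
}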

What are worker_processes and worker_connections in Nginx?

Nginx has one master process and several worker processes. The main purpose of the master process is to read and evaluate the configuration and maintain the worker processes. Worker processes do the actual processing of requests. Nginx employs an event-based model and OS-dependent mechanisms to efficiently distribute requests among worker processes. The number of worker processes is defined in the configuration file and may be fixed for a given configuration or automatically adjusted to the number of available CPU cores.

A worker process is a single-threaded process.

If Nginx is doing CPU-intensive work such as SSL or gzipping and you have 2 or more CPUs/cores, then you may set worker_processes to be equal to the number of CPUs or cores.

If you are serving a lot of static files and the total size of the files is bigger than the available memory, then you may increase worker_processes to fully utilize disk bandwidth.

worker_connections

The worker_connections and worker_processes settings from the main section allow you to calculate the maximum number of clients you can handle:

max clients = worker_processes * worker_connections

So I understand that worker processes are single-threaded and that their number matters for CPU-intensive work, but I am unable to understand the "max clients you can handle" part.

worker_connections is the number of simultaneous connections each worker process can handle, so the formula simply multiplies the two. For example, 4 worker processes with 1024 worker_connections each can serve at most 4096 clients.

The number of connections is limited by the maximum number of open files (RLIMIT_NOFILE) on your system
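A typical nginx.conf snippet illustrating both directives (the values shown are common defaults, not prescriptions):

# One worker process per CPU core
worker_processes auto;

events {
    # Maximum number of simultaneous connections per worker process
    worker_connections 1024;
}

On a 4-core machine with these settings, max clients = 4 * 1024 = 4096.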

Forwards proxy vs reverse proxy

The Forward Proxy

When people talk about a proxy server (often called a "proxy"), more often than not they are referring to a forward proxy. Let me explain what this particular server does.

A forward proxy provides proxy services to a client or a group of clients. Often, these clients belong to a common internal network like the one shown below.

When one of these clients makes a connection attempt to that file transfer server on the Internet, its requests have to pass through the forward proxy first.

Depending on the forward proxy's settings, a request can be allowed or denied. If allowed, then the request is forwarded to the firewall and then to the file transfer server. From the point of view of the file transfer server, it is the proxy server that issued the request, not the client. So when the server responds, it addresses its response to the proxy.

But then when the forward proxy receives the response, it recognizes it as a response to the request that went through earlier. And so it then sends that response to the client that made the request.

Because proxy servers can keep track of requests, responses, their sources and their destinations, different clients can send out various requests to different servers through the forward proxy and the proxy will intermediate for all of them. Again, some requests will be allowed, while some will be denied.

As you can see, the proxy can serve as a single point of access and control, making it easier for you to enforce authentication, SSL encryption or other security policies. A forward proxy is typically used in tandem with a firewall to enhance an internal network's security by controlling traffic originating from clients in the internal network that are directed at hosts on the Internet. Thus, from a security standpoint, a forward proxy is primarily aimed at enforcing security on client computers in your private network.

But then client computers aren't always the only ones you find in your internal network. Sometimes, you also have servers. And when those servers have to provide services to external clients (for example, field staff who need to access files from your FTP server), a more appropriate solution would be a reverse proxy.

The Reverse Proxy

What is a reverse proxy? As its name implies, a reverse proxy does the exact opposite of what a forward proxy does. While a forward proxy proxies on behalf of clients (or requesting hosts), a reverse proxy proxies on behalf of servers. A reverse proxy accepts requests from external clients on behalf of servers stationed behind it as shown below.

In our example, it is the reverse proxy that is providing file transfer services. The client is oblivious to the file transfer servers behind the proxy, which are actually providing those services. In effect, where a forward proxy hides the identities of clients, a reverse proxy hides the identities of servers.

An Internet-based attacker would find it considerably more difficult to acquire data found in those file transfer servers than if he didn't have to deal with a reverse proxy.

Just like forward proxy servers, reverse proxies also provide a single point of access and control. You typically set it up to work alongside one or two firewalls to control traffic and requests directed to your internal servers.

In most cases, reverse proxy servers also act as load balancers for the servers behind them. Load balancers play a crucial role in providing high availability to network services that receive large volumes of requests. When a reverse proxy performs load balancing, it distributes incoming requests to a cluster of servers, all providing the same kind of service. So, for instance, a reverse proxy load balancing FTP services will have a cluster of FTP servers behind it, and will manage server load to prevent bottlenecks and delays.

Both types of proxy servers relay requests and responses between clients and destination machines. But in the case of reverse proxy servers, client requests that go through them normally originate from clients out on the Internet, while in the case of forward proxies, client requests normally come from the internal network behind them.

Setup an Nginx reverse proxy server

https://www.theserverside.com/blog/Coffee-Talk-Java-News-Stories-and-Opinions/How-to-setup-Nginx-reverse-proxy-servers-by-example

A reverse proxy server is a type of proxy server that typically sits behind the firewall in a private network and directs client requests to the appropriate backend server. A reverse proxy provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.

Most enterprise architectures use a single, reverse proxy server to handle all incoming requests. The proxy server then inspects each HTTP request and identifies which backend system, be it an Apache, Tomcat, Express or NodeJS server, should handle the request.

The reverse proxy then forwards the request to that server, allows the request to be processed, obtains a response from that backend server, and then sends the response back to the client.
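A minimal sketch of such a reverse proxy configuration, assuming a backend application listening on 127.0.0.1:8080 (the address and header choices are common conventions, not taken from the linked article):

server {
    listen 80;
    server_name example.com;

    location / {
        # Forward all requests to the backend application
        proxy_pass http://127.0.0.1:8080;

        # Pass the original host and client address on to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}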

Deploying NGINX as an API Gateway

https://www.nginx.com/blog/deploying-nginx-plus-as-an-api-gateway-part-1/

An API gateway is an API management tool that sits between a client and a collection of backend services. An API gateway acts as a reverse proxy to accept all application programming interface (API) calls, aggregate the various services required to fulfill them, and return the appropriate result.
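As a rough illustration of the idea (the upstream names, addresses, and URI prefixes below are hypothetical), an NGINX API gateway commonly routes different URI prefixes to different backend services:

upstream users_service {
    server 10.0.0.11:8080;
}

upstream orders_service {
    server 10.0.0.12:8080;
}

server {
    listen 80;

    # Route each API prefix to the service that owns it
    location /api/users/ {
        proxy_pass http://users_service;
    }

    location /api/orders/ {
        proxy_pass http://orders_service;
    }
}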

configure load balancing using Nginx

https://upcloud.com/resources/tutorials/configure-load-balancing-nginx

Configuring nginx as a load balancer

Once nginx is installed and tested, you can start configuring it for load balancing. In essence, all you need to do is set up nginx with instructions for which type of connections to listen to and where to redirect them. Create a new configuration file using whichever text editor you prefer, for example with nano:

sudo nano /etc/nginx/conf.d/load-balancer.conf

In load-balancer.conf you'll need to define the following two segments, upstream and server; see the examples below.

# Define which servers to include in the load balancing scheme.
# It's best to use the servers' private IPs for better performance and security.
# You can find the private IPs in your UpCloud control panel's Network section.
http {
   upstream backend {
      server 10.1.0.101;
      server 10.1.0.102;
      server 10.1.0.103;
   }

   # This server accepts all traffic to port 80 and passes it to the upstream.
   # Notice that the upstream name and the proxy_pass need to match.

   server {
      listen 80;

      location / {
          proxy_pass http://backend;
      }
   }
}

Then save the file and exit the editor.

Next, disable the default server configuration that you tested after the installation. Depending on your OS, this part differs slightly.

On Debian and Ubuntu systems you’ll need to remove the default symbolic link from the sites-enabled folder.

sudo rm /etc/nginx/sites-enabled/default

CentOS hosts don’t use the same linking. Instead, simply rename the default.conf in the conf.d/ directory to something that doesn’t end with .conf, for example:

sudo mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled

Then use the following to restart nginx.

sudo systemctl restart nginx

Check that nginx starts successfully. If the restart fails, take a look at the /etc/nginx/conf.d/load-balancer.conf you just created to make sure there are no typos or missing semicolons.

When you enter the load balancer's public IP address in your web browser, you should be directed to one of your back-end servers.

Load balancing methods

Load balancing with nginx uses a round-robin algorithm by default if no other method is defined, as in the first example above. With the round-robin scheme, each server is selected in turn, in the order you list them in the load-balancer.conf file. This balances the number of requests equally for short operations.

Least-connections-based load balancing is another straightforward method. As the name suggests, this method directs requests to the server with the fewest active connections at that time. It works more fairly than round-robin for applications where requests might sometimes take longer to complete.

To enable the least-connections balancing method, add the least_conn parameter to your upstream section as shown in the example below.

upstream backend {
   least_conn;
   server 10.1.0.101;
   server 10.1.0.102;
   server 10.1.0.103;
}

Round-robin and least-connections balancing schemes are fair and have their uses. However, they cannot provide session persistence. If your web application requires that users are subsequently directed to the same back-end server as during their previous connection, use the IP hashing method instead. IP hashing uses the visitor's IP address as a key to determine which host should service the request. This allows visitors to be directed to the same server each time, provided that the server is available and the visitor's IP address hasn't changed.

To use this method, add the ip_hash parameter to your upstream segment as in the example below.

upstream backend {
   ip_hash;
   server 10.1.0.101;
   server 10.1.0.102;
   server 10.1.0.103;
}

In a server setup where the available resources between different hosts are not equal, it might be desirable to favour some servers over others. Defining server weights allows you to further fine-tune load balancing with nginx. The server with the highest weight in the load balancer is selected most often.

upstream backend {
   server 10.1.0.101 weight=4;
   server 10.1.0.102 weight=2;
   server 10.1.0.103;
}

For example, in the configuration shown above, the first server is selected twice as often as the second, which in turn receives twice the requests of the third.

Load balancing with HTTPS enabled

Enabling HTTPS for your site is a great way to protect your visitors and their data. If you haven't yet implemented encryption on your web hosts, we highly recommend you take a look at our guide on how to install Let's Encrypt on nginx.

Using encryption with a load balancer is easier than you might think. All you need to do is add another server section to your load balancer configuration file that listens for HTTPS traffic on port 443 with SSL, then set up a proxy_pass to your upstream segment just like with HTTP in the previous example.

Open your configuration file again for edit.

sudo nano /etc/nginx/conf.d/load-balancer.conf

Then add the following server segment to the end of the file.

server {
   listen 443 ssl;
   server_name domain_name;
   ssl_certificate /etc/letsencrypt/live/domain_name/cert.pem;
   ssl_certificate_key /etc/letsencrypt/live/domain_name/privkey.pem;
   ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

   location / {
      proxy_pass http://backend;
   }
}

Then save the file, exit the editor and restart nginx again.

sudo systemctl restart nginx

Setting up encryption at your load balancer while using private network connections to your back-end has some great advantages.

With HTTPS enabled, you also have the option to enforce encryption for all connections to your load balancer. Simply update the server segment listening on port 80 with a server name and a redirect to your HTTPS port, then remove or comment out the location block as it's no longer needed. See the example below.

server {
   listen 80;
   server_name domain_name;
   return 301 https://$server_name$request_uri;

   #location / {
   #   proxy_pass http://backend;
   #}
}

Save the file again after you have made the changes. Then restart nginx.

sudo systemctl restart nginx

Now all connections to your load balancer will be served over an encrypted HTTPS connection. Requests to the unencrypted HTTP will be redirected to use HTTPS as well. This provides a seamless transition into encryption. Nothing is required from your visitors.

NGINX Gzip Compression

Enabling NGINX Gzip Compression

GZIP compression allows the NGINX server to compress data before sending it to the client's browser. This reduces bandwidth usage, improves website speed, and saves server costs. Here are the steps to enable NGINX GZIP compression.

1. Open NGINX Configuration file

Open a terminal and run the following command to open the NGINX server configuration file.

$ sudo vi /etc/nginx/nginx.conf

If you have configured a separate virtual host for your website (e.g. www.example.com), such as /etc/nginx/sites-enabled/website.conf, then open its configuration with the following command:

$ sudo vi /etc/nginx/sites-enabled/website.conf


2. Enable GZIP Compression in NGINX

Add or uncomment the following lines in your NGINX configuration file (these directives typically go inside the http block).

gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";

Let us look at each of the above lines:

gzip on: enables GZIP compression.
gzip_vary on: adds the Vary: Accept-Encoding header so caches keep compressed and uncompressed versions separately.
gzip_min_length 1024: only compresses responses larger than 1024 bytes, since very small files gain little from compression.
gzip_proxied expired no-cache no-store private auth: compresses responses to proxied requests that carry these cache-related attributes.
gzip_types: lists the MIME types to compress (text/html is always compressed when gzip is on).
gzip_disable "MSIE [1-6]\.": disables compression for old Internet Explorer versions (1 to 6) that do not handle it well.


3. Restart NGINX Server

Run the following command to check the syntax of your updated config file.

$ sudo nginx -t

If there are no errors, run the appropriate command below for your distribution to apply the changes.

$ sudo service nginx reload      # debian/ubuntu
$ sudo systemctl restart nginx   # redhat/centos


4. Verify GZIP Compression

If you are wondering how to verify GZIP compression in NGINX, there are many third-party tools, such as online GZIP compression tests, where you can enter your website URL and they will tell you whether GZIP compression is enabled on your website.
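You can also verify it from the command line with curl; a quick sketch (replace example.com with your own site):

# Request the page with gzip accepted and print only the response headers
$ curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null https://example.com

If compression is enabled, the response headers will include Content-Encoding: gzip.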

Hopefully, the above article will help you enable and verify GZIP compression in NGINX.