
Linux - 2

Linux is the best-known and most-used open source operating system.

To Learn

Process Management

- View processes
- Kill a process and other signals
- cd /proc
- Start a process: nohup, pm2, screen, setsid, bg, fg, disown
- Port binding: 127.0.0.1 vs 192.168.x.x vs 0.0.0.0

File Management

- less
- grep
- tail
- cat


- Install: PPA and source
- Configure
- User permissions
- Locations
- Logs


- Password- vs file-based logins
- Passphrase
- ~/.ssh permissions
- authorized_keys
- id_rsa and id_rsa.pub
- ssh_config

Java installation

- From PPA
- From source
- Adding and editing environment variables


- Install and setup
- Run
- Log files
- Directories and files
- Working

Exploring /proc File System in Linux

We are going to take a look inside the /proc directory. One misconception to clear up immediately: /proc is NOT a real file system in the traditional sense. It is a virtual file system. procfs contains information about processes and other system information. It is mapped to /proc and mounted at boot time.

cd /proc

cat /proc/meminfo

A quick rundown of /proc's files:

  1. /proc/cmdline – Kernel command line information.
  2. /proc/console – Information about current consoles, including tty.
  3. /proc/devices – Device drivers currently configured for the running kernel.
  4. /proc/dma – Info about current DMA channels.
  5. /proc/fb – Framebuffer devices.
  6. /proc/filesystems – Filesystems currently supported by the kernel.
  7. /proc/iomem – Current system memory map for devices.
  8. /proc/ioports – Registered port regions for input/output communication with devices.
  9. /proc/loadavg – System load average.
  10. /proc/locks – Files currently locked by the kernel.
  11. /proc/meminfo – Info about system memory (see the example above).
  12. /proc/misc – Miscellaneous drivers registered for the miscellaneous major device.
  13. /proc/modules – Currently loaded kernel modules.
  14. /proc/mounts – List of all mounts in use by the system.
  15. /proc/partitions – Detailed info about partitions available to the system.
  16. /proc/pci – Information about every PCI device.
  17. /proc/stat – Record of various statistics kept since the last reboot.
  18. /proc/swaps – Information about swap space.
  19. /proc/uptime – Uptime information (in seconds).
  20. /proc/version – Kernel version, gcc version, and Linux distribution installed.
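Each of these entries can be read like an ordinary text file; a quick illustration:

```shell
# /proc entries are virtual files generated by the kernel on each read
cat /proc/loadavg            # 1-, 5-, and 15-minute load averages
cat /proc/uptime             # uptime and idle time, in seconds
grep MemTotal /proc/meminfo  # total RAM known to the kernel
```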

Within /proc's numbered directories you will find a few files and links. Remember that these directories' numbers correspond to the PIDs of the processes running within them. Let's use an example. On my system, there is a directory named /proc/12:

cd /proc/12


In any numbered directory, you will have a similar file structure. The most important ones, and their descriptions, are as follows:

  1. cmdline – command line of the process
  2. environ – environment variables
  3. fd – file descriptors
  4. limits – information about the limits of the process
  5. mounts – mounts visible to the process

You will also notice a number of links in the numbered directory:

  1. cwd – a link to the current working directory of the process
  2. exe – link to the executable of the process
  3. root – link to the root directory of the process
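You can explore these without hunting for a PID by using /proc/self, which always resolves to the process doing the reading:

```shell
# /proc/self is the /proc entry of the current process
tr '\0' ' ' < /proc/self/cmdline; echo   # cmdline is NUL-separated
ls -l /proc/self/cwd                     # symlink to the working directory
ls /proc/self/fd                         # open file descriptors
```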

Keeping a process running


A process may not continue to run when you log out or close your terminal. You can avoid this by preceding the command you want to run with the nohup command. Appending an ampersand (&) will also send the process to the background, letting you continue using the terminal. For example, suppose you want to run myprogram.sh:

nohup myprogram.sh &

One nice thing about starting a job this way is that the shell prints the background job's PID. I'll talk more about the PID next.
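Putting both pieces together (here sleep 300 stands in for a real long-running program):

```shell
# Survive hangups, run in the background, capture output in a log file
nohup sleep 300 > job.log 2>&1 &
echo "running as PID $!"   # $! holds the PID of the last background job
```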

https://opensource.com/article/18/9/linux-commands-process-management



PM2 is a free, open source, production-grade process manager for Node.js with a built-in load balancer. It is a daemon that helps you manage your applications and keep them online 24/7.

It keeps your apps "alive forever" with automatic restarts and can be enabled to start at system boot, allowing for High Availability (HA) configurations or architectures. Notably, PM2 lets you run your apps in cluster mode without making any changes to your code (subject to the number of CPU cores on your server). It also makes it easy to manage app logs, and much more.


The screen command in Linux provides the ability to launch and use multiple shell sessions from a single SSH session. When a process is started with screen, it can be detached from the session and reattached at a later time. While the session is detached, the process originally started under screen keeps running, managed by screen itself. When you reattach later, the terminals are still there, exactly the way you left them. Syntax:

screen [-opts] [cmd [args]]

setsid command in Linux

The setsid command in Linux is used to run a program in a new session. It will call fork(2) if the calling process is already a process group leader; otherwise, it executes the program in the current process.


setsid [options] program [arguments]
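For instance, to start a long-running job in a session of its own so that it survives the shell exiting (sleep stands in for a real program):

```shell
# New session, detached from this terminal's stdin/stdout
setsid sleep 300 > /dev/null 2>&1 < /dev/null &
```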

When you run a lengthy process, interactively or in the background, it is attached to the shell. When the shell exits, the process will abort.

Although you can use nohup to ensure that a command ignores the hangup signal, which occurs when a user disconnects from the pseudo-terminal (pty), it is not as reliable as setsid. nohup is known to time out prematurely. setsid runs commands in a separate session (one that is not attached to your pty), so the commands run to completion even after you log out.

exec command

The exec command in Linux is used to execute a command from the bash shell itself. It does not create a new process; it replaces the shell with the command to be executed. If the exec command is successful, it does not return to the calling process.
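The behaviour is easy to demonstrate in a throwaway subshell: anything after a successful exec never runs, because the shell process has been replaced:

```shell
# The second echo never executes; exec replaced the shell with `echo`
sh -c 'exec echo replaced; echo "never printed"'
```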

Daemon-izing a Process in Linux - just read. don’t go into details


What is Chkconfig used for?

The chkconfig command-line tool lets you configure services to start and stop automatically via the /etc/rc.d/init.d scripts.


Configure the web server to start automatically at boot

chkconfig httpd on

Disown command

https://phoenixnap.com/kb/disown-command-linux


In Unix-like operating systems, /dev/random, /dev/urandom, and /dev/arandom are special files that serve as pseudorandom number generators.

In this example, we will start up a couple of jobs running in the background:

cat /dev/random > /dev/null &
ping google.com > /dev/null &

jobs -l

Remove All Jobs

To remove all jobs from the job table, use the following command:

disown -a

Remove Specific Jobs

If you want to remove a specific job from the job table, use the disown command with the appropriate job ID. The job ID is listed in brackets on the job table:

In our example, if we want to remove the ping command, we need to use the disown command on job 2:

disown %2

Remove Currently Running Jobs

To remove only the jobs currently running, use the following command:

disown -r

Keep Jobs Running After You Log Out

Once you exit your system’s terminal, all currently running jobs are automatically terminated. To prevent this, use the disown command with the -h option:

disown -h jobID

In our example, we want to keep the cat command running in the background. To prevent it from being terminated on exit, use the following command:

disown -h %1

After you use the disown command, close the terminal:


Any jobs you used the disown -h command on will keep running.
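The whole workflow condensed (bash provides jobs and disown as builtins; sleep stands in for a real job):

```shell
sleep 300 &       # start a background job
jobs -l           # list it, with job number and PID
disown -h %1      # keep job 1 running after the shell exits
```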

How to check if port is in use on Linux or Unix


Run any one of the following command on Linux to see open ports:

sudo lsof -i -P -n | grep LISTEN

sudo netstat -tulpn | grep LISTEN

sudo ss -tulpn | grep LISTEN

sudo lsof -i:22 ## see a specific port such as 22 ##

sudo nmap -sTU -O IP-address-Here

Find and remove large files that are open but have been deleted: lsof +L1

More about this: https://unix.stackexchange.com/questions/68523/find-and-remove-large-files-that-are-open-but-have-been-deleted

strace is a diagnostic, debugging and instructional userspace utility for Linux. It is used to monitor and tamper with interactions between processes and the Linux kernel, which include system calls, signal deliveries, and changes of process state.

Both strace and ltrace are powerful command-line tools for debugging and troubleshooting programs on Linux: strace captures and records all system calls made by a process, as well as the signals it receives, while ltrace does the same for library calls.

Opening a port on Linux


List all open ports

netstat -lntu

eg. let’s open port 4000

Just to make sure, let's verify that port 4000 (it can be any port) is not in use, using the netstat or ss command.

netstat -na | grep :4000

ss -na | grep :4000

Ubuntu has a firewall called ufw, which manages these rules for ports and connections in place of the old iptables interface. If you are an Ubuntu user, you can open the port directly using ufw:

sudo ufw allow 4000

Determining what process is bound to a port

netstat -lnp
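netstat is deprecated on many modern distributions; ss answers the same question (root privileges are needed to see the owning process of sockets you don't own):

```shell
# -l listening, -t TCP, -n numeric, -p owning process
ss -ltnp

# Restrict the output to a single port, e.g. SSH on 22
ss -ltn 'sport = :22'
```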

How to bind a service to a port in Linux


Not all system services require an association with a port number, meaning they do not need to open a socket on a network to receive packets. However, if the network services need to communicate with other network processes continuously, a socket is required, making it mandatory for these services to bind to specific ports.

Port numbers make it easy to identify requested services. Their absence implies that a client-to-server request would be unsuccessful because the transport headers associated with these requests will not have port numbers that link them to specific machine services.

A service such as HTTP has a default binding to port 80. This default binding does not imply that the HTTP service can only receive network packets or respond to network requests via port 80. With access to the right config files, you can associate this service with a new custom port. After this successful configuration, accessing the service with the new port number would imply specifying the machine’s IP address or domain name and the new port number as part of its URL definition.

For example, a machine on an HTTP service network that was initially accessed by its IP address alone would, after the port is changed from 80 to a custom port like 83, be accessed with :83 appended to that address.

Modifying the /etc/services file

Now that we understand the relationship between network services and ports: any open network connection on a Linux server associates the client machine that opened it with a targeted service through a specific port. These ports are classified as "well-known ports" because both the server and the client computers need to know them beforehand.

The configuration that binds a service to a port on a Linux machine is defined in the small local database file /etc/services. To explore the contents of this file, you can use the nano command.
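For example, the conventional bindings can be looked up in it directly:

```shell
# Which port does a named service conventionally use?
grep -w ssh /etc/services | head -n 2
grep -w http /etc/services | head -n 2
```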

File Management

less , grep, tail, cat


The less command is a Linux utility that can be used to read the contents of a text file one page (one screen) at a time. Access is faster on large files because it doesn't read the complete file up front, but accesses it page by page.

less filename

dmesg | less

dmesg | less -p "failure"

The above command tells less to start at the first occurrence of the pattern "failure" in the input.

dmesg | less -N

It will show the output along with line numbers.


grep is a Linux / Unix command-line tool used to search for a string of characters in a specified file. The text search pattern is called a regular expression. When it finds a match, it prints the line with the result. The grep command is handy when searching through large log files.

Note:  Grep is case-sensitive. Make sure to use the correct case when running grep commands.

grep phoenix sample2.txt

Grep will display every line where there is a match for the word phoenix.


grep phoenix sample sample2 sample3

To search all files in the current directory, use an asterisk instead of a filename at the end of a grep command.

In this example, we use nix as a search criterion:

grep nix *

Grep allows you to find and print the results for whole words only. To search for the word phoenix in all files in the current directory, append -w to the grep command.

grep -w phoenix *

As grep commands are case sensitive, one of the most useful operators for grep searches is -i. Instead of printing lowercase results only, the terminal displays both uppercase and lowercase results. The output includes lines with mixed case entries.

An example of this command:

grep -i phoenix *

To include all subdirectories in a search, add the -r operator to the grep command.

grep -r phoenix *

You can use grep to print all lines that do not match a specific pattern of characters (an inverse grep search). To invert the search, append -v to a grep command.

To exclude all lines that contain phoenix, enter:

grep -v phoenix sample

The grep command prints entire lines when it finds a match in a file. To print only those lines that completely match the search string, add the -x option.

grep -x "phoenix number3" *

Grep can display the filenames and the count of lines where it finds a match for your word.

Use the -c operator to count the number of matches:

grep -c phoenix *
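The options above, exercised against a small throwaway file:

```shell
# Build a sample file covering the mixed-case and whole-word cases
printf 'phoenix\nPhoenix rising\nunix phoenix\n' > /tmp/sample.txt

grep -c phoenix /tmp/sample.txt    # case-sensitive count: 2
grep -ci phoenix /tmp/sample.txt   # case-insensitive count: 3
grep -w phoenix /tmp/sample.txt    # whole-word matches only
grep -v rising /tmp/sample.txt     # lines NOT containing "rising"
```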

Tail command

Show the last 3 lines of a file:

tail -3 test.txt

Display Contents of File(cat)

# cat /etc/passwd

Display with line numbers:

# cat -n song.txt
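Both commands in action on a generated file:

```shell
seq 1 5 > /tmp/nums.txt   # a file holding 1..5, one number per line
tail -3 /tmp/nums.txt     # prints the last three lines: 3 4 5
cat -n /tmp/nums.txt      # prints every line with its line number
```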

What Is Nginx? A Basic Look at What It Is and How It Works

Nginx, pronounced like "engine-ex", is an open-source web server that, since its initial success as a web server, is now also used as a reverse proxy, HTTP cache, and load balancer.


With Nginx, one master process can control multiple worker processes. The master maintains the worker processes, while the workers do the actual processing. Because Nginx is asynchronous, each request can be executed by the worker concurrently without blocking other requests.

Common Nginx features include serving static content, reverse proxying, HTTP caching, and load balancing.

Why use a reverse proxy?


In a computer network, a reverse proxy server acts as a middleman, communicating with the users so the users never interact directly with the origin servers. Serving as a gateway, it sits in front of one or more web servers and forwards client (web browser) requests to those web servers. Web traffic must pass through the reverse proxy before it forwards a request to a server to be fulfilled, and the proxy then returns the server's response.

Reverse proxies make different servers and services appear as one single unit, allowing organizations to hide several different servers behind the same name - making it easier to remove services, upgrade them, add new ones, or roll them back. As a result, the site visitor only sees my-company-123.net and not myweirdinternalservername.my-company-123.net

To sum up, reverse proxy servers hide the backend from clients. Most businesses host their website's content management system or shopping-cart apps with an external service outside their own network. Instead of letting site visitors know that you're sending them to a different URL for payment, businesses can conceal that detail using a reverse proxy.
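As a sketch, the concealment described above is a few lines of Nginx configuration; the server name and the internal upstream address below are illustrative assumptions:

```nginx
server {
    listen 80;
    server_name my-company-123.net;

    location / {
        # Visitors talk to this proxy; the internal app server stays hidden
        proxy_pass http://10.0.0.5:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```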

Reverse proxy and load balancers: what’s the correlation?

A reverse proxy is a layer 7 load balancer (or, vice versa) that operates at the highest level applicable and provides for deeper context on the Application Layer protocols such as HTTP. By using additional application awareness, a reverse proxy or layer 7 load balancer has the ability to make more complex and informed load balancing decisions on the content of the message – whether it’s to optimise and change the content (HTTP header manipulation, compression and encryption) and/or monitor the health of applications to ensure reliability and availability. On the other hand, layer 4 load balancers are FAST routers rather than application (reverse) proxies where the client effectively talks directly (transparently) to the backend servers.

All modern load balancers are capable of doing both – layer 4 as well as layer 7 load balancing, by acting either as reverse proxies (layer 7 load balancers) or routers (layer 4 load balancers). An initial tier of layer 4 load balancers can distribute the inbound traffic across a second tier of layer 7 (proxy-based) load balancers. Splitting up the traffic allows the computationally complex work of the proxy load balancers to be spread across multiple nodes. Thus, the two-tiered model serves far greater volumes of traffic than would otherwise be possible and therefore, is a great option for load balancing object storage systems – the demand for which has significantly exploded in the recent years.

Installing Nginx

sudo apt update
sudo apt install nginx

The default page is placed in /var/www/html/. You can place your static pages here, or use a virtual host and place them in another location.

Ubuntu Linux: Start / Restart / Stop Nginx Web Server

sudo service nginx start
sudo service nginx stop
sudo service nginx restart


sudo systemctl start nginx 
sudo systemctl stop nginx 
sudo systemctl restart nginx

To view status of your Nginx server

Use any one of the following commands:

sudo service nginx status
## OR ##
sudo systemctl status nginx


A note about reload nginx server

It is also possible to use the following syntax to reload the nginx server after you make changes to a config file such as nginx.conf:

sudo nginx -s reload
## OR ##
sudo systemctl reload nginx
## OR ##
sudo service nginx reload

error log

sudo tail -f /var/log/nginx/error.log

It is also possible to use the systemd systemctl and journalctl commands for details on errors:

$ sudo systemctl status nginx.service
$ sudo journalctl -xe

Systemctl is a Linux command-line utility used to control and manage systemd and its services. You can think of systemctl as the control interface for the systemd init system, allowing you to communicate with systemd and perform operations. systemd itself is the successor of the traditional init.


How to Enable Services at Boot

If you want a specific service to run during system startup, you can use the enable command.

For example:

sudo systemctl enable nginx

The above command, however, does not enable the service during an active session. To do this, add the --now flag.

sudo systemctl enable nginx --now

Allow Nginx Traffic through a Firewall

You can generate a list of the firewall rules using the following command:

sudo ufw app list

This should generate a list of application profiles. On the list, you should see the entries related to Nginx:

To allow normal HTTP traffic to your Nginx server, use the Nginx HTTP profile with the following command:

sudo ufw allow 'Nginx HTTP'

To check the status of your firewall, use the following command:

sudo ufw status

It should display a list of the kind of HTTP web traffic allowed to different services. Nginx HTTP should be listed as ALLOW and Anywhere.

https://phoenixnap.com/kb/install-nginx-on-ubuntu

Test Nginx in a Web Browser

Open a web browser, such as Firefox.

Enter your system’s IP address in the address bar or type localhost.

Your browser should display a page welcoming you to Nginx.

Define Server Blocks

Nginx uses a configuration file to determine how it behaves. One way to use the configuration file is to define server blocks, which work similarly to an Apache Virtual Host.

Nginx is designed to act as a front for multiple servers, which is done by creating server blocks.

By default, the main Nginx configuration file is located at /etc/nginx/nginx.conf. Server block configuration files are located at /etc/nginx/sites-available.

To view the contents of the default server block configuration file, enter the following command in a terminal:

sudo vi /etc/nginx/sites-available/default

This should open the default configuration file in the Vi text editor, which should look something like this:

# Default server configuration
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}

Create a Sample Server Block

Set up an HTML File

Going through a sample configuration is helpful. In a terminal window, enter the following command to create a “test” directory to work with:

sudo mkdir /var/www/example

Create and open a basic HTML index file to work as a test webpage:

sudo vi /var/www/example/index.html

In the Vi text editor (you can substitute your preferred text editor if you’d like), enter the following:

Welcome to the Example Website!

Save the file and exit.

Set up a Simple Server Block

Use the following command to create a new server block file for our Test website:

sudo vi /etc/nginx/sites-available/example.com

This should launch the Vi text editor and create a new server block file. Enter the following lines into the text file:

server {
    listen 80;
    root /var/www/example;
    index index.html;
    server_name www.example.com;
}
This tells Nginx to look in the /var/www/example directory for the files to serve, and to use the index.html file we created as the front page for the website. Save the file and exit.

Create a Symbolic Link to Activate Server Block

In the terminal window, enter the following command:

sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

This creates a link and enables your test website in Nginx. Restart the Nginx service to apply the changes:

sudo systemctl restart nginx

Start Testing

In a browser window, visit www.example.com.

Nginx should intercept the request, and display the text we entered in the HTML file.

NGINX Configuration: Understanding Directives

Every NGINX configuration file is found in the /etc/nginx/ directory, with the main configuration file located at /etc/nginx/nginx.conf.

NGINX configuration options are known as "directives": these are arranged into groups, known interchangeably as blocks or contexts.

https://www.plesk.com/blog/various/nginx-configuration-guide/

Nginx configuration file locations


Serving custom content and setting up virtual host


Creating our own website

The default page is placed in /var/www/html/. You can place your static pages here, or use a virtual host and place them in another location.

Virtual host is a method of hosting multiple domain names on the same server.

Let’s create simple HTML page in /var/www/tutorial/ (it can be anything you want). Create index.html file in this location.

cd /var/www
sudo mkdir tutorial
cd tutorial
sudo "${EDITOR:-vi}" index.html

Paste the following to the index.html file:

<!doctype html>
<html>
<head><meta charset="utf-8"><title>Hello, Nginx!</title></head>
<body>
<h1>Hello, Nginx!</h1>
<p>We have just configured our Nginx web server on Ubuntu Server!</p>
</body>
</html>

Save this file. In the next step we are going to set up a virtual host to make Nginx serve pages from this location.

Setting up virtual host

To set up a virtual host, we need to create a file in the /etc/nginx/sites-enabled/ directory.

For this tutorial, we will make our site available on port 81 instead of the standard port 80. You can change this if you'd like.

cd /etc/nginx/sites-enabled
sudo "${EDITOR:-vi}" tutorial
server {
       listen 81;
       listen [::]:81;

       server_name example.ubuntu.com;

       root /var/www/tutorial;
       index index.html;

       location / {
               try_files $uri $uri/ =404;
       }
}
root is the directory where we placed our .html file. index specifies the file served when visiting the root directory of the site. server_name can be anything you want, because you aren't pointing it at any real domain yet.

Activating virtual host and testing results

To make our site work, simply restart the Nginx service.

sudo service nginx restart

Let's check that everything works as it should. Open our newly created site in a web browser. Remember that we used port 81.

Adjusting the Firewall

Before testing Nginx, the firewall software needs to be adjusted to allow access to the service. Nginx registers itself as a service with ufw upon installation, making it straightforward to allow Nginx access.

List the application configurations that ufw knows how to work with by typing the following:

sudo ufw app list


Your output should be a list of the application profiles:

Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS

This list displays three profiles available for Nginx.

It is recommended that you enable the most restrictive profile that will still allow the traffic you’ve configured. Since you haven’t configured SSL for your server yet in this guide, you’ll only need to allow traffic on port 80.

You can enable this by typing the following:

sudo ufw allow 'Nginx HTTP'

Then, verify the change:

sudo ufw status


You should receive a list of HTTP traffic allowed in the output.

You can access the default Nginx landing page to confirm that the software is running properly by navigating to your server’s IP address. If you do not know your server’s IP address, you can get it a few different ways.

Try typing the following at your server’s command prompt:

ip addr show eth0 | grep inet | awk '{ print $2; }' | sed 's/\/.*$//'


You will receive a few lines. You can try each in your web browser to confirm if they work.

An alternative is running the following command, which should generate your public IP address as identified from another location on the internet:

curl -4 icanhazip.com


When you have your server’s IP address, enter it into your browser’s address bar:


You should receive the default Nginx landing page:

By default, Nginx is configured to start automatically when the server boots. If this is not what you want, you can disable this behavior by typing the following:

sudo systemctl disable nginx


To re-enable the service to start up at boot, you can type the following:

sudo systemctl enable nginx

install nginx ubuntu


Next, enable the file by creating a link from it to the sites-enabled directory, which Nginx reads from during startup:

sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/


Two server blocks are now enabled and configured to respond to requests based on their listen and server_name directives:

To avoid a possible hash bucket memory problem that can arise from adding additional server names, it is necessary to adjust a single value in the /etc/nginx/nginx.conf file. Open the file:

sudo nano /etc/nginx/nginx.conf


Find the server_names_hash_bucket_size directive and remove the # symbol to uncomment the line:


http {
    server_names_hash_bucket_size 64;
    ...
}

Save and close the file when you are finished.

Next, test to make sure that there are no syntax errors in any of your Nginx files:

sudo nginx -t


If there aren’t any problems, restart Nginx to enable your changes:

sudo systemctl restart nginx

Server Logs

Compiling and Installing NGINX from Source


update and install dependencies

sudo apt-get install build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev libgd-dev libxml2 libxml2-dev uuid-dev

Download NGINX Source Code and Configure

We now have all the necessary tools to compile NGINX.

Now, we need to download the NGINX source from their Official website.

Run the following command to download the source code.

wget http://nginx.org/download/nginx-1.20.0.tar.gz

We now have the NGINX source code in tarball format. We can extract it using this command:

    tar -zxvf nginx-1.20.0.tar.gz

Go to the extracted directory by using this command

    cd nginx-1.20.0

Now we have to use the configure flag for configuring NGINX by using this command.

./configure --prefix=/var/www/html --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --with-pcre --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-http_ssl_module --with-http_image_filter_module=dynamic --modules-path=/etc/nginx/modules --with-http_v2_module --with-stream=dynamic --with-http_addition_module --with-http_mp4_module

In the above command, we configured a custom path for the NGINX configuration file and the access and error log paths, along with some NGINX modules.

Build NGINX & Adding Modules

There are many configuration options available in NGINX, you can use it as per your need. To find all the configuration options available in NGINX by visiting nginx.org.

There are some modules that come pre-installed in NGINX.

Modules Built by Default

If you don't need a module that is built by default, you can disable it by naming it with the --without-<MODULE-NAME> option on the configure script, for example:

    ./configure --without-http_empty_gif_module

Compiling the NGINX source code

After the custom configuration is complete, we can compile the NGINX source code using this command:

    make

This will take quite a bit of time, and once that's done, install the compiled source code using this command.

    make install

Start NGINX by using this command:

    nginx

Now we have successfully installed NGINX. To verify this, check the NGINX version by using this command.

    nginx -V

Or you can visit your IP to see the NGINX holding page.


Standard NGINX Command-Line Tools

Before we start, however, let's quickly see how to use the standard NGINX command-line tools to execute service signals.

We can confirm that NGINX is running by checking for the process.

ps aux | grep nginx

We can see the master and worker process here.

So with NGINX running in the background, let's see how to send it a stop signal. Using the standard command-line tools.

For example, with NGINX running, we can send it the stop signal using this command.

nginx -s stop

You can check the NGINX status by visiting your IP address; you will not see the holding page, as NGINX is now stopped (or, more accurately, terminated).

So next, let's add that systemd service.

To enable the service, we're going to have to add a small script, which is the same across operating systems.

Create an Nginx systemd unit file by using nano editor

nano /lib/systemd/system/nginx.service

and paste this script

[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network-online.target remote-fs.target nss-lookup.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/usr/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID

[Install]
WantedBy=multi-user.target


You can change the PIDfile location as per your custom configuration path.

Now, save the file by pressing CTRL+X, then Y, then Enter.

Start your NGINX by using systemd with this command.

systemctl restart nginx

Now you can manage your NGINX by using Systemd.

You can also check the status of NGINX whether it is running or not by using this command.

systemctl status nginx

This gives us a really informative printout of the NGINX server status.


Enable NGINX on Boot

Now, as we mentioned, the other very useful feature of a systemd service is enabling NGINX to start automatically when the system boots. At the moment, when this machine is shut down or rebooted, NGINX will no longer be running.

Obviously not good for a web server in particular.

So to enable start-up on boot, run this command.

systemctl enable nginx

So we get confirmation that a start-up symlink has been created for this service.

We can test this by rebooting the machine.

That's it!

Mounting a volume in a Docker container


NGINX and PHP-FPM: what should my permissions be?


Why is enabling the user www-data dangerous?

https://forums.raspberrypi.com/viewtopic.php?t=99126

Everything that runs on a web server runs with the userid = www-data and group = www-data. You've now allowed that userid complete control of your machine. So when there's any small security bug in anything you run on that web server the external hackers get 100% control. That is less than desirable and an enormous red flag to anyone who is in any way serious about security.

Secure Shell (SSH)

SSH, or Secure Shell Protocol, is a remote administration protocol that allows users to access, control, and modify their remote servers over the internet.

The SSH command consists of 3 distinct parts:

ssh {user}@{host}

The ssh command instructs your system that you want to open an encrypted Secure Shell connection. {user} represents the account you want to access. For example, you may want to access the root user, which is basically synonymous with the system administrator with complete rights to modify anything on the system. {host} refers to the computer you want to access. This can be an IP address or a domain name (e.g. www.xyzdomain.com).

When you hit enter, you will be prompted to enter the password for the requested account. When you type it in, nothing will appear on the screen, but your password is, in fact, being transmitted. Once you’re done typing, hit enter once again. If your password is correct, you will be greeted with a remote terminal window.
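The {user}@{host} split the ssh command performs can be illustrated with plain shell parameter expansion; alice and example.com below are made-up values:

```shell
# Split a user@host target the way ssh interprets it.
target="alice@example.com"
user="${target%%@*}"   # everything before the @
host="${target#*@}"    # everything after the @
echo "user=$user host=$host"
```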

Configure Password-Based SSH Authentication or How to Enable/Disable password based authentication for SSH access to server

Password authentication against SSH isn’t bad, but creating a long and complicated password may also encourage you to store it in an unsecured manner. Using encryption keys to authenticate SSH connections is a more secure alternative.


To enable password authentication:


vim /etc/ssh/sshd_config

PasswordAuthentication yes

/etc/init.d/sshd restart


To disable it:

vim /etc/ssh/sshd_config

PasswordAuthentication no

service sshd restart


SSH passphrase

A passphrase is similar to a password. However, a password generally refers to something used to authenticate or log into a system, while a passphrase generally refers to a secret used to protect an encryption key. Commonly, an actual encryption key is derived from the passphrase and used to encrypt the protected resource.

Adding or changing a passphrase

You can change the passphrase for an existing private key without regenerating the keypair by typing the following command:

$ ssh-keygen -p -f ~/.ssh/id_ed25519
> Enter old passphrase:[Type old passphrase]
> Key has comment 'your_email@example.com'
> Enter new passphrase (empty for no passphrase):[Type new passphrase]
> Enter same passphrase again:[Repeat the new passphrase]
> Your identification has been saved with the new passphrase.
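The same passphrase change can be done non-interactively with the -P (old passphrase) and -N (new passphrase) flags, which is handy for scripting. This sketch assumes ssh-keygen is installed and uses a throwaway key and made-up passphrases:

```shell
# Create a throwaway key with a passphrase, then change the passphrase
# non-interactively; nothing under the real ~/.ssh is touched.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "oldsecret" -f "$keydir/id_ed25519"
ssh-keygen -q -p -P "oldsecret" -N "newsecret" -f "$keydir/id_ed25519" && changed=yes
rm -rf "$keydir"
echo "$changed"
```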


Listing OpenSSH private and public ssh keys

You can use the ls -l $HOME/.ssh/ command to see the following files:

For example:

ls -l ~/.ssh/
ls -l ~/.ssh/id_*

Typically private key names start with id_rsa, id_ed25519, or id_dsa, and they are protected with a passphrase. However, users can name their keys anything.

How to change a ssh passphrase for private key

The procedure is as follows for OpenSSH to change a passphrase:

  1. Open the terminal application
  1. To change the passphrase for default SSH private key:

    ssh-keygen -p

  1. First, enter the old passphrase and then type a new passphrase two times.
  1. You can specify the filename of the key file:

    ssh-keygen -p -f ~/.ssh/intel_nuc_debian

Let us see all examples for changing a passphrase with the ssh-keygen command in detail.

OpenSSH Change a Passphrase ssh-keygen command

The -p option requests changing the passphrase of a private key file instead of creating a new private key. The program will prompt for the file containing the private key, for the old passphrase, and twice for the new passphrase. Use the -f option to specify the filename of the key file.

For example, change directory to $HOME/.ssh. Open the Terminal app and then type the cd command:

$ cd ~/.ssh/

To change a DSA passphrase, enter:

$ ssh-keygen -f id_dsa -p

For an ed25519 key:

$ ssh-keygen -f id_ed25519 -p

To change an RSA passphrase, enter:

$ ssh-keygen -f id_rsa -p

Removing a Passphrase with ssh-keygen

The syntax is the same, but to remove the existing passphrase, hit the Enter key twice at the steps where you would enter the new passphrase and then confirm it:

ssh-keygen -f ~/.ssh/id_rsa -p
ssh-keygen -f ~/.ssh/aws_cloud_automation -p

However, you can state an empty passphrase with the -N "" option as follows, to save hitting the Enter key twice:

ssh-keygen -p -N ""
ssh-keygen -f ~/.ssh/aws_cloud_automation -p -N ""

Permissions for .ssh folder and key files


Typically, the permissions need to be: 700 for the ~/.ssh directory, 600 for private key files, and 644 for public key files.

Use the following commands to change the permissions:

sudo chmod 700 ~/.ssh
sudo chmod 644 ~/.ssh/id_example.pub
sudo chmod 600 ~/.ssh/id_example

Summary based on the ssh man page (shown by man ssh):

| Directory or File | Man Page | Recommended Permissions | Mandatory Permissions |
|---|---|---|---|
| ~/.ssh/ | There is no general requirement to keep the entire contents of this directory secret, but the recommended permissions are read/write/execute for the user, and not accessible by others. | 700 | |
| ~/.ssh/authorized_keys | This file is not highly sensitive, but the recommended permissions are read/write for the user, and not accessible by others. | 600 | |
| ~/.ssh/config | Because of the potential for abuse, this file must have strict permissions: read/write for the user, and not accessible by others. It may be group-writable provided that the group in question contains only the user. | | 600 |
| ~/.ssh/identity, ~/.ssh/id_dsa, ~/.ssh/id_rsa | These files contain sensitive data and should be readable by the user but not accessible by others (read/write/execute). | | 600 |
| ~/.ssh/identity.pub, ~/.ssh/id_dsa.pub, ~/.ssh/id_rsa.pub | Contains the public key for authentication. These files are not sensitive and can (but need not) be readable by anyone. | 644 | |
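The recommended modes can be demonstrated without touching your real ~/.ssh by applying them in a scratch directory (GNU stat's %a format is assumed, as on any standard Linux box):

```shell
# Scratch stand-in for ~/.ssh with one private/public key pair.
scratch=$(mktemp -d)
mkdir "$scratch/dotssh"
touch "$scratch/dotssh/id_example" "$scratch/dotssh/id_example.pub"
chmod 700 "$scratch/dotssh"                 # directory: user-only
chmod 600 "$scratch/dotssh/id_example"      # private key: user read/write
chmod 644 "$scratch/dotssh/id_example.pub"  # public key: world-readable
modes=$(stat -c '%a' "$scratch/dotssh" "$scratch/dotssh/id_example" "$scratch/dotssh/id_example.pub" | xargs)
rm -rf "$scratch"
echo "$modes"
```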


Fail2ban

Fail2ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. Written in the Python programming language, it can run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.

What is fail2ban for SSH?

Fail2Ban is an intrusion prevention framework written in Python that protects Linux systems and servers from brute-force attacks. You can set up Fail2Ban to provide brute-force protection for SSH on your server. This ensures that your server is secure from brute-force attacks.

Authorized keys or How To Configure SSH Key-Based Authentication on a Linux Server


Step 1 — Creating SSH Keys


Step 2 — Copying an SSH Public Key to Your Server

Copying Your Public Key Using ssh-copy-id

ssh-copy-id username@remote_host

Copying Your Public Key Using SSH

If you do not have ssh-copy-id available, but you have password-based SSH access to an account on your server, you can upload your keys using a conventional SSH method.

We can do this by outputting the content of our public SSH key on our local computer and piping it through an SSH connection to the remote server. On the other side, we can make sure that the ~/.ssh directory exists under the account we are using and then output the content we piped over into a file called authorized_keys within this directory.

We will use the >> redirect symbol to append the content instead of overwriting it. This will let us add keys without destroying previously added keys.

cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
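The key property of >> is that it appends rather than overwrites, so previously authorized keys survive. A local simulation of that step (the key strings and filename are stand-ins, not real keys):

```shell
# Append two keys in succession; the second append must not clobber the first.
authfile=$(mktemp)
echo "ssh-ed25519 AAAA...key-one laptop"  >> "$authfile"
echo "ssh-ed25519 AAAA...key-two desktop" >> "$authfile"
keycount=$(wc -l < "$authfile")
rm -f "$authfile"
echo "$keycount"
```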

Copying Your Public Key Manually

If you do not have password-based SSH access to your server available, you will have to do the above process manually.

The content of your id_rsa.pub file will have to be added to a file at ~/.ssh/authorized_keys on your remote machine somehow.

To display the content of your id_rsa.pub key, type this into your local computer:

1. cat ~/.ssh/id_rsa.pub

Access your remote host using whatever method you have available. This may be a web-based console provided by your infrastructure provider.

Once you have access to your account on the remote server, you should make sure the ~/.ssh directory is created. This command will create the directory if necessary, or do nothing if it already exists:

mkdir -p ~/.ssh


Now, you can create or modify the authorized_keys file within this directory. You can add the contents of your id_rsa.pub file to the end of the authorized_keys file, creating it if necessary, using this:

echo public_key_string >> ~/.ssh/authorized_keys


In the above command, substitute the public_key_string with the output from the cat ~/.ssh/id_rsa.pub command that you executed on your local system. It should start with ssh-rsa AAAA... or similar.

If this works, you can move on to test your new key-based SSH authentication.

Step 3 — Authenticating to Your Server Using SSH Keys

The process is mostly the same:

1. ssh username@remote_host

Step 4 — Disabling Password Authentication on your Server

If you were able to login to your account using SSH without a password, you have successfully configured SSH key-based authentication to your account. However, your password-based authentication mechanism is still active, meaning that your server is still exposed to brute-force attacks.

Before completing the steps in this section, make sure that you either have SSH key-based authentication configured for the root account on this server, or preferably, that you have SSH key-based authentication configured for an account on this server with sudo access. This step will lock down password-based logins, so ensuring that you will still be able to get administrative access is essential.

Once the above conditions are true, log into your remote server with SSH keys, either as root or with an account with sudo privileges. Open the SSH daemon’s configuration file:

sudo nano /etc/ssh/sshd_config


Inside the file, search for a directive called PasswordAuthentication. This may be commented out. Uncomment the line by removing any # at the beginning of the line, and set the value to no. This will disable your ability to log in through SSH using account passwords:


PasswordAuthentication no

Save and close the file when you are finished. To actually implement the changes we just made, you must restart the service.

On most Linux distributions, you can issue the following command to do that:

sudo systemctl restart ssh


After completing this step, you’ve successfully transitioned your SSH daemon to only respond to SSH keys.

What's the difference between id_rsa.pub and id_dsa.pub?

id_rsa.pub and id_dsa.pub are the public keys for id_rsa and id_dsa.

If you are asking in relation to SSH, id_rsa is an RSA key and can be used with the SSH protocol 1 or 2, whereas id_dsa is a DSA key and can only be used with SSH protocol 2.

What is difference between Id_rsa and Id_rsa pub?

In the context of ssh and related software, id_rsa is your RSA *private* key, used to sign and authenticate your connection to a remote host.

id_rsa.pub is your RSA *public* key, which, when supplied to the remote host (via an ‘authorized keys’ file, publishing it in the DNS, or other means) allows the host to authenticate your connection as being originated by you, and decide whether or not to accept it as a result.

Don’t get the two confused, and maintain tight control over id_rsa, because anybody who gets that can use it to impersonate you.

What is the difference between ssh_config and sshd_config

When you work on a Linux system, you use the SSH program on a daily basis. You will be required to configure the ssh client or the ssh daemon on your Linux box to make it work properly. In each Linux distribution (Debian, Red Hat and so on), there are two configuration files for SSH: ssh_config and sshd_config. What is the difference between them?

ssh_config: configuration file for the ssh client on the host machine you are running. For example, if you want to ssh to another remote host machine, you use an SSH client. Every setting for this SSH client comes from ssh_config, such as port number, protocol version, and encryption/MAC algorithms.

sshd_config: configuration file for the sshd daemon (the program that listens to any incoming connection request to the ssh port) on the host machine. That is to say, if someone wants to connect to your host machine via SSH, their SSH client settings must match your sshd_config settings in order to communicate with you, such as port number, version and so on.

sshd_config is the configuration file for the OpenSSH server.

ssh_config is the configuration file for the OpenSSH client (https://www.ssh.com/academy/ssh/config).

How ssh_config Works

The ssh client reads configuration from three places, in the following order of precedence:

  1. Command line flags supplied to ssh directly
  1. User-specific configuration in your home directory, ~/.ssh/config
  1. System-wide configuration in /etc/ssh/ssh_config
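For reference, a typical per-user client configuration entry (in ~/.ssh/config) looks like this; the host alias, address, port, and key path are made-up examples:

```
Host myserver
    HostName 203.0.113.10
    User alice
    Port 2222
    IdentityFile ~/.ssh/id_ed25519
```

With this in place, running ssh myserver expands to the full set of options above.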

What is PPA?

PPA stands for Personal Package Archive. The PPA allows application developers and Linux users to create their own repositories to distribute software. With PPA, you can easily get newer software version or software that are not available via the official Ubuntu repositories

Concept of repositories and package management

A repository is a collection of files that has information about various software, their versions and some other details like the checksum. Each Ubuntu version has its own official set of four repositories: Main, Universe, Restricted, and Multiverse.

You can browse the repositories for every Ubuntu version on Ubuntu's archive site, down to individual repositories such as the Ubuntu 16.04 main repository.

So basically it’s a web URL that has information about the software. How does your system know where these repositories are?

This information is stored in the sources.list file in the directory /etc/apt. If you look at its content, you’ll see that it has the URLs of the repositories. Lines starting with # are treated as comments and ignored.
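A quick sketch of that format: filtering out lines that start with # (as apt effectively does) leaves only the active repository lines. The repository line below is an example, not necessarily what is in your sources.list:

```shell
# Build a two-line demo sources file: one comment, one active deb line.
demo=$(mktemp)
cat > "$demo" <<'EOF'
# deb http://archive.ubuntu.com/ubuntu focal universe
deb http://archive.ubuntu.com/ubuntu focal main
EOF
active=$(grep -cv '^#' "$demo")   # count lines NOT starting with #
rm -f "$demo"
echo "$active"
```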

How to use PPA? How does PPA work?


PPA, as I already told you, means Personal Package Archive. Mind the word ‘Personal’ here. That gives the hint that this is something exclusive to a developer and is not officially endorsed by the distribution.

Ubuntu provides a platform called Launchpad that enables software developers to create their own repositories. An end user, i.e. you, can add the PPA repository to your sources.list, and when you update your system, it will know about the availability of the new software, which you can then install using the standard sudo apt install command like this.

sudo add-apt-repository ppa:dr-akulavich/lighttable
sudo apt-get update
sudo apt-get install lighttable-installer

To summarize:

Basically, when you add a PPA using add-apt-repository, it will do the same action as if you manually run these commands:

deb http://ppa.launchpad.net/dr-akulavich/lighttable/ubuntu YOUR_UBUNTU_VERSION_HERE main
deb-src http://ppa.launchpad.net/dr-akulavich/lighttable/ubuntu YOUR_UBUNTU_VERSION_HERE main

The above two lines are the traditional way to add any repository to your sources.list. But a PPA does it automatically for you, without you having to look up the exact repository URL and operating system version.

One important thing to note here is that when you use a PPA, it doesn’t change your original sources.list. Instead, it creates two files in the /etc/apt/sources.list.d directory: a .list file and a backup file with the suffix ‘.save’.

“add-apt-repository command not found” Error

The add-apt-repository command is not installed by default. If you try to run this command you will get the “add-apt-repository command not found” error. If you get this error you should install this tool which is described in the following step.

Install add-apt-repository Command

The add-apt-repository command is provided with the package named software-properties-common. So we will install this package like below.

sudo apt install software-properties-common

install Java

Simplest method:

The Ubuntu repository offers two open-source Java packages: the Java Development Kit (OpenJDK) and the Java Runtime Environment (OpenJRE). You use the JRE for running Java-based applications, while the JDK is for developing and programming with Java. First, update the package index:

sudo apt update

Then you need to check if Java is already installed:

java -version
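A small, hedged helper for that check: command -v reports whether a java binary is already on PATH, which you can use in scripts to decide whether an install is needed at all.

```shell
# Detect whether java is already installed (on PATH) before installing one.
if command -v java >/dev/null 2>&1; then
  java_state="present"
else
  java_state="missing"
fi
echo "java is $java_state"
```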

Run the following command to install OpenJDK:

sudo apt install default-jre

This command installs the Java Runtime Environment (JRE). It allows you to run almost any Java software.

Now check the Java version:

java -version

To compile and run some specific Java programs in addition to the JRE, you may need the Java Development Kit (JDK). To install the JDK, run the following command, which also installs the JRE:

sudo apt install default-jdk

This command installs the Java Development Kit (JDK).

Now check JDK version with command:

javac -version

JDK is installed!

java installation PPA


Install Oracle Java 11

To download the official Oracle JDK, you first need to add a third-party repository.

We include instructions for installations from two different package repositories. You can decide which one you prefer.

Option 1: Download Oracle Java from Webupd8 PPA

1. First, add the required package repository by typing:

sudo add-apt-repository ppa:webupd8team/java

Hit Enter when prompted.

2. Make sure to update your system before initiating any installation:

sudo apt update

3. Now, you can install Java 11, the latest LTS version:

sudo apt install oracle-java11-installer

4. Optionally, you can set this Java version as the default with the following command:

sudo apt install oracle-java11-set-default

Option 2: Download Oracle Java from Linux Uprising PPA

1. Before adding the new repository, install the required packages if you do not have them on your system yet:

sudo apt install software-properties-common

2. Next, add the repository with the following command:

sudo add-apt-repository ppa:linuxuprising/java

3. Update the package list before installing any new software with:

sudo apt update

4. Then, download and install the latest version of Oracle Java (version number 11):

sudo apt install oracle-java11-installer

Install Specific Version of Java

If for some reason you do not wish to install the default or latest version of Java, you can specify the version number you prefer.

Install Specific Version of OpenJDK

You may decide to use Open JDK 8, instead of the default OpenJDK 11.

To do so, open the terminal and type in the following command:

sudo apt install openjdk-8-jdk

Verify the version of java installed with the command:

java -version

Install Specific Version of Oracle Java

When you download the Oracle Java packages from a third-party repository, you have to type out the version number as part of the code.

Therefore, if you want other versions of Java Oracle on your system, change that number accordingly.

The command for installing Oracle JDK is the following (the symbol # representing the Java version):

sudo apt install oracle-java#-installer

For instance, if you want to install Java 10, use the command:

sudo apt install oracle-java10-installer

How to Set Default Java Version

As you can have multiple versions of Java installed on your system, you can decide which one is the default one.

First, run a command that shows all the installed versions on your computer:

sudo update-alternatives --config java

The output shows that there are two alternatives on this system. These choices are represented by the numbers 1 (Java 11) and 2 (Java 8), while the asterisk marks the current default version.

As the output instructs, you can change the default version if you type its associated number (in this case, 1 or 2) and press Enter.

How to Set JAVA_HOME Environment Variable

The JAVA_HOME environment variable determines the location of your Java installation. The variable helps other applications access Java’s installation path easily.

1. To set up the JAVA_HOME variable, you first need to find where Java is installed. Use the following command to locate it:

sudo update-alternatives --config java

The Path section shows the locations, which are in this case:

  • /usr/lib/jvm/java-11-openjdk-amd64/bin/java (where OpenJDK 11 is located)
  • /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java (where OpenJDK 8 is located)

2. Once you see all the paths, copy the one for your preferred Java version.

3. Then, open the file /etc/environment with any text editor. In this example, we use Nano:

nano /etc/environment

4. At the end of the file, add a line which specifies the location of JAVA_HOME in the following manner (the path from step 1, minus the trailing /bin/java):

JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
How to Uninstall Java on Ubuntu

In case you need to remove any of the Java packages installed, use the apt remove command.

To remove OpenJDK 11, run the command:

sudo apt remove default-jdk

To uninstall OpenJDK 8:

sudo apt remove openjdk-8-jdk

How to Set Environment Variables in Linux

nano /etc/environment

Setting an Environment Variable

To set an environment variable, the export command is used. We give the variable a name, which is what is used to access it in shell scripts and configurations, and then a value to hold whatever data is needed in the variable.


For example, to set the environment variable for the home directory of a manual OpenJDK 11 installation, we would use something similar to the following.

export JAVA_HOME=/opt/openjdk11

To output the value of the environment variable from the shell, we use the echo command and prepend the variable’s name with a dollar ($) sign:

echo $JAVA_HOME

As long as the variable has a value, it will be echoed out. If no value is set, an empty line will be displayed instead.

Unsetting an Environment Variable

To unset an environment variable, which removes its existence altogether, we use the unset command. Simply replacing the environment variable with an empty string will not remove it, and in most cases will likely cause problems with scripts or applications expecting a valid value.
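The difference between an empty assignment and unset can be shown with the ${VAR+set} expansion, which yields "set" whenever the variable exists, even with an empty value (DEMO_VAR is a throwaway name):

```shell
# An empty assignment leaves the variable set; unset actually removes it.
export DEMO_VAR=""
state_after_empty=${DEMO_VAR+set}    # "set": variable exists, value is empty
unset DEMO_VAR
state_after_unset=${DEMO_VAR+set}    # "": variable no longer exists
echo "after empty: ${state_after_empty:-unset}, after unset: ${state_after_unset:-unset}"
```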

The following syntax is used to unset an environment variable:

unset VARIABLE_NAME

For example, to unset the JAVA_HOME environment variable, we would use the following command:

unset JAVA_HOME
Listing All Set Environment Variables

To list all environment variables, we simply use the set command without any arguments:

set

Persisting Environment Variables for a User

When an environment variable is set from the shell using the export command, its existence ends when the user’s session ends. This is problematic when we need the variable to persist across sessions.

To make an environment persistent for a user’s environment, we export the variable from the user’s profile script.

  1. Open the current user’s profile into a text editor
    vi ~/.bash_profile
  1. Add the export command for every environment variable you want to persist.
    export JAVA_HOME=/opt/openjdk11
  1. Save your changes.

Adding the environment variable to a user’s bash profile alone will not export it automatically. However, the variable will be exported the next time the user logs in.

To immediately apply all changes to bash_profile, use the source command.

source ~/.bash_profile
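One refinement worth considering: appending blindly on every run duplicates the line. A hedged sketch of an idempotent append, using a scratch file as a stand-in for ~/.bash_profile:

```shell
# Append the export line only if it is not already present (-qxF: quiet,
# whole-line, fixed-string match), so repeated runs do not duplicate it.
profile=$(mktemp)
line='export JAVA_HOME=/opt/openjdk11'
grep -qxF "$line" "$profile" || echo "$line" >> "$profile"
grep -qxF "$line" "$profile" || echo "$line" >> "$profile"   # second run: no-op
count=$(grep -cxF "$line" "$profile")
rm -f "$profile"
echo "$count"
```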

Export Environment Variable

Export is a built-in shell command for Bash that is used to export an environment variable to allow new child processes to inherit it.

To export an environment variable, you run the export command while setting the variable.

export MYVAR="my variable value"

We can view a complete list of exported environment variables by running the export command without any arguments.


To view all exported variables in the current shell you use the -p flag with export.

export -p

Setting Permanent Global Environment Variables for All Users

A permanent environment variable that persists after a reboot can be created by adding it to the default profile. This profile is loaded by all users on the system, including service accounts.

All global profile settings are stored under /etc/profile. And while this file can be edited directly, it is actually recommended to store global environment variables in the directory /etc/profile.d, where you will find a list of files that are used to set environment variables for the entire system.

  1. Create a new file under /etc/profile.d to store the global environment variable(s). The name of the file should be contextual so others may understand its purpose. For demonstration, we will create a permanent environment variable for HTTP_PROXY.
    sudo touch /etc/profile.d/http_proxy.sh
  1. Open the default profile into a text editor.
    sudo vi /etc/profile.d/http_proxy.sh
  1. Add new lines to export the environment variables
    export HTTP_PROXY=http://my.proxy:8080
    export HTTPS_PROXY=https://my.proxy:8080
    export NO_PROXY=localhost,::1,.example.com
  1. Save your changes and exit the text editor
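The files under /etc/profile.d are ordinary shell scripts that get sourced at login. The mechanism can be simulated by sourcing a scratch script (the proxy URL is the example value from the steps above):

```shell
# Source a scratch script the way the login shell sources profile.d entries.
snippet=$(mktemp)
echo 'export HTTP_PROXY=http://my.proxy:8080' > "$snippet"
. "$snippet"            # the exported variable is now in this shell
rm -f "$snippet"
echo "$HTTP_PROXY"
```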



What is Tomcat used for?

It is mainly used to provide the foundation for hosting Java servlets. Apache Tomcat sits in the middle while Java Server Pages (JSP) and servlets produce the dynamic pages. It supports server-side Java technologies that let developers create dynamic content independently.

Benefits of Apache Tomcat

  • Tomcat is a quick and easy way to run your applications in Ubuntu. It provides quick loading and helps run a server more efficiently
  • Tomcat contains a suite of comprehensive, built-in customization choices which enable its users to work flexibly
  • Tomcat is a free, open-source application. It offers great customization through access to the code
  • Tomcat offers its users an extra level of security
  • Thanks to its stability, even if you face issues in Tomcat, it doesn’t stop the rest of the server from working
  • Apache Tomcat is the most widely adopted application and web server in production today.

Installing tomcat


Step 1: Install Java

sudo apt update

Install the OpenJDK package by running:

sudo apt install default-jdk

Step 2: Create Tomcat User

For security, Tomcat should run under its own dedicated user rather than root. First, create a new tomcat group that will run the service:

sudo groupadd tomcat

Now, the next procedure is to create a new tomcat user, a member of the tomcat group, with a home directory of /opt/tomcat for running the Tomcat service:

sudo useradd -s /bin/false -g tomcat -d /opt/tomcat tomcat

When /sbin/nologin is set as the shell, if user with that shell logs in, they'll get a polite message saying 'This account is currently not available.' This message can be changed with the file /etc/nologin.txt.

/bin/false is just a binary that immediately exits, returning false, when it's called, so when someone who has false as shell logs in, they're immediately logged out when false exits. Setting the shell to /bin/true has the same effect of not allowing someone to log in but false is probably used as a convention over true since it's much better at conveying the concept that person doesn't have a shell.
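The exit-status behaviour described above is easy to verify: /bin/false exits non-zero immediately and /bin/true exits zero, and that is the entire "session" a user with one of them as a shell would get.

```shell
# Observe the exit codes of the two no-op "shells".
/bin/false; false_status=$?
/bin/true;  true_status=$?
echo "false exits with $false_status, true exits with $true_status"
```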



Failed to start tomcat: Unit tomcat.service not found


The "Permission denied" error for the logs directory most likely means that the OS user running the Tomcat process does not have write permission on that directory.

Assuming you are running Tomcat with user "tomcat7", try setting the ownership and filesystem permissions of the logs directory, e.g.:

sudo chown -R tomcat7:tomcat7 /usr/share/tomcat7/logs
sudo chmod -R u+rw /usr/share/tomcat7/logs

If you are running Tomcat with a different OS user, replace tomcat7:tomcat7 by the username and primary group of that user, respectively.

Step 3: Install Tomcat on Ubuntu

The best way to install Tomcat 9 on Ubuntu is to download the latest binary release from the Tomcat 9 downloads page and configure it manually. If a newer release than 9.0.60 is available, use the latest stable version. Just copy the link of the core tar.gz file under the Binary Distributions section.

Now, change to the /tmp directory on your server to download the items which you won’t need after extracting the Tomcat contents:

cd /tmp

To download from the copied link (from Tomcat website), use the following curl command:

curl -O https://dlcdn.apache.org/tomcat/tomcat-9/v9.0.63/bin/apache-tomcat-9.0.63.tar.gz

Step 4: Update Permissions

Now that you have downloaded Tomcat, you need to set up the tomcat user to have full access to the Tomcat installation directory. Follow the steps below:

sudo mkdir /opt/tomcat
cd /opt/tomcat
sudo tar xzvf /tmp/apache-tomcat-9.0.*tar.gz -C /opt/tomcat --strip-components=1
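The --strip-components=1 flag is what drops the top-level apache-tomcat-* directory during extraction, so the contents land directly in /opt/tomcat. A safe local demo with a scratch tarball (GNU tar assumed; the version directory name is just an example):

```shell
# Build a fake tomcat tarball, then extract it with the top level stripped.
work=$(mktemp -d)
mkdir -p "$work/apache-tomcat-9.0.63/bin"
echo demo > "$work/apache-tomcat-9.0.63/bin/startup.sh"
tar -czf "$work/tc.tar.gz" -C "$work" apache-tomcat-9.0.63
mkdir "$work/opt"
tar -xzf "$work/tc.tar.gz" -C "$work/opt" --strip-components=1
top=$(ls "$work/opt")    # bin/ now sits at the top, no version directory
rm -rf "$work"
echo "$top"
```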

Now, give the Tomcat group ownership over the entire installation directory with the chgrp command:

sudo chgrp -R tomcat /opt/tomcat

Next, you need to give the Tomcat user access to the conf directory to view its contents and execute access to the directory itself:

sudo chmod -R g+r conf
sudo chmod g+x conf

Make the Tomcat user the owner of the web apps, work, temp, and logs directories:

sudo chown -R tomcat webapps/ work/ temp/ logs/

Step 5: Create a systemd Unit File

We will need to create a new unit file to run Tomcat as a service. Open your text editor and create a file named tomcat.service in /etc/systemd/system/:

sudo nano /etc/systemd/system/tomcat.service

Next, paste the following configuration:

[Unit]
Description=Apache Tomcat Web Application Container
After=network.target

[Service]
Type=forking
User=tomcat
Group=tomcat
Environment='CATALINA_PID=/opt/tomcat/temp/tomcat.pid'
Environment='CATALINA_HOME=/opt/tomcat'
Environment='CATALINA_BASE=/opt/tomcat'
Environment='CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom'
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh

[Install]
WantedBy=multi-user.target

Save and close the file after finishing the given commands above.

Next, notify the system that you have created a new file by issuing the following command:

sudo systemctl daemon-reload

Now start the Tomcat service through systemd:

sudo systemctl start tomcat

Step 6: Adjust the Firewall

It is essential to adjust the firewall so the requests get to the service. Tomcat uses port 8080 to accept conventional requests. Allow traffic to that port by using UFW:

sudo ufw allow 8080

You can now reach the Tomcat splash page by visiting your server's domain or IP address followed by :8080 in a web browser – http://IP:8080

Step 7: Configure the Tomcat Web Management Interface

Follow the command below to add a login to your Tomcat user and edit the tomcat-users.xml file:

sudo nano /opt/tomcat/conf/tomcat-users.xml

Now, define the user who can access the files and add username and passwords:

tomcat-users.xml — Admin User
<tomcat-users . . .>
    <user username="admin" password="password" roles="manager-gui,admin-gui"/>
</tomcat-users>

By default, newer versions of Tomcat restrict access to the Manager and Host Manager apps to connections coming from the server itself. To lift that restriction, open each app's context.xml and comment out the IP address Valve. For the Manager app, type:

sudo nano /opt/tomcat/webapps/manager/META-INF/context.xml

For the Host Manager app, type:

sudo nano /opt/tomcat/webapps/host-manager/META-INF/context.xml

To restart the Tomcat service and view the effects:

sudo systemctl restart tomcat

Step 8: Access the Online Interface

Now that you already have a user, you can access the web management interface in a browser. Once again, you can access the interface by providing your server’s domain name or IP address followed by port 8080 in your browser – http://server_domain_or_IP:8080

Let’s take a look at the Manager App, accessible via the link – http://server_domain_or_IP:8080/manager/html.

Make sure that you enter the account credentials you added to the tomcat-users.xml file.

We use the Web Application Manager to manage our Java applications. You can Start, Stop, Reload, Deploy, and Undeploy all apps here. Lastly, it provides data about your server at the bottom of the page.

Now let’s look at the Host Manager, accessible via http://server_domain_or_IP:8080/host-manager/html/

From the Virtual Host Manager page, you can also add new virtual hosts to serve your applications from.

How To Use Rsync to Sync Local and Remote Directories


Rsync, which stands for remote sync, is a remote and local file synchronization tool. It uses an algorithm to minimize the amount of data copied by only moving the portions of files that have changed.

In this tutorial, we’ll define Rsync, review the syntax when using rsync, explain how to use Rsync to sync with a remote system, and other options available to you.

Rsync is a very flexible network-enabled syncing tool. Due to its ubiquity on Linux and Unix-like systems and its popularity as a tool for system scripts, it’s included on most Linux distributions by default.

Understanding Rsync Syntax

The syntax for rsync operates similarly to that of other tools, such as ssh, scp, and cp.

First, change into your home directory by running the following command:

cd ~


Then create a test directory:

mkdir dir1


Create another test directory:

mkdir dir2


Now add some test files:

touch dir1/file{1..100}


There’s now a directory called dir1 with 100 empty files in it. Confirm by listing out the files:

ls dir1


file1    file18  file27  file36  file45  file54  file63  file72  file81  file90
file10   file19  file28  file37  file46  file55  file64  file73  file82  file91
file100  file2   file29  file38  file47  file56  file65  file74  file83  file92
file11   file20  file3   file39  file48  file57  file66  file75  file84  file93
file12   file21  file30  file4   file49  file58  file67  file76  file85  file94
file13   file22  file31  file40  file5   file59  file68  file77  file86  file95
file14   file23  file32  file41  file50  file6   file69  file78  file87  file96
file15   file24  file33  file42  file51  file60  file7   file79  file88  file97
file16   file25  file34  file43  file52  file61  file70  file8   file89  file98
file17   file26  file35  file44  file53  file62  file71  file80  file9   file99

You also have an empty directory called dir2. To sync the contents of dir1 to dir2 on the same system, you will run rsync and use the -r flag, which stands for “recursive” and is necessary for directory syncing:

rsync -r dir1/ dir2


Another option is to use the -a flag, which is a combination flag and stands for “archive”. This flag syncs recursively and preserves symbolic links, special and device files, modification times, groups, owners, and permissions. It’s more commonly used than -r and is the recommended flag to use. Run the same command as the previous example, this time using the -a flag:

rsync -a dir1/ dir2


Please note that there is a trailing slash (/) at the end of the first argument in the previous two commands, highlighted here:

rsync -a dir1/ dir2


This trailing slash signifies the contents of dir1. Without the trailing slash, dir1, including the directory itself, would be placed within dir2. The outcome would create a hierarchy like the following:

~/dir2/dir1/[files]

Another tip is to double-check your arguments before executing an rsync command. Rsync provides a method for doing this by passing the -n or --dry-run option. The -v flag, which means “verbose”, is also necessary to get the appropriate output. You’ll combine the -a, -n, and -v flags in the following command:

rsync -anv dir1/ dir2


sending incremental file list
. . .

Now compare that output to the one you receive when removing the trailing slash, as in the following:

rsync -anv dir1 dir2


sending incremental file list
. . .

This output now demonstrates that the directory itself was transferred, rather than only the files within the directory.

Using Rsync to Sync with a Remote System

To use rsync to sync with a remote system, you only need SSH access configured between your local and remote machines, as well as rsync installed on both systems. Once you have SSH access verified between the two machines, you can sync the dir1 folder from the previous section to a remote machine by using the following syntax. Please note in this case, that you want to transfer the actual directory, so you’ll omit the trailing slash:

rsync -a ~/dir1 username@remote_host:destination_directory


This process is called a push operation because it “pushes” a directory from the local system to a remote system. The opposite operation is pull, and is used to sync a remote directory to the local system. If the dir1 directory were on the remote system instead of your local system, the syntax would be the following:

rsync -a username@remote_host:/home/username/dir1 place_to_sync_on_local_machine


Like cp and similar tools, the source is always the first argument, and the destination is always the second.

Using Other Rsync Options

Rsync provides many options for altering the default behavior of the utility, such as the flag options you learned about in the previous section.

If you’re transferring files that have not already been compressed, like text files, you can reduce the network transfer by adding compression with the -z option:

rsync -az source destination


The -P flag is also helpful. It combines the flags --progress and --partial. This first flag provides a progress bar for the transfers, and the second flag allows you to resume interrupted transfers:

rsync -azP source destination


sending incremental file list
created directory destination
source/
              0 100%    0.00kB/s    0:00:00 (xfr#1, to-chk=99/101)
              0 100%    0.00kB/s    0:00:00 (xfr#2, to-chk=98/101)
              0 100%    0.00kB/s    0:00:00 (xfr#3, to-chk=97/101)
              0 100%    0.00kB/s    0:00:00 (xfr#4, to-chk=96/101)
              0 100%    0.00kB/s    0:00:00 (xfr#5, to-chk=95/101)
. . .

If you run the command again, you’ll receive a shortened output since no changes have been made. This illustrates Rsync’s ability to use modification times to determine if changes have been made:

rsync -azP source destination


sending incremental file list
sent 818 bytes received 12 bytes 1660.00 bytes/sec
total size is 0 speedup is 0.00

Say you were to update the modification time on some of the files with a command like the following:

touch dir1/file{1..10}


Then, if you were to run rsync with -azP again, you’ll notice in the output how Rsync intelligently re-copies only the changed files:

rsync -azP source destination


sending incremental file list
            0 100%    0.00kB/s    0:00:00 (xfer#1, to-check=99/101)
            0 100%    0.00kB/s    0:00:00 (xfer#2, to-check=98/101)
            0 100%    0.00kB/s    0:00:00 (xfer#3, to-check=87/101)
            0 100%    0.00kB/s    0:00:00 (xfer#4, to-check=76/101)
. . .

In order to keep two directories truly in sync, it’s necessary to delete files from the destination directory if they are removed from the source. By default, rsync does not delete anything from the destination directory.

You can change this behavior with the --delete option. Before using this option, you can use -n, the --dry-run option, to perform a test to prevent unwanted data loss:

rsync -an --delete source destination


If you prefer to exclude certain files or directories located inside a directory you are syncing, you can do so by specifying them in a comma-separated list following the --exclude= option:

rsync -a --exclude=pattern_to_exclude source destination


If you have a specified pattern to exclude, you can override that exclusion for files that match a different pattern by using the --include= option. Because rsync applies the first matching filter rule, the --include must come before the --exclude it overrides:

rsync -a --include=pattern_to_include --exclude=pattern_to_exclude source destination
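A scratch-directory sketch of exclude/include filtering (rsync applies the first matching rule, so the include comes first; file names are invented for the demo):

```shell
cd "$(mktemp -d)"
mkdir src
touch src/app.log src/keep.log src/notes.txt

# Skip every .log file except keep.log.
rsync -a --include='keep.log' --exclude='*.log' src/ dst

ls dst   # keep.log and notes.txt are present, app.log is not
```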


Finally, Rsync’s --backup option can be used to store backups of important files. It’s used in conjunction with the --backup-dir option, which specifies the directory where the backup files should be stored:

rsync -a --delete --backup --backup-dir=/path/to/backups /path/to/source destination



Rsync can streamline file transfers over networked connections and add robustness to local directory syncing. The flexibility of Rsync makes it a good option for many different file-level operations.

A mastery of Rsync allows you to design complex backup operations and obtain fine-grained control over how and what is transferred.

Learn how to allocate swap space in Linux, and on an EC2 instance as well.
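As a starting point, here is a hedged sketch of allocating a 1 GiB swap file; the same steps apply on a stock Ubuntu EC2 instance, which ships with no swap by default. The path and size are illustrative, and the commands require root, so don't run them blindly.

```shell
# Create and activate a 1 GiB swap file (path and size are illustrative).
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile        # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify:
swapon --show
free -h
```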



ulimit

ulimit is a built-in Linux shell command that allows viewing or limiting the amounts of system resources that individual users consume. Limiting resource usage is valuable in environments with multiple users and system performance issues.

ulimit -a

The limits.conf file is a configuration file that defines the system resource allocation settings ulimit uses. The full path to the configuration file is /etc/security/limits.conf.
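For illustration, a hypothetical limits.conf fragment (the deploy user and devs group are made-up names):

```
# /etc/security/limits.conf — example entries (hypothetical)
# <domain>   <type>   <item>    <value>
deploy       soft     nofile    4096    # soft limit: max open files
deploy       hard     nofile    8192    # hard ceiling the user cannot exceed
@devs        hard     nproc     100     # cap processes for the devs group
```

A user can raise their own soft limit up to the hard limit with ulimit, but only root can raise a hard limit.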


logrotate

logrotate is designed to ease administration of systems that generate large numbers of log files. It allows automatic rotation, compression, removal, and mailing of log files. Each log file may be handled daily, weekly, monthly, or when it grows too large.

Normally, logrotate is run as a daily cron job. It will not modify a log multiple times in one day unless the criterion for that log is based on the log's size and logrotate is being run multiple times each day, or unless the -f or --force option is used.
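To make the directives concrete, here is a hypothetical drop-in under /etc/logrotate.d/ (the myapp name and log path are invented):

```
# /etc/logrotate.d/myapp — example (hypothetical)
/var/log/myapp/*.log {
    daily            # rotate once per day
    rotate 14        # keep 14 old logs, then delete the oldest
    compress         # gzip rotated logs
    delaycompress    # but wait one cycle before compressing
    missingok        # no error if the log file is absent
    notifempty       # skip rotation when the log is empty
}
```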