
AWS - 1

Amazon Web Services (AWS) is a platform that offers flexible, reliable, scalable, easy-to-use, and cost-effective cloud computing solutions.

To Learn


Before we start, let’s take a look at the AWS Free Tier limits, so that we don’t get charged.

AWS free tier limits :

Services that are available in the AWS Free Usage Tier

Note: If you are linked to an Organization (under AWS Organizations), only one account within the organization can benefit from the Free Tier offers.


How to monitor your AWS Free Usage Tier?

You receive fairly generous credits of AWS resources as part of the Free Tier, and you will not be billed unless your usage exceeds those credits. Additionally, AWS has introduced the Billing and Cost Management Dashboard to help you keep better track of your AWS usage and see where you stand with respect to the Free Tier credits for each service. It’s easy to view your actual usage (month to date) and your forecasted usage (up to the end of the month).

This feature should be used to estimate and plan your AWS costs, ensuring you stay within your free tier limits. You can even receive alerts if your costs exceed a threshold that you set, which could be $0. All of this information is available to you in the AWS Billing and Cost Management Dashboard.

Can you host a website using the AWS Free Usage Tier?

This is one of the most common questions asked about the AWS Free Tier.

Yes, the credits offered by the AWS Free Usage Tier are enough to host and run a website for a year, with enough left over for additional experimentation. Using AWS Free Tier web hosting, you can host a static website. Static websites deliver HTML, JavaScript, images, video, and other files to your website visitors, and contain no application code.

They are the best for sites with few authors and relatively infrequent content changes. Static websites are low cost, provide high levels of reliability, require almost no IT administration, and scale to handle enterprise-level traffic with no additional work.

Limits on the AWS Free Tier

The AWS free usage tier expires 12 months from the date you sign up. When your free usage expires, you simply pay standard, pay-as-you-go service rates.

The AWS free usage tier is available to new AWS accounts created on or after October 21, 2010.

Amazon Simple Workflow Service, Amazon DynamoDB, Amazon SimpleDB, Amazon Simple Notification Service (SNS), and Amazon Simple Queue Service (SQS) are some of the services whose free tiers are available to both existing and new AWS customers indefinitely.

Services not available in the AWS Free Usage Tier

Why am I being billed for Elastic IP addresses when all my Amazon EC2 instances are terminated?

How do I associate a static public IP address with my EC2 Windows or Linux instance? https://aws.amazon.com/premiumsupport/knowledge-center/ec2-associate-static-public-ip/

Control your AWS costs

https://aws.amazon.com/getting-started/hands-on/control-your-costs-free-tier-budgets/


EC2 instances and instance types

Launch an EC2 Instance in AWS free tier account

Amazon EC2 Spot Instances

Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and test & development workloads. Because Spot Instances are tightly integrated with AWS services such as Auto Scaling, EMR, ECS, CloudFormation, Data Pipeline and AWS Batch, you can choose how to launch and maintain your applications running on Spot Instances.

Moreover, you can easily combine Spot Instances with On-Demand Instances, Reserved Instances, and Savings Plans to further optimize workload cost and performance. Due to the operating scale of AWS, Spot Instances can offer the scale and cost savings needed to run hyper-scale workloads. You also have the option to hibernate, stop, or terminate your Spot Instances when EC2 reclaims the capacity, with a two-minute notice. Only on AWS do you have easy access to unused compute capacity at such massive scale, all at up to a 90% discount.

Savings Plans

Savings Plans is a flexible pricing model that offers low prices on EC2, Fargate, and Lambda usage, in exchange for a commitment to a consistent amount of usage (measured in $/hour) for a 1- or 3-year term. Savings Plans provide you the flexibility to use the compute option that best suits your needs and automatically save money, all without having to perform exchanges or modifications. When you sign up for a Savings Plan, you will be charged the discounted Savings Plans price for your usage up to your commitment.

Current state of an instance (the AWS EC2 lifecycle)

The valid values for instance-state-code all fit in the low byte of the 16-bit value; the high byte is used for internal purposes and should be ignored. They are: 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), and 80 (stopped).
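As a sketch (not an AWS tool; the code-to-name pairs are the ones documented for the EC2 DescribeInstances API), the mapping can be expressed in shell like this:

```shell
# Map an instance-state-code to its state name.
# The high byte is for internal AWS use, so mask it off first.
state_name() {
  case $(( $1 & 255 )) in
    0)  echo pending ;;
    16) echo running ;;
    32) echo shutting-down ;;
    48) echo terminated ;;
    64) echo stopping ;;
    80) echo stopped ;;
    *)  echo unknown ;;
  esac
}

state_name 16   # prints "running"
state_name 80   # prints "stopped"
```

Masking with 255 means a raw API value such as 336 (high byte set) still resolves to "stopped".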

AWS EC2 Stop vs Terminate: The Difference in Status, Purpose, and Cost


Stop and terminate put an instance into two different states in the AWS EC2 lifecycle. Understanding the differences is important to ensure you’re managing your resources well, which has implications for your applications, resource costs, and more.

What Happens When You Stop EC2 Instances

To look at stop vs terminate EC2 actions, we’ll start with “stopped”. The EC2 “stopped” state indicates that an instance is shut down and cannot be used. Basically, it is a temporary shutdown for when you are not using an instance, but you will need it later. The attached bootable EBS volume will not be deleted.

Here are a few important things to know about how stopped instances behave:

AWS EC2 Stop Instances Use Cases

Reasons you may want to use the “stopped” state include:

Instance Types That Cannot Be Stopped

When you launch your instances from an AMI, you can choose between AMIs backed by Amazon EBS or backed by instance store. Instance store-backed instances cannot be stopped.

Additionally, instances in auto scaling groups are not designed to be stopped individually. This will typically trigger a health check and your instances will be marked as unhealthy so your instances will be terminated or replaced. You can “suspend” this action for an individual auto scaling group, or detach the instance from the group and then stop it.

Previously, the AWS stop instance state could not be used for spot instances, but this functionality was added for persistent spot requests with EBS-backed spot instances in January 2020.

Hibernate EC2 Instances

Notice also that there is a “stopping” state between running and stopped. If the instance is preparing to be stopped, you are not charged for usage, but if it is preparing to hibernate, usage is billed.

The hibernate action is a “suspend-to-disk” action that saves the contents from the instance memory to your Amazon EBS root volume. So, processes that were previously running can be restored when the instance is started. This allows a quick restart, without having to wait for caches and other memory-centric application components that slow down restarts.

Otherwise, hibernation is similar to stopping and a hibernated instance also goes from the “stopping” to “stopped” state (and as mentioned above, it is still billed while in the “stopping” state, but not “stopped.”)

Note that in order to hibernate an instance, you must enable hibernation when launching the instance – you can’t enable or disable it after launch. Review AWS’s EC2 hibernation prerequisites for details regarding which instance types can enable this option.

What Happens When You Terminate EC2 Instances

To terminate, on the other hand, is a permanent deletion. Use this when you are finished with an instance, as terminated instances can’t be recovered.

Here are a few things to note about the behavior of terminated instances:

Regarding billing, there can be some confusion around reserved instances. Reserved instances are not capacity reservations, but more like pre-paid credits. Therefore, stopping an instance that had a reservation applied to it does not reduce the cost of your reserved instance. To that end, check out savings plans instead.

As you can see, the AWS EC2 stop vs terminate statuses serve very different functions.

Start and Stop EC2 Instances on a Schedule to Reduce Costs

As mentioned above, temporarily stopping an EC2 instance is a great way to save money. The typical use case is to set non-production instances – such as those used for development, testing, staging, and QA – on an on/off schedule, to turn off nights and weekends when not in use. Just by turning an instance off for 12 hours/day and on weekends, you can reduce the cost by 65%.
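That roughly 65% figure follows from simple hour counting; a quick sketch, assuming the instance is on 12 hours a day on weekdays and off all weekend:

```shell
# Hours the instance runs per week: 5 weekdays x 12 hours, off on weekends.
on_hours=$(( 5 * 12 ))
total_hours=$(( 7 * 24 ))
off_hours=$(( total_hours - on_hours ))
# Percentage of instance-hours (and thus compute cost) saved, integer math.
savings_pct=$(( 100 * off_hours / total_hours ))
echo "Roughly ${savings_pct}% of instance-hours saved"   # prints "Roughly 64% of instance-hours saved"
```

This only counts compute hours; EBS volumes attached to a stopped instance continue to accrue storage charges.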


An Amazon Machine Image (AMI) is a supported and maintained image provided by AWS that contains the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations.

An AMI includes the following:

An AMI is a template that contains the software configuration (operating system, application server, and applications) required to launch your instance. You can select an AMI provided by AWS, our user community, or the AWS Marketplace; or you can select one of your own AMIs.



An Amazon EBS volume is a durable, block-level storage device that you can attach to your instances. After you attach a volume to an instance, you can use it as you would use a physical hard drive. EBS volumes are flexible. For current-generation volumes attached to current-generation instance types, you can dynamically increase size, modify the provisioned IOPS capacity, and change volume type on live production volumes.

You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application. You can also use them for throughput-intensive applications that perform continuous disk scans. EBS volumes persist independently from the running life of an EC2 instance.

You can attach multiple EBS volumes to a single instance. The volume and instance must be in the same Availability Zone. Depending on the volume and instance types, you can use Multi-Attach to mount a volume to multiple instances at the same time.

EBS volumes provide benefits that are not provided by instance store volumes.




Snapshots can be copied across regions, whereas volumes stay in the same region in which they were created. You can create a copy of a snapshot, but you can't directly create a copy of a volume; to copy a volume, you have to go through a snapshot. Volumes, images, and instances all depend on snapshots; the snapshot is the glue between volumes, images, and instances.

You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. Each snapshot contains all of the information that is needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

When you create an EBS volume based on a snapshot, the new volume begins as an exact replica of the original volume that was used to create the snapshot. The replicated volume loads data in the background so that you can begin using it immediately. If you access data that hasn't been loaded yet, the volume immediately downloads the requested data from Amazon S3, and then continues loading the rest of the volume's data in the background.

When you delete a snapshot, only the data unique to that snapshot is removed.

How incremental snapshots work

This section shows how an EBS snapshot captures the state of a volume at a point in time, and how successive snapshots of a changing volume create a history of those changes.

Relations among multiple snapshots of the same volume

The diagram in this section shows Volume 1 at three points in time. A snapshot is taken of each of these three volume states. The diagram specifically shows the following:

Relations among incremental snapshots of different volumes

The diagram in this section shows how incremental snapshots can be taken from different volumes.


The diagram assumes that you own Vol 1 and that you have created Snap A. If Vol 1 was owned by another AWS account and that account took Snap A and shared it with you, then Snap B would be a full snapshot.

1. Vol 1 has 10 GiB of data. Because Snap A is the first snapshot taken of the volume, the entire 10 GiB of data is copied and stored.
2. Vol 2 is created from Snap A, so it is an exact replica of Vol 1 at the time the snapshot was taken.
3. Over time, 4 GiB of data is added to Vol 2 and its total size becomes 14 GiB.
4. Snap B is taken from Vol 2. For Snap B, only the 4 GiB of data that was added after the volume was created from Snap A is copied and stored. The other 10 GiB of unchanged data, which is already stored in Snap A, is referenced by Snap B instead of being copied and stored again. Snap B is an incremental snapshot of Snap A, even though it was created from a different volume.
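The storage arithmetic in this example can be checked directly; because incremental snapshots store only changed blocks, the two snapshots together hold far less than two full copies:

```shell
# Snap A is the first snapshot of a 10 GiB volume: all 10 GiB are stored.
snap_a_gib=10
# Snap B stores only the 4 GiB added since Snap A; the rest is referenced.
snap_b_gib=4
incremental=$(( snap_a_gib + snap_b_gib ))
# Two full copies would be the 10 GiB and 14 GiB volumes in their entirety.
full_copies=$(( 10 + 14 ))
echo "stored: ${incremental} GiB (vs ${full_copies} GiB for two full copies)"
```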

For more information about how data is managed when you delete a snapshot, see Delete an Amazon EBS snapshot.

Copy and share snapshots

You can share a snapshot across AWS accounts by modifying its access permissions. You can make copies of your own snapshots as well as snapshots that have been shared with you. For more information, see Share an Amazon EBS snapshot.

A snapshot is constrained to the AWS Region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same Region. For more information, see Create a volume from a snapshot. You can also copy snapshots across Regions, making it possible to use multiple Regions for geographical expansion, data center migration, and disaster recovery. You can copy any accessible snapshot that has a completed status. For more information, see Copy an Amazon EBS snapshot.

Encryption support for snapshots

EBS snapshots fully support EBS encryption.

AWS Security groups or Control traffic to resources using security groups

https://www.checkpoint.com/cyber-hub/cloud-security/what-is-aws-security-groups/


A security group acts as a virtual firewall, controlling the traffic that is allowed to reach and leave the resources it is associated with. For example, after you associate a security group with an EC2 instance, it controls the inbound and outbound traffic for the instance.

When you create a VPC, it comes with a default security group. You can create additional security groups for each VPC. You can associate a security group only with resources in the VPC for which it is created.

For each security group, you add rules that control the traffic based on protocols and port numbers. There are separate sets of rules for inbound traffic and outbound traffic.
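As an illustration of such a rule (a hedged CloudFormation-style sketch; the resource name WebServerSecurityGroup is a placeholder, not from this article), an inbound rule allowing HTTP on port 80 from anywhere might look like:

```yaml
# Hypothetical CloudFormation fragment: one inbound rule, HTTP from anywhere.
WebServerSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow inbound HTTP
    SecurityGroupIngress:
      - IpProtocol: tcp     # protocol
        FromPort: 80        # start of port range
        ToPort: 80          # end of port range
        CidrIp: 0.0.0.0/0   # source: any IPv4 address
```

Outbound rules go in a separate SecurityGroupEgress list, mirroring the separate inbound/outbound rule sets described above.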

You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. For more information about the differences between security groups and network ACLs, see Compare security groups and network ACLs.

• Security groups are stateful. For example, if you send a request from an instance, the response traffic for that request is allowed to reach the instance regardless of the inbound security group rules. Responses to allowed inbound traffic are allowed to leave the instance, regardless of the outbound rules.

Security group basics

The following are the characteristics of security groups:

The following are the characteristics of security group rules:

Control traffic to subnets using Network ACLs

A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

Network ACL basics

The following are the basic things that you need to know about network ACLs:

Elastic IP addresses

An Elastic IP address is a static IPv4 address designed for dynamic cloud computing. An Elastic IP address is allocated to your AWS account, and is yours until you release it. By using an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account. Alternatively, you can specify the Elastic IP address in a DNS record for your domain, so that your domain points to your instance. For more information, see the documentation for your domain registrar, or Set up dynamic DNS on your Amazon Linux instance.

An Elastic IP address is a public IPv4 address, which is reachable from the internet. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the internet. For example, this allows you to connect to your instance from your local computer.

Elastic IP address pricing

To ensure efficient use of Elastic IP addresses, we impose a small hourly charge if an Elastic IP address is not associated with a running instance, or if it is associated with a stopped instance or an unattached network interface. While your instance is running, you are not charged for one Elastic IP address associated with the instance, but you are charged for any additional Elastic IP addresses associated with the instance.

Elastic IP address basics

The following are the basic characteristics of an Elastic IP address:

Use reverse DNS for email applications

If you intend to send email to third parties from an instance, we recommend that you provision one or more Elastic IP addresses and assign static reverse DNS records to the Elastic IP addresses that you use to send email. This can help you avoid having your email flagged as spam by some anti-spam organizations. AWS works with ISPs and internet anti-spam organizations to reduce the chance that your email sent from these addresses will be flagged as spam.


Elastic IP address limit

By default, all AWS accounts are limited to five (5) Elastic IP addresses per Region, because public (IPv4) internet addresses are a scarce public resource. We strongly encourage you to use an Elastic IP address primarily for the ability to remap the address to another instance in the case of instance failure, and to use DNS hostnames for all other inter-node communication.

Placement Groups

Run commands on your Linux instance at launch


When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch instance wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
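For the API-call case, you encode the script yourself (a sketch using GNU coreutils; the AWS CLI typically handles this encoding for you when you pass user data with file://):

```shell
# Write a minimal user data script to a file.
cat > userdata.sh <<'EOF'
#!/bin/bash
yum update -y
EOF

# Base64-encode it as raw EC2 API calls expect.
# (Flags below are GNU coreutils; macOS spells decode as -D.)
base64 userdata.sh > userdata.b64

# Decoding round-trips to the original script.
base64 -d userdata.b64 | head -n 1   # prints "#!/bin/bash"
```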

If you are interested in more complex automation scenarios, consider using AWS CloudFormation and AWS OpsWorks. For more information, see the AWS CloudFormation User Guide and the AWS OpsWorks User Guide.

In the following examples, the commands from the Install a LAMP Web Server on Amazon Linux 2 are converted to a shell script and a set of cloud-init directives that run when the instance launches. In each example, the following tasks are performed by the user data:

The examples in this topic assume the following:

User data and shell scripts


By default, user data scripts and cloud-init directives run only during the boot cycle when you first launch an instance. You can update your configuration to ensure that your user data scripts and cloud-init directives run every time you restart your instance. For more information, see How can I utilize user data to automatically run a script with every restart of my Amazon EC2 Linux instance? in the AWS Knowledge Center.

User data shell scripts must start with the #! characters and the path to the interpreter that should read the script (commonly /bin/bash).

For a great introduction to shell scripting, see the BASH Programming HOW-TO at the Linux Documentation Project (tldp.org).

Scripts entered as user data are run as the root user, so do not use the sudo command in the script. Remember that any files you create will be owned by root; if you need non-root users to have file access, you should modify the permissions accordingly in the script. Also, because the script is not run interactively, you cannot include commands that require user feedback (such as yum update without the -y flag).

If you use an AWS API, including the AWS CLI, in a user data script, you must use an instance profile when launching the instance. An instance profile provides the appropriate AWS credentials required by the user data script to issue the API call. For more information, see Using instance profiles in the IAM User Guide. The permissions you assign to the IAM role depend on which services you are calling with the API. For more information, see IAM roles for Amazon EC2.

The cloud-init output log file (/var/log/cloud-init-output.log) captures console output so it is easy to debug your scripts following a launch if the instance does not behave the way you intended.

When a user data script is processed, it is copied to and run from /var/lib/cloud/instances/instance-id/. The script is not deleted after it is run. Be sure to delete the user data scripts from /var/lib/cloud/instances/instance-id/ before you create an AMI from the instance. Otherwise, the script will exist in this directory on any instance launched from the AMI.

Follow the procedure for launching an instance. The User data field is located in the Advanced details section of the launch instance wizard. Enter your shell script in the User data field, and then complete the instance launch procedure.

The example script below creates and configures our web server.

#!/bin/bash
# Install and start a LAMP stack, then give ec2-user write access to /var/www
yum update -y
amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
yum install -y httpd mariadb-server
systemctl start httpd
systemctl enable httpd
usermod -a -G apache ec2-user
chown -R ec2-user:apache /var/www
chmod 2775 /var/www
find /var/www -type d -exec chmod 2775 {} \;
find /var/www -type f -exec chmod 0664 {} \;
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php

For our example, in a web browser, enter the URL of the PHP test file the script created. This URL is the public DNS address of your instance followed by a forward slash and the file name.


You should see the PHP information page. If you are unable to see the PHP information page, check that the security group you are using contains a rule to allow HTTP (port 80) traffic. For more information, see Add rules to a security group.

(Optional) If your script did not accomplish the tasks you were expecting it to, or if you just want to verify that your script completed without errors, examine the cloud-init output log file at /var/log/cloud-init-output.log and look for error messages in the output.

For additional debugging information, you can create a Mime multipart archive that includes a cloud-init data section with the following directive:

output: {all: '| tee -a /var/log/cloud-init-output.log'}

This directive sends command output from your script to /var/log/cloud-init-output.log. For more information about cloud-init data formats and creating MIME multipart archives, see cloud-init Formats.

User data and cloud-init directives

The cloud-init package configures specific aspects of a new Amazon Linux instance when it is launched; most notably, it configures the .ssh/authorized_keys file for the ec2-user so you can log in with your own private key. For more information about the configuration tasks that the cloud-init package performs for Amazon Linux instances, see cloud-init.
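For example (a sketch assuming cloud-init's users module; the key material below is a placeholder, not a real key), additional keys can be authorized with a directive like:

```yaml
#cloud-config
# Hypothetical sketch: keep the default user and authorize one more key.
users:
  - default
  - name: ec2-user
    ssh_authorized_keys:
      - ssh-ed25519 AAAA...placeholder-key comment@example
```

Listing `- default` first preserves the distribution's default user configuration, since a `users:` section otherwise replaces it.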

The cloud-init user directives can be passed to an instance at launch the same way that a script is passed, although the syntax is different. For more information about cloud-init, see http://cloudinit.readthedocs.org/en/latest/index.html.

To pass cloud-init directives to an instance with user data

Follow the procedure for launching an instance. The User data field is located in the Advanced details section of the launch instance wizard. Enter your cloud-init directive text in the User data field, and then complete the instance launch procedure.

In the example below, the directives create and configure a web server on Amazon Linux 2. The #cloud-config line at the top is required in order to identify the commands as cloud-init directives.

#cloud-config
repo_update: true
repo_upgrade: all

packages:
 - httpd
 - mariadb-server

runcmd:
 - [ sh, -c, "amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2" ]
 - systemctl start httpd
 - sudo systemctl enable httpd
 - [ sh, -c, "usermod -a -G apache ec2-user" ]
 - [ sh, -c, "chown -R ec2-user:apache /var/www" ]
 - chmod 2775 /var/www
 - [ find, /var/www, -type, d, -exec, chmod, 2775, {}, \; ]
 - [ find, /var/www, -type, f, -exec, chmod, 0664, {}, \; ]
 - [ sh, -c, 'echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php' ]

For this example, in a web browser, enter the URL of the PHP test file the directives created. This URL is the public DNS address of your instance followed by a forward slash and the file name.


(Optional) If your directives did not accomplish the tasks you were expecting them to, or if you just want to verify that your directives completed without errors, examine the output log file at /var/log/cloud-init-output.log and look for error messages in the output. For additional debugging information, you can add the following line to your directives:

output: {all: '| tee -a /var/log/cloud-init-output.log'}

This directive sends runcmd output to /var/log/cloud-init-output.log.

User data and the AWS CLI


You can use the AWS CLI to specify, modify, and view the user data for your instance. For information about viewing user data from your instance using instance metadata, see Retrieve instance user data.

Utilize user data to automatically run a script with every restart of my Amazon EC2 Linux instance


By default, user data scripts and cloud-init directives run only during the first boot cycle when an EC2 instance is launched. However, you can configure your user data script and cloud-init directives with a mime multi-part file. A mime multi-part file allows your script to override how frequently user data is run in the cloud-init package. Then, the file runs the user script. For more information on mime multi-part files, see Mime Multi Part Archive on the cloud-init website.

Note: It's a best practice to create a snapshot of your instance before proceeding with the following resolution.


Warning: Before starting this procedure, review the following:

1.    Make sure that the latest version of cloud-init is installed and functioning properly on your EC2 instance.

2.    For security reasons, create an IAM policy to restrict the users who are allowed to add or remove user data through the ModifyInstanceAttribute API.

3.    Open the Amazon EC2 console.

4.    Stop your instance.

5.    Choose Actions, choose Instance Settings, and then choose Edit User Data.

6.    Copy your user script into the Edit user data box, and then choose Save.

The following example is a shell script that writes "Hello World" to a testfile.txt file in a /tmp directory.

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
/bin/echo "Hello World" >> /tmp/testfile.txt
--//--

By default, cloud-init allows only one content type in user data at a time. However, this example shows both text/cloud-config and text/x-shellscript content types in a MIME multipart file.

The text/cloud-config content type overrides how frequently user data is run in the cloud-init package by setting the SCRIPTS-USER parameter to ALWAYS.

The text/x-shellscript content type provides the actual user script to be run by the cloud-init cloud_final_modules module. In this example, there is only one line to be run, which is /bin/echo "Hello World" >> /tmp/testfile.txt.

Note: Replace the line /bin/echo "Hello World" >> /tmp/testfile.txt with the shell script that you want to run during the instance boot.

7.    Start your EC2 instance again, and validate that your script runs correctly.

AWS & EC2: Achieving Elastic Load Balancing & Auto Scaling


A load balancer is a service that uniformly distributes network traffic and workloads across multiple servers or clusters of servers. A load balancer in AWS increases the availability and fault tolerance of an application.

It allows auto scaling and, at the same time, provides a single point of access. It monitors the health of registered instances, and if any instance is unhealthy, traffic is redirected to another instance.

AWS has three types of load balancers: Application, Network, and Classic.

It is recommended to use the new generation Load Balancers as they provide more features.

Application Load Balancers

Network Load Balancers

If the load balancer doesn't connect to the application, check the security groups.

For this article, I have already set up an EC2 instance and deployed a basic Node.js application using Nginx and PM2.

Follow the link for complete instructions.


AWS Auto Scaling

Auto scaling is a way to automatically scale the number of compute resources allocated to your application up or down, based on its needs at any given time. AWS Auto Scaling lets you build scaling plans that automate how groups of different resources respond to changes in demand.

In simple words, when the load on a server increases, Auto Scaling scales the number of active servers out or in, so that the response time of requests does not increase.


Load balancing test script

#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello, world from $(hostname -f)" > /var/www/html/index.html

Step by Step Instructions to setup Application Load Balancer



Install a LAMP web server on Amazon Linux 2

The following procedures help you install an Apache web server with PHP and MariaDB  (a community-developed fork of MySQL) support on your Amazon Linux 2 instance (sometimes called a LAMP web server or LAMP stack). You can use this server to host a static website or deploy a dynamic PHP application that reads and writes information to a database.


If you are trying to set up a LAMP web server on a different distribution, such as Ubuntu or Red Hat Enterprise Linux, this tutorial will not work. For Amazon Linux AMI, see Tutorial: Install a LAMP web server on the Amazon Linux AMI. For Ubuntu, see the following Ubuntu community documentation: ApacheMySQLPHP. For other distributions, see their specific documentation.


You must also have configured your security group to allow SSH (port 22), HTTP (port 80), and HTTPS (port 443) connections.

To prepare the LAMP server

Update the installed packages on your instance. The -y option installs the updates without asking for confirmation. If you would like to examine the updates before installing, you can omit this option.

[ec2-user ~]$ sudo yum update -y

Install the lamp-mariadb10.2-php7.2 and php7.2 Amazon Linux Extras repositories to get the latest versions of the LAMP MariaDB and PHP packages for Amazon Linux 2.

[ec2-user ~]$ sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2

Now that your instance is current, you can install the Apache web server, MariaDB, and PHP software packages.

Use the yum install command to install multiple software packages and all related dependencies at the same time.

[ec2-user ~]$ sudo yum install -y httpd mariadb-server

You can view the current versions of these packages using the following command:

yum info package_name

Start the Apache web server.

[ec2-user ~]$ sudo systemctl start httpd

Use the systemctl command to configure the Apache web server to start at each system boot.

[ec2-user ~]$ sudo systemctl enable httpd

You can verify that httpd is enabled by running the following command:

[ec2-user ~]$ sudo systemctl is-enabled httpd

Test your web server. In a web browser, type the public DNS address (or the public IP address) of your instance. If there is no content in /var/www/html, you should see the Apache test page. You can get the public DNS for your instance using the Amazon EC2 console (check the Public DNS column; if this column is hidden, choose Show/Hide Columns (the gear-shaped icon) and choose Public DNS).

Apache httpd serves files that are kept in a directory called the Apache document root. The Amazon Linux Apache document root is /var/www/html, which by default is owned by root.

To allow the ec2-user account to manipulate files in this directory, you must modify the ownership and permissions of the directory. There are many ways to accomplish this task. In this tutorial, you add ec2-user to the apache group, to give the apache group ownership of the /var/www directory and assign write permissions to the group.

To set file permissions

  1. Add your user (in this case, ec2-user) to the apache group.
    [ec2-user ~]$ sudo usermod -a -G apache ec2-user

Log out and then log back in again to pick up the new group, and then verify your membership.

  1. Log out (use the exit command or close the terminal window):
    [ec2-user ~]$ exit
  1. To verify your membership in the apache group, reconnect to your instance, and then run the following command:
    [ec2-user ~]$ groups
    ec2-user adm wheel apache systemd-journal
  1. Change the group ownership of /var/www and its contents to the apache group.
    [ec2-user ~]$ sudo chown -R ec2-user:apache /var/www

  1. To add group write permissions and to set the group ID on future subdirectories, change the directory permissions of /var/www and its subdirectories:
    [ec2-user ~]$ sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
  1. To add group write permissions, recursively change the file permissions of /var/www and its subdirectories:
    [ec2-user ~]$ find /var/www -type f -exec sudo chmod 0664 {} \;

Now, ec2-user (and any future members of the apache group) can add, delete, and edit files in the Apache document root, enabling you to add content, such as a static website or a PHP application.
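The leading 2 in the 2775 mode above is the setgid bit, which makes new files and subdirectories inherit the directory's group (and, on Linux, subdirectories also inherit the setgid bit itself). A quick sketch of the effect on a throwaway directory, not the real document root:

```shell
# Demonstrate the setgid (2775) pattern on a temporary directory.
tmp=$(mktemp -d)
mkdir "$tmp/docroot"
chmod 2775 "$tmp/docroot"          # rwxrwxr-x with the setgid bit set
mkdir "$tmp/docroot/sub"           # created after setgid was applied
mode=$(stat -c '%a' "$tmp/docroot/sub")
echo "$mode"                       # the leading 2 shows the bit was inherited
rm -rf "$tmp"
```

This is why new files created under /var/www by ec2-user end up owned by the apache group without any further chown commands.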

Test your LAMP server

If your server is installed and running, and your file permissions are set correctly, your ec2-user account should be able to create a PHP file in the /var/www/html directory that is available from the internet.

Create a PHP file in the Apache document root.

[ec2-user ~]$ echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php

In a web browser, type the URL of the file that you just created. This URL is the public DNS address of your instance followed by a forward slash and the file name. For example:

  1. Delete the phpinfo.php file. Although this can be useful information, it should not be broadcast to the internet for security reasons.
    [ec2-user ~]$ rm /var/www/html/phpinfo.php

Secure the database server

The default installation of the MariaDB server has several features that are great for testing and development, but they should be disabled or removed for production servers. The mysql_secure_installation command walks you through the process of setting a root password and removing the insecure features from your installation. Even if you are not planning on using the MariaDB server, we recommend performing this procedure.

  1. Start the MariaDB server.
    [ec2-user ~]$ sudo systemctl start mariadb

Run mysql_secure_installation.

[ec2-user ~]$ sudo mysql_secure_installation
  1. When prompted, type a password for the root account.
    1. Type the current root password. By default, the root account does not have a password set. Press Enter.
    1. Type Y to set a password, and type a secure password twice. For more information about creating a secure password, see https://identitysafe.norton.com/password-generator/. Make sure to store this password in a safe place.

      Setting a root password for MariaDB is only the most basic measure for securing your database. When you build or install a database-driven application, you typically create a database service user for that application and avoid using the root account for anything but database administration.

  1. Type Y to remove the anonymous user accounts.
  1. Type Y to disable the remote root login.
  1. Type Y to remove the test database.
  1. Type Y to reload the privilege tables and save your changes.
  1. (Optional) If you do not plan to use the MariaDB server right away, stop it. You can restart it when you need it again.
    [ec2-user ~]$ sudo systemctl stop mariadb
  1. (Optional) If you want the MariaDB server to start at every boot, type the following command.
    [ec2-user ~]$ sudo systemctl enable mariadb
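Following the note above about not using root for applications, a typical next step is to create a database and a dedicated service user for the application. The database name, user name, and password below are placeholders; run statements like these through the mysql client as root:

```shell
# Placeholder names throughout; feed this into `mysql -u root -p` on the instance.
sql=$(cat <<'SQL'
CREATE DATABASE blogdb;
CREATE USER 'bloguser'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON blogdb.* TO 'bloguser'@'localhost';
FLUSH PRIVILEGES;
SQL
)
echo "$sql"
```

Granting the service user privileges only on its own database keeps a compromised application from touching the rest of the server.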

(Optional) Install phpMyAdmin

To install phpMyAdmin

  1. Install the required dependencies.
    [ec2-user ~]$ sudo yum install php-mbstring php-xml -y
  1. Restart Apache.
    [ec2-user ~]$ sudo systemctl restart httpd
  1. Restart php-fpm.
    [ec2-user ~]$ sudo systemctl restart php-fpm
  1. Navigate to the Apache document root at /var/www/html.
    [ec2-user ~]$ cd /var/www/html
  1. Select a source package for the latest phpMyAdmin release from https://www.phpmyadmin.net/downloads. To download the file directly to your instance, copy the link and paste it into a wget command, as in this example:
    [ec2-user html]$ wget https://www.phpmyadmin.net/downloads/phpMyAdmin-latest-all-languages.tar.gz
  1. Create a phpMyAdmin folder and extract the package into it with the following command.
    [ec2-user html]$ mkdir phpMyAdmin && tar -xvzf phpMyAdmin-latest-all-languages.tar.gz -C phpMyAdmin --strip-components 1
  1. Delete the phpMyAdmin-latest-all-languages.tar.gz tarball.
    [ec2-user html]$ rm phpMyAdmin-latest-all-languages.tar.gz
  1. (Optional) If the MySQL server is not running, start it now.
    [ec2-user ~]$ sudo systemctl start mariadb
  1. In a web browser, type the URL of your phpMyAdmin installation. This URL is the public DNS address (or the public IP address) of your instance followed by a forward slash and the name of your installation directory. For example:

    You should see the phpMyAdmin login page.

  1. Log in to your phpMyAdmin installation with the root user name and the MySQL root password you created earlier.

Your installation must still be configured before you put it into service. We suggest that you begin by manually creating the configuration file, as follows:

  1. To start with a minimal configuration file, use your favorite text editor to create a new file, and then copy the contents of config.sample.inc.php into it.
  1. Save the file as config.inc.php in the phpMyAdmin directory that contains index.php.
  1. Refer to post-file creation instructions in the Using the Setup script section of the phpMyAdmin installation instructions for any additional setup.
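As a starting point, a minimal config.inc.php based on the bundled config.sample.inc.php might look like the sketch below. The blowfish secret is a placeholder; generate a random 32-character value of your own. The sketch writes to a scratch directory for illustration; on the instance the file belongs in the phpMyAdmin directory itself:

```shell
# Writes a minimal phpMyAdmin config to a scratch directory for illustration.
tmp=$(mktemp -d)
cat > "$tmp/config.inc.php" <<'PHP'
<?php
$cfg['blowfish_secret'] = 'replace-with-a-random-32-character-secret';
$i = 0;
$i++;
$cfg['Servers'][$i]['auth_type'] = 'cookie';  // cookie auth shows the login page
$cfg['Servers'][$i]['host'] = 'localhost';    // MariaDB on the same instance
PHP
echo "wrote $tmp/config.inc.php"
```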

Host a WordPress blog on Amazon Linux 2


The following procedures will help you install, configure, and secure a WordPress blog on your Amazon Linux 2 instance. This tutorial is a good introduction to using Amazon EC2 in that you have full control over a web server that hosts your WordPress blog, which is not typical with a traditional hosting service.

You are responsible for updating the software packages and maintaining security patches for your server. For a more automated WordPress installation that does not require direct interaction with the web server configuration, the AWS CloudFormation service provides a WordPress template that can also get you started quickly. For more information, see Get started in the AWS CloudFormation User Guide. If you'd prefer to host your WordPress blog on a Windows instance, see Deploy a WordPress blog on your Amazon EC2 Windows instance in the Amazon EC2 User Guide for Windows Instances. If you need a high-availability solution with a decoupled database, see Deploying a high-availability WordPress website in the AWS Elastic Beanstalk Developer Guide.

To complete this tutorial using AWS Systems Manager Automation instead of the following tasks, run the automation document


Help! My public DNS name changed and now my blog is broken

Your WordPress installation is automatically configured using the public DNS address for your EC2 instance. If you stop and restart the instance, the public DNS address changes (unless it is associated with an Elastic IP address) and your blog will not work anymore because it references resources at an address that no longer exists (or is assigned to another EC2 instance). A more detailed description of the problem and several possible solutions are outlined in https://wordpress.org/support/article/changing-the-site-url/.

If this has happened to your WordPress installation, you may be able to recover your blog with the procedure below, which uses the wp-cli command line interface for WordPress.

To change your WordPress site URL with the wp-cli

  1. Connect to your EC2 instance with SSH.
  1. Note the old site URL and the new site URL for your instance. The old site URL is likely the public DNS name for your EC2 instance when you installed WordPress. The new site URL is the current public DNS name for your EC2 instance. If you are not sure of your old site URL, you can use curl to find it with the following command.
    [ec2-user ~]$ curl localhost | grep wp-content

    You should see references to your old public DNS name in the output, for example in the src attribute of a script tag like this:

    <script type='text/javascript' src='http://ec2-52-8-139-223.us-west-1.compute.amazonaws.com/wp-content/themes/twentyfifteen/js/functions.js?ver=20150330'></script>
  1. Download the wp-cli with the following command.
    [ec2-user ~]$curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
  1. Search and replace the old site URL in your WordPress installation with the following command. Substitute the old and new site URLs for your EC2 instance and the path to your WordPress installation (usually /var/www/html or /var/www/html/blog).
    [ec2-user ~]$ php wp-cli.phar search-replace 'old_site_url' 'new_site_url' \
    --path=/path/to/wordpress/installation --skip-columns=guid
    [ec2-user ~]$ php wp-cli.phar search-replace 'http://old-ip' 'http://new-ip' \
    	--path=/path/to/wordpress/installation --skip-columns=guid


[ec2-user@ip-172-31-80-100 ~]$ php wp-cli.phar search-replace 'http://ec2-184-73-148-109.compute-1.amazonaws.com' '' \
	--path=/home/ec2-user/wordpress --skip-columns=guid
  1. In a web browser, enter the new site URL of your WordPress blog to verify that the site is working properly again. If it is not, see https://wordpress.org/support/article/changing-the-site-url/ and https://wordpress.org/support/article/how-to-install-wordpress/#common-installation-problems for more information.

Resizing & Changing Type, EBS Snapshot, Attach & Detach EBS


Root device volume (Linux): /dev/xvda

Amazon EC2 instance root device volume

Deploy a React JS application on EC2 (Ubuntu)

  1. Update the system
  1. Install git
  1. Add an SSH key for git
  1. Clone the repo
  1. Install node and npm, then run npm install for the application dependencies
  1. Run npm start (or build with npm run build)
  1. Make port 3000 accessible in the security group
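The steps above can be sketched as a script for Ubuntu. The repository URL, directory name, and Node version are placeholders; the function is defined but not run here:

```shell
# Hypothetical deployment sketch; adjust the repo URL, app dir, and Node version.
deploy_react_app() {
  sudo apt-get update -y
  sudo apt-get install -y git
  # assumes an SSH key for your Git provider is already in ~/.ssh
  git clone git@github.com:your-user/your-app.git
  cd your-app || return 1
  # install Node.js and npm via NodeSource, then the app dependencies
  curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
  sudo apt-get install -y nodejs
  npm install
  npm run build   # or `npm start` for the dev server on port 3000
}
# port 3000 (or 80) must also be open in the instance's security group
```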

How to install and setup Git on EC2


Git is already installed; just add your SSH key and set your git config.
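A sketch of that setup; the name, email, and key comment below are placeholders to replace with your own:

```shell
# Placeholder identity values; substitute your own.
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
# Generate an SSH key if one does not already exist, then register the
# public key with your Git provider (GitHub, GitLab, CodeCommit, ...).
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_ed25519" ] || \
  ssh-keygen -t ed25519 -C "you@example.com" -f "$HOME/.ssh/id_ed25519" -N ""
cat "$HOME/.ssh/id_ed25519.pub"
```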