Amazon AWS EC2 hosting

This is part 1 of a 2-part post on running a modern React and Node app, with a PostgreSQL database, on a single AWS EC2 instance, using PM2 and Nginx as a reverse proxy.

  • Part 1: Setting up your AWS EC2 instance and running multiple Node processes simultaneously - [you are here]
  • Part 2: Pointing your domain name at the EC2 instance for free (without the use of AWS Route 53), and installing and configuring a PostgreSQL database - [coming soon]

Amazon Web Services (AWS) is a gargantuan beast of online services, and can be daunting for newcomers. One of its most popular services is EC2 - virtual cloud computing instances for hosting websites and apps. I’ve used Heroku and Digital Ocean before, and moved to AWS last year because of its flexibility and low cost (when done the right way). I’ve seen more people moving over to AWS lately, and a lot of them asking the same question: how best to run small applications? This post explains how I approached AWS for hosting separate front-end and back-end repositories on a single EC2 instance with a PostgreSQL database, using Nginx.

To clarify in more detail, I’ll be discussing the configuration of one EC2 instance to run an application made up of a client side (built in React or Vue, for example) and a server side (built with Node). This works by running the client and server parts of the project on different ports, with a custom domain name pointing at the EC2 instance via Nginx. The idea is to be the most cost-effective way of hosting a modern web application on AWS.

I’ll be going into a fair amount of detail here, as I see a lot of questions from people who have never used AWS.

Before we cover EC2…

Before I start, it’s worth noting that I recently tried AWS’s Elastic Beanstalk (EB) service (part of what made me want to write this post), and did not like it at all. It could be my ignorance, but I spent a couple of days trying to get the outcome described above and hit so many stumbling blocks that it just wasn’t worth it.

For those who don’t know, Elastic Beanstalk is a service which aims to automate the process of hosting on AWS. The service itself is free, and essentially all it does is set up separate EC2 instances for the different parts of your application. This means that for one React/Node/Postgres app, Elastic Beanstalk sets up three instances (front end, back end, and database), so for a small side project this can be relatively expensive.

Aside from the cost, I found the administration disjointed, as I was switching back and forth between Elastic Beanstalk and the underlying EC2 instances. There are separate dashboards for EB and EC2, and you need both to manage certain things. There are also CLI tools, but again the EB CLI is a separate tool from the AWS CLI.

The one part of Elastic Beanstalk I did like was the deployment feature, which makes pushing code very easy out of the box. Coming from a Heroku and DO background this is a great feature to have, but ultimately everything can be set up manually with a little effort, and it’s not as hard as you might think. I’ll cover deployment later in the post.

Creating your EC2 instance

Start off by heading over to AWS and creating an account if you don’t have one already. A new account gets 12 months of free hosting on an EC2 instance - specifically, 750 hours of free EC2 hosting a month, which works out to one t2.micro instance running constantly. Once the first 12 months are over, you are only charged for the time your EC2 instances are running - a pay-as-you-go system which works really well.

EC2 instances come in different “types” which differ in their specs. AWS publishes the cost of each instance type, and which one you select in production will depend on the size and usage of your site or app. This post uses the t2.micro because it’s available on the free tier.

Once you log in, go to services, and then EC2.

Before we create an instance, we want to create a Key Pair. This is a file which you keep on your local machine which is used to give you SSH access to your EC2 instance. It’s kind of like your SSH keys. Scroll down in the left side menu to the “NETWORK AND SECURITY” section and then “Key Pairs”, then select “Create Key Pair”.

Give the key whatever name you like, and create it. The key should download automatically to your machine; move it to your /Users/[USER]/.ssh directory.
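On macOS or Linux, that move can be done from the terminal - the key name and download location below are just placeholders for whatever yours are:

```
mkdir -p ~/.ssh
mv ~/Downloads/[KEY-NAME].pem ~/.ssh/
```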

Next, we create the EC2 instance, so go to your EC2 dashboard and click “Launch Instance”.

On step one, select the “Amazon Linux 2 AMI” (it should be the first on the list), and on step two select the “t2.micro” - this should say “free tier eligible”, which gives you the first 12 months free as mentioned above. Then click “Review and Launch”, as we can configure the other parts later when we need to.

Finally click “Launch” and select the Key Pair we just created from the drop down list. This assigns the key pair to the instance for you. Then click “Launch Instances”.

Once the instance is up and running, you’ll see it on the EC2 dashboard. Next, we’ll try to SSH into our new EC2 instance for the first time! On the dashboard, select your instance and click “Connect” at the top (to the left of the “Launch Instance” button); this opens a popup with instructions to connect. Make sure you change the permissions of the .pem file you downloaded earlier, as you won’t be allowed to SSH in otherwise. Finally, run the SSH command shown at the bottom of the popup, which will look something like:

ssh -i "/Users/[USER]/.ssh/[KEY-NAME].pem" ec2-user@[EC2-PUBLIC-DNS]
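If the key file is left with its default permissions, ssh will refuse to use it (with an “UNPROTECTED PRIVATE KEY FILE” warning), so restrict it to read-only for your user first:

```
chmod 400 "/Users/[USER]/.ssh/[KEY-NAME].pem"
```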

Wooh! We’re in.

Installing Node and Git

Now that we’re in, we’ll install Node so we can run our app. The easiest way is with Node Version Manager (NVM), so run:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash

Which installs NVM. Once installed, NVM cannot be used right away, because the path has not yet been added to your .bash_profile or .bashrc. The paths are only picked up when the user logs in again, so just run exit to log out of the EC2 instance, then SSH back in. Once back in, run nvm install node, which installs the latest versions of Node and npm. This instance is starting to feel a little more like home!
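If you’d rather not log out and back in, you can load NVM into your current shell by sourcing the script the installer added (this is the same snippet the installer appends to your profile):

```
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
nvm install node
node --version && npm --version
```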

In order to get my client and server side repositories onto the server, I like using Git. There are many deployment methods, including container systems like Docker with CI etc., but for the purposes of this post we’ll keep it basic. Go ahead and run sudo yum install git to install Git.

Amazon Linux installs packages using Yum, as it is based on Red Hat Enterprise Linux rather than Debian/Ubuntu. Debian-style package managers like apt-get will not work on Amazon Linux machines.

Installing our application and environment variables

Now we have Git installed, we can clone our repos! On your EC2 instance run cd /home/ec2-user, which takes you to your home directory on the instance. I like to have separate directories here for the front end and back end of my application, so create these directories with mkdir app server. We now have two new directories, so cd into each one and clone the relevant repo:

git clone https://github.com/[USERNAME]/[REPO].git ./

The trailing “./” stops the clone from creating a new directory with the files inside. Instead, all files are downloaded to the current directory.

You should now have a repo in each of your directories. For each of your new clones, run npm install to pull down dependencies.

One more step before we can run our application: setting up our environment variables. I remember spending a while looking for an easy way to add environment variables via the EC2 dashboard, and there’s no “easy” way to do this like there is in services such as Heroku. You can run something like:

export PORT=8081

And this will set the PORT env var (you can check by running printenv to see all vars), but it will not survive an instance restart - i.e. env vars set this way are not persistent. I find the easiest way is simply to create and update a .env file on the server. So the following must be done in both ‘app’ and ‘server’ directories:

sudo nano .env

And then add the relevant env vars. I tend to have API keys, ports, URLs etc. For example, the client-side application needs to know the URL of the API it’s connecting to, so I often have an API_URL env var in my client project.
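As a sketch, a server-side .env might look like the following - every value here is a made-up placeholder. Note that Node doesn’t read .env files automatically, so this assumes your app loads them itself (e.g. with the dotenv package):

```
PORT=3000
API_KEY=replace-with-your-key
DATABASE_URL=postgres://user:password@localhost:5432/mydb
```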

Once the .env files are saved in the relevant directories, you can now start your application! Run whatever script builds your app for production (npm run start or something similar).

Opening our EC2 application

Your application will now be running on a specific port. On your local machine you’d access your Node app via http://localhost:3000 by default. An EC2 instance works very much the same way, but the instance has a public DNS address we can use instead (we’ve not set up custom domain pointing yet, as that’s covered in the next post). To find your public DNS address, go to the AWS EC2 dashboard and select your instance; the address is shown in the instance description panel.

You should then be able to enter this address into your browser, with the port your React or Node app is running on, and see the application respond (for example ec2-35-178-204-57.eu-….com:3000). However, at this point the standard EC2 security settings have not yet been updated, so before we can access the EC2 instance in the browser, we must open some ports.

By default the web runs on port 80 (or 443 for HTTPS). Our Node application runs on port 3000, for example, and our React front end might run on 8081. We therefore need all four of these ports opened. I’ll cover this in more detail later, but in short: visits from the browser still arrive on port 80, and Nginx reverse proxies each request to a port of our choice depending on how the server is accessed. For example, Nginx might direct all requests to myapp.com (port 80 by default) to the port Node is serving on (port 3000). Again, I’ll go into more detail later.

In order to open ports, go to the EC2 dashboard, “NETWORK AND SECURITY” in the left menu, and scroll down to “Security Groups” and select your security group (this can be found on the EC2 dashboard when you select the instance).

Select the “Inbound” tab, then “Edit”, to open up a table of ports (SSH should be the only one open by default). Go ahead and open 80, 443, and whichever two ports run your Node and React applications. Select “Anywhere” in the Source drop-down to open them to everyone, as we want all traffic to be able to browse the site.
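For reference, the same inbound rules can be added from your local machine with the AWS CLI, if you have it installed and configured - the security group ID below is a placeholder for your own:

```
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0
```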

Once the ports are opened, we can now visit the site in the browser! If it’s a Node application with no default route for “/”, it will probably just show a message saying it can’t GET “/”, but this is good, as it means your server is on its way to becoming ready for action.

Running both client and server at the same time with PM2

At this point you can run either the client or server side of the app by running npm run start, and you will see that the application is running in the terminal. We now need the front and back ends running at the same time, so they can communicate with one another. There are a few ways to do this, but the easiest, I feel, is a Node process manager called PM2. PM2 allows us to run multiple processes at the same time, have them run again on server restart, and provides monitoring, among a bunch of other great features.

Start by installing PM2 globally: npm i -g pm2

PM2 can be run directly on a file - for example, you could run pm2 start server.js on your main Node or React build file, and it will run as a PM2 process. However, we want to run multiple processes at once, and the best way to do that is with a PM2 config file. In the directory above “app” and “server” (i.e. your /home/ec2-user/ directory), run touch ecosystem.config.js. Then run sudo nano ecosystem.config.js to open the file in the editor, and add the following config:

module.exports = {
  apps : [{
    name: 'app',
    cwd: './app',
    script: 'npm',
    args: 'start',
    autorestart: true,
    watch: false,
    max_memory_restart: '1G'
  },{
    name: 'server',
    cwd: './server',
    script: 'npm',
    args: 'start',
    autorestart: true,
    watch: false,
    max_memory_restart: '1G'
  }]
};

This config file creates the two processes, named by the “name” fields. As the config file is not in the directories we’re actually running the processes in (it’s one level above), we need the “cwd” field to tell PM2 where to run the start scripts from. Finally, the “script” and “args” fields determine which command to run. This config runs npm run start for both the app and server repos, so update it if your start scripts are different.

Assuming there are no problems with the config file, start your application by running:

pm2 start

This automatically finds your config file and runs the instances. At any point you can check which services are running by running pm2 list.

Another useful PM2 command restarts the processes: pm2 restart all. This is required after changing env files, Nginx configs, etc., which we’ll cover later.
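To get the “run on server restart” behaviour mentioned earlier, PM2 also needs to be registered as a system service and told which processes to bring back:

```
pm2 startup   # prints a command to register PM2 with the init system; run the command it prints
pm2 save      # saves the current process list so PM2 restores it after a reboot
```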

Now PM2 is running both the client and server projects, you’ll be able to visit each part of the application using the default public DNS and whichever ports you assigned to app and server. For example, you should be able to visit the following:

App: http://ec2….eu-west-2.compute.amazonaws.com:8081/
Server: http://ec2….eu-west-2.compute.amazonaws.com:3000/

And these will show your application. Your server will likely need additional configuration to set up a database connection (which we’ll cover later), but you can view the logs from both app and server by running:

pm2 logs

This will give you an indication of any problems, but the important part is that both app and server parts are running simultaneously!

Thanks for reading part 1

I’ll be posting part 2 shortly, which carries on from here by setting up domain pointing and a reverse proxy with Nginx, and installing and configuring a self-hosted PostgreSQL database.