How to Manually Provision a Web Stack on AWS Using EC2
In this article we will see how to run a simple multi-tier web application stack in the AWS cloud without using most of the managed services AWS offers, except for the load balancer and Route 53. So let's dive in. Below is the architecture we will be building, which hosts a Java-based web application.
AWS services we will be using in this setup:
- EC2 instances for the Tomcat, RabbitMQ, MySQL and Memcached services
- ACM (AWS Certificate Manager)
- Application Load Balancer (ALB)
- S3 for storing artifacts
- Route 53
Brief explanation of setup:
Users will access our website via the URL (projects.isakmohammed.co.uk). We will point that URL at the ALB endpoint by adding a CNAME entry in the DNS settings of our domain. (There are plenty of domain registrars that sell domain names at a reasonable cost; I bought mine from GoDaddy and will use it throughout this setup.) Users will connect to the ALB over HTTPS, which we will enforce by allowing only HTTPS traffic in the ALB security group and by loading the certificate for HTTPS encryption from ACM into the ALB. The ALB will route requests to our Tomcat instance. We will allow the Tomcat instance to accept traffic only from the ALB on port 8080 by setting inbound rules in the Tomcat security group. The application running inside the Tomcat instance will connect to the backend instances with the help of Route 53: we will record the backend instances' IP addresses in a Route 53 private hosted zone, and the frontend will reach them using the names referenced in the application source code. Mapping names to IP addresses in Route 53 gives us the flexibility to replace backend instances easily if issues arise, without changing much in the application logic.
Prerequisites
For this setup, you need the following:
- An AWS account
- AWS CLI installed on your local machine
- JDK8 installed on your local machine
- Maven installed on your local machine
- Editor of your choice
Note: we will be using the default VPC for this setup, a CentOS 7 AMI for all three backend instances, and an Ubuntu 18.04 AMI for the Tomcat instance, since Ubuntu makes it easy to control the Tomcat service with the systemctl command without much extra setup.
Steps involved in the setup:
- Log in to your AWS account & choose the region nearest to you.
- Create security groups for the frontend and backend services.
- Create key pairs.
- Launch the backend instances with user data.
- Configure Route 53 to map the names of the backend services to the IP addresses of the instances they run on. This lets the frontend connect to the backend services without worrying about IP addresses.
- Launch the frontend Tomcat instance and update the application properties file with the backend service names configured in Route 53.
- Build artifacts from the source code on your local machine and upload them to an S3 bucket.
- Manually download the artifacts from S3 onto the Tomcat EC2 instance by assigning an IAM role to the Tomcat instance.
- Set up the ALB with HTTPS [certificate from ACM].
- Map the ALB endpoint to the website name in the GoDaddy DNS settings and verify the setup.
1. Log in to your AWS account & choose the region nearest to your location:
It's an AWS best practice to log in with non-root user credentials. The root account should not be used for everyday tasks, even administrative ones. Instead, the root account should only be used to create your first IAM users, groups and roles; after that, lock it away securely and use it only for the few account- and service-management tasks that require it.
Once you log in to the AWS console, choose the region from the drop-down list in the top right corner.
2. Creating security groups for front end and backend services
Once you have selected the region, search for the EC2 service in the Management Console search bar. On the EC2 service page, click Security Groups in the left-hand column, then click Create security group.
We will set up the security group for the ALB to allow only HTTPS traffic on port 443 as an inbound rule, and leave the default outbound rule as is, which allows all traffic to anywhere (0.0.0.0/0).
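If you prefer the CLI, the same security group can be sketched with the AWS CLI. This is a minimal sketch, assuming the default VPC; the group name is a placeholder, and the console steps described above achieve exactly the same thing:

```shell
# Create the ALB security group in the default VPC (group name is a placeholder).
ALB_SG_ID=$(aws ec2 create-security-group \
  --group-name alb-sg \
  --description "ALB - allow HTTPS from anywhere" \
  --query GroupId --output text)

# Inbound rule: HTTPS (443) from anywhere. The default outbound rule
# (all traffic to 0.0.0.0/0) is left untouched.
aws ec2 authorize-security-group-ingress \
  --group-id "$ALB_SG_ID" \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```

Capturing the group ID with `--query GroupId` makes it easy to reference the group from later rules.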
Next we will create the security group for the Tomcat instance, which allows traffic on port 8080 only from the ALB, since the Tomcat service listens on port 8080. (Note that the ALB terminates HTTPS; traffic from the ALB to Tomcat is plain HTTP.) In the source field of the inbound rule I have selected the ALB's security group, which restricts the Tomcat instance to accepting traffic only from the ALB. We will also add an extra inbound rule allowing SSH from our own IP address only, so that we can log in to the Tomcat instance to download the artifact from S3 and set it as the default application. We will leave the default outbound rule as is, which allows all traffic to anywhere (0.0.0.0/0) for patching, updates, etc.
We will create the security group for our backend services: MySQL, Memcached and RabbitMQ. MySQL runs on port 3306, Memcached on 11211 and RabbitMQ on 5672. We will add inbound rules so that these services accept traffic on those ports only from the Tomcat instance's security group. We also need an extra self-referencing rule so that the three backend services can communicate among themselves when responding to queries from our application server.
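The backend rules, including the self-referencing one, can also be sketched with the AWS CLI. The group names and the Tomcat security group ID below are placeholder assumptions:

```shell
# Placeholder: substitute the real ID of the Tomcat security group.
TOMCAT_SG_ID=sg-0aaaaaaaaaaaaaaaa

# Create the backend security group in the default VPC.
BACKEND_SG_ID=$(aws ec2 create-security-group \
  --group-name backend-sg \
  --description "MySQL, Memcached and RabbitMQ" \
  --query GroupId --output text)

# Allow each backend service port only from the Tomcat security group.
for port in 3306 11211 5672; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$BACKEND_SG_ID" \
    --protocol tcp --port "$port" \
    --source-group "$TOMCAT_SG_ID"
done

# Self-referencing rule: members of this group can reach each other
# on any port, so the backend services can talk among themselves.
aws ec2 authorize-security-group-ingress \
  --group-id "$BACKEND_SG_ID" \
  --protocol=-1 \
  --source-group "$BACKEND_SG_ID"
```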
3. Create key pairs to log in to the frontend instance:
Once we have created the security groups, we will create a key pair to log in to the frontend instance. Click Key Pairs in the left-hand column of the EC2 console page, then click Create key pair in the top right corner. A key pair consists of a private key and a public key, and is the set of security credentials we use to prove our identity when connecting to an instance. We will name it 'FullStackWeb' and select the pem format, as we will be using PowerShell (on a Windows machine) to connect to the instance. When you click Create key pair, the private key is downloaded to the Downloads folder on your local machine, and AWS keeps the corresponding public key. (Note: we don't need to SSH into the backend instances, as our user-data scripts will take care of setting everything up.)
4. Launch the frontend and backend instances with user data [bash scripts]
Launching the backend instances
Launching an EC2 instance and setting up the MySQL (MariaDB) server: We will first launch the EC2 instance for the MariaDB server with user data. We will select a CentOS 7 AMI from the marketplace; an instance type of t2.micro should be fine. We will also enable protection against accidental termination, and then paste the bash script into the User data section. The bash script installs the dependencies for the MariaDB server, downloads the source code from the internet to initialise the database schema from a dump file, and configures the OS firewall so that MariaDB is reachable only on port 3306. So we are setting the firewall at two levels: once through the security group and once through the firewall provided by the OS.
We will just go with the default root EBS volume of 8 GB storage and enable the Delete on termination option. We will add tags.
We will select the security group we created earlier for the backend services.
We will review and launch the instance using the key pair we created earlier
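The article's actual user-data script isn't reproduced here, but a minimal sketch of such a script for a CentOS 7 MariaDB instance might look like the following. The database name, credentials and dump-file location are all placeholder assumptions, and the source repository is deliberately left as a placeholder:

```shell
#!/bin/bash
# User data for the MariaDB instance (CentOS 7). Database name, user,
# password and dump-file path below are placeholder assumptions.
yum install -y mariadb-server git
systemctl start mariadb
systemctl enable mariadb

# Create the application database and user, then load the schema dump
# shipped with the application source.
mysql -e "CREATE DATABASE accounts;"
mysql -e "GRANT ALL PRIVILEGES ON accounts.* TO 'admin'@'%' IDENTIFIED BY 'admin123';"
# git clone <app-source-repo> /tmp/app
# mysql accounts < /tmp/app/src/main/resources/db_backup.sql

# OS-level firewall: expose only port 3306.
yum install -y firewalld
systemctl start firewalld
systemctl enable firewalld
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --reload
systemctl restart mariadb
```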
Launching an EC2 instance and setting up the Memcached service: We will launch an EC2 instance for the Memcached service and set it up using a bash script in the User data section. Again we will select a CentOS 7 AMI from the marketplace, and an instance type of t2.micro should be fine for this as well. We will also enable protection against accidental termination, and then paste the bash script into the User data section. The bash script installs Memcached and configures it to serve on port 11211.
We will just go with default Root EBS volume of 8 GB storage and also enable delete on termination option.
We will tag it
We will select the security group we set up earlier for backend services
We will review and launch using the same pem key we created earlier
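A minimal sketch of the Memcached user-data script might look like this (the sed line is an assumption that applies only if the packaged config binds Memcached to localhost):

```shell
#!/bin/bash
# User data for the Memcached instance (CentOS 7).
yum install -y memcached
systemctl start memcached
systemctl enable memcached

# If the packaged config binds memcached to 127.0.0.1, rebind it to all
# interfaces so the Tomcat instance can reach it on port 11211; the
# security group still restricts who can actually connect.
sed -i 's/127.0.0.1/0.0.0.0/g' /etc/sysconfig/memcached
systemctl restart memcached
```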
Launching an EC2 instance and setting up the RabbitMQ service: We will launch an EC2 instance for the RabbitMQ service and set it up using a bash script in the User data section. Again we will select a CentOS 7 AMI from the marketplace, and an instance type of t2.micro should be fine for this as well. We will also enable protection against accidental termination, and then paste the bash script into the User data section. The bash script installs the dependencies for RabbitMQ, downloads the RabbitMQ RPM and its signing key, and installs the RPM. It then starts and enables the RabbitMQ service, makes a small configuration change, adds a RabbitMQ user called test with password test, gives that user the administrator tag, and finally restarts the RabbitMQ service.
We will just go with the default root EBS volume of 8 GB storage and enable the Delete on termination option.
We will tag it.
We will select the security group we set up earlier for the backend services.
We will review and launch using the same pem key we created earlier.
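A sketch of the RabbitMQ user-data script is below. The RPM and key URLs vary by version, so they are left as placeholders to copy from the official install docs; the test/test credentials match the ones described above:

```shell
#!/bin/bash
# User data for the RabbitMQ instance (CentOS 7).
yum install -y epel-release wget

# Install Erlang (a RabbitMQ dependency), then import the RabbitMQ signing
# key and install the server RPM. Exact URLs/repos vary by version -- copy
# them from rabbitmq.com, so they are left as placeholders here.
# yum install -y erlang
# rpm --import <rabbitmq-signing-key-url>
# rpm -Uvh <rabbitmq-server-rpm-url>

systemctl start rabbitmq-server
systemctl enable rabbitmq-server

# Allow non-loopback logins, then create the application user with the
# administrator tag (user 'test', password 'test', as described above).
echo "[{rabbit, [{loopback_users, []}]}]." > /etc/rabbitmq/rabbitmq.config
rabbitmqctl add_user test test
rabbitmqctl set_user_tags test administrator
systemctl restart rabbitmq-server
```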
5. Configuring Route 53 to map backend service IP addresses to names:
Route 53 integrates seamlessly with other AWS services and can map domain names to load balancers, EC2 instances, S3 buckets, and other AWS resources. We will use it to map the backend IP addresses to names, and put those names in the application properties file so the frontend Tomcat instance can connect to the backend. Go to Route 53 via the services search bar and click Create hosted zone. Give a domain name and select the type as private hosted zone, since we want traffic to be routed within the default VPC. Select the default VPC, which we have been using for this project, then click Create hosted zone.
Once the hosted zone is created, we will create records for all the backend services to map their IP addresses to names. First let's create one for MariaDB: click Create record, select simple routing, enter the name you want the Tomcat instance to use for the database, and put the database's IP address in the value field. Then click Create record.
Similarly, we will create a record for Memcached.
Similarly, we will create a record for RabbitMQ.
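The same hosted zone and record can be sketched with the AWS CLI. The zone name, VPC ID, region, hosted-zone ID, record name and IP address below are all placeholder assumptions; the console flow described above is equivalent:

```shell
# Create a private hosted zone attached to the default VPC
# (zone name, region and VPC ID are placeholders).
aws route53 create-hosted-zone \
  --name backend.internal \
  --caller-reference "$(date +%s)" \
  --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
  --hosted-zone-config Comment="backend services",PrivateZone=true

# Map a name to the MariaDB instance's private IP. The hosted-zone ID
# comes from the previous command's output.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "db01.backend.internal",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "172.31.10.11"}]
      }
    }]
  }'
```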
6. Launching the frontend Tomcat instance:
We will provision the EC2 instance for Tomcat: select an Ubuntu 18.04 AMI and type t2.micro, and enable the Protect against accidental termination option. Finally, paste the bash script into the User data section to install and start Tomcat.
We will go with the default storage.
We will tag it as below.
We will select the security group we created earlier for the frontend service and launch the instance with the same key pair we created earlier.
7. Update the application properties file with the backend service names configured in Route 53, build the application from source on the local machine, and upload it to an S3 bucket:
Once all the frontend and backend instances are running, we will build the application artifact using Maven. For that we need JDK 8 and Maven installed; we can use Chocolatey on Windows or Homebrew on macOS. As shown below, I have already installed them on my Windows PC. Once we have the dependencies to build the artifact, we will update the application configuration file to ensure the backend services are mapped correctly, i.e. update the backend service entries with the hosted-zone names we gave them in Route 53. Then we will build the artifact with the 'mvn install' command; this builds the application and generates a '.war' artifact in the target folder, as shown below. Once we have the artifact, we will upload it to an S3 bucket, so that we can later download it from the same bucket on the Tomcat server. For this we need to install the AWS CLI and configure it with an access key and secret key. Again, you can install the AWS CLI using Chocolatey on Windows or Homebrew on macOS; I will use Chocolatey, as shown below.
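The build-and-upload step boils down to a couple of commands on the local machine. The bucket name and the artifact filename are placeholder assumptions (the actual .war name comes from your project's pom.xml):

```shell
# Build the application from the project root; the packaged .war
# lands in the target/ directory.
mvn install

# Create the bucket and upload the artifact (names are placeholders).
aws s3 mb s3://my-artifact-bucket-example
aws s3 cp target/my-app.war s3://my-artifact-bucket-example/
```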
We will create an IAM user that can create a bucket and upload the artifacts into it. Following the principle of least privilege, we will attach a custom policy granting only the permissions needed to create the bucket and upload the artifacts. Once the user is created, we will configure the AWS CLI with that user's access key, secret key, region and output format. With the CLI configured, we will create the bucket and upload the artifact.
The artifact is now in place, but the Tomcat instance needs permission to download it from the S3 bucket. So we will create an IAM role and assign it to the Tomcat EC2 instance; again, following least privilege, we grant only the permissions that are needed. Go to the IAM service and click Create role, select the trusted entity type as AWS service / EC2, and attach a custom S3 policy that only allows downloading the artifact from S3. Give the role a name and click Create role. Once the role is created, go to our Tomcat instance, modify its IAM role, and attach the new role so the instance can access the S3 bucket.
8. Manually download the artifact from S3 onto the Tomcat EC2 instance:
Once the Tomcat instance has the necessary permissions, we will SSH into it and become the root user. We will delete the /var/lib/tomcat8/webapps/ROOT directory, which contains the default application, then download the artifact from the S3 bucket and copy it into the same location as /var/lib/tomcat8/webapps/ROOT.war, so that Tomcat serves our custom application. For this we need the AWS CLI on the Tomcat instance, so we will first install it and then perform these actions as shown below. Once the AWS CLI is installed, and since the instance has permission to access S3, we download the artifact into the /tmp/ directory on the Tomcat instance, copy it to /var/lib/tomcat8/webapps/ROOT.war, and then start the tomcat8 service.
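The deployment steps on the Tomcat instance can be sketched as follows (run as root on the Ubuntu 18.04 instance; the bucket and artifact names are placeholders, and the S3 access comes from the instance role rather than stored credentials):

```shell
# Install the AWS CLI on the Tomcat instance.
apt update && apt install -y awscli

# Remove the default application served by Tomcat.
systemctl stop tomcat8
rm -rf /var/lib/tomcat8/webapps/ROOT

# Download the artifact (names are placeholders) and deploy it as ROOT.
aws s3 cp s3://my-artifact-bucket-example/my-app.war /tmp/
cp /tmp/my-app.war /var/lib/tomcat8/webapps/ROOT.war

# On start, Tomcat extracts ROOT.war and serves it as the default app.
systemctl start tomcat8
```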
Once the tomcat8 service starts, it extracts ROOT.war, and when we list the directory we can see the application configuration file. To check whether our Tomcat web application can reach the backend services via the names we created in the Route 53 private hosted zone, we can install telnet and test connectivity. As shown below, our Tomcat instance was able to connect successfully to the MariaDB server.
9. Set up the ALB with HTTPS [certificate from ACM]:
As everything seems to be running, we will set up the load balancer. First we create the target group: under the Load Balancing section of the EC2 service page, click Target Groups, then Create target group. We select the target type as Instance, since the ALB will forward traffic to our Tomcat instance; the Tomcat service runs on port 8080, so we select the HTTP protocol on port 8080. We set the healthy threshold to 3 (the number of consecutive successful health checks required before the ALB considers an unhealthy target healthy) and click Next. We select our Tomcat EC2 instance and click Create target group.
Once the target group is created, we create the ALB: click Load Balancers in the left-hand column of the EC2 service page, click Create load balancer, choose the Application Load Balancer type and give it a name. As we will only be allowing HTTPS traffic, we configure a listener on port 443, and we select all availability zones since we will be setting up autoscaling next. Then click Configure security settings. Since we are permitting HTTPS traffic, we need a certificate, and we choose the certificate from ACM that we set up earlier. Then click Configure security groups, select the security group we created earlier for the ALB, and click Configure routing. Finally we select the target group the ALB should route traffic to, click Register targets, review, and create the ALB.
10. Map the ALB endpoint to the website name in the GoDaddy DNS:
Once the ALB is provisioned, we copy its endpoint and add it as a CNAME in the domain provider's DNS settings, so that traffic is directed to the ALB when users hit the URL projects.isakmohammed.co.uk. Once the entry is updated in the registrar's settings, and the ALB is active and running, we can verify the setup by hitting projects.isakmohammed.co.uk. We can see that traffic is indeed forwarded to the ALB endpoint, that it displays the pages we want, and that the HTTPS traffic is encrypted with an Amazon-issued certificate. So our setup is working as intended per the application source code logic. However, there are some cons to this setup, as listed below.
Cons:
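The verification steps can be sketched with a few commands. The Route 53 record name below is a placeholder matching whatever you created in the private hosted zone:

```shell
# On the Tomcat instance: check backend reachability via the private-zone
# name (name is a placeholder for the record you created in Route 53).
apt install -y telnet
telnet db01.backend.internal 3306

# From your local machine: confirm the CNAME resolves to the ALB and
# that HTTPS responds with a valid Amazon-issued certificate.
nslookup projects.isakmohammed.co.uk
curl -I https://projects.isakmohammed.co.uk
```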
- There is extra operational overhead with this setup, as we are manually setting up the instances, the ALB, etc.
- If we want to scale this setup, we have to perform more manual steps to set up Auto Scaling groups for the frontend and backend instances.
- We have to apply software patches and security updates manually and regularly.
- All of the above steps involve extra time and cost.
So the best approach to building this simple architecture is to use the PaaS offerings from AWS, such as managed RDS and Elastic Beanstalk, which not only reduce this operational overhead but also let us spend more time on product and application development rather than routine maintenance.