A project demonstrating how to deploy a basic modern web app on AWS with Terraform.
It may be used as a starting point for a more complete application. This stack should fall under the AWS free tier (the first 12 months); some components were left out because they do not have a free tier.
You have been asked to create a website for a modern company that has recently migrated their entire infrastructure to AWS. They want you to demonstrate a basic website with some text and an image, hosted and managed using modern standards and practices in AWS.
You can create your own application, or use open source or community software. The proof of concept is to demonstrate hosting, managing, and scaling an enterprise-ready system. This is not about website content or UI.
Requirements:
- Deliver the tooling to set up an application which displays a web page with text and an image in AWS. (AWS free-tier is fine)
- Provide and document a mechanism for scaling the service and delivering the content to a larger audience.
- Source code should be provided via a publicly accessible GitHub repository.
- Provide basic documentation to run the application along with any other documentation you think is appropriate.
From the background, I will deploy a simple Single Page Application (SPA) with a backend fetching some data from a database.
As the goal is to demonstrate a basic website on AWS, I have kept things simple by using a serverless architecture with the following AWS services:
- An S3 bucket to host the static files. Angular is used to generate the frontend, which displays an image describing the architecture and some data from a DynamoDB table. The files in the bucket are NOT public; the bucket is used as an origin for a CloudFront distribution.
- A CloudFront distribution will serve the web application in front of the S3 bucket. Using CloudFront speeds up website loading: the static content (HTML, CSS and JS) is available from AWS data centers around the world, and no action is required to scale the frontend. The distribution price class is set to PriceClass_100 (North America, Europe and Israel); it defines from which edge locations CloudFront serves requests. To target another audience, change the price class.
- An S3 bucket for CloudFront standard access logs. It can be connected to AWS Athena for further analysis.
- A Lambda function used as the backend behind an HTTP API powered by AWS API Gateway. This API exposes a single endpoint (/users) to get the users from the DynamoDB table described below. If your backend and traffic are expected to grow significantly in size and complexity, you may consider a Docker container on ECS instead. There is also a quota on Lambda concurrent executions to be aware of and monitor when serving the website to a larger audience.
- X-Ray for tracing on the Lambda function. Lambda logs are pushed to a CloudWatch log group.
- A DynamoDB table will store the data. The data consists of some fake users (see users.json); Terraform reads that file and puts the items in DynamoDB. Provisioned billing mode is used for this project; depending on your usage, you may consider on-demand mode or increasing the provisioned capacities (sketched below).
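
As an illustration of the DynamoDB part, here is a minimal sketch of a provisioned-mode table; the resource name, key schema and capacities are illustrative, not the project's exact definition:

```hcl
resource "aws_dynamodb_table" "users" {
  name           = "${var.prefix}-${var.env}-users" # naming is an assumption
  billing_mode   = "PROVISIONED"                    # switch to "PAY_PER_REQUEST" for on-demand
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "id"                             # assumed key schema

  attribute {
    name = "id"
    type = "S"
  }
}
```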
This project also demonstrates the following features of Terraform:
- Cross-region deployment, with the CloudFront ACM certificate in us-east-1 (mandatory); see the provider sketch after this list
- Multi-cloud, by using an OVH DNS zone instead of AWS Route 53
- Terraform local provisioners to deploy the frontend to the S3 bucket only when there are changes
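
The cross-region certificate relies on a second AWS provider alias pinned to us-east-1 (the alias name matches the providers table further down). A minimal sketch, with an illustrative domain name:

```hcl
# Provider alias used only for resources CloudFront requires in us-east-1.
provider "aws" {
  alias  = "cloudfront-us-east-1"
  region = "us-east-1"
}

# The ACM certificate attached to the distribution must live in us-east-1.
resource "aws_acm_certificate" "cloudfront" {
  provider          = aws.cloudfront-us-east-1
  domain_name       = "demo.haidara.io" # illustrative value
  validation_method = "DNS"
}
```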
If you have an existing DNS zone on OVH, you can leverage it to put a custom domain on top of your CloudFront distribution. To use it, set the variable ovh_domain_conf.
Example:
# Will use the domain haidara.io on OVH to create a DNS record with the following format: ${var.prefix}-${var.env}.haidara.io
export TF_VAR_ovh_domain_conf='{"dns_zone_name": "haidara.io"}'
# Or this one will create demo.haidara.io
export TF_VAR_ovh_domain_conf='{"dns_zone_name": "haidara.io", "subdomain": "demo"}'
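
Under the hood, the OVH provider can create the DNS record pointing at the distribution. A hedged sketch with illustrative values (the project's actual resource and the certificate validation records may differ):

```hcl
# CNAME record in the OVH zone pointing the subdomain at the CloudFront domain name.
resource "ovh_domain_zone_record" "frontend" {
  zone      = "haidara.io"
  subdomain = "demo"
  fieldtype = "CNAME"
  ttl       = 300
  target    = "d1n3neitxvtko9.cloudfront.net."
}
```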
Authentication is not covered by this project. If you want to go further on that topic, check out AWS Cognito and/or AWS Amplify.
Terraform is used to deploy the architecture above and copy the static files to S3. The static files are copied only when:
- The S3 bucket has changed
- index.html has changed (a new build has been made with some changes in the files)
- config.json has changed. config.json contains some configuration such as the API URL and the environment name; the template for this file is located at frontend/src/assets/config.tpl.json.
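
A minimal sketch of that pattern, assuming a null_resource with file-hash triggers and an aws s3 sync local-exec command (resource and path names are illustrative):

```hcl
resource "null_resource" "frontend_sync" {
  # Any change to these values forces the provisioner to run again.
  triggers = {
    bucket_id   = aws_s3_bucket.frontend.id                                    # hypothetical bucket resource name
    index_html  = filemd5("frontend/dist/devops-challenge/index.html")
    config_json = filemd5("frontend/dist/devops-challenge/assets/config.json") # assumed location of the rendered config
  }

  # Pushes the build artifacts to the bucket with the AWS CLI.
  provisioner "local-exec" {
    command = "aws s3 sync frontend/dist/devops-challenge s3://${aws_s3_bucket.frontend.id} --delete"
  }
}
```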
This mono repository has the following structure:
.
├── backend                  # The backend in Lambda
│   ├── README.md
│   ├── main.py
│   └── users.json           # Some fake users list
├── frontend                 # Angular frontend application
│   ├── src
│   ├── README.md
│   ├── angular.json
│   ├── karma.conf.js
│   ├── package-lock.json
│   ├── package.json
│   ├── tsconfig.app.json
│   ├── tsconfig.json
│   └── tsconfig.spec.json
├── img
│   ├── architecture.drawio
│   ├── architecture.png
│   └── screenshot.png
├── README.md
├── api-lambda.tf            # API Gateway, Lambda function
├── data.tf                  # Data sources
├── frontend.tf              # Frontend resources: S3 buckets, cloudfront
├── main.tf                  # Terraform Providers
├── monitoring.tf            # SNS, alarms
├── outputs.tf               # Terraform outputs
├── ovh-acm.tf               # OVH and ACM configuration
└── variables.tf             # Variables for terraform
To deploy the application, you need:
- An AWS account and an IAM user with the required permissions. The user's credentials need to be configured in your terminal.
- [Optional] Angular CLI and Node to build the frontend. A prebuilt bundle is available on the releases page.
- AWS CLI to sync the static files to S3
- Terraform CLI
In case you want to build the frontend:
npm install -g @angular/cli
cd frontend
npm install
npm run build # Will generate the artifacts in dist/devops-challenge
Otherwise, download front-devops-challenge-v1.0.0.zip from the releases page and extract it in the frontend folder. You should end up with this structure: frontend/dist/devops-challenge.
To deploy the application, from the root folder of the repository:
# Export the required AWS credentials/variables first
terraform init
terraform apply # Then enter yes
The output should look like this:
users_endpoint = "https://f08q1l967c.execute-api.eu-west-2.amazonaws.com/users"
website_url = "https://d1n3neitxvtko9.cloudfront.net"
As mentioned above, no authentication mechanism is provided by this project. If the web application is meant to serve
some restricted content/features, AWS Cognito may help.
The allowed origins for the API are also set to * for simplicity. One way to avoid cross-origin requests is to put the API Gateway as another origin behind the same CloudFront distribution, under the /api path; a sketch follows.
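
A hedged, excerpt-style sketch of that second origin and path-based behavior (resource names are hypothetical, and only the added blocks are shown, not a complete distribution):

```hcl
# AWS managed cache policy that disables caching for API responses.
data "aws_cloudfront_cache_policy" "caching_disabled" {
  name = "Managed-CachingDisabled"
}

resource "aws_cloudfront_distribution" "frontend" {
  # ... existing S3 origin, default_cache_behavior, viewer_certificate, etc. ...

  # Second origin: the HTTP API endpoint, without its https:// scheme.
  origin {
    domain_name = replace(aws_apigatewayv2_api.backend.api_endpoint, "https://", "")
    origin_id   = "api-gateway"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  # Route /api/* to the API instead of the S3 origin.
  ordered_cache_behavior {
    path_pattern           = "/api/*"
    target_origin_id       = "api-gateway"
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]
    viewer_protocol_policy = "https-only"
    cache_policy_id        = data.aws_cloudfront_cache_policy.caching_disabled.id
  }
}
```

Note that with this routing the origin receives the /api prefix, so the API routes (or a rewrite at the edge) would have to account for it.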
To go further, one can enable AWS Web Application Firewall (WAF) on the CloudFront distribution and on the API Gateway (if using a REST API). It protects against some common web exploits and bots.
To restrict data access, a KMS key with a restricted policy can be applied to the DynamoDB table; only the necessary services and people should have access to this key.
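
A sketch of what that could look like, assuming a customer-managed key (the restrictive key policy itself is omitted, and only the added encryption block is shown for the table):

```hcl
# Customer-managed key; its key policy is where access would be restricted.
resource "aws_kms_key" "dynamodb" {
  description         = "CMK for the users table"
  enable_key_rotation = true
}

resource "aws_dynamodb_table" "users" {
  # ... same arguments as the table sketch earlier ...

  # Encrypt the table with the customer-managed key instead of the AWS-owned one.
  server_side_encryption {
    enabled     = true
    kms_key_arn = aws_kms_key.dynamodb.arn
  }
}
```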
Some components and metrics to monitor:
- Alarm on Lambda function error metric (implemented)
- Alarms on CloudFront metrics: 4xxErrorRate, 5xxErrorRate
- Alarms on DynamoDB throttle metrics (implemented), plus ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits
- API Gateway metrics
These alarms can be configured in CloudWatch with an SNS topic as destination (implemented). The alarm names start with the environment name to quickly identify which environment is affected by the alarm; a sketch follows.
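
For reference, a minimal sketch of the Lambda error alarm wired to an SNS topic; names and thresholds are illustrative, and the function name is assumed to come from the Lambda module listed further down:

```hcl
resource "aws_sns_topic" "alarms" {
  name = "${var.env}-${var.prefix}-alarms" # naming is illustrative
}

resource "aws_cloudwatch_metric_alarm" "lambda_errors" {
  alarm_name          = "${var.env}-lambda-errors" # env prefix as described above
  namespace           = "AWS/Lambda"
  metric_name         = "Errors"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"
  alarm_actions       = [aws_sns_topic.alarms.arn]

  dimensions = {
    FunctionName = module.lambda_function.lambda_function_name # output of the Lambda module
  }
}
```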
As X-Ray is enabled on the Lambda function, you can see how the function behaves when calling other services.
As everything is done with Terraform, we could implement the following jobs in any CI/CD tool:
- Jobs for linting/validation:
- Lint on backend: pylint,...
- Lint on frontend: tslint,...
- Lint on Terraform: tflint, terraform validate
- Build the frontend and export the build as artifacts
- Tests:
- Unit tests on the backend
- E2E tests on the frontend by mocking the backend
- Terraform init/plan (needs the build artifacts)
- Terraform apply (needs the build artifacts). It may be a manual job
Before launching Terraform in a pipeline, we should first set up an S3 backend (for example) to store the state file.
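
A minimal sketch of such a backend configuration (bucket, key and lock table names are illustrative and must exist beforehand):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"                 # pre-existing state bucket
    key            = "devops-challenge/terraform.tfstate"
    region         = "eu-west-3"
    dynamodb_table = "terraform-locks"                    # optional lock table
    encrypt        = true
  }
}
```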
Terraform requirements:

Name | Version |
---|---|
terraform | >= 1.7 |
aws | ~> 5 |
external | ~> 2 |
null | ~> 3 |
ovh | ~> 0.37 |

Providers:

Name | Version |
---|---|
aws | ~> 5 |
aws.cloudfront-us-east-1 | ~> 5 |
ovh | ~> 0.37 |
terraform | n/a |

Modules:

Name | Source | Version |
---|---|---|
lambda_function | terraform-aws-modules/lambda/aws | 7.2.1 |

Inputs:

Name | Description | Type | Default | Required |
---|---|---|---|---|
aws_region | Region to deploy to | string | "eu-west-3" | no |
default_tags | Default tags to apply to resources | map(string) | { | no |
env | Name of the environment | string | "dev" | no |
invalid_cache | Flag indicating if we should invalidate the Cloudfront Cache after each deployment of the files to the S3 bucket. | bool | false | no |
ovh_domain_conf | OVH DNS zone configuration if you want to use a custom domain. | object({ | { | no |
prefix | A prefix appended to each resource | string | "devops-challenge" | no |
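
For convenience, an example terraform.tfvars built only from the inputs above (values are illustrative):

```hcl
aws_region    = "eu-west-3"
env           = "dev"
prefix        = "devops-challenge"
invalid_cache = true

default_tags = {
  project = "devops-challenge"
}

# Only needed if you want the custom domain on OVH.
ovh_domain_conf = {
  dns_zone_name = "haidara.io"
  subdomain     = "demo"
}
```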

Outputs:

Name | Description |
---|---|
users_endpoint | API Gateway url to access users |
website_url | Cloudfront URL to access the website |