This repo is a minimal CI/CD boilerplate for provisioning a serverless, Dockerized script on Google Cloud using Terraform, shell scripts and GitHub Workflows. After a year spent struggling to learn all the tools and languages involved, I felt like sharing it in case anyone finds it useful.
The template consists of:

- An automatic deployment and destruction module, called from `./terraform/environments/staging/build.sh` and `./terraform/environments/staging/destroy.sh`.
- A Terraform module with a directory approach to environments, and two stages of provisioning: base and main infrastructure.
  - Uses the new `google_cloud_run_v2_job` resource and Direct VPC Egress (the newly recommended practice replacing the prior VPC Access Connector).
  - Uses the cloud naming conventions described by stepan.wtf, though all names can be customized.
  - On the first local build, sets up the GitHub Actions environment, writing Actions Secrets via a custom Terraform module and passing values from a dedicated Google Service Account.
- A Docker module with a demo Python script ready to be dockerized. Once deployed, this script serves as proof of a correct deployment:
  - It runs from a Docker image deployed to Artifact Registry, asserting the success of the Docker build and push, and of the provisioning of Google's Docker repository.
  - It then requests https://httpbin.org/get and prints the response, asserting the success of the VPC and Direct VPC Egress provisioning.
  - It fetches a JSON key for a Service Account created during deployment, via Google Secret Manager, asserting the success of those services.
  - Finally, its successful scheduling and consistent execution assert the success of the Cloud Run Job, the Scheduler and the Service Accounts involved.
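Assuming `curl` and `gcloud` are available inside the container, the demo's checks amount to something like the shell sketch below. The real logic lives in `docker/demo-image/demo.py`, and the secret name used here is purely illustrative:

```shell
# 1. Reach the public internet through the NAT route:
#    proves the VPC and Direct VPC Egress provisioning.
curl -s https://httpbin.org/get

# 2. Read the service-account key from Secret Manager:
#    proves the secret and IAM wiring. Secret name is illustrative.
gcloud secrets versions access latest --secret="demo-external-key"
```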
```
.
├── .secrets
│   ├── dem-prj-s-gsa-g-terraform.json
│   └── github.env
├── LICENSE
├── README.md
├── bin
│   ├── _gcloud.sh
│   ├── _github_env.sh
│   ├── _read_project_id.sh
│   ├── docker.sh
│   ├── tf_base.sh
│   └── tf_main.sh
├── docker
│   └── demo-image
│       ├── Dockerfile
│       ├── demo.py
│       └── requirements.txt
└── terraform
    ├── environments
    │   └── staging
    │       ├── base
    │       │   ├── apis.tf
    │       │   ├── locals.tf
    │       │   ├── main.tf
    │       │   ├── outputs.tf
    │       │   ├── providers.tf
    │       │   ├── terraform.tfstate
    │       │   ├── terraform.tfvars
    │       │   └── variables.tf
    │       ├── build.sh
    │       ├── destroy.sh
    │       ├── github.tf
    │       ├── locals.tf
    │       ├── main.tf
    │       ├── outputs.tf
    │       ├── providers.tf
    │       ├── terraform.tfvars
    │       ├── variables.tf
    │       └── vpc.tf
    └── modules
        └── github
            ├── actions.tf
            ├── providers.tf
            └── variables.tf
```
To use this template you will need:

- A new Google Cloud Project ID created for this demo.
- A Google Cloud Service Account with the Owner role, and its JSON key file downloaded.
- gcloud.
- Docker.
- Terraform.
- A GitHub Personal Access Token (optional, for the GitHub Actions setup).
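A quick, hypothetical sanity check that the required CLIs are on your `PATH` before starting:

```shell
# Report which of the required tools are installed
for tool in gcloud docker terraform; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```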
- You must keep your secrets in the `.secrets` directory.
  - A Google Service Account credentials file is required.
  - A GitHub Personal Access Token is optional; export it as `$GITHUB_TOKEN` from a `github.env` file:

    ```
    export GITHUB_TOKEN=github_pat_***
    ```

- In the `build.sh` and `destroy.sh` scripts, fill in the default paths to your Google and GitHub secrets:

    ```
    gcreds="${GOOGLE_CREDENTIALS_PATH:-$ROOT_DIR/.secrets/dem-prj-s-gsa-g-terraform.json}"
    ghcreds="${GITHUB_CREDENTIALS_PATH:-$ROOT_DIR/.secrets/github.env}"
    ```

- Fill in all variables in both the base and main `terraform.tfvars` files.
- Resource naming is handled directly in the `.tf` files.
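A hypothetical one-time setup of the `.secrets` directory; the file names match the repo tree above, and the token value is a placeholder:

```shell
# Create the secrets directory and the token file (placeholder value)
mkdir -p .secrets
printf 'export GITHUB_TOKEN=github_pat_***\n' > .secrets/github.env

# Copy your downloaded service-account key alongside it, e.g.:
#   cp ~/Downloads/dem-prj-s-gsa-g-terraform.json .secrets/

# Export the token into the current shell
. .secrets/github.env
```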
- Make sure you understand both the `build.sh` and `destroy.sh` scripts before executing them. The default variable definitions at the top of each script can be carefully tweaked to change its default behaviour. Before running, make sure the scripts have executable permissions (`chmod +x [script]`).
All building and provisioning is handled in the `build.sh` script, which consists of three main steps:
- Base infrastructure:
  - Enables all but one of the API services needed in the project.
    - The one extra API service needed for bootstrapping (Cloud Resource Manager) is enabled prior to `terraform init`, in one of the scripts called by `build.sh`.
  - Creates the Cloud Storage bucket that serves as the Terraform backend.
  - Creates the Artifact Registry repository that will host the Docker image.
  - All Terraform code is kept in the dedicated `terraform/environments/staging/base` directory.
  - This stage is intended to be run only during the first build.
    - Once built, the Artifact Registry repository is imported into the `main` infrastructure code.
    - Its only remaining responsibility after the first build is to hold, in its local state, the dedicated Terraform state bucket - if the base is destroyed, the bucket and the main state will be destroyed with it.
- Docker image build and push:
  - Builds and tags a local image from `docker/demo-image`.
  - Pushes it to the newly created Artifact Registry.
- Main infrastructure:
  - Connects to the newly created backend and provisions all other resources, including:
    - The Google Artifact Registry repository, imported from the `base` infrastructure.
    - Networking, including a private subnet, firewall rules and a NAT router for egress.
    - A Cloud Run Job with Direct VPC Egress, and a Scheduler configured to execute it every 10 minutes.
      - You should destroy the demo once validated, or change the Scheduler frequency, to avoid being charged for the demo runs.
    - Internal Service Accounts with correctly assigned IAM roles:
      - Cloud Run
      - Cloud Scheduler
      - GitHub Actions
      - Demo external account
        - Both the GitHub Actions and external accounts will have a private JSON key file generated.
        - One is passed to a Google Secret, to be accessed by the demo Docker job; the other to a GitHub Actions Secret, used for GitHub Actions authentication.
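The three steps above can be sketched roughly as the commands below. This is not the actual script - `build.sh` adds flag handling and safety checks - and the variable names (`$PROJECT_ID`, `$REGION`, `$ASSETS_BUCKET`) and the repository/image names are illustrative:

```shell
# Rough sketch of what build.sh orchestrates; names are illustrative.
set -euo pipefail
STAGING=./terraform/environments/staging
IMAGE="$REGION-docker.pkg.dev/$PROJECT_ID/demo-repo/demo-image:latest"

# 1. Base infrastructure: APIs, state bucket, Artifact Registry
terraform -chdir="$STAGING/base" init
terraform -chdir="$STAGING/base" apply -auto-approve

# 2. Docker: build locally, then push to the new Artifact Registry
docker build -t "$IMAGE" docker/demo-image
docker push "$IMAGE"

# 3. Main infrastructure, backed by the bucket created in step 1
terraform -chdir="$STAGING" init -backend-config="bucket=$ASSETS_BUCKET"
terraform -chdir="$STAGING" apply -auto-approve
```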
First build, with the GitHub Actions setup:

```
./terraform/environments/staging/build.sh --from-base --github-actions
```

First build, without the GitHub Actions setup:

```
./terraform/environments/staging/build.sh --from-base
```

For subsequent builds, the backend bucket must be passed, either via `--assets-bucket [name]` or by exporting `ASSETS_BUCKET`. Keep the `--github-actions` flag, or the GitHub Actions resources will be destroyed.

```
export ASSETS_BUCKET=[]
./terraform/environments/staging/build.sh [--github-actions]
```

or

```
./terraform/environments/staging/build.sh --assets-bucket [name] [--github-actions]
```

To skip any Terraform provisioning:

```
./terraform/environments/staging/build.sh --docker-only
```

To skip building and pushing the Docker image:

```
./terraform/environments/staging/build.sh --assets-bucket [name] --skip-docker [--github-actions]
```
Similarly to `build.sh`, `destroy.sh` handles the destruction of all provisioned infrastructure, including the locally built Docker image. A `--keep-base` and a `--keep-docker` flag are available for finer control over the destruction.

As with `build.sh`, the backend bucket must be passed either via `--assets-bucket [name]` or by exporting `ASSETS_BUCKET`.
The `base` Terraform module holds the state files for the Terraform state bucket. If destroyed, this demo will delete the bucket and all of its contents, based on the `force_destroy = true` argument passed to the bucket resource. To avoid this destruction, edit the Terraform resource to remove the `force_destroy` argument, or pass the `--keep-base` flag to `destroy.sh`:

```
./terraform/environments/staging/destroy.sh --assets-bucket [name] --keep-base
```
By default, this demo will destroy everything it once provisioned - including the locally built Docker image. To keep the images during destruction, pass the `--keep-docker` flag to `destroy.sh`:

```
./terraform/environments/staging/destroy.sh --assets-bucket [name] --keep-docker
```
These two flags can be combined:

```
./terraform/environments/staging/destroy.sh --assets-bucket [name] --keep-docker --keep-base
```
After destruction is complete, you will still need to manually delete the Google Service Account you created for this demo, and disable the Service Usage API.
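That manual cleanup can be done with the gcloud CLI along these lines, assuming the service-account name from the tree above and substituting your own project ID:

```shell
# Hypothetical cleanup commands; adjust the account e-mail and project ID.
gcloud iam service-accounts delete \
  "dem-prj-s-gsa-g-terraform@YOUR_PROJECT_ID.iam.gserviceaccount.com"

# Disable the Service Usage API for the project
gcloud services disable serviceusage.googleapis.com --project YOUR_PROJECT_ID
```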
Three GitHub Actions workflows are created:

- `all.yml` - Build and deploy everything.
  - Triggered only manually, via the Actions console.
- `docker.yml` - Build and deploy the Docker image.
  - Triggered by pushes to the `main` branch on `docker/` paths.
- `gcloud.yml` - Build and deploy the Google Cloud infrastructure.
  - Triggered by pushes to the `main` branch on `terraform/` paths, except for `base/` files (since the base state is handled locally).
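Since `all.yml` is manual-only, it can also be dispatched from the terminal, assuming the GitHub CLI (`gh`) is installed and authenticated against the repo:

```shell
# Trigger the manual workflow without opening the Actions console
gh workflow run all.yml
```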
Next steps:

- Adding a `destroy.sh` process.
- Improving the `build.sh` and `destroy.sh` processes.
- Adding a Secrets Manager environment to the Cloud Run container.
- Creating GitHub Workflows for CI/CD.
- Expanding and modularizing the GitHub Workflows.
- Adding unit tests and other validations.
- Adding a Cloud Storage connection.
- Adding a Cloud Run Service with a public IP.
- Adding VPC peering functionality.
- Tagging across all created resources.
- Separating Terraform resources into modules.
- An equivalent structure with AWS ECS.
Feel free to collaborate!