# Core/User Cluster


This folder contains a Terraform module to deploy a Kubernetes cluster on GCP. The cluster contains a core node pool and a user node pool, and is configured to schedule user pods onto the user pool. This cluster type is used by the JupyterHub and Renku projects.

Contents:

- [Getting Started](#getting-started)
- [How to use this module](#how-to-use-this-module)
- [Requirements](#requirements)
- [Providers](#providers)
- [Modules](#modules)
- [Resources](#resources)
- [Inputs](#inputs)
- [Outputs](#outputs)
- [Local Development](#local-development)
- [Testing](#testing)
- [CI](#ci)

## Getting Started

This module requires GCP credentials. It looks for a credentials file in JSON format; export the path to that file:

```sh
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/file.json
```

## How to use this module

This repository defines a Terraform module, which you can use in your code by adding a `module` configuration and setting its `source` parameter to the URL of this repository. See the tests folder for guidance.
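For example, a minimal sketch of such a configuration (the module label, source URL, and all argument values below are illustrative placeholders, not the module's canonical usage):

```hcl
module "cluster" {
  # Hypothetical URL -- point this at the actual repository, ideally pinned to a tag.
  source = "git::https://example.com/your-org/this-repo.git?ref=v1.0.0"

  # Required inputs (see the Inputs table below).
  project_id            = "my-gcp-project"
  region                = "us-east1"
  service_account_email = "gke-nodes@my-gcp-project.iam.gserviceaccount.com"

  # Optional inputs; anything omitted falls back to the defaults listed below.
  cluster_name        = "jupyterhub"
  user_pool_max_count = 10
}
```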

## Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.9.2 |
| google | 5.38.0 |
| google-beta | 5.38.0 |
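These pins correspond to a `terraform` block along the following lines (a sketch inferred from the table above, not necessarily the module's exact versions.tf):

```hcl
terraform {
  required_version = ">= 1.9.2"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "5.38.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "5.38.0"
    }
  }
}
```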

## Providers

No providers.

## Modules

| Name | Source | Version |
|------|--------|---------|
| gke | terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster | 31.0.0 |

## Resources

No resources.

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| cluster_name | Name of cluster | `string` | `"default"` | no |
| core_pool_auto_repair | Enable auto-repair of core-component pool | `bool` | `true` | no |
| core_pool_auto_upgrade | Enable auto-upgrade of core-component pool | `bool` | `true` | no |
| core_pool_disk_size_gb | Size of disk for core-component pool | `number` | `100` | no |
| core_pool_disk_type | Type of disk for core-component pool | `string` | `"pd-standard"` | no |
| core_pool_image_type | Type of image for core-component pool | `string` | `"COS_CONTAINERD"` | no |
| core_pool_initial_node_count | Number of initial nodes in core-component pool | `number` | `1` | no |
| core_pool_local_ssd_count | Number of SSDs in core-component pool | `number` | `0` | no |
| core_pool_machine_type | Machine type for the core-component pool | `string` | `"n1-highmem-4"` | no |
| core_pool_max_count | Maximum number of nodes in the core-component pool | `number` | `3` | no |
| core_pool_min_count | Minimum number of nodes in the core-component pool | `number` | `1` | no |
| core_pool_name | Name for the core-component pool | `string` | `"core-pool"` | no |
| core_pool_preemptible | Make core-component pool preemptible | `bool` | `false` | no |
| create_service_account | Defines if the service account specified to run nodes should be created | `bool` | `false` | no |
| deletion_protection | Enable deletion protection for the cluster | `bool` | `false` | no |
| enable_private_nodes | (Beta) Whether nodes have internal IP addresses only | `bool` | `true` | no |
| gce_pd_csi_driver | (Beta) Whether this cluster should enable the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver | `bool` | `true` | no |
| horizontal_pod_autoscaling | Enable horizontal pod autoscaling addon | `bool` | `true` | no |
| http_load_balancing | Enable HTTP load balancer add-on | `bool` | `false` | no |
| ip_range_pods | The range name for pods | `string` | `"kubernetes-pods"` | no |
| ip_range_services | The range name for services | `string` | `"kubernetes-services"` | no |
| kubernetes_version | The Kubernetes version of the masters. If set to 'latest', it will pull the latest available version in the selected region | `string` | `"latest"` | no |
| logging_service | The logging service that the cluster should write logs to. Available options include logging.googleapis.com, logging.googleapis.com/kubernetes (beta), and none | `string` | `"logging.googleapis.com/kubernetes"` | no |
| maintenance_start_time | Time window for daily maintenance operations, in RFC3339 format | `string` | `"03:00"` | no |
| master_ipv4_cidr_block | (Beta) The IP range in CIDR notation to use for the hosted master network | `string` | `"172.16.0.0/28"` | no |
| monitoring_service | The monitoring service that the cluster should write metrics to. Automatically sends metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics are collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta), and none | `string` | `"monitoring.googleapis.com/kubernetes"` | no |
| network | The VPC network to host the cluster in | `string` | `"kubernetes-vpc"` | no |
| network_policy | Enable network policy addon | `bool` | `true` | no |
| node_zones | The zones to host the cluster in (optional if regional cluster / required if zonal) | `list(string)` | `["us-east1-b"]` | no |
| project_id | The project ID to host the cluster in | `string` | n/a | yes |
| region | The region to host the cluster in | `string` | n/a | yes |
| regional | Whether the master node should be regional or zonal | `bool` | `true` | no |
| release_channel | The release channel of this cluster. Accepted values are UNSPECIFIED, RAPID, REGULAR, and STABLE. Defaults to REGULAR | `string` | `"REGULAR"` | no |
| remove_default_node_pool | Remove default node pool while setting up the cluster | `bool` | `false` | no |
| service_account_email | Email of service account | `string` | n/a | yes |
| subnetwork | The subnetwork to host the cluster in | `string` | `"kubernetes-subnet"` | no |
| user_pool_auto_repair | Enable auto-repair of user pool | `bool` | `true` | no |
| user_pool_auto_upgrade | Enable auto-upgrade of user pool | `bool` | `true` | no |
| user_pool_disk_size_gb | Size of disk for user pool | `number` | `100` | no |
| user_pool_disk_type | Type of disk for user pool | `string` | `"pd-standard"` | no |
| user_pool_image_type | Type of image for user pool | `string` | `"COS_CONTAINERD"` | no |
| user_pool_initial_node_count | Number of initial nodes in user pool | `number` | `1` | no |
| user_pool_local_ssd_count | Number of SSDs in user pool | `number` | `0` | no |
| user_pool_machine_type | Machine type for the user pool | `string` | `"n1-highmem-4"` | no |
| user_pool_max_count | Maximum number of nodes in the user pool | `number` | `20` | no |
| user_pool_min_count | Minimum number of nodes in the user pool | `number` | `1` | no |
| user_pool_name | Name for the user pool | `string` | `"user-pool"` | no |
| user_pool_preemptible | Make user pool preemptible | `bool` | `false` | no |

## Outputs

| Name | Description |
|------|-------------|
| cluster_name | Cluster name |
| horizontal_pod_autoscaling_enabled | Whether the cluster enables horizontal pod autoscaling |
| http_load_balancing_enabled | Whether the cluster enables HTTP load balancing |
| location | The location (region or zone) in which the cluster master will be created |
| node_pools_names | List of node pools names |
| region | n/a |
| service_account | The service account to default running nodes as if not overridden in node_pools |
| zones | List of zones in which the cluster resides |
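Outputs are referenced from the calling configuration in the usual way; for instance, reusing the hypothetical `module "cluster"` block from the usage example above:

```hcl
# Re-export selected module outputs from the calling configuration.
output "gke_cluster_name" {
  description = "Name of the provisioned cluster"
  value       = module.cluster.cluster_name
}

output "gke_node_pools" {
  description = "Names of the node pools created by the module"
  value       = module.cluster.node_pools_names
}
```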

## Local Development

### Merging Policy

Use GitLab Flow.

- Create feature branches for features and fixes from the default branch
- Merge only via a reviewed pull request
- After merging to the default branch, a release is drafted by a GitHub Action. Check the draft and publish it if you and the tests are happy

### Version managers

We recommend using asdf to manage your versions of Terraform.

```sh
brew install asdf
```

#### Terraform

You can also install the latest version of Terraform via brew:

```sh
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
```

### Pre-commit hooks

You should make sure that pre-commit hooks are installed to run the formatter, linter, etc. Install and configure the Terraform pre-commit hooks as follows.

Install dependencies:

```sh
brew bundle install
```

Install the pre-commit hook globally:

```sh
DIR=~/.git-template
git config --global init.templateDir ${DIR}
pre-commit init-templatedir -t pre-commit ${DIR}
```

To run the hooks specified in `.pre-commit-config.yaml`:

```sh
pre-commit run -a
```

### GCloud

This is only needed if you are running tests locally. The google-cloud-sdk is included in the Brewfile, so it should already be installed. This repo includes an env.sh file in which you set the path to the Google credentials file; then use

```sh
source env.sh
```

and

```sh
deactivate
```

to set and unset the GOOGLE_APPLICATION_CREDENTIALS variable.

## Testing

The tests can be run locally with `terraform test` after running `terraform init`. You will need to supply the `org_id`, `folder_id`, and `billing_account` variables through a terraform.tfvars file; see the terraform.tfvars.example file for an example.
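A sketch of what that file might contain (the values below are placeholders; terraform.tfvars.example is the authoritative template):

```hcl
# terraform.tfvars -- replace the placeholder values with your organization's IDs
org_id          = "123456789012"
folder_id       = "234567890123"
billing_account = "AAAAAA-BBBBBB-CCCCCC"
```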

## CI

This project has three workflows enabled:

1. PR labeler: when a PR is opened against the default branch, a label is assigned automatically according to the name of your feature branch. The labeler follows the rules in pr-labeler.yml.

2. Release Drafter: when merging to master, a release is drafted using the Release-Drafter Action.

3. terraform test: runs on PRs, merges to main, and releases.