# Terraform Module for creating a Kubernetes Cluster on Cloud.dk

**WARNING:** This project is under active development and should be considered alpha.
## Requirements

- Terraform 0.12+
- Terraform Provider for Cloud.dk 0.3+
- Terraform Provider for SFTP 0.1+
## Table of Contents

- Creating the cluster
- Accessing the cluster
- Additional node pools
- Installed addons
- Variables
- Frequently asked questions
- Known issues
## Creating the cluster

The default cluster configuration has the following specifications, which are only recommended for development purposes:
| Type | Count | Memory | Processors |
|---|---|---|---|
| Load Balancer (API) | 1 | 1024 MB | 1 |
| Master Node | 3 | 4096 MB | 2 |
| Worker Node | 2 | 4096 MB | 2 |
You can create a new cluster with this configuration by following these steps:
1. Create a new file called `kubernetes_cluster.tf` with the following contents:

    ```hcl
    module "kubernetes_cluster" {
      source         = "github.com/danitso/terraform-module-clouddk-kubernetes-cluster"
      cluster_name   = "the-name-of-your-cluster-without-spaces-and-special-characters"
      provider_token = var.provider_token
    }

    variable "provider_token" {
      description = "The API key"
      type        = string
    }
    ```
2. Initialize your workspace:

    ```shell
    docker run -v .:/workspace -it --rm danitso/terraform:0.12 init
    ```

    or using `cmd.exe`:

    ```shell
    docker run -v %CD%:/workspace -it --rm danitso/terraform:0.12 init
    ```
3. Create the cluster, and provide an API key from my.cloud.dk when prompted:

    ```shell
    docker run -v .:/workspace -it --rm danitso/terraform:0.12 apply -auto-approve
    ```

    or using `cmd.exe`:

    ```shell
    docker run -v %CD%:/workspace -it --rm danitso/terraform:0.12 apply -auto-approve
    ```
You can modify the configuration by changing the Input Variables inside the `module` block.

**NOTE:** The `danitso/terraform` image contains all the custom providers developed by Danitso. In case you do not want to use this image, you must manually download and install the required provider plugins for Terraform listed under the Requirements section.
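For example, the cluster's location can be pinned by extending the `module` block. This is only a sketch: `provider_location` is an assumed input name, inferred from the module's outputs and the variable documented below as "The cluster's geographical location":

```hcl
module "kubernetes_cluster" {
  source         = "github.com/danitso/terraform-module-clouddk-kubernetes-cluster"
  cluster_name   = "my-cluster"
  provider_token = var.provider_token

  # Assumed input name (inferred from the module's outputs); selects the
  # cluster's geographical location and defaults to "dk1".
  provider_location = "dk1"
}
```

After changing input variables, re-run `terraform apply` for the changes to take effect.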
## Accessing the cluster

If you have followed the steps in Creating the cluster without experiencing any problems, you should now be able to access the cluster with `kubectl`:
```shell
export KUBECONFIG="$(pwd -P)/conf/the_name_of_your_cluster.conf"
kubectl get nodes
```

or using `cmd.exe`:

```shell
set KUBECONFIG=%CD%/conf/the_name_of_your_cluster.conf
kubectl get nodes
```
The `kubectl` command should output something similar to this:

```
NAME                                        STATUS   ROLES    AGE   VERSION
k8s-master-node-clouddk-cluster-1           Ready    master   2m    v1.15.2
k8s-master-node-clouddk-cluster-2           Ready    master   1m    v1.15.2
k8s-master-node-clouddk-cluster-3           Ready    master   1m    v1.15.2
k8s-worker-node-clouddk-cluster-default-1   Ready    <none>   1m    v1.15.2
k8s-worker-node-clouddk-cluster-default-2   Ready    <none>   1m    v1.15.2
```
The nodes may still be initializing, in which case you will see the status `NotReady`. This should change to `Ready` within a couple of minutes.
## Additional node pools

In case you need additional node pools with different hardware specifications, or simply need to isolate certain services, you can create a new one:
1. Append the following contents to the `kubernetes_cluster.tf` file:

    ```hcl
    module "kubernetes_node_pool_custom" {
      source = "github.com/danitso/terraform-module-clouddk-kubernetes-cluster/modules/nodes"

      api_addresses           = module.kubernetes_cluster.api_addresses
      api_ports               = module.kubernetes_cluster.api_ports
      bootstrap_token         = module.kubernetes_cluster.bootstrap_token
      certificate_key         = module.kubernetes_cluster.certificate_key
      cluster_name            = module.kubernetes_cluster.cluster_name
      control_plane_addresses = module.kubernetes_cluster.control_plane_addresses
      control_plane_ports     = module.kubernetes_cluster.control_plane_ports
      master                  = false
      node_count              = var.custom_node_count
      node_memory             = 4096
      node_pool_name          = "custom"
      node_processors         = 2
      provider_location       = module.kubernetes_cluster.provider_location
      provider_token          = module.kubernetes_cluster.provider_token
      unattended_upgrades     = true
    }

    variable "custom_node_count" {
      description = "The node count for the 'custom' node pool"
      default     = 2
      type        = number
    }
    ```
2. Re-initialize your workspace:

    ```shell
    docker run -v .:/workspace -it --rm danitso/terraform:0.12 init
    ```

    or using `cmd.exe`:

    ```shell
    docker run -v %CD%:/workspace -it --rm danitso/terraform:0.12 init
    ```
3. Apply the changes:

    ```shell
    docker run -v .:/workspace -it --rm danitso/terraform:0.12 apply -auto-approve
    ```

    or using `cmd.exe`:

    ```shell
    docker run -v %CD%:/workspace -it --rm danitso/terraform:0.12 apply -auto-approve
    ```
This will create a new node pool with the name `custom`, which can be targeted by using the label selector `kubernetes.cloud.dk/node-pool=custom`.
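To schedule workloads onto this pool, use the label as a node selector. The following is a minimal sketch using the `hashicorp/kubernetes` provider, which is not part of this module; the deployment name and container image are hypothetical:

```hcl
resource "kubernetes_deployment" "example" {
  metadata {
    name = "example"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "example"
      }
    }

    template {
      metadata {
        labels = {
          app = "example"
        }
      }

      spec {
        # Restrict the pods to nodes in the "custom" node pool.
        node_selector = {
          "kubernetes.cloud.dk/node-pool" = "custom"
        }

        container {
          name  = "example"
          image = "nginx:1.17"
        }
      }
    }
  }
}
```

The same label selector also works with plain Kubernetes manifests via `nodeSelector` in the pod spec.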
## Installed addons

The following addons are automatically installed by the module:

- Adds support for services of type `LoadBalancer` as well as other important features.
- Adds a virtual network inside the cluster to allow containers to communicate across nodes.
## Variables

- The name of the cluster. Default: `clouddk-cluster`
- The number of load balancers. Minimum: `1`. Default: `1`
- The minimum amount of memory (in megabytes) for each load balancer. Minimum: `512`. Default: `1024`
- The minimum number of processors (cores) for each load balancer. Minimum: `1`. Default: `1`
- The number of master nodes. Minimum: `3`. Default: `3`
- The minimum amount of memory (in megabytes) for each master node. Minimum: `2048`. Default: `4096`
- The minimum number of processors (cores) for each master node. Minimum: `1`. Default: `2`
- Whether to enable unattended OS upgrades for the master nodes. Default: `true`
- The minimum amount of memory (in megabytes) for network storage servers. Default: `4096`
- The minimum number of processors (cores) for network storage servers. Default: `2`
- The cluster's geographical location. Default: `dk1`
- This variable is currently unused.
- The API key.
- This variable is currently unused.
- The number of worker nodes in the default worker node pool. Minimum: `1`. Default: `2`
- The minimum amount of memory (in megabytes) for each node in the default worker node pool. Minimum: `2048`. Default: `4096`
- The name of the default worker node pool. Default: `default`
- The minimum number of processors for each node in the default worker node pool. Minimum: `1`. Default: `2`
- Whether to enable unattended OS upgrades for the worker nodes. Default: `true`
## Outputs

- The IP addresses for the Kubernetes API.
- The CA certificate for the Kubernetes API.
- The endpoints for the Kubernetes API.
- The password for the Kubernetes API load balancing statistics page.
- The Kubernetes API load balancing statistics URLs.
- The username for the Kubernetes API load balancing statistics page.
- The port numbers for the Kubernetes API.
- The bootstrap token for the worker nodes.
- The key for the certificate secret.
- The name of the cluster.
- The relative path to the configuration file for use with `kubectl`.
- The raw configuration for use with `kubectl`.
- The control plane addresses.
- The control plane ports.
- The private IP addresses of the master nodes.
- The public IP addresses of the master nodes.
- The private SSH key for the master nodes.
- The relative path to the private SSH key for the master nodes.
- The public SSH key for the master nodes.
- The relative path to the public SSH key for the master nodes.
- The cluster's geographical location.
- The API key.
- The token for the Cluster Admin service account.
- The private IP addresses of the worker nodes.
- The public IP addresses of the worker nodes.
- The private SSH key for the worker nodes.
- The relative path to the private SSH key for the worker nodes.
- The public SSH key for the worker nodes.
- The relative path to the public SSH key for the worker nodes.
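These values can be consumed from the root module like any other Terraform module outputs. A sketch of forwarding two of them, using output names that appear in the node pool example (`api_addresses`, `bootstrap_token`); the root-level output names here are chosen for illustration:

```hcl
output "cluster_api_addresses" {
  description = "The IP addresses for the Kubernetes API"
  value       = module.kubernetes_cluster.api_addresses
}

output "cluster_bootstrap_token" {
  description = "The bootstrap token for the worker nodes"
  value       = module.kubernetes_cluster.bootstrap_token

  # Prevent the token from being printed in plain text by terraform output.
  sensitive = true
}
```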
## Frequently asked questions

Unattended OS upgrades are scheduled to run on the nodes on a daily basis. These upgrades may require a reboot in order to take effect, in which case a reboot is scheduled for 00:00 UTC and onwards. The nodes in each pool reboot 15 minutes apart, which results in a schedule similar to this:
- Node 1 reboots at 00:00 UTC
- Node 2 reboots at 00:15 UTC
- Node 3 reboots at 00:30 UTC
- Node 4 reboots at 00:45 UTC
- Node 5 reboots at 01:00 UTC
The delay between reboots is meant to reduce the impact on a pool. However, if a pool has only 2 nodes, you will still lose 50% of its capacity for the duration of the reboot (up to 5 minutes). This is why we recommend always provisioning at least 3 nodes per pool for production clusters.
You can also disable unattended OS upgrades by setting the two input variables `master_node_unattended_upgrades` and `worker_node_unattended_upgrades` to `false`. However, this is not recommended unless you have another maintenance procedure in place.
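A sketch of what this would look like in the `module` block from Creating the cluster:

```hcl
module "kubernetes_cluster" {
  source         = "github.com/danitso/terraform-module-clouddk-kubernetes-cluster"
  cluster_name   = "my-cluster"
  provider_token = var.provider_token

  # Disable unattended OS upgrades for both master and worker nodes.
  # Only do this if another maintenance procedure is in place.
  master_node_unattended_upgrades = false
  worker_node_unattended_upgrades = false
}
```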
**NOTE:** Unattended OS upgrades are permanently disabled for load balancers, which must therefore be maintained manually. This reduces the risk of a cluster outage in case the cluster only has a single load balancer for the API.
## Known issues

The first time the cluster is provisioned, the IP addresses of the load balancers and master nodes are included as alternative names in the SSL certificate. This certificate is only generated once, which results in verification issues when new load balancers are introduced afterwards.

This issue will be fixed in a future release of the module.