Releases: kbst/terraform-kubestack

v0.8.0-beta.0

14 May 11:17 · @pst · 0a4c91a · Pre-release
  • Introduce a local development environment using KinD.
  • Publish provider-specific images in addition to the current multi-cloud default, to reduce image size and speed up CI/CD runs for single-cloud use cases.
    • multi-cloud: kubestack/framework:v0.8.0-beta.0 (same tag as before)
    • AKS: kubestack/framework:v0.8.0-beta.0-aks
    • EKS: kubestack/framework:v0.8.0-beta.0-eks
    • GKE: kubestack/framework:v0.8.0-beta.0-gke
    • KinD: kubestack/framework:v0.8.0-beta.0-kind
  • AKS, EKS, GKE and KinD starters use the specific image variants.
  • The entrypoint now starts as root, then drops to a regular user to configure the user and groups correctly. This removes the need for the previously used lib-nss-wrapper.
  • Updated the default nginx ingress controller to v0.30.0-kbst.1, which adds an overlay for the local KinD clusters.

Upgrade Notes

  1. Update the version in Dockerfile and clusters.tf. Optionally, switch the Dockerfile to a provider-specific variant.
  2. Remove the -u parameter from docker run commands in your pipeline and when running the container manually (see the sketch below).
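
A minimal sketch of both steps, assuming the multi-cloud image and the docker run command from the quickstart (adjust the image variant and names to your repository):

    # ci-cd/Dockerfile: bump the version; optionally append a provider
    # suffix like -aks, -eks, -gke or -kind for a smaller image
    FROM kubestack/framework:v0.8.0-beta.0

    # before: docker run with the -u parameter
    docker run --rm -ti -v `pwd`:/infra -u `id -u`:`id -g` kbst-infra-automation:bootstrap

    # after: the entrypoint now drops privileges itself
    docker run --rm -ti -v `pwd`:/infra kbst-infra-automation:bootstrap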

v0.7.1-beta.0

28 Apr 08:51 · @pst · 5e722f0 · Pre-release

Speed up automation runs by not rebuilding the container image every time

Instead of building the entire image on every automation run, the Dockerfile in the repository now specifies an upstream image using FROM. This approach strikes a good balance between speeding up automation runs and keeping the ability to extend the container image with custom requirements, for example credential helpers or custom CAs to verify self-signed certificates.
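
As an illustration, a repository that needs a custom CA could extend the upstream image like this (a minimal sketch; ca.crt is a placeholder file name and update-ca-certificates assumes a Debian-based image):

    FROM kubestack/framework:v0.7.1-beta.0

    # example: trust a custom CA to verify self-signed certificates
    COPY ca.crt /usr/local/share/ca-certificates/ca.crt
    RUN update-ca-certificates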

Upgrade Notes

  1. Replace ci-cd/Dockerfile:

    echo "FROM kubestack/framework:v0.7.1-beta.0" >  ci-cd/Dockerfile
    
  2. Remove obsolete ci-cd/entrypoint:

    # before v0.7.0-beta.0, the entrypoint was called nss-wrapper
    rm ci-cd/entrypoint
    

v0.7.0-beta.0

17 Apr 10:49 · @pst · d33dd6c · Pre-release
  • Dockerfile is now Python 3.x based
  • EKS: Support root device encryption (thanks @cbek)
  • EKS: Use aws_eks_cluster_auth data source instead of previous shell script (thanks @darren-reddick)
  • Simplify default overlay layout and support custom layouts
  • Add authentication helper env vars KBST_AUTH_AWS, KBST_AUTH_AZ and KBST_AUTH_GCLOUD to the Docker entrypoint to simplify automation (see the sketch below).
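
A sketch of how the helper variables can be used in automation, forwarding a credential from the pipeline environment into the container (the value format each variable expects is defined by the entrypoint; here the variable is simply passed through):

    # forward KBST_AUTH_AWS from the pipeline environment into the container
    docker run --rm \
        -v `pwd`:/infra \
        -e KBST_AUTH_AWS \
        kbst-infra-automation:bootstrap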

Upgrade Notes

  1. Updated version in Dockerfile

    Update the Dockerfile to the one from this release to get the latest versions.

  2. Simplified overlay layout

    The overlay layout was simplified by removing the intermediate provider overlays (eks, aks and gke). When upgrading an existing repository, either adapt your overlay structure or override the default using the manifest_path cluster module attribute.

    Examples:

    # AKS example
    module "aks_zero" {
      # [...]
      manifest_path = "manifests/overlays/aks/${terraform.workspace}"
    }
    
    # EKS example
    module "eks_zero" {
      # [...]
      manifest_path = "manifests/overlays/eks/${terraform.workspace}"
    }
    
    # GKE example
    module "gke_zero" {
      # [...]
      manifest_path = "manifests/overlays/gke/${terraform.workspace}"
    }

v0.6.0-beta.1

08 Feb 12:21 · @pst · Pre-release
  • Bugfix release: the kustomize provider included in v0.6.0-beta.0 handled the GroupVersionKind to GroupVersionResource conversion incorrectly, resulting in not found errors for ingress and most likely other resource kinds.

Upgrade Notes

Update the Dockerfile to the one from this release to get the fixed provider version.

Then, please refer to the upgrade notes of v0.6.0-beta.0.

v0.6.0-beta.0

01 Feb 13:41 · @pst · 32e89d5 · Pre-release
  • Replace the provisioner-based kustomize and kubectl integration used until now with the new terraform-provider-kustomize

Upgrade Notes

Remember to update both the version of the module in clusters.tf and the Dockerfile under ci-cd/.

Cluster services (AKS, EKS, GKE)

Replacing the previous provisioner-based approach with a Terraform provider to integrate Kustomize with Terraform allows each Kubernetes resource to be tracked individually in Terraform state. This integrates resources fully into the Terraform lifecycle, including in-place or re-create updates and purging.
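
For context, a minimal sketch of how manifests are wired up through the new provider (illustrative; the resource address matches the kustomization_resource entries in the import commands below):

    data "kustomization" "current" {
      path = "manifests/overlays/${terraform.workspace}"
    }

    resource "kustomization_resource" "current" {
      for_each = data.kustomization.current.ids

      manifest = data.kustomization.current.manifests[each.value]
    }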

To migrate existing clusters without downtime, two manual steps are required to import Kubernetes resources for the new provider.

  1. Remove ingress-kbst-default namespace from TF state

    Previously, the ingress-kbst-default namespace was managed both by kustomize as well as Terraform. Now the namespace is only managed by the new terraform-provider-kustomize.

    To prevent deletion and re-creation of the namespace resource and the service of type LoadBalancer, which could cause downtime for applications, it's recommended to manually remove the namespace from Terraform state, so that Terraform does not make any changes to it until it is reimported below.

  2. Import cluster service resources into TF state

    Finally, all Kubernetes resources from manifests/ need to be imported into Terraform state; otherwise, the apply will fail with resource already exists errors.

After running the commands below, the Terraform apply of Kubestack version v0.6.0-beta.0 on a v0.5.0-beta.0 cluster will merely destroy the null_resource previously used to track changes to the manifests from TF state.

Migration instructions

The commands below work for clusters created using the quickstart. If your module is not called aks_zero, eks_zero or gke_zero, you need to adapt the commands accordingly. If you have additional resources, you need to import them as well. Remember to use single quotes ('') around resource names and IDs in the import commands.

You can run the commands below in the bootstrap container:

# Build the bootstrap container
docker build -t kbst-infra-automation:bootstrap ci-cd/

# Exec into the bootstrap container
docker run --rm -ti \
    -v `pwd`:/infra \
    -u `id -u`:`id -g` \
    kbst-infra-automation:bootstrap
AKS
# remove the namespace resource from TF state
terraform state rm module.aks_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.aks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'
EKS
# remove the namespace resource from TF state
terraform state rm module.eks_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.eks_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'
GKE
# remove the namespace resource from TF state
terraform state rm module.gke_zero.module.cluster.kubernetes_namespace.current
# import the kubernetes resources into TF state
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller"]' 'apps_v1_Deployment|ingress-kbst-default|nginx-ingress-controller'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRoleBinding|~X|nginx-ingress-clusterrole-nisa-binding'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole"]' 'rbac.authorization.k8s.io_v1beta1_ClusterRole|~X|nginx-ingress-clusterrole'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding"]' 'rbac.authorization.k8s.io_v1beta1_RoleBinding|ingress-kbst-default|nginx-ingress-role-nisa-binding'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role"]' 'rbac.authorization.k8s.io_v1beta1_Role|ingress-kbst-default|nginx-ingress-role'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration"]' '~G_v1_ConfigMap|ingress-kbst-default|nginx-configuration'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|tcp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|tcp-services'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ConfigMap|ingress-kbst-default|udp-services"]' '~G_v1_ConfigMap|ingress-kbst-default|udp-services'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_Namespace|~X|ingress-kbst-default"]' '~G_v1_Namespace|~X|ingress-kbst-default'
terraform import 'module.gke_zero.module.cluster.module.cluster_services.kustomization_resource.current["~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount"]' '~G_v1_ServiceAccount|ingress-kbst-default|nginx-ingress-serviceaccount'

v0.5.0-beta.0

13 Jan 19:46 · @pst · 9b931bd · Pre-release
  • EKS: Allow configuring mapRoles, mapUsers and mapAccounts.
    See #69 for usage details.
  • EKS: Add security groups to allow apiserver webhook communication.
  • Update versions of Terraform and used providers.
  • Update versions of cloud provider CLIs.
  • Update the version of Kustomize and add the apiVersion to kustomization files (see the example below).
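
For reference, a kustomization file with the new apiVersion and kind fields set (a minimal sketch; the resources entry is a placeholder):

    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization

    resources:
      - namespace.yaml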

Thanks @youngnicks, @piotrszlenk and @cbek for contributions to this release.

Upgrade Notes

Cluster services (AKS, EKS, GKE)

The previous release included a version of the nginx ingress controller cluster service that had the version set both as a label and as a labelSelector. Since labelSelectors are immutable, applying the update to the deployment fails. This issue has since been fixed in the nginx ingress controller base; for existing clusters, however, the deployment has to be recreated manually to update to this release (see the sketch below).
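
One way to recreate the deployment manually, assuming kubectl access to the cluster (namespace and deployment name are taken from the default cluster service; the next automation run then recreates it from the updated manifests):

    # delete the deployment with the immutable labelSelector;
    # the next apply recreates it from the updated manifests
    kubectl --namespace ingress-kbst-default delete deployment nginx-ingress-controller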

AKS

Upstream has added support for multiple node pools. This was implemented by switching from AvailabilitySets to VirtualMachineScaleSets. The change is reflected in the azurerm Terraform provider by renaming the agent_pool_profile attribute to default_node_pool and requires recreating AKS clusters. While backwards compatibility is an important goal for Kubestack, supporting these upstream changes in Terraform would require a lot of complexity that isn't justified for an early beta release.

To avoid a service disruption, consider creating a new cluster pair, migrating the workloads, and then destroying the previous pair, by temporarily loading both the old and the new module version in clusters.tf (see the sketch below).
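
A sketch of temporarily loading both module versions side by side; the module source paths are assumptions based on the quickstart layout and need to be adapted to your repository:

    # old cluster pair, kept temporarily on the previous version
    module "aks_zero" {
      source = "github.com/kbst/terraform-kubestack//azurerm/cluster?ref=v0.4.0-beta.0"
      # [...]
    }

    # new cluster pair; remove aks_zero once workloads are migrated
    module "aks_one" {
      source = "github.com/kbst/terraform-kubestack//azurerm/cluster?ref=v0.5.0-beta.0"
      # [...]
    }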

v0.4.0-beta.0

16 Jun 12:36 · @pst · ed2462d · Pre-release
  • No changes to clusters compared to v0.3.0-beta.0
  • Upgrades syntax to Terraform 0.12

Upgrade Notes

This release changes the upstream modules to the new Terraform 0.12 configuration language syntax. Likewise, repositories bootstrapped from the quickstart need to be updated as well. There are two small changes required.

  1. Change configuration variable syntax in clusters.tf like in the example below:
    -  configuration = "${var.clusters["eks_zero"]}"
    +  configuration = var.clusters["eks_zero"]
    
  2. Update the variable type definition in variables.tf like in the example below:
    -  type        = "map"
    +  type        = map(map(map(string)))
    

Last but not least, remember to upgrade the Terraform version in the Dockerfile. Depending on when you bootstrapped your repository, there may be additional changes in that Dockerfile worth copying.
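
Terraform 0.12 also ships a built-in upgrade command that rewrites most of the old syntax automatically; it can be worth running it before making the manual changes above (review the result before committing):

    # run inside the repository with Terraform 0.12 installed
    terraform init
    terraform 0.12upgrade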

v0.3.0-beta.0

16 Jun 12:12 · @pst · 1935aee · Pre-release
  • GKE: Auto scaling - replace cluster default with separate node pool with auto scaling enabled
  • GKE and AWS: Add node pool modules in preparation for additional node pool support per cluster
  • GKE: Remove the deprecated node_metadata feature and replace the deprecated region with the location parameter
  • Update versions of Terraform providers

Upgrade Notes

GKE

Temporarily set remove_default_node_pool = false in the cluster pair's config. Then apply once to spawn the new node pool. Once that's done, remove the variable again and apply a second time to remove the now obsolete previous node pool. Compare the diff below for an example of the configuration changes from v0.2.1-beta.0 to v0.3.0-beta.0, covering autoscaling and including the temporary remove_default_node_pool = false.

It is recommended to manually cordon the nodes of the old node pool and wait for workloads to be migrated by K8s before applying the second time to remove the default node pool (see the sketch after the diff).

$ git diff v0.2.1-beta.0 -- tests/config.auto.tfvars
diff --git a/tests/config.auto.tfvars b/tests/config.auto.tfvars
index 2dd773e..4bbdae6 100644
--- a/tests/config.auto.tfvars
+++ b/tests/config.auto.tfvars
@@ -25,14 +25,16 @@ clusters = {
       name_prefix                = "testing"
       base_domain                = "infra.serverwolken.de"
       cluster_min_master_version = "1.13.6"
-      cluster_initial_node_count = 1
+      cluster_min_node_count     = 1
+      cluster_max_node_count     = 3
       region                     = "europe-west1"
-      cluster_additional_zones   = "europe-west1-b,europe-west1-c,europe-west1-d"
+      cluster_node_locations     = "europe-west1-b,europe-west1-c,europe-west1-d"
+      remove_default_node_pool   = false
     }
 
     # Settings for Ops-cluster
     ops = {
-      cluster_additional_zones = "europe-west1-b"
+      cluster_node_locations = "europe-west1-b"
     }
   }
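
A sketch of cordoning the old nodes before the second apply, assuming kubectl access (node names are placeholders):

    # prevent new workloads from being scheduled on the old pool's nodes
    kubectl cordon <old-node-name>

    # optionally, actively evict workloads instead of waiting for migration
    kubectl drain <old-node-name> --ignore-daemonsets --delete-local-data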

EKS

Manually move the autoscaling group and launch configuration in the state to reflect the new module hierarchy, like in the example below. After that, there should be no changes to apply when upgrading from v0.2.1-beta.0 to v0.3.0-beta.0.

kbst@298d3d14f141:/infra$ terraform state mv module.eks_zero.module.cluster.aws_autoscaling_group.nodes module.eks_zero.module.cluster.module.node_pool.aws_autoscaling_group.nodes
Moved module.eks_zero.module.cluster.aws_autoscaling_group.nodes to module.eks_zero.module.cluster.module.node_pool.aws_autoscaling_group.nodes

kbst@298d3d14f141:/infra$ terraform state mv module.eks_zero.module.cluster.aws_launch_configuration.nodes module.eks_zero.module.cluster.module.node_pool.aws_launch_configuration.nodes
Moved module.eks_zero.module.cluster.aws_launch_configuration.nodes to module.eks_zero.module.cluster.module.node_pool.aws_launch_configuration.nodes

AKS

No changes in this release.

v0.2.1-beta.0

13 Jun 19:51 · @pst · cbecddd · Pre-release
  • Add support for localhost labs using Kubernetes in Docker (kind)
  • Fix AWS cross-account role assumption, so roles are also assumed correctly for aws-iam-authenticator
  • Bump GKE min_master_version to 1.13.6

v0.2.0-beta.1

30 Apr 13:18 · @pst · ab7610f · Pre-release
  • Fix the az command line client by locking its version and installing required dependencies
  • Expose AKS variables