Releases: kbst/terraform-kubestack
v0.12.0-beta.0
- AKS: Support configurable CNI #135 thanks @feend78
- Modules provide a `current_config` output #140 thanks @feend78
- GKE: Add support for private nodes #132 thanks @Spazzy757
- Update the default nginx ingress version to v0.40.2-kbst.0 #143
Upgrade Notes
EKS
- No EKS specific changes.
GKE
- GKE upstream changed the default for new clusters to private nodes. Kubestack follows the new default with this release. Existing GKE clusters need to set `enable_private_nodes = false` to retain the previous configuration. Changing the private nodes setting requires recreating the cluster.
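For existing clusters, retaining the old behavior is a one-line change; a minimal sketch, assuming the cluster variables live in `config.auto.tfvars` as with the other Kubestack settings:

```hcl
# config.auto.tfvars (sketch; all other variables unchanged)
# Keep pre-v0.12.0 behavior (public nodes) to avoid recreating the cluster
enable_private_nodes = false
```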
AKS
- The AKS module now uses calico for network policies by default. Previously created AKS clusters have to set `network_policy = null` in `config.auto.tfvars` to retain the previous configuration. Changing the `network_policy` requires recreating the cluster.
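A minimal sketch of the opt-out for previously created AKS clusters:

```hcl
# config.auto.tfvars (sketch; all other variables unchanged)
# Keep the pre-v0.12.0 network policy setting to avoid recreating the cluster
network_policy = null
```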
v0.11.0-beta.0
- No changes to clusters compared to v0.10.0-beta.0
- Upgrades to Terraform 0.13 #139
- Vendors provider binaries in the container image 5f423a7
Upgrade Notes
This release updates Terraform to `v0.13.4`. There are no other changes in this release. This is on purpose, following the upstream TF 0.13 upgrade instructions.
Changes required by the upgrade are part of this release's modules, and the Dockerfile has been updated to include the new Terraform version. As part of the TF 0.13 upgrade, HashiCorp introduced namespaced providers on the official registry. The `kustomization` provider has been published on the registry. To reflect this change in your state, you have to run the following command manually before committing the change and running the CI/CD pipeline.
Manual step: `terraform state replace-provider`
- Update to version `v0.11.0-beta.0` in `clusters.tf` and `Dockerfile*`
- Run a shell in the container

  ```shell
  docker build -t kbst-infra-automation:bootstrap .
  docker run --rm -ti \
    -v `pwd`:/infra \
    kbst-infra-automation:bootstrap
  ```

- Make sure you are authenticated with your cloud provider
- Run the state replace-provider command

  ```shell
  terraform init
  terraform workspace select ops
  terraform state replace-provider registry.terraform.io/-/kustomization registry.terraform.io/kbst/kustomization
  terraform init
  terraform apply
  ```

- Commit, push and merge this change into master to validate it against the `ops` environment
- Once this completed for `ops`, run the same commands for `apps` and then tag the merge commit

  ```shell
  terraform init
  terraform workspace select apps
  terraform state replace-provider registry.terraform.io/-/kustomization registry.terraform.io/kbst/kustomization
  terraform init
  terraform apply
  ```
v0.10.0-beta.0
- Adds cluster-local modules for the new development environment feature
- Updates `kind` provider to v0.0.3
- Adds `disable_default_ingress` support to the `kind` module
Upgrade Notes
This release allows teams to update their existing repositories to the new local development environment feature. To make this straightforward, there are no changes to actual AKS, EKS or GKE clusters between this and the previous `v0.9.4-beta.0` release.
AKS, EKS and GKE
To update existing repositories to support the local development environment, update the version in `clusters.tf` and `Dockerfile` to `v0.10.0-beta.0`. Additionally, you need to install the `kbst` CLI and get the `Dockerfile.loc` delivered with the latest starters. The easiest way to do this is to follow the first tutorial step and then copy the `Dockerfile.loc` over from the temporary starter directory into your actual repository.
v0.9.4-beta.0
- Transitional release for AKS to upgrade to the new Terraform provider and AKS APIs. Has breaking changes.
Upgrade Notes
EKS and GKE
There are no changes to EKS or GKE in this release. This release only affects AKS clusters.
AKS
There are two changes, required due to upstream changes, that will destroy and re-create various resources. Starting with version `v2.14.0`, the `azurerm` provider uses a different AKS API, and this causes two issues:
- It requires instances with a minimum of 2 CPU cores and 4GB memory. Changing the instance type forces a destroy and recreate.
- Load balancers created by Kubernetes use the `Standard` SKU; previously created IPs are of the `Basic` SKU. Changing the SKU forces a new IP.
Upgrade by migrating workloads
To upgrade without downtime users will need to create a second cluster with the new module version, migrate workloads and then remove the old cluster module from their configuration.
Upgrade with a maintenance window
Alternatively, users can upgrade existing clusters with the following steps. Following these steps destroys and recreates the cluster and the public IP, which is likely to cause disruptions to cluster ingress and stateful workloads. You can follow the documentation for adding a new cluster and then removing a cluster.
Steps to upgrade existing clusters:
- First upgrade to `v0.9.3-beta.1` and disable the default ingress by setting `disable_default_ingress = true` in `config.auto.tfvars`. Then apply this change. This will destroy all ingress related resources; otherwise the next step will fail with a circular dependency.
- Now upgrade to `v0.9.4-beta.0`. Applying this will recreate the cluster if your current instance type does not meet the minimum requirements.
- Finally, re-enable the default ingress by setting `disable_default_ingress = false` or removing the attribute. Applying this will create a new IP and DNS resources for cluster ingress again.
v0.9.3-beta.1
- Small fixup release renaming `base_key` to `configuration_base_key` for the custom environments feature.
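Assuming the attribute is set in `config.auto.tfvars`, the rename amounts to the following sketch (the value `"apps"` is hypothetical, shown only for illustration):

```hcl
# config.auto.tfvars (sketch)
# before, v0.9.3-beta.0:
# base_key = "apps"
# after, v0.9.3-beta.1:
configuration_base_key = "apps"
```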
Upgrade Notes
- Update the version in `Dockerfile` and `clusters.tf` to `v0.9.3-beta.1`.
There are no changes to clusters in this release.
v0.9.3-beta.0
- Configuration inheritance now supports both custom names instead of `apps` and `ops` for the Terraform workspaces as well as more than two environments. This enables various alternative cluster-environment architectures. #127
- Default ingress can now be disabled by setting `disable_default_ingress = true` in `config.auto.tfvars`. This excludes all ingress related infrastructure from being provisioned. Additionally, users may want to remove Nginx ingress from `manifests/`. #128
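A minimal sketch of the opt-out, assuming the standard layout where the cluster variables live in `config.auto.tfvars`:

```hcl
# config.auto.tfvars (sketch; all other variables unchanged)
# Skip provisioning all ingress related infrastructure
disable_default_ingress = true
```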
Upgrade Notes
- Update the version in `Dockerfile` and `clusters.tf` to `v0.9.3-beta.0`.
There are no changes to clusters in this release.
v0.9.2-beta.1
v0.9.2-beta.0
- Update Kustomize provider to `v0.2.0-beta.0` #117
  - Improved resiliency for a number of edge cases when creating and deleting K8s resources
- AKS: Fix `kube_dashboard` flapping #118
- EKS: Fix `aws-auth` configmap already exists #119
Upgrade Notes
- Update the version in `Dockerfile` and `clusters.tf` to `v0.9.2-beta.0`.
There are no changes to clusters in this release.
v0.9.1-beta.0
- Update Kustomize and Kustomize provider
Upgrade Notes
- Update the version in `Dockerfile` and `clusters.tf` to `v0.9.1-beta.0`.
There are no changes to clusters in this release.
v0.9.0-beta.0
- Updates dependency versions in Dockerfile including Terraform and provider CLIs
- Updates Terraform provider versions in modules
- GKE: Bump min_master_version to 1.16
- EKS: Replace worker node `aws_autoscaling_group` with dedicated `aws_eks_node_group` resource.
Upgrade Notes
GKE and AKS
- Update the version in `Dockerfile` and `clusters.tf` to `v0.9.0-beta.0`.
EKS
To migrate from the previously used `aws_autoscaling_group` to the new dedicated `aws_eks_node_group` resource without interruptions to the cluster workloads, a transitional release is provided. This transitional release runs worker nodes from both the `aws_autoscaling_group` and the `aws_eks_node_group`, allowing you to cordon the old nodes before they are removed when finally updating to `v0.9.0-beta.0`.
- Update the version in `Dockerfile` and `clusters.tf` to the transitional release `v0.8.1-beta.0`.
- Apply the transitional release to your ops and apps environments.
- Manually cordon old nodes using `kubectl` and wait for workloads to be moved.
- Update the version in `Dockerfile` and `clusters.tf` to `v0.9.0-beta.0`.