diff --git a/README.md b/README.md
index a15ab9a..fedfd26 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,11 @@
+

 Fury EKS Installer

+
-![Release](https://img.shields.io/badge/Latest%20Release-v1.9.0-blue)
+![Release](https://img.shields.io/badge/Latest%20Release-v1.10.0-blue)
 ![License](https://img.shields.io/github/license/sighupio/fury-eks-installer?label=License)
 [![Slack](https://img.shields.io/badge/slack-@kubernetes/fury-yellow.svg?logo=slack&label=Slack)](https://kubernetes.slack.com/archives/C0154HYTAQH)
@@ -34,12 +36,15 @@ The [EKS module][eks-module] deploys a **private control plane** cluster, where
 The [VPC and VPN module][vpc-vpn-module] setups all the necessary networking infrastructure and a bastion host.
 
-The bastion host includes a OpenVPN instance easily manageable by using [furyagent][furyagent] to provide access to the cluster.
+The bastion host includes an OpenVPN instance that can be easily managed using [furyagent][furyagent] to provide access to the cluster.
 
 > 🕵🏻‍♂️ [Furyagent][furyagent] is a tool developed by SIGHUP to manage OpenVPN and SSH user access to the bastion host.
 
 ## Usage
 
+> ⚠️ **WARNING**:
+> If you are upgrading from v1.9.x to v1.10.0, please read [the upgrade guide](docs/upgrades/v1.9-to-v1.10.0.md) first.
+
 ### Requirements
 
 - **AWS Access Credentials** of an AWS Account with the following [IAM permissions](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/iam-permissions.md).
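Since both the badge and the new warning reference the v1.10.0 release, a consumer typically bumps the tag they pin the installer's modules to when upgrading. The snippet below is a minimal sketch using standard Terraform GitHub-source syntax; the exact tag name and the surrounding block are assumptions, not taken from this diff.

```hcl
# Hypothetical version pin for the installer's eks module; the "?ref=v1.10.0"
# tag is an assumption based on the release badge above.
module "fury" {
  source = "github.com/sighupio/fury-eks-installer//modules/eks?ref=v1.10.0"

  # ... inputs as documented in modules/eks/README.md ...
}
```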
diff --git a/docs/UPGRADE-1.10.md b/docs/upgrades/v1.9-to-v1.10.0.md
similarity index 65%
rename from docs/UPGRADE-1.10.md
rename to docs/upgrades/v1.9-to-v1.10.0.md
index ff7f286..de8112f 100644
--- a/docs/UPGRADE-1.10.md
+++ b/docs/upgrades/v1.9-to-v1.10.0.md
@@ -1,10 +1,10 @@
-# Upgrade from v1.9.x to v1.10.x
+# Upgrade from v1.9.x to v1.10.0
 
-In this version of `fury-eks-installer`, we had to introduce support for launch templates due to the [deprecation of launch configurations](https://aws.amazon.com/blogs/compute/amazon-ec2-auto-scaling-will-no-longer-add-support-for-new-ec2-features-to-launch-configurations/).
+In version 1.10.0 of `fury-eks-installer`, we introduced support for launch templates in the node pools configuration due to the [deprecation of launch configurations](https://aws.amazon.com/blogs/compute/amazon-ec2-auto-scaling-will-no-longer-add-support-for-new-ec2-features-to-launch-configurations/).
 
-To achieve this goal, we introduced a new variable `node_pools_launch_kind` (that defaults to `launch_templates`) to select wheter to use launch templates, launch configurations or both:
+To achieve this goal, we introduced a new variable `node_pools_launch_kind` (that defaults to `launch_templates`) to select whether to use launch templates, launch configurations, or both:
 
-- Selecting `launch_configurations` in existing Fury clusters would change nothing.
+- Selecting `launch_configurations` in existing Fury clusters will not make any changes.
 - Selecting `launch_templates` for existing clusters will delete the current node pools and create new ones.
 - Selecting `both` for existing clusters will create new node pools that use the `launch_templates` and leave the old with `launch_configurations` intact.
@@ -17,10 +17,10 @@ Continue reading for how to migrate an existing cluster from `launch_configurati
 > ⚠️ **WARNING**
 > If any of the node fails before migrating to launch templates, the pods will have nowhere to be scheduled. If you can't cordon all the nodes at once, take note of the existing nodes and start cordoning them after the new nodes from the launch templates start joining the cluster.
 
-2. Add `node_pools_launch_kind = "both"` to your Terraform module configuration and apply.
+2. Add `node_pools_launch_kind = "both"` to your Terraform module (or furyctl) configuration and apply.
 3. Wait for the new nodes to join the cluster.
 4. Drain the old nodes that you cordoned in step 1 using `kubectl drain --ignore-daemonsets --delete-local-data `. Now all the pods are running on nodes created with launch templates.
-5. Change `node_pools_launch_kind` to `"launch_templates"` in your Terraform module configuration and apply. This will delete the old nodes created by launch configurations and leave you with the new nodes created by launch templates
+5. Change `node_pools_launch_kind` to `"launch_templates"` in your Terraform module (or furyctl) configuration and apply. This will delete the old nodes created by launch configurations and leave you with the new nodes created by launch templates.
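For readers following the migration steps above, here is a minimal sketch of what changes in a consumer's Terraform configuration. Only `node_pools_launch_kind` and its values come from the guide; the module name, source path, and elided inputs are hypothetical placeholders.

```hcl
# Migration sketch for steps 2 and 5 of the upgrade guide. Everything except
# node_pools_launch_kind is a placeholder for your existing configuration.
module "fury_eks" {
  source = "./vendor/modules/eks" # hypothetical path; keep whatever source you already use

  # ... your existing inputs (cluster_name, cluster_version, network, subnetworks, node_pools, ...) ...

  # Step 2: run launch-template node pools alongside the existing
  # launch-configuration ones, then `terraform apply` and wait for the
  # new nodes to join (step 3).
  node_pools_launch_kind = "both"

  # Step 5: after cordoning and draining the old nodes, switch the value to
  # "launch_templates" and apply again to remove the old node pools.
  # node_pools_launch_kind = "launch_templates"
}
```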
diff --git a/modules/eks/README.md b/modules/eks/README.md
index 124fb35..3334567 100644
--- a/modules/eks/README.md
+++ b/modules/eks/README.md
@@ -6,48 +6,48 @@
 ## Requirements
 
-| Name | Version |
-|------|---------|
-| terraform | 0.15.4 |
-| aws | 3.37.0 |
-| kubernetes | 1.13.3 |
+| Name       | Version |
+| ---------- | ------- |
+| terraform  | 0.15.4  |
+| aws        | 3.37.0  |
+| kubernetes | 1.13.3  |
 
 ## Providers
 
 | Name | Version |
-|------|---------|
-| aws | 3.37.0 |
+| ---- | ------- |
+| aws  | 3.37.0  |
 
 ## Inputs
 
-| Name | Description | Default | Required |
-|------|-------------|---------|:--------:|
-| cluster\_log\_retention\_days | Kubernetes Cluster log retention in days. Defaults to 90 days. | `90` | no |
-| cluster\_name | Unique cluster name. Used in multiple resources to identify your cluster resources | n/a | yes |
-| cluster\_version | Kubernetes Cluster Version. Look at the cloud providers documentation to discover available versions. EKS example -> 1.16, GKE example -> 1.16.8-gke.9 | n/a | yes |
-| dmz\_cidr\_range | Network CIDR range from where cluster control plane will be accessible | n/a | yes |
-| eks\_map\_accounts | Additional AWS account numbers to add to the aws-auth configmap | n/a | yes |
-| eks\_map\_roles | Additional IAM roles to add to the aws-auth configmap | n/a | yes |
-| eks\_map\_users | Additional IAM users to add to the aws-auth configmap | n/a | yes |
-| network | Network where the Kubernetes cluster will be hosted | n/a | yes |
-| node\_pools | An object list defining node pools configurations | `[]` | no |
-| node\_pools\_launch\_kind | Which kind of node pools to create. Valid values are: launch\_templates, launch\_configurations, both. | `"launch_templates"` | no |
-| resource\_group\_name | Resource group name where every resource will be placed. Required only in AKS installer (*) | `""` | no |
-| ssh\_public\_key | Cluster administrator public ssh key. Used to access cluster nodes with the operator\_ssh\_user | n/a | yes |
-| subnetworks | List of subnets where the cluster will be hosted | n/a | yes |
-| tags | The tags to apply to all resources | `{}` | no |
+| Name | Description | Default | Required |
+| ---- | ----------- | ------- | :------: |
+| cluster\_log\_retention\_days | Kubernetes Cluster log retention in days. Defaults to 90 days. | `90` | no |
+| cluster\_name | Unique cluster name. Used in multiple resources to identify your cluster resources | n/a | yes |
+| cluster\_version | Kubernetes Cluster Version. Look at the cloud providers documentation to discover available versions. EKS example -> 1.16, GKE example -> 1.16.8-gke.9 | n/a | yes |
+| dmz\_cidr\_range | Network CIDR range from where cluster control plane will be accessible | n/a | yes |
+| eks\_map\_accounts | Additional AWS account numbers to add to the aws-auth configmap | n/a | yes |
+| eks\_map\_roles | Additional IAM roles to add to the aws-auth configmap | n/a | yes |
+| eks\_map\_users | Additional IAM users to add to the aws-auth configmap | n/a | yes |
+| network | Network where the Kubernetes cluster will be hosted | n/a | yes |
+| node\_pools | An object list defining node pools configurations | `[]` | no |
+| node\_pools\_launch\_kind | Which kind of node pools to create. Valid values are: launch\_templates, launch\_configurations, both. | `"launch_templates"` | no |
+| resource\_group\_name | Resource group name where every resource will be placed. Required only in AKS installer (*) | `""` | no |
+| ssh\_public\_key | Cluster administrator public ssh key. Used to access cluster nodes with the operator\_ssh\_user | n/a | yes |
+| subnetworks | List of subnets where the cluster will be hosted | n/a | yes |
+| tags | The tags to apply to all resources | `{}` | no |
 
 ## Outputs
 
-| Name | Description |
-|------|-------------|
-| cluster\_certificate\_authority | The base64 encoded certificate data required to communicate with your cluster. Add this to the certificate-authority-data section of the kubeconfig file for your cluster |
-| cluster\_endpoint | The endpoint for your Kubernetes API server |
-| eks\_cluster\_oidc\_issuer\_url | The URL on the EKS cluster OIDC Issuer |
-| eks\_worker\_iam\_role\_name | Default IAM role name for EKS worker groups |
-| eks\_worker\_security\_group\_id | Security group ID attached to the EKS workers. |
-| eks\_workers\_asg\_names | Names of the autoscaling groups containing workers. |
-| operator\_ssh\_user | SSH user to access cluster nodes with ssh\_public\_key |
+| Name | Description |
+| ---- | ----------- |
+| cluster\_certificate\_authority | The base64 encoded certificate data required to communicate with your cluster. Add this to the certificate-authority-data section of the kubeconfig file for your cluster |
+| cluster\_endpoint | The endpoint for your Kubernetes API server |
+| eks\_cluster\_oidc\_issuer\_url | The URL on the EKS cluster OIDC Issuer |
+| eks\_worker\_iam\_role\_name | Default IAM role name for EKS worker groups |
+| eks\_worker\_security\_group\_id | Security group ID attached to the EKS workers. |
+| eks\_workers\_asg\_names | Names of the autoscaling groups containing workers. |
+| operator\_ssh\_user | SSH user to access cluster nodes with ssh\_public\_key |
 
 ## Usage
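The hunk ends at the module's `## Usage` section. As a reading aid, here is a hypothetical invocation of `modules/eks` wired to the inputs documented in the table above; every value is an illustrative placeholder, and the exact variable types should be checked against the module's own variable definitions before reuse.

```hcl
# Hypothetical usage sketch of the eks module with the inputs from the table
# above. All values and the relative source path are placeholders.
module "fury" {
  source = "./modules/eks" # assumed local checkout of this repository

  cluster_name    = "fury-example"
  cluster_version = "1.16"

  network        = "vpc-0123456789abcdef0" # VPC hosting the cluster
  subnetworks    = ["subnet-01234567", "subnet-89abcdef", "subnet-aabbccdd"]
  dmz_cidr_range = "10.0.0.0/16" # CIDR allowed to reach the control plane; check the declared type

  ssh_public_key = "ssh-rsa AAAA... admin@example.com"

  eks_map_accounts = []
  eks_map_roles    = []
  eks_map_users    = []

  node_pools             = []
  node_pools_launch_kind = "launch_templates" # or "launch_configurations" / "both"

  tags = {
    environment = "example"
  }
}
```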