docs: format and copy improvements
- format main and eks package readme
- link the upgrade guide in the main readme
- rename upgrade guide to follow the naming convention we use for the distro.
- improve upgrade guide copy
- prepare for release v1.10.0
ralgozino committed Dec 20, 2022
1 parent c0072ec commit a8274a4
Showing 3 changed files with 45 additions and 40 deletions.
9 changes: 7 additions & 2 deletions README.md
@@ -1,9 +1,11 @@
<!-- markdownlint-disable MD033 -->
<h1>
<img src="./docs/assets/fury_installer.png?raw=true" align="left" width="105" style="margin-right: 15px"/>
Fury EKS Installer
</h1>
<!-- markdownlint-enable MD033 -->

![Release](https://img.shields.io/badge/Latest%20Release-v1.9.0-blue)
![Release](https://img.shields.io/badge/Latest%20Release-v1.10.0-blue)
![License](https://img.shields.io/github/license/sighupio/fury-eks-installer?label=License)
[![Slack](https://img.shields.io/badge/slack-@kubernetes/fury-yellow.svg?logo=slack&label=Slack)](https://kubernetes.slack.com/archives/C0154HYTAQH)

@@ -34,12 +36,15 @@ The [EKS module][eks-module] deploys a **private control plane** cluster, where

The [VPC and VPN module][vpc-vpn-module] sets up all the necessary networking infrastructure and a bastion host.

The bastion host includes a OpenVPN instance easily manageable by using [furyagent][furyagent] to provide access to the cluster.
The bastion host includes an OpenVPN instance that can be easily managed with [furyagent][furyagent] to provide access to the cluster.

> 🕵🏻‍♂️ [Furyagent][furyagent] is a tool developed by SIGHUP to manage OpenVPN and SSH user access to the bastion host.
## Usage

> ⚠️ **WARNING**:
> if you are upgrading from v1.9.x to v1.10.0, please read [the upgrade guide](docs/upgrades/v1.9-to-v1.10.0.md) first.
### Requirements

- **AWS Access Credentials** of an AWS Account with the following [IAM permissions](https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/iam-permissions.md).
12 changes: 6 additions & 6 deletions docs/UPGRADE-1.10.md → docs/upgrades/v1.9-to-v1.10.0.md
@@ -1,10 +1,10 @@
# Upgrade from v1.9.x to v1.10.x
# Upgrade from v1.9.x to v1.10.0

In this version of `fury-eks-installer`, we had to introduce support for launch templates due to the [deprecation of launch configurations](https://aws.amazon.com/blogs/compute/amazon-ec2-auto-scaling-will-no-longer-add-support-for-new-ec2-features-to-launch-configurations/).
In version 1.10.0 of `fury-eks-installer`, we introduced support for launch templates in the node pools configuration due to the [deprecation of launch configurations](https://aws.amazon.com/blogs/compute/amazon-ec2-auto-scaling-will-no-longer-add-support-for-new-ec2-features-to-launch-configurations/).

To achieve this goal, we introduced a new variable `node_pools_launch_kind` (that defaults to `launch_templates`) to select wheter to use launch templates, launch configurations or both:
To achieve this goal, we introduced a new variable `node_pools_launch_kind` (that defaults to `launch_templates`) to select whether to use launch templates, launch configurations, or both (see the sketch after the list below for how this looks in a module block):

- Selecting `launch_configurations` in existing Fury clusters would change nothing.
- Selecting `launch_configurations` in existing Fury clusters will not make changes.
- Selecting `launch_templates` for existing clusters will delete the current node pools and create new ones.
- Selecting `both` for existing clusters will create new node pools that use `launch_templates` and leave the old ones with `launch_configurations` intact.
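For orientation, here is a minimal sketch of how the variable could be set in a Terraform configuration that calls the EKS module. The `source` syntax, the module label, and the omitted arguments are assumptions of this sketch; only `node_pools_launch_kind` and its valid values come from the installer itself.

```hcl
module "fury" {
  # Assumed source path and ref; point this at the installer version you actually use.
  source = "github.com/sighupio/fury-eks-installer//modules/eks?ref=v1.10.0"

  # ... cluster_name, cluster_version, network, subnetworks, and the other
  # required inputs documented in modules/eks/README.md ...

  # Valid values: "launch_templates" (default), "launch_configurations", "both".
  node_pools_launch_kind = "launch_templates"
}
```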

@@ -17,10 +17,10 @@ Continue reading for how to migrate an existing cluster from `launch_configurati
> ⚠️ **WARNING**
> If any of the nodes fails before migrating to launch templates, the pods will have nowhere to be scheduled. If you can't cordon all the nodes at once, take note of the existing nodes and start cordoning them after the new nodes from the launch templates start joining the cluster.
2. Add `node_pools_launch_kind = "both"` to your Terraform module configuration and apply.
2. Add `node_pools_launch_kind = "both"` to your Terraform module (or furyctl) configuration and apply.

3. Wait for the new nodes to join the cluster.

4. Drain the old nodes that you cordoned in step 1 using `kubectl drain --ignore-daemonsets --delete-local-data <node_name>`. Now all the pods are running on nodes created with launch templates.

5. Change `node_pools_launch_kind` to `"launch_templates"` in your Terraform module configuration and apply. This will delete the old nodes created by launch configurations and leave you with the new nodes created by launch templates
5. Change `node_pools_launch_kind` to `"launch_templates"` in your Terraform module (or furyctl) configuration and apply. This will delete the old nodes created by launch configurations and leave you with the new nodes created by launch templates.
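Put together, steps 2 and 5 amount to changing a single attribute in the module block across two applies. A sketch follows; the module label and the elided arguments are placeholders.

```hcl
module "fury" {
  # ... the rest of the cluster configuration stays unchanged ...

  # Step 2: run both kinds side by side so the launch-template node pools can
  # join the cluster while the launch-configuration nodes keep serving pods.
  node_pools_launch_kind = "both"

  # Step 5: after draining the old nodes, switch to launch templates only.
  # Applying this deletes the node pools backed by launch configurations.
  # node_pools_launch_kind = "launch_templates"
}
```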
64 changes: 32 additions & 32 deletions modules/eks/README.md
@@ -6,48 +6,48 @@

## Requirements

| Name | Version |
|------|---------|
| terraform | 0.15.4 |
| aws | 3.37.0 |
| kubernetes | 1.13.3 |
| Name | Version |
| ---------- | ------- |
| terraform | 0.15.4 |
| aws | 3.37.0 |
| kubernetes | 1.13.3 |

## Providers

| Name | Version |
|------|---------|
| aws | 3.37.0 |
| ---- | ------- |
| aws | 3.37.0 |

## Inputs

| Name | Description | Default | Required |
|------|-------------|---------|:--------:|
| cluster\_log\_retention\_days | Kubernetes Cluster log retention in days. Defaults to 90 days. | `90` | no |
| cluster\_name | Unique cluster name. Used in multiple resources to identify your cluster resources | n/a | yes |
| cluster\_version | Kubernetes Cluster Version. Look at the cloud providers documentation to discover available versions. EKS example -> 1.16, GKE example -> 1.16.8-gke.9 | n/a | yes |
| dmz\_cidr\_range | Network CIDR range from where cluster control plane will be accessible | n/a | yes |
| eks\_map\_accounts | Additional AWS account numbers to add to the aws-auth configmap | n/a | yes |
| eks\_map\_roles | Additional IAM roles to add to the aws-auth configmap | n/a | yes |
| eks\_map\_users | Additional IAM users to add to the aws-auth configmap | n/a | yes |
| network | Network where the Kubernetes cluster will be hosted | n/a | yes |
| node\_pools | An object list defining node pools configurations | `[]` | no |
| node\_pools\_launch\_kind | Which kind of node pools to create. Valid values are: launch\_templates, launch\_configurations, both. | `"launch_templates"` | no |
| resource\_group\_name | Resource group name where every resource will be placed. Required only in AKS installer (*) | `""` | no |
| ssh\_public\_key | Cluster administrator public ssh key. Used to access cluster nodes with the operator\_ssh\_user | n/a | yes |
| subnetworks | List of subnets where the cluster will be hosted | n/a | yes |
| tags | The tags to apply to all resources | `{}` | no |
| Name | Description | Default | Required |
| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------- | :------: |
| cluster\_log\_retention\_days | Kubernetes Cluster log retention in days. Defaults to 90 days. | `90` | no |
| cluster\_name | Unique cluster name. Used in multiple resources to identify your cluster resources | n/a | yes |
| cluster\_version | Kubernetes Cluster Version. Look at the cloud providers documentation to discover available versions. EKS example -> 1.16, GKE example -> 1.16.8-gke.9 | n/a | yes |
| dmz\_cidr\_range | Network CIDR range from where cluster control plane will be accessible | n/a | yes |
| eks\_map\_accounts | Additional AWS account numbers to add to the aws-auth configmap | n/a | yes |
| eks\_map\_roles | Additional IAM roles to add to the aws-auth configmap | n/a | yes |
| eks\_map\_users | Additional IAM users to add to the aws-auth configmap | n/a | yes |
| network | Network where the Kubernetes cluster will be hosted | n/a | yes |
| node\_pools | An object list defining node pools configurations | `[]` | no |
| node\_pools\_launch\_kind | Which kind of node pools to create. Valid values are: launch\_templates, launch\_configurations, both. | `"launch_templates"` | no |
| resource\_group\_name | Resource group name where every resource will be placed. Required only in AKS installer (*) | `""` | no |
| ssh\_public\_key | Cluster administrator public ssh key. Used to access cluster nodes with the operator\_ssh\_user | n/a | yes |
| subnetworks | List of subnets where the cluster will be hosted | n/a | yes |
| tags | The tags to apply to all resources | `{}` | no |
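
To make the table above more concrete, the following is a hedged sketch of a module call. The `source` ref, the example values, and the exact object shapes expected by `node_pools`, `eks_map_roles`, and `eks_map_users` are assumptions, not something this README prescribes; check the module's `variables.tf` for the authoritative types.

```hcl
module "fury" {
  source = "github.com/sighupio/fury-eks-installer//modules/eks?ref=v1.10.0" # assumed path

  cluster_name    = "fury-production"
  cluster_version = "1.23"                  # any Kubernetes version supported by EKS
  network         = "vpc-0123456789abcdef0" # e.g. the VPC created by the VPC and VPN module
  subnetworks     = ["subnet-aaa", "subnet-bbb", "subnet-ccc"]
  dmz_cidr_range  = "10.0.0.0/16"           # CIDR allowed to reach the control plane
  ssh_public_key  = "ssh-ed25519 AAAA... admin@example.com"

  eks_map_accounts = []
  eks_map_roles    = []
  eks_map_users    = []

  node_pools             = [] # see the node_pools variable definition for the object shape
  node_pools_launch_kind = "launch_templates"

  tags = {
    environment = "production"
  }
}
```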

## Outputs

| Name | Description |
|------|-------------|
| cluster\_certificate\_authority | The base64 encoded certificate data required to communicate with your cluster. Add this to the certificate-authority-data section of the kubeconfig file for your cluster |
| cluster\_endpoint | The endpoint for your Kubernetes API server |
| eks\_cluster\_oidc\_issuer\_url | The URL on the EKS cluster OIDC Issuer |
| eks\_worker\_iam\_role\_name | Default IAM role name for EKS worker groups |
| eks\_worker\_security\_group\_id | Security group ID attached to the EKS workers. |
| eks\_workers\_asg\_names | Names of the autoscaling groups containing workers. |
| operator\_ssh\_user | SSH user to access cluster nodes with ssh\_public\_key |
| Name | Description |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| cluster\_certificate\_authority | The base64 encoded certificate data required to communicate with your cluster. Add this to the certificate-authority-data section of the kubeconfig file for your cluster |
| cluster\_endpoint | The endpoint for your Kubernetes API server |
| eks\_cluster\_oidc\_issuer\_url | The URL on the EKS cluster OIDC Issuer |
| eks\_worker\_iam\_role\_name | Default IAM role name for EKS worker groups |
| eks\_worker\_security\_group\_id | Security group ID attached to the EKS workers. |
| eks\_workers\_asg\_names | Names of the autoscaling groups containing workers. |
| operator\_ssh\_user | SSH user to access cluster nodes with ssh\_public\_key |
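
As a closing sketch, the outputs above can be wired into the `kubernetes` provider. The module label and the use of the `aws_eks_cluster_auth` data source are assumptions of this example; only the output names come from the table.

```hcl
data "aws_eks_cluster_auth" "fury" {
  name = "fury-production" # must match the cluster_name passed to the module
}

provider "kubernetes" {
  host                   = module.fury.cluster_endpoint
  cluster_ca_certificate = base64decode(module.fury.cluster_certificate_authority)
  token                  = data.aws_eks_cluster_auth.fury.token
  load_config_file       = false # on the 1.x kubernetes provider, ignore ~/.kube/config
}
```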

## Usage
