kloud-3s is a set of Terraform modules that deploys a secure, functional k3s (Kubernetes) cluster on a number of cloud providers.
The following are currently tested:
- Hetzner Cloud
- Vultr
- DigitalOcean
- Linode
- UpCloud**
- ScaleWay
- OVH
- Google Cloud
- Azure
- Amazon Web Services
- Alibaba Cloud
- Oracle Cloud
You may support the project by using the referral links above.
This project is inspired by and borrows from the awesome work done by Hobby-Kube.
kloud-3s follows the hobby-kube guidelines for setting up a secure Kubernetes cluster.
As that guide is comprehensive, its information will not be repeated here.
kloud-3s aims to add the following:
- Use a lightweight Kubernetes distribution, i.e. k3s.
- Allow clean scale-up and scale-down of nodes.
- Broaden OS support. kloud-3s supports Windows via git-bash.
- Bootstrap the installation with only the minimal required variables.
- Resolve cluster and pod networking issues, which rank among the most common problems for k3s installations.
- Support multiple Kubernetes CNIs and preserve source IPs out of the box. The following have been tested:
  - The embedded k3s flannel (does not preserve source IPs).
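  As an illustration, the CNI is chosen through the `cni` variable in terraform.tfvars (accepted values per the Quick Reference table below); `cilium` here is only an example choice:

  ```hcl
  # terraform.tfvars (fragment) -- illustrative only
  # cni accepts one of: default, flannel, cilium, calico, weave
  cni = "cilium"
  ```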
- Add smoke testing using some popular cloud-native projects, namely traefik, cert-manager, and metallb. A successful deployment will complete without any error from these deployments.
- kloud-3s is opinionated and does not support every available use-case. The objective is to provide a consistent deployment experience across the supported cloud providers.
- For maximum portability, kloud-3s does not use an ssh-agent for its Terraform modules. It does not require existing SSH keys, but users may define their existing key paths.
- Enforce encrypted communication between nodes and use VPC/private networks where the cloud provider supports them.
- Although not required, kloud-3s suggests using it with a domain you own.
- kloud-3s creates an `A` record and a wildcard `CNAME` record of the domain value you provide, pointing to your master node.
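  In zone-file terms, the records created look roughly like the sketch below; the domain and IP are placeholders, not values kloud-3s produces:

  ```
  ; Illustrative only: kloud-3s points the apex A record at the master node
  ; and adds a wildcard CNAME back to the apex.
  kloud-3s.my.domain.    300  IN  A      203.0.113.10          ; master node IP (example)
  *.kloud-3s.my.domain.  300  IN  CNAME  kloud-3s.my.domain.
  ```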
- kloud-3s ensures functionality of the following across all its supported cloud providers. The support matrix will be updated as other versions are tested.

Software | Version |
---|---|
Ubuntu | 20.04 LTS |
K3S | v1.21.1+k3s1 |

- kloud-3s tests a successful deployment by using traefik and cert-manager deployments that send requests to the following endpoints:

Test | Response Code | Certificate Issuer |
---|---|---|
`curl -Lkv test.your.domain` | 200 | None |
`curl -Lkv whoami.your.domain` | 200 | Fake LE |
`curl -Lkv dash.your.domain` | 200 | LetsEncrypt |
kloud-3s requires the following installed on your system:
- terraform
- wireguard
- jq
- kubectl
- git-bash if on Windows
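The presence of these tools can be checked with a small script. This is only a convenience sketch, not part of kloud-3s; note that the wireguard package's CLI binary is `wg`:

```shell
#!/bin/sh
# Sketch: report which kloud-3s prerequisites are on PATH.
for tool in terraform wg jq kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```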
- Clone the repo

  ```shell
  $ git clone https://github.com/jawabuu/kloud-3s.git
  ```
- Switch to the desired cloud provider under the deploy directory. For example, to deploy kloud-3s on DigitalOcean:

  ```shell
  $ cd kloud-3s/deploy/digitalocean
  ```
- Copy tfvars.example to terraform.tfvars

  ```shell
  deploy/digitalocean$ cp tfvars.example terraform.tfvars
  ```
- Using your favourite editor, update the values in terraform.tfvars marked required

  ```shell
  deploy/digitalocean$ nano terraform.tfvars
  # DNS Settings
  create_zone = "true"
  domain = "kloud-3s.my.domain" # We are using digitalocean for dns
  # Resource Settings
  digitalocean_token = <required>
  ```
- Run `terraform init` to initialize modules

  ```shell
  deploy/digitalocean$ terraform init
  ```
- Run `terraform plan` to view the changes terraform will make

  ```shell
  deploy/digitalocean$ terraform plan
  ```
- Run `terraform apply` to create your resources

  ```shell
  deploy/digitalocean$ terraform apply --auto-approve
  ```
- Set `KUBECONFIG` by running `$(terraform output kubeconfig)`

  ```shell
  deploy/digitalocean$ $(terraform output kubeconfig)
  ```
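  The `$(...)` form works because the `kubeconfig` output prints an `export KUBECONFIG=...` command, which the command substitution then executes in the current shell. A stand-in sketch; `fake_terraform_output` is a hypothetical function mimicking the output shape, not the real terraform call:

  ```shell
  #!/bin/sh
  # Stand-in for `terraform output kubeconfig`, which prints an export command.
  fake_terraform_output() {
    printf 'export KUBECONFIG=%s\n' "$HOME/.kubeconfig"
  }

  # $(...) substitutes the printed text, so the shell runs `export KUBECONFIG=...`.
  $(fake_terraform_output)
  echo "KUBECONFIG is now: $KUBECONFIG"
  ```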
- Check resources with `kubectl get po -A -o wide`

  ```shell
  deploy/digitalocean$ kubectl get po -A -o wide
  NAMESPACE        NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
  kube-system      cilium-operator-77d99f8578-hqhtx           1/1     Running   0          60s   10.0.1.2      kube2   <none>           <none>
  kube-system      metrics-server-6d684c7b5-nrphr             1/1     Running   0          20s   10.42.1.68    kube2   <none>           <none>
  kube-system      cilium-ggjxz                               1/1     Running   0          60s   10.0.1.2      kube2   <none>           <none>
  kube-system      coredns-6c6bb68b64-54dgw                   1/1     Running   0          33s   10.42.0.3     kube1   <none>           <none>
  kube-system      cilium-9t6f7                               1/1     Running   0          51s   10.0.1.1      kube1   <none>           <none>
  cert-manager     cert-manager-9b8969d86-4ppxb               1/1     Running   0          30s   10.42.0.8     kube1   <none>           <none>
  whoami           whoami-5c8d94f78-qg2pc                     1/1     Running   0          20s   10.42.1.217   kube2   <none>           <none>
  whoami           whoami-5c8d94f78-8bgtc                     1/1     Running   0          16s   10.42.0.231   kube1   <none>           <none>
  default          traefik-76695c9b69-t25j2                   1/1     Running   0          27s   10.42.0.94    kube1   <none>           <none>
  metallb-system   speaker-rwc46                              1/1     Running   0          8s    10.0.1.2      kube2   <none>           <none>
  metallb-system   controller-65c5689b94-vdcpn                1/1     Running   0          8s    10.42.1.90    kube2   <none>           <none>
  metallb-system   speaker-zmpnx                              1/1     Running   0          12h   10.0.1.1      kube1   <none>           <none>
  default          net-8c845cc87-vml5w                        1/1     Running   0          7s    10.42.1.52    kube2   <none>           <none>
  cert-manager     cert-manager-webhook-8c5db9fb6-b59tj       1/1     Running   0          28s   10.42.0.250   kube1   <none>           <none>
  kube-system      local-path-provisioner-58fb86bdfd-k7cfw    1/1     Running   4          34s   10.42.1.13    kube2   <none>           <none>
  cert-manager     cert-manager-cainjector-8545fdf87c-jddnl   1/1     Running   6          27s   10.42.0.219   kube1   <none>           <none>
  ```
- SSH to the master easily with `$(terraform output ssh-master)`

  ```shell
  deploy/digitalocean$ $(terraform output ssh-master)
  ```
For any given provider under the deploy directory, only the terraform.tfvars and main.tf files need to be modified. Refer to the variables.tf file, which documents the various variables that you can override in terraform.tfvars.
Quick Reference
Common variables for deployment
Common variables | Default | Description |
---|---|---|
node_count | 3 | Number of nodes in cluster |
create_zone | false | Create a domain zone if it does not exist |
domain | none | Domain for the deployment |
k3s_version | latest | This is set to v1.21.1+k3s1 |
cni | weave | Choice of CNI among default, flannel, cilium, calico, weave |
overlay_cidr | 10.42.0.0/16 | Pod CIDR for k3s |
service_cidr | 10.43.0.0/16 | Service CIDR for k3s |
vpc_cidr | 10.115.0.0/24 | VPC CIDR for supported providers |
vpn_iprange | 10.0.1.0/24 | VPN CIDR for wireguard |
ha_cluster | false | Create a high-availability cluster |
ha_nodes | 3 | Number of controller nodes in HA cluster |
enable_floatingip | false | Use a floating IP |
install_app.floating_ip | false | Install floating-ip controller |
create_certs | false | Create letsencrypt certs |
ssh_key_path | ./../../.ssh/tf-kube | Filepath for SSH private key |
ssh_pubkey_path | ./../../.ssh/tf-kube.pub | Filepath for SSH public key |
ssh_keys_dir | ./../../.ssh | Directory to store created SSH keys |
kubeconfig_path | ./../../.kubeconfig | Directory to store kubeconfig file |
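A hedged terraform.tfvars sketch overriding a few of these defaults; the values are examples only, not recommendations:

```hcl
# terraform.tfvars (fragment) -- example overrides of the defaults above
node_count   = 5          # default 3
cni          = "calico"   # default weave
ha_cluster   = true       # default false
ha_nodes     = 3          # controller nodes in the HA cluster
create_certs = true       # issue letsencrypt certs
```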
Provider variables
************ | Authentication | Machine Size | Machine OS | Machine Region |
---|---|---|---|---|
DigitalOcean | digitalocean_token | digitalocean_size | digitalocean_image | digitalocean_region |
HetznerCloud | hcloud_token | hcloud_type | hcloud_image | hcloud_location |
Vultr | vultr_api_key | vultr_plan | vultr_os | vultr_region |
Linode | linode_token | linode_type | linode_image | linode_region |
UpCloud | upcloud_username, upcloud_password | upcloud_plan | upcloud_image | upcloud_zone |
ScaleWay | scaleway_organization_id, scaleway_access_key, scaleway_secret_key | scaleway_type | scaleway_image | scaleway_zone |
OVH | tenant_name, user_name, password | ovh_type | ovh_image | region |
GoogleCloud | creds_file | size | image | region, region_zone |
Azure | client_id, client_secret, tenant_id, subscription_id | size | - | region |
AWS | aws_access_key, aws_secret_key | size | image | region |
AlibabaCloud | alicloud_access_key, alicloud_secret_key | size | - | scaleway_zone |
Roadmap
- Support multi-master k3s HA clusters
- Add module to optionally bootstrap basic logging, monitoring and ingress services in the vein of kube-prod by bitnami.
- Security hardening for production workloads
- Support more cloud providers
- Support more DNS providers
- Add Cloud Controller Manager module
- Implement K3S Auto upgrades
**Terraform v0.13 only