Acend Kubernetes Training Cluster Setup with Terraform

Overview

This setup provisions a Kubernetes cluster for use in our trainings.

We use Hetzner as our cloud provider and RKE2 to create the Kubernetes cluster. The Kubernetes Cloud Controller Manager for Hetzner Cloud provisions load balancers from Kubernetes Service objects (type LoadBalancer) and also configures the networking and native routing for the Kubernetes cluster network traffic.

Cluster setup is based on our infrastructure setup.

To deploy our acend Kubernetes cluster, the following steps are necessary:

  1. Terraform to deploy the base infrastructure (see the sketch after this list)
    • VMs for control plane and worker nodes
    • Network
    • Load balancer for the Kubernetes API and RKE2
    • Firewall
    • Hetzner Cloud Controller Manager for the Kubernetes cluster networking
  2. Terraform to deploy and then bootstrap ArgoCD using our training-setup
  3. ArgoCD to deploy student/user resources and other components like
    • Storage Provisioner (hcloud csi, longhorn)
    • Ingresscontroller
    • Cert-Manager
    • Gitea
    • etc.
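
A minimal Terraform sketch of the step 1 building blocks, using the hcloud provider. Resource names, counts, server types and ports shown here are assumptions for illustration, not this module's actual configuration:

```hcl
terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
  }
}

# Private network for the cluster nodes
resource "hcloud_network" "cluster" {
  name     = "k8s-cluster"
  ip_range = "10.0.0.0/16"
}

resource "hcloud_network_subnet" "nodes" {
  network_id   = hcloud_network.cluster.id
  type         = "cloud"
  network_zone = "eu-central"
  ip_range     = "10.0.1.0/24"
}

# Control plane VMs (worker nodes look similar)
resource "hcloud_server" "controlplane" {
  count       = 3
  name        = "controlplane-${count.index}"
  server_type = "cpx31"
  image       = "ubuntu-22.04"
  location    = "nbg1"
}

# Load balancer in front of the Kubernetes API and the RKE2 supervisor port
resource "hcloud_load_balancer" "api" {
  name               = "k8s-api"
  load_balancer_type = "lb11"
  location           = "nbg1"
}

resource "hcloud_load_balancer_service" "kube_api" {
  load_balancer_id = hcloud_load_balancer.api.id
  protocol         = "tcp"
  listen_port      = 6443
  destination_port = 6443
}

resource "hcloud_load_balancer_service" "rke2_supervisor" {
  load_balancer_id = hcloud_load_balancer.api.id
  protocol         = "tcp"
  listen_port      = 9345
  destination_port = 9345
}
```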

See our training-setup for details on how the bootstrapping works.

For more details on the cluster design and setup see the documentation in our main infrastructure repository.

Components

argocd

ArgoCD is used to deploy components onto the cluster. ArgoCD is also used in the training itself.

There is a local admin account. The password can be extracted with `terraform output argocd-admin-password`.
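
A minimal sketch of how such a sensitive output is typically wired up; the resource names here are hypothetical and may differ from this module:

```hcl
# Hypothetical: generate the admin password and expose it as a
# sensitive output. Read it with: terraform output -raw argocd-admin-password
resource "random_password" "argocd_admin" {
  length = 24
}

output "argocd-admin-password" {
  value     = random_password.argocd_admin.result
  sensitive = true
}
```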

Each student/user also gets a local account.

cert-manager

Cert Manager is used to issue Certificates (Let's Encrypt). The ACME Webhook for the hosttech DNS API is used for dns01 challenges with our DNS provider.

The following ClusterIssuers are available:

  • letsencrypt-prod: for general http01 challenges.
  • letsencrypt-prod-acend: for dns01 challenges using the hosttech ACME webhook. The token for hosttech is stored in the hosttech-secret Secret in the cert-manager namespace.
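
As an illustration, a certificate using the dns01 issuer could be requested like this (a hypothetical sketch using the Terraform kubernetes provider; name, namespace and DNS name are placeholders):

```hcl
resource "kubernetes_manifest" "example_certificate" {
  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "Certificate"
    metadata = {
      name      = "example-cert"
      namespace = "default"
    }
    spec = {
      secretName = "example-cert-tls"
      dnsNames   = ["app.example.com"]
      # References the ClusterIssuer described above
      issuerRef = {
        kind = "ClusterIssuer"
        name = "letsencrypt-prod-acend"
      }
    }
  }
}
```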

Hetzner Kubernetes Cloud Controller Manager

The Kubernetes Cloud Controller Manager for Hetzner Cloud is deployed and allows provisioning load balancers from Services of type LoadBalancer. The Cloud Controller Manager is also responsible for creating all the necessary routes between the Kubernetes nodes. See Network Support for details.
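
For example, a Service of this kind triggers the Cloud Controller Manager to create an hcloud load balancer (a sketch; the annotation value, selector and ports are placeholders):

```hcl
resource "kubernetes_service" "example_lb" {
  metadata {
    name = "example-lb"
    annotations = {
      # Tells the Hetzner CCM where to place the load balancer
      "load-balancer.hetzner.cloud/location" = "nbg1"
    }
  }
  spec {
    type = "LoadBalancer"
    selector = {
      app = "example"
    }
    port {
      port        = 80
      target_port = 8080
    }
  }
}
```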

Hetzner CSI

To provision storage we use the Hetzner CSI driver.

The StorageClass hcloud-volumes is available. Be aware that hcloud volumes are provisioned at our cloud provider and incur costs. Furthermore, there are limits on how much storage we can provision, or more precisely, attach to a single VM.
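
A claim against this StorageClass might look as follows (a sketch; name and size are placeholders):

```hcl
resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-data"
  }
  spec {
    # Explicitly request an hcloud volume instead of the default class
    storage_class_name = "hcloud-volumes"
    access_modes       = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
}
```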

Ingresscontroller: haproxy

haproxy is used as the ingress controller, and haproxy is the default IngressClass.
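
An Ingress that selects this controller simply references the class (a sketch; host and backend service are placeholders):

```hcl
resource "kubernetes_ingress_v1" "example" {
  metadata {
    name = "example"
  }
  spec {
    ingress_class_name = "haproxy"
    rule {
      host = "app.example.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "example"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}
```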

Longhorn

As our Kubernetes nodes have enough local disk available, we use Longhorn as an additional storage solution. The longhorn StorageClass is set as the default storage class.
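
Because longhorn is the default, a claim that omits the storage class falls back to it (a sketch; name and size are placeholders):

```hcl
resource "kubernetes_persistent_volume_claim" "longhorn_example" {
  metadata {
    name = "longhorn-data"
  }
  spec {
    # No storage_class_name: the default class (longhorn) is used
    access_modes = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "5Gi"
      }
    }
  }
}
```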

Gitea

We run a local Gitea installation that is used in our trainings.

Training Environment

The training environment contains the following per student/user:

  • Credentials
  • All necessary namespaces
  • RBAC to access the namespaces
  • A webshell per student/user
  • A Gitea account and a Git repository clone of our argocd-training-example

It is deployed with ArgoCD using ApplicationSets. The ApplicationSets themselves are deployed with Terraform.
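
A heavily trimmed, hypothetical sketch of such an ApplicationSet as it could be deployed from Terraform; the real generators, repository URLs and template are defined in our training-setup:

```hcl
resource "kubernetes_manifest" "student_apps" {
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "ApplicationSet"
    metadata = {
      name      = "students"
      namespace = "argocd"
    }
    spec = {
      # One generated Application per student/user
      generators = [
        {
          list = {
            elements = [
              { username = "student1" },
              { username = "student2" }
            ]
          }
        }
      ]
      template = {
        metadata = { name = "{{username}}-env" }
        spec = {
          project = "default"
          source = {
            repoURL        = "https://gitea.example.com/acend/student-env.git"
            targetRevision = "HEAD"
            path           = "."
          }
          destination = {
            server    = "https://kubernetes.default.svc"
            namespace = "{{username}}"
          }
        }
      }
    }
  }
}
```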

Access to the training environment

There is a welcome page deployed at https://welcome.${cluster_name}.${cluster_domain} which contains, for each student/user, the URL of their webshell along with their credentials.

Usage

This repo is meant to be used as a module from within our training-setup.
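
A hypothetical invocation from such a setup; the actual source address and input variables are defined in the training-setup repository:

```hcl
module "training_cluster" {
  source = "github.com/acend/terraform-k8s-cluster-lab"

  # Assumed inputs, e.g. for the welcome page URL mentioned above
  cluster_name   = "training"
  cluster_domain = "example.com"
}
```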
