Gardener uses Kubernetes to manage Kubernetes clusters. This documentation describes how to install Gardener on an existing Kubernetes cluster of your IaaS provider.
Whenever this document refers to the base cluster, it means the existing cluster on which you will install Gardener. This distinguishes it from the clusters that you will create after the installation using Gardener. Once Gardener is installed, the base cluster is also referred to as the garden cluster. Whenever you create clusters, Gardener will create seed clusters and shoot clusters. This documentation only covers the installation of clusters in one region of one IaaS provider. More information: Architecture.
- The installation was tested on Linux and macOS
- You need to have the following tools installed:
- You need a base cluster. Currently, the installation tool supports installing Gardener on the following Kubernetes clusters:
  - Kubernetes version >= 1.11, or 1.10 with the feature gate `CustomResourceSubresources` enabled
  - Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP)
  - Elastic Container Service for Kubernetes (EKS) or Kubernetes Operations (kops) on Amazon Web Services (AWS)
    - Standard EKS clusters impose some additional difficulties for deploying Gardener, one example being the EKS networking plugin that uses the same CIDR for nodes and pods, which Gardener can't handle. We are working on improved documentation for this case. In the meantime, it is recommended to use other means for getting the initial cluster to avoid additional effort.
  - Azure Kubernetes Service (AKS) on Microsoft Azure
- Your base cluster needs at least 4 nodes with 8 GB of memory each
  - This is only a rough estimate of the required resources; you can also use fewer or more nodes if the node size is adjusted accordingly
  - If you don't create additional seeds, all shoots' control planes will be hosted on your base cluster, and these minimal requirements won't suffice
- You need a service account for the virtual machine instances of your IaaS provider where your Kubernetes cluster runs
- You need to have permissions to access your base cluster's private key
- You are connected to your Kubernetes cluster (environment variable `KUBECONFIG` is set)
- You need to have the Vertical Pod Autoscaler (VPA) installed on the base cluster and each seed cluster (Gardener deploys it on shooted seeds automatically)
To install Gardener in your base cluster, the command line tool `sow` is used. It depends on other tools being installed. To keep things simple, we have created a Docker image that already contains `sow` and all required tools. To execute `sow`, you call a wrapper script which starts `sow` in a Docker container (Docker will download the image from `eu.gcr.io/gardener-project/sow` if it is not available locally yet). Docker executes the `sow` command with the given arguments and mounts parts of your file system into that container so that `sow` can read configuration files for the installation of Gardener components and can persist the state of your installation. After `sow`'s execution, Docker removes the container again.

Which version of `sow` is compatible with this version of garden-setup is specified in the `SOW_VERSION` file. Other versions might work too, but older versions of `sow` in particular are probably incompatible with newer versions of garden-setup.
- Clone the `sow` repository and add the path to our wrapper script to your `PATH` variable so you can call `sow` on the command line:

  ```bash
  # setup for calling sow via the wrapper
  git clone "https://github.com/gardener/sow"
  cd sow
  export PATH=$PATH:$PWD/docker/bin
  ```
- Create a directory `landscape` for your Gardener landscape and clone this repository into a subdirectory called `crop`:

  ```bash
  cd ..
  mkdir landscape
  cd landscape
  git clone "https://github.com/gardener/garden-setup" crop
  ```
- If you don't have your kubeconfig stored locally somewhere yet, download it. For example, for GKE you would use the following command:

  ```bash
  gcloud container clusters get-credentials <your_cluster> --zone <your_zone> --project <your_project>
  ```
- Save your kubeconfig somewhere in your `landscape` directory. For the remaining steps we will assume that you saved it using the file path `landscape/kubeconfig`.
- In your `landscape` directory, create a configuration file called `acre.yaml`. The structure of the configuration file is described below. Note that the relative file path `./kubeconfig` must be specified in field `landscape.cluster.kubeconfig` in the configuration file.

  Do not use the file `acre.yaml` in directory `crop`. That file is used internally by the installation tool.

- Gardener itself, but also garden-setup, can only handle kubeconfigs with standard authentication methods (basic auth, token, ...). Authentication methods that require a third-party tool, e.g. the `aws` or `gcloud` CLI, are not supported.
  - If you created the base cluster using GKE, you can convert your kubeconfig file to one that uses basic authentication with the `sow convertkubeconfig` command:

    ```bash
    sow convertkubeconfig
    ```

    When asked for credentials, enter the ones that the GKE dashboard shows when clicking on `show credentials`. `sow` will replace the file specified in `landscape.cluster.kubeconfig` of your `acre.yaml` with a kubeconfig file that uses basic authentication.

    Basic authentication is disabled by default starting with Kubernetes `1.12`, see more details here. In case it is disabled on your cluster, the following command can be used to enable it:

    ```bash
    gcloud container clusters update <your-cluster> --enable-basic-auth
    ```

  - If you are not using GKE and don't know how to get a kubeconfig with standard authentication, you can also create a serviceaccount, grant it cluster-admin privileges by adding it to the corresponding `ClusterRoleBinding`, and construct a kubeconfig using that serviceaccount's token.
- Open a second terminal window whose current directory is your `landscape` directory. Set the `KUBECONFIG` environment variable as specified in `landscape.cluster.kubeconfig`, and watch the progress of the Gardener installation:

  ```bash
  export KUBECONFIG=./kubeconfig
  watch -d kubectl -n garden get pods,ingress,sts,svc
  ```

- In your first terminal window, use the following command to check in which order the components will be installed. Nothing will be deployed yet, so this also lets you verify that the syntax of your `acre.yaml` is correct:

  ```bash
  sow order -A
  ```

- If there are no error messages, use the following command to deploy Gardener on your base cluster:

  ```bash
  sow deploy -A
  ```

- `sow` now starts to install Gardener in your base cluster. The installation can take about 30 minutes. `sow` prints out status messages to the terminal window so that you can check the status of the installation. The other terminal window will show the newly created Kubernetes resources after a while, and whether their deployment was successful. Wait until the last component is deployed and all created Kubernetes resources have status `Running`.
- Use the following command to find out the URL of the Gardener dashboard:

  ```bash
  sow url
  ```

More information: Most Important Commands and Directories
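The serviceaccount fallback mentioned in the preparation steps boils down to assembling a small, token-based kubeconfig. As an illustrative sketch (not part of garden-setup): given an API server URL, the cluster's CA bundle, and a serviceaccount token, such a file can be emitted with Python's standard library alone, because JSON is a valid subset of YAML. All concrete names below (`gardener-admin`, the server address) are placeholders.

```python
import base64
import json

def build_kubeconfig(server: str, ca_pem: bytes, token: str,
                     name: str = "gardener-admin") -> str:
    """Assemble a minimal token-based kubeconfig (JSON is a valid subset of YAML)."""
    config = {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": name,
            "cluster": {
                "server": server,
                # the CA bundle is embedded base64-encoded
                "certificate-authority-data": base64.b64encode(ca_pem).decode(),
            },
        }],
        "users": [{"name": name, "user": {"token": token}}],
        "contexts": [{"name": name,
                      "context": {"cluster": name, "user": name}}],
        "current-context": name,
    }
    return json.dumps(config, indent=2)

if __name__ == "__main__":
    kubeconfig = build_kubeconfig(
        server="https://1.2.3.4",  # placeholder API server address
        ca_pem=b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
        token="<serviceaccount token>",  # placeholder token
    )
    print(kubeconfig)
```

The token and CA bundle can be read from the serviceaccount's secret with `kubectl`; the resulting file is a plain kubeconfig that garden-setup can consume because token authentication needs no third-party CLI.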
As a part of garden-setup, a `kube-apiserver` and `kube-controller-manager` will be deployed into your base cluster, creating the so-called 'virtual' cluster. The name comes from the fact that it behaves like a Kubernetes cluster, but there aren't any nodes behind this kube-apiserver, and thus no workload will actually run on it. This kube-apiserver is then extended by the Gardener apiserver.

At first glance, this feels unintuitive. Why create another kube-apiserver which needs its own kubeconfig? There are two major reasons for this approach:

The kube-apiserver needs to be configured in a certain way so that it can be used for a Gardener landscape. For example, the Gardener dashboard needs some OIDC configuration to be set on the kube-apiserver, otherwise authentication at the dashboard won't work. However, since garden-setup relies on a base cluster created by other means, many people will probably use a managed Kubernetes service (like GKE) to create the initial cluster - but most managed services do not grant end-users access to the kube-apiserver. By deploying its own kube-apiserver, garden-setup ensures full control over its configuration, which improves stability and reduces the complexity of the landscape setup.

Garden-setup also deploys its own etcd for the kube-apiserver. Because the kube-apiserver - and thus its etcd - is only used for Gardener resources, restoring the state of a Gardener landscape from an etcd backup is significantly easier than it would be if the Gardener resources were mixed with other resources in the etcd.
The major disadvantage of this approach is that two kubeconfigs are needed to operate Gardener: one for the base cluster, where all the pods are running, and one for the 'virtual' cluster where the Gardener resources - `shoot`, `seed`, `cloudprofile`, ... - are maintained. The kubeconfig for the 'virtual' cluster can be found in the landscape folder at `export/kube-apiserver/kubeconfig`, or it can be pulled from the secret `garden-kubeconfig-for-admin` in the `garden` namespace of the base cluster after the `kube-apiserver` component of garden-setup has been deployed.
This file will be evaluated using `spiff`, a dynamic templating language for YAML files. For example, this simplifies the specification of field values that are used multiple times in the YAML file. For more information, see the spiff repository.
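To illustrate the reference mechanism with a toy model (spiff itself is far more powerful, supporting merges, functions, and list indices like `iaas[0].type`, which this sketch does not handle): a `(( path ))` string resolves to the value found at that path within the same document. The `landscape` data below is a made-up example.

```python
import re

def resolve_refs(node, root):
    """Recursively replace string values of the form '(( a.b.c ))' with the
    value found at that dotted path in `root`. A toy model of spiff references."""
    if isinstance(node, dict):
        return {k: resolve_refs(v, root) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_refs(v, root) for v in node]
    if isinstance(node, str):
        m = re.fullmatch(r"\(\(\s*([\w.]+)\s*\)\)", node)
        if m:
            value = root
            for part in m.group(1).split("."):
                value = value[part]  # walk the path segment by segment
            return value
    return node

landscape = {
    "iaas": {"region": "europe-west1", "credentials": {"serviceaccount": "..."}},
    "etcd": {"backup": {"region": "(( iaas.region ))",
                        "credentials": "(( iaas.credentials ))"}},
}

resolved = resolve_refs(landscape, landscape)
print(resolved["etcd"]["backup"]["region"])  # europe-west1
```

This is why the template below can say `credentials: (( iaas.credentials ))` in the `etcd` and `dns` blocks instead of repeating the credential data.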
Please note that, for the sake of clarity, not all configuration options are listed in this readme. Instead, the more advanced configuration options have been moved into a set of additional documentation files. You can access these pages via their index and they are usually linked in their corresponding sections below.
```yaml
landscape:
  name: <Identifier>                  # general Gardener landscape identifier, for example, `my-gardener`
  domain: <prefix>.<cluster domain>   # unique basis domain for DNS entries

  cluster:                            # information about your base cluster
    kubeconfig: <relative path + filename>  # path to your `kubeconfig` file, rel. to directory `landscape` (defaults to `./kubeconfig`)
    networks:                         # CIDR IP ranges of base cluster
      nodes: <CIDR IP range>
      pods: <CIDR IP range>
      services: <CIDR IP range>

  iaas:
    - name: (( iaas[0].type ))        # name of the seed
      type: <gcp|aws|azure|openstack|vsphere> # iaas provider
      region: <major region>-<minor region>   # region for initial seed
      zones:                          # remove zones block for Azure
        - <major region>-<minor region>-<zone> # example: europe-west1-b
        - <major region>-<minor region>-<zone> # example: europe-west1-c
        - <major region>-<minor region>-<zone> # example: europe-west1-d
      credentials:                    # provide access to IaaS layer used for creating resources for shoot clusters
    - name:                           # see above
      type: <gcp|aws|azure|openstack> # see above
      region: <major region>-<minor region>   # region for seed
      zones:                          # remove zones block for Azure
        - <major region>-<minor region>-<zone> # example: europe-west1-b
        - <major region>-<minor region>-<zone> # example: europe-west1-c
        - <major region>-<minor region>-<zone> # example: europe-west1-d
      cluster:                        # information about your seed's base cluster
        networks:                     # CIDR IP ranges of seed cluster
          nodes: <CIDR IP range>
          pods: <CIDR IP range>
          services: <CIDR IP range>
        kubeconfig:                   # kubeconfig for seed cluster
          apiVersion: v1
          kind: Config
          ...
      credentials:                    # see above

  etcd:                               # optional for gcp/aws/azure/openstack, default values based on `landscape.iaas`
    backup:
      type: <gcs|s3|abs|swift>        # type of blob storage
      resourceGroup:                  # Azure resource group you would like to use for your backup
      region: (( iaas.region ))       # region of blob storage (default: same as above)
      credentials: (( iaas.credentials )) # credentials for the blob storage's IaaS provider (default: same as above)

  dns:                                # optional for gcp/aws/azure/openstack, default values based on `landscape.iaas`
    type: <google-clouddns|aws-route53|azure-dns|openstack-designate|cloudflare-dns|infoblox-dns> # dns provider
    credentials: (( iaas.credentials )) # credentials for the dns provider

  identity:
    users:
      - email:                        # email (used for Gardener dashboard login)
        username:                     # username (displayed in Gardener dashboard)
        password:                     # clear-text password (used for Gardener dashboard login)
      - email:                        # see above
        username:                     # see above
        hash:                         # bcrypted hash of password, see above

  cert-manager:
    email:                            # email for acme registration
    server: <live|staging|self-signed|url> # which kind of certificates to use for the dashboard/identity ingress (defaults to `self-signed`)
    privateKey:                       # optional existing user account's private key
```
```yaml
landscape:
  name: <Identifier>
```

Arbitrary name for your landscape. The name will be part of the names for resources, for example, the etcd buckets.

```yaml
  domain: <prefix>.<cluster domain>
```

Basis domain for DNS entries. As a best practice, use an individual prefix together with the cluster domain of your base cluster.

```yaml
  cluster:
    kubeconfig: <relative path + filename>
    networks:
      nodes: <CIDR IP range>
      pods: <CIDR IP range>
      services: <CIDR IP range>
```
Information about your base cluster, where Gardener will be deployed.

`landscape.cluster.kubeconfig` contains the path to your kubeconfig, relative to your landscape directory. It is recommended to place the kubeconfig file in your landscape directory so that you can sync all files relevant for your installation with a git repository. This value is optional and defaults to `./kubeconfig` if not specified.

`landscape.cluster.networks` contains the CIDR ranges of your base cluster.

Finding out the CIDR ranges of your cluster is not trivial. For example, GKE only tells you a "pod address range", which is actually a combination of the pod and service CIDRs. However, since the `kubernetes` service typically has the first IP of the service IP range, and most methods of getting a Kubernetes cluster tell you at least something about the CIDRs, it is usually possible to find out the CIDRs with a little educated guessing.
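Such guesses can at least be cross-checked mechanically: the three ranges must be disjoint, and the ClusterIP of the `kubernetes` service should fall inside the assumed service range. A small illustrative check using only the standard library (the example CIDRs are placeholders, not recommendations):

```python
import ipaddress

def check_cluster_networks(nodes: str, pods: str, services: str,
                           kubernetes_svc_ip: str) -> None:
    """Sanity-check guessed cluster CIDRs: the ranges must be disjoint and the
    ClusterIP of the `kubernetes` service must lie inside the service range."""
    ranges = {
        "nodes": ipaddress.ip_network(nodes),
        "pods": ipaddress.ip_network(pods),
        "services": ipaddress.ip_network(services),
    }
    names = list(ranges)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if ranges[a].overlaps(ranges[b]):
                raise ValueError(f"{a} CIDR {ranges[a]} overlaps {b} CIDR {ranges[b]}")
    if ipaddress.ip_address(kubernetes_svc_ip) not in ranges["services"]:
        raise ValueError(f"{kubernetes_svc_ip} is not inside the service CIDR {ranges['services']}")

# made-up example values; get the real ClusterIP with:
#   kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'
check_cluster_networks("10.250.0.0/16", "100.96.0.0/11", "100.64.0.0/13", "100.64.0.1")
```

If the check raises, at least one of the guessed ranges is wrong and worth re-investigating before writing it into `acre.yaml`.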
```yaml
  iaas:
    - name: (( type ))                # name of the seed
      type: <gcp|aws|azure|openstack|vsphere> # iaas provider
      region: <major region>-<minor region>   # region for initial seed
      zones:                          # remove zones block for Azure
        - <major region>-<minor region>-<zone> # example: europe-west1-b
        - <major region>-<minor region>-<zone> # example: europe-west1-c
        - <major region>-<minor region>-<zone> # example: europe-west1-d
      credentials:                    # provide access to IaaS layer used for creating resources for shoot clusters
    - name:                           # see above
      type: <gcp|aws|azure|openstack|vsphere> # see above
      region: <major region>-<minor region>   # region for seed
      zones:                          # remove zones block for Azure
        - <major region>-<minor region>-<zone> # example: europe-west1-b
        - <major region>-<minor region>-<zone> # example: europe-west1-c
        - <major region>-<minor region>-<zone> # example: europe-west1-d
      cluster:                        # information about your seed's base cluster
        networks:                     # CIDR IP ranges of seed cluster
          nodes: <CIDR IP range>
          pods: <CIDR IP range>
          services: <CIDR IP range>
        kubeconfig:                   # kubeconfig for seed cluster
          apiVersion: v1
          kind: Config
          ...
      credentials:                    # see above
```
Contains the information about where Gardener will create initial seed clusters and cloudprofiles for creating shoot clusters.
| Field | Type | Description | Examples | IaaS Provider Documentation |
|---|---|---|---|---|
| `name` | Custom value | Name of the seed/cloudprofile. Must be unique. | `gcp` | |
| `type` | Fixed value | IaaS provider for the seed. | `gcp` | |
| `region` | IaaS provider specific | Region for the seed cluster. The convention to use `<major region>-<minor region>` does not apply to all providers. In Azure, use `az account list-locations` to find out the location name (`name` attribute = lower case name without spaces). | `europe-west1` (GCP), `eu-west-1` (AWS), `westeurope` (Azure) | GCP (HowTo), GCP (Overview); AWS (HowTo), AWS (Overview); Azure (Overview), Azure (HowTo) |
| `zones` | IaaS provider specific | Zones for the seed cluster. Not needed for Azure. | `europe-west1-b` (GCP) | GCP (HowTo), GCP (Overview); AWS (HowTo), AWS (Overview) |
| `credentials` | IaaS provider specific | Credentials in a provider-specific format. | See table with yaml keys below. | GCP, AWS, Azure |
| `cluster.kubeconfig` | Kubeconfig | The kubeconfig for your seed base cluster. Must use basic auth authentication. | | |
| `cluster.networks` | CIDRs | The CIDRs of your seed cluster. See `landscape.cluster` for more information. | | |
Here a list of configurations can be given. The setup will create one cloudprofile and seed per entry. Currently, you have to provide the cluster you want to use as a seed; in the future, the setup will be able to create a shoot and configure that shoot as a seed. The `type` should match the type of the underlying cluster.

The first entry of the `landscape.iaas` list is special:

- It has to exist - the list needs at least one entry.
- Don't specify the `cluster` node for it - it will configure your base cluster as seed.
- Its `type` should match the one of your base cluster.
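These rules are easy to check mechanically before running `sow`. An illustrative sketch, assuming `acre.yaml` has already been parsed into a Python dict (the supported type set is taken from the template above; this checker is not part of garden-setup):

```python
# supported values for `type`, per the acre.yaml template in this document
SUPPORTED_TYPES = {"gcp", "aws", "azure", "openstack", "vsphere"}

def check_iaas(landscape: dict, base_cluster_type: str) -> list[str]:
    """Check the `landscape.iaas` rules described above; returns problems found."""
    problems = []
    iaas = landscape.get("iaas") or []
    if not iaas:
        return ["landscape.iaas needs at least one entry"]
    first = iaas[0]
    if "cluster" in first:
        problems.append("the first iaas entry must not have a `cluster` node")
    if first.get("type") != base_cluster_type:
        problems.append("the first iaas entry's `type` should match the base cluster")
    names = [e.get("name") for e in iaas]
    if len(names) != len(set(names)):
        problems.append("seed/cloudprofile names must be unique")
    for entry in iaas:
        if entry.get("type") not in SUPPORTED_TYPES:
            problems.append(f"unsupported type: {entry.get('type')!r}")
    return problems

landscape = {"iaas": [{"name": "gcp", "type": "gcp", "region": "europe-west1"}]}
print(check_iaas(landscape, "gcp"))  # []
```

An empty result means the list at least satisfies the structural rules above; it does not validate regions, zones, or credentials.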
See the advanced documentation for further configuration options and information about Openstack.
It's also possible to have the setup create shoots and then configure them as seeds. This has advantages over configuring existing clusters as seeds: for example, you don't have to provide the clusters yourself because they are created automatically, and shooted seed clusters can leverage Gardener's autoscaling capabilities.
How to configure shooted seeds is explained in the advanced documentation.
The credentials will be used to give Gardener access to the IaaS layer:
- To create a secret that will be used on the Gardener dashboard to create shoot clusters.
- To allow the control plane of the seed clusters to store the etcd backups of the shoot clusters.
Use the following yaml keys depending on your provider (excerpts):
| AWS | GCP |
|---|---|
| `credentials:` | `credentials:` |

| Azure | Openstack |
|---|---|
| `credentials:` | `credentials:` |
The `region` field in the Openstack credentials is only evaluated within the `dns` block (as `iaas` and `etcd.backup` have their own region fields, which will be used instead).
```yaml
  etcd:
    backup:
      # active: true
      type: <gcs|s3|abs|swift>
      resourceGroup: ...
      region: (( iaas.region ))
      credentials: (( iaas.credentials ))
```
Configuration of the blob storage to use for the etcd key-value store. If your IaaS provider offers a blob storage, you can use the same values for `etcd.backup.region` and `etcd.backup.credentials` as above for `iaas.region` and `iaas.credentials` correspondingly by using the `(( foo ))` expression of spiff.

If the type of `landscape.iaas[0]` is one of `gcp`, `aws`, `azure`, or `openstack`, this block can be defaulted - either partly or as a whole - based on values from `landscape.iaas`. The `resourceGroup`, which is necessary for Azure, cannot be defaulted and must be specified. Make sure that the specified `resourceGroup` is empty and unused, as deleting the cluster using `sow delete all` deletes this `resourceGroup`.
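The defaulting described above can be pictured as a simple mapping from provider type to blob store type, falling back to the first `landscape.iaas` entry for anything not set explicitly. A sketch (the type-to-store mapping follows the supported values listed below; garden-setup's actual defaulting logic may differ in detail):

```python
# provider type -> blob store type, per the supported values in this document
BLOB_STORE_FOR_TYPE = {"gcp": "gcs", "aws": "s3", "azure": "abs", "openstack": "swift"}

def default_etcd_backup(landscape: dict) -> dict:
    """Fill unset `etcd.backup` fields from the first `landscape.iaas` entry.
    Illustrative only; not garden-setup's actual implementation."""
    iaas0 = landscape["iaas"][0]
    backup = dict(landscape.get("etcd", {}).get("backup", {}))
    backup.setdefault("type", BLOB_STORE_FOR_TYPE[iaas0["type"]])
    backup.setdefault("region", iaas0["region"])
    backup.setdefault("credentials", iaas0["credentials"])
    # `resourceGroup` (Azure) is deliberately NOT defaulted - it must be set by hand
    return backup

landscape = {"iaas": [{"type": "gcp", "region": "europe-west1",
                       "credentials": {"sa": "..."}}]}
print(default_etcd_backup(landscape))
```

Explicitly set values always win; only missing keys are filled in, which mirrors the "partly or as a whole" wording above.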
| Field | Type | Description | Example | IaaS Provider Documentation |
|---|---|---|---|---|
| `backup.active` | Boolean | If set to `false`, deactivates the etcd backup for the virtual cluster etcd. Defaults to `true`. | `true` | n.a. |
| `backup.type` | Fixed value | Type of your blob store. Supported blob stores: `gcs` (Google Cloud Storage), `s3` (Amazon S3), `abs` (Azure Blob Storage), and `swift` (Openstack Swift). | `gcs` | n.a. |
| `backup.resourceGroup` | IaaS provider specific | Azure specific. Create an Azure blob store first which uses a resource group. Provide the resource group here. | `my-Azure-RG` | Azure (HowTo) |
| `backup.region` | IaaS provider specific | Region of blob storage. | `(( iaas.region ))` | GCP (Overview), AWS (Overview) |
| `backup.credentials` | IaaS provider specific | Service account credentials in a provider-specific format. | `(( iaas.creds ))` | GCP, AWS, Azure |
```yaml
  dns:
    type: <google-clouddns|aws-route53|azure-dns|openstack-designate|cloudflare-dns|infoblox-dns>
    credentials:
```
Configuration for the Domain Name Service (DNS) provider. If your IaaS provider also offers a DNS service, you can use the same values for `dns.credentials` as for `iaas.creds` above by using the `(( foo ))` expression of spiff. If they belong to another account (or to another IaaS provider), the appropriate credentials (and their type) have to be configured.

Similar to `landscape.etcd`, this block - and parts of it - are optional if the type of `landscape.iaas[0]` is one of `gcp`, `aws`, `azure`, or `openstack`. Missing values will be derived from `landscape.iaas`.
| Field | Type | Description | Example | IaaS Provider Documentation |
|---|---|---|---|---|
| `type` | Fixed value | Your DNS provider. Supported providers: `google-clouddns` (Google Cloud DNS), `aws-route53` (Amazon Route 53), `azure-dns` (Azure DNS), `openstack-designate` (Openstack Designate), `cloudflare-dns` (Cloudflare DNS), and `infoblox-dns` (Infoblox DNS). | `google-clouddns` | n.a. |
| `credentials` | IaaS provider specific | Service account credentials in a provider-specific format (see above). | `(( iaas.credentials ))` | GCP, AWS, Azure |
The credentials to use Cloudflare DNS consist of a single key `apiToken`, containing your API token.

For Infoblox DNS, you have to specify `USERNAME`, `PASSWORD`, and `HOST` in the `credentials` node. For a complete list of optional credentials keys, see here.
```yaml
  identity:
    users:
      - email:
        username:
        password:
      - email:
        username:
        hash:
```
Configures the identity provider that allows access to the Gardener dashboard. The easiest method is to provide a list of `users`, each containing `email`, `username`, and either a clear-text `password` or a bcrypted `hash` of the password.

You can then log in to the dashboard using one of the specified email/password combinations.
```yaml
  ingress:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true" # example for internal loadbalancers on aws
    ...
```
You can add annotations for the ingress controller's load balancer service. This can be used, for example, to deploy an internal load balancer on your cloud provider (see the AWS example above).
```yaml
  cert-manager:
    email:
    server: <live|staging|self-signed|url>
    privateKey: # optional
```
The setup deploys a cert-manager to provide a certificate for the Gardener dashboard, which can be configured here.
The entire `landscape.cert-manager` block is optional.

If not specified, `landscape.cert-manager.server` defaults to `self-signed`. This means that a self-signed CA will be created, which is used by the cert-manager (via a CA issuer) to sign the certificate. Since the CA is not publicly trusted, your web browser will show an 'untrusted certificate' warning when accessing the dashboard.

The `landscape.cert-manager.email` field is not evaluated in `self-signed` mode.
If set to `live`, the cert-manager will use the letsencrypt ACME server to get trusted certificates for the dashboard. Beware of the rate limits of letsencrypt.

Letsencrypt requires an email address and will send information about expiring certificates to that address. If `landscape.cert-manager.email` is not specified, `landscape.identity.users[0].email` will be used. One of the two fields has to be present.
If set to `staging`, the cert-manager will use the letsencrypt staging server. This is mainly for testing purposes. The communication with letsencrypt works exactly as in the `live` case, but the staging server does not produce trusted certificates, so you will still get the browser warning. The rate limits are significantly higher for the staging server, though.
If set to anything else, it is assumed to be the URL of an ACME server and the setup will create an ACME issuer for it.
See the advanced configuration for more configuration options.
If the given email address is already registered at letsencrypt, you can specify the private key of the associated user account with `landscape.cert-manager.privateKey`.
- Run `sow delete -A` to delete all components from your base Kubernetes cluster in inverse order.
- During the deletion, the corresponding contents in the directories `gen`, `export`, and `state` in your `landscape` directory are deleted automatically as well.
These are the most important `sow` commands for deploying and deleting components:
| Command | Use |
|---|---|
| `sow <component>` | Same as `sow deploy <component>`. |
| `sow delete <component>` | Deletes a single component. |
| `sow delete -A` | Deletes all components in inverse order. |
| `sow delete all` | Same as `sow delete -A`. |
| `sow delete -a <component>` | Deletes a component and all components that depend on it (including transitive dependencies). |
| `sow deploy <component>` | Deploys a single component. The deployment will fail if the dependencies have not been deployed before. |
| `sow deploy -A` | Deploys all components in the order specified by `sow order -A`. |
| `sow deploy -An` | Deploys all components that are not deployed yet. |
| `sow deploy all` | Same as `sow deploy -A`. |
| `sow deploy -a <component>` | Deploys a component and all of its dependencies. |
| `sow help` | Displays a command overview for `sow`. |
| `sow order -a <component>` | Displays all dependencies of a given component (in the order they should be deployed in). |
| `sow order -A` | Displays the order in which all components can be deployed. |
| `sow url` | Displays the URL of the Gardener dashboard (after a successful installation). |
After using `sow` to deploy the components, you will notice new directories inside your `landscape` directory:
| Directory | Use |
|---|---|
| `gen` | Temporary files that are created during the deployment of components, for example, generated manifests. |
| `export` | Allows communication (exports and imports) between components. It also contains the kubeconfig for the virtual cluster that handles the Gardener resources. |
| `state` | Important state information of the components is stored here, for example, the terraform state and generated certificates. It is crucial that this directory is not deleted while the landscape is active. While the contents of the `export` and `gen` directories are overwritten when a component is deployed again, the contents of `state` are reused instead. In some cases, it is necessary to delete the state of a component before deploying it again, for example if you want to create new certificates for it. |