Merge pull request #3667 from k0sproject/backport-3598-to-release-1.28
[Backport release-1.28] Change wordings on CNI,CSI and CRI related pages to really mean k0s s…
twz123 authored Nov 2, 2023
2 parents 88ca1fb + 43ee010 commit f434c51
Showing 5 changed files with 37 additions and 17 deletions.
14 changes: 6 additions & 8 deletions docs/cloud-providers.md
@@ -1,12 +1,12 @@
# Cloud providers

k0s builds Kubernetes components in *providerless* mode, meaning that cloud providers are not built into k0s-managed Kubernetes components. As such, you must externally configure the cloud providers to enable their support in your k0s cluster (for more information on running Kubernetes with cloud providers, refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/).
K0s supports all [Kubernetes cloud controllers]. However, those must be installed as separate cluster add-ons since k0s builds Kubernetes components in *providerless* mode.

## External Cloud Providers
[Kubernetes cloud controllers]: https://kubernetes.io/docs/concepts/architecture/cloud-controller/

### Enable cloud provider support in kubelet
## Enable cloud provider support in kubelet

Even when all components are built with providerless mode, you must be able to enable cloud provider mode for kubelet. To do this, run the workers with `--enable-cloud-provider=true`.
You must enable cloud provider mode for kubelet. To do this, run the workers with `--enable-cloud-provider=true`.

When deploying with [k0sctl](k0sctl-install.md), you can add this into the `installFlags` of worker hosts.

@@ -23,11 +23,9 @@ spec:
role: worker
```
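For reference, a fuller sketch of such a k0sctl host entry. The SSH address and user are placeholders, and the field names follow the k0sctl `v1beta1` schema; verify against your k0sctl version:

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  hosts:
    - role: worker
      ssh:
        address: 10.0.0.2              # placeholder worker address
        user: root                     # placeholder SSH user
      installFlags:
        - --enable-cloud-provider=true # enables cloud provider mode for kubelet
```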
### Deploy the cloud provider
## Deploy the cloud provider
The easiest way to deploy cloud provider controllers is directly on the k0s cluster.
Use the [manifest deployer](manifests.md) built into k0s to deploy your cloud provider as a k0s-managed stack: simply drop all required manifests into the `/var/lib/k0s/manifests/aws/` directory, and k0s will handle the deployment.
You can use any means to deploy your cloud controller into the cluster. Most providers support [Helm charts](helm-charts.md) for this.
**Note**: The prerequisites for the various cloud providers can vary (for example, several require that configuration files be present on all of the nodes). Refer to your chosen cloud provider's documentation as necessary.
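For example, a minimal sketch of placing a cloud controller manifest into the stack directory. The manifest filename is hypothetical; use the manifests published by your provider:

```shell
# Run on a controller node. The stack directory name ("aws") and the
# manifest file name are illustrative only.
mkdir -p /var/lib/k0s/manifests/aws
cp aws-cloud-controller-manager.yaml /var/lib/k0s/manifests/aws/
# k0s picks up the directory and applies its contents as a managed stack.
```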
19 changes: 12 additions & 7 deletions docs/networking.md
@@ -2,7 +2,17 @@

## In-cluster networking

k0s supports two Container Network Interface (CNI) providers out-of-box, [Kube-router](https://github.com/cloudnativelabs/kube-router) and [Calico](https://www.projectcalico.org/). In addition, k0s can support your own CNI configuration.
k0s supports any standard [CNI] network provider. For convenience, k0s comes bundled with two built-in providers, [Kube-router] and [Calico].

[CNI]: https://github.com/containernetworking/cni
[Kube-router]: https://github.com/cloudnativelabs/kube-router
[Calico]: https://www.projectcalico.org/

### Custom CNI configuration

You can opt-out of having k0s manage the network setup and choose instead to use any network plugin that adheres to the CNI specification. To do so, configure `custom` as the [network provider] in the k0s configuration file (`k0s.yaml`). You can then deploy the CNI provider of your choice using Helm, plain Kubernetes manifests, or any other means.

[network provider]: configuration.md#specnetwork
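
As a minimal sketch, assuming the standard k0s configuration schema (see the configuration reference for the authoritative field names):

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    provider: custom   # k0s deploys no CNI; bring your own
```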

### Notes

@@ -21,17 +31,12 @@ Kube-router is built into k0s, and so by default the distribution uses it for ne

### Calico

In addition to Kube-router, k0s also offers [Calico](https://www.projectcalico.org/) as an alternative, built-in network provider. Calico is a layer 3 container networking solution that routes packets to pods. It supports, for example, pod-specific network policies that help to secure kubernetes clusters in demanding use cases. Calico uses the vxlan overlay network by default, and you can configure it to support ipip (IP-in-IP).
In addition to Kube-router, k0s also offers [Calico] as an alternative, built-in network provider. Calico is a layer 3 container networking solution that routes packets to pods. It supports, for example, pod-specific network policies that help to secure kubernetes clusters in demanding use cases. Calico uses the vxlan overlay network by default, and you can configure it to support ipip (IP-in-IP).

- Does NOT support armv7
- Uses a bit more resources
- Supports dual-stack (IPv4/IPv6) networking
- Supports Windows nodes
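
For example, a hedged sketch of switching Calico to IP-in-IP mode in the k0s configuration. The `calico` field names are taken from the k0s configuration reference as I recall it; verify against your k0s version:

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
spec:
  network:
    provider: calico
    calico:
      mode: ipip   # default is vxlan
```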

### Custom CNI configuration

You can opt-out of having k0s manage the network setup and choose instead to use any network plugin that adheres to the CNI specification. To do so, configure `custom` as the network provider in the k0s configuration file (`k0s.yaml`). You can do this, for example, by pushing network provider manifests into `/var/lib/k0s/manifests`, from where k0s controllers will collect them for deployment into the cluster (for more information, refer to [Manifest Deployer](manifests.md).

## Controller-Worker communication

One goal of k0s is to allow for the deployment of an isolated control plane, which may prevent the establishment of an IP route between controller nodes and the pod network. Thus, to enable this communication path (which is mandated by conformance tests), k0s deploys [Konnectivity service](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/) to proxy traffic from the API server (control plane) into the worker nodes. This ensures that we can always fulfill all the Kubernetes API functionalities, but still operate the control plane in total isolation from the workers.
8 changes: 7 additions & 1 deletion docs/runtime.md
@@ -1,9 +1,15 @@
# Runtime

k0s uses [containerd](https://github.com/containerd/containerd) as the default Container Runtime Interface (CRI) and runc as the default low-level runtime. In most cases they don't require any configuration changes. However, if custom configuration is needed, this page provides some examples.
k0s supports any container runtime that implements the [CRI] specification.

k0s comes bundled with [containerd] as the default Container Runtime Interface (CRI) and [runc] as the default low-level runtime. In most cases they don't require any configuration changes. However, if custom configuration is needed, this page provides some examples.

![k0s_runtime](img/k0s_runtime.png)

[CRI]: https://github.com/kubernetes/cri-api
[containerd]: https://github.com/containerd/containerd
[runc]: https://github.com/opencontainers/runc
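
As an illustration of using an external CRI runtime, a sketch of starting a worker against CRI-O. This assumes CRI-O is installed separately and listens on its usual socket path; check the `--cri-socket` flag documentation for the exact format in your k0s version:

```shell
# Point the worker at an external CRI-O runtime instead of the bundled containerd.
k0s worker --cri-socket remote:unix:///var/run/crio/crio.sock
```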

## containerd configuration

By default, k0s manages the full containerd configuration. The user has the option of fully overriding, and thus also managing, the configuration themselves.
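
One possible sketch of a partial override via a drop-in file. The `/etc/k0s/containerd.d/` path and the merge behaviour are assumptions based on k0s defaults, and the registry-mirror snippet is illustrative only; consult the rest of this page for the authoritative mechanism:

```shell
# Add a partial containerd configuration as a drop-in; k0s merges it into
# its managed configuration. Paths and TOML content are illustrative.
mkdir -p /etc/k0s/containerd.d
cat <<'EOF' > /etc/k0s/containerd.d/mirror.toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://registry.example.com"]
EOF
```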
4 changes: 4 additions & 0 deletions docs/storage.md
@@ -1,5 +1,9 @@
# Storage

k0s supports any volume provider that implements the [CSI specification](https://github.com/container-storage-interface/spec). For convenience, k0s comes bundled with support for [OpenEBS local path provisioner](https://openebs.io/docs/concepts/localpv).

The choice of CSI provider depends heavily on your use case and on the infrastructure you're running on.

## Bundled OpenEBS storage

K0s comes with a bundled OpenEBS installation, which can be enabled using the [configuration file](./configuration.md)
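
For instance, a sketch of enabling the bundled OpenEBS support in the k0s configuration. The `extensions.storage` field names and the `openebs_local_storage` value are taken from the k0s configuration reference as I recall it; verify against your k0s version:

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
spec:
  extensions:
    storage:
      type: openebs_local_storage   # default is external_storage
```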
9 changes: 8 additions & 1 deletion docs/user-management.md
@@ -1,5 +1,12 @@
# User Management

Kubernetes, and thus k0s, does not have any built-in functionality to manage users. Kubernetes relies solely on external sources for user identification and authentication. A client certificate counts as an external source in this case, as the Kubernetes api-server "just" validates that the certificate is signed by a trusted CA. For this reason, it is recommended to use e.g. [OpenID Connect](./examples/oidc/oidc-cluster-configuration.md) to configure the API server to trust tokens issued by an external Identity Provider.

k0s comes with some helper commands to create kubeconfigs with client certificates for users. There are a few caveats to take into consideration when using client certificates:

* Client certificates have a long expiration time: they are valid for one year
* Client certificates cannot be revoked (general Kubernetes challenge)

## Adding a Cluster User

Run the [kubeconfig create](cli/k0s_kubeconfig_create.md) command on the controller to add a user to the cluster. The command outputs a kubeconfig for the user to use for authentication.
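
For example (the user name `testUser` is just a placeholder, matching the role binding shown further down):

```shell
# Run on a controller; writes a kubeconfig with a client certificate for the user.
k0s kubeconfig create testUser > k0s.config
```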
@@ -20,4 +27,4 @@ Create a `roleBinding` to grant the user access to the resources:

```shell
k0s kubectl create clusterrolebinding --kubeconfig k0s.config testUser-admin-binding --clusterrole=admin --user=testUser
```
```
