Merge pull request #1001 from mviitane/uninstall
Documentation updates for Backup/Restore and Uninstall/Reset
trawler authored Jul 1, 2021
2 parents 9cea63e + 396f1ad commit c3f841e
Showing 6 changed files with 101 additions and 48 deletions.
39 changes: 33 additions & 6 deletions docs/backup.md
@@ -1,12 +1,12 @@
# Backup/Restore overview

k0s has integrated support for backing up cluster state and configuration.
The k0s backup utility is aiming to back up and restore k0s managed parts of the cluster.
k0s has integrated support for backing up cluster state and configuration. The k0s backup utility aims to back up and restore the k0s-managed parts of the cluster.

The backups created by the `k0s backup` command contain the following parts of your cluster:

- certificates (the content of the `<data-dir>/pki` directory)
- etcd snapshot, if the etcd storage is used
- etcd snapshot, if the etcd datastore is used
- Kine/SQLite snapshot, if the Kine/SQLite datastore is used
- k0s.yaml
- any custom-defined manifests under `<data-dir>/manifests`
- any image bundles located under `<data-dir>/images`
@@ -15,12 +15,14 @@ The backups created by `k0s backup` command have following pieces of your cluste
Parts **NOT** covered by the backup utility:

- PersistentVolumes of any running application
- database content, in case if the `kine` is used as a storage driver
- the datastore, if something other than etcd or Kine/SQLite is used
- any cluster configuration introduced by manual changes (e.g. changes that weren't saved under `<data-dir>/manifests`)

All backup/restore operations MUST be performed on a controller node.

## Backup
## Backup/restore a k0s node locally

### Backup (local)

To create a backup, run the following command on the controller node:
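A minimal sketch of such an invocation, assuming the `--save-path` flag from the k0s CLI (`/tmp/` is just an example target directory):

```shell
# Write the backup archive into /tmp/; any writable directory works.
sudo k0s backup --save-path=/tmp/
```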

@@ -33,7 +35,7 @@ The command provides backup archive using following naming convention: `k0s_back

Because the archive name includes the DateTime, it is guaranteed that no previously created archive will be overwritten.

## Restore
### Restore (local)

To restore the cluster state from the archive, use the following command on the controller node:

@@ -42,6 +44,7 @@ k0s restore /tmp/k0s_backup_2021-04-26T19_51_57_000Z.tar.gz
```

The command fails if the data directory of the current controller contains data that overlaps with the content of the backup archive.

The command uses the archived `k0s.yaml` as the cluster configuration description.

If your cluster is HA, after restoring a single controller node, join the rest of the controller nodes to the cluster.
@@ -50,3 +53,27 @@ E.g. steps for N nodes cluster would be:
- Restore the backup on a fresh machine
- Run the controller there
- Join the N-1 new machines to the cluster the same way as for the first setup (one possible join sequence is sketched below)
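One possible join sequence, assuming token-based joining as in a regular k0s multi-node setup (the token file path is illustrative):

```shell
# On the restored controller: create a join token for an additional controller.
sudo k0s token create --role=controller > controller-token

# On each of the N-1 fresh machines: install and start a controller with that token.
sudo k0s install controller --token-file /path/to/controller-token
sudo k0s start
```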

## Backup/restore a k0s cluster using k0sctl

With k0sctl you can perform cluster-level backup and restore remotely with a single command.

### Backup (remote)

To create a backup, run the following command:

```shell
k0sctl backup
```

k0sctl connects to the cluster nodes to create a backup. The backup file is stored in the current working directory.
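k0sctl reads the cluster layout from a `k0sctl.yaml` file; a minimal sketch, assuming SSH access to the hosts (addresses and user are placeholders):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    # Placeholder hosts; replace the addresses with your own machines.
    - role: controller
      ssh:
        address: 10.0.0.1
        user: root
    - role: worker
      ssh:
        address: 10.0.0.2
        user: root
```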

### Restore (remote)

To restore the cluster state from the archive, use the following command:

```shell
k0sctl apply --restore-from /path/to/backup_file.tar.gz
```

The control plane load balancer address (`externalAddress`) needs to remain the same between backup and restore. This is because all worker node components connect to this address and cannot currently be reconfigured.
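For reference, a sketch of where `externalAddress` sits in the k0s configuration (the address is a placeholder):

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
spec:
  api:
    # Must be identical in the backed-up and the restored cluster.
    externalAddress: 192.168.0.100
```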
6 changes: 3 additions & 3 deletions docs/configuration-validation.md
@@ -9,6 +9,6 @@ k0s validate config --config path/to/config/file
The `validate config` sub-command can validate the following (a usage sketch follows the list):

1. YAML formatting
2. [SANs addresses](#specapi-1)
3. [Network providers](#specnetwork-1)
4. [Worker profiles](#specworkerprofiles)
2. [SAN addresses](/configuration/#specapi)
3. [Network providers](/configuration/#specnetwork)
4. [Worker profiles](/configuration/#specworkerprofiles)
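A usage sketch, assuming the configuration lives at `/etc/k0s/k0s.yaml` (any path works):

```shell
# Validate the configuration file before applying it to a cluster.
k0s validate config --config /etc/k0s/k0s.yaml
```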
6 changes: 1 addition & 5 deletions docs/high-availability.md
@@ -8,11 +8,7 @@ You can create high availability for the control plane by distributing the contr

You should plan to allocate the control plane nodes across different zones. This protects the control plane against the failure of a single zone.

For etcd high availability it's recommended to configure 3 or 5 controller nodes.

- With 3 nodes it's possible to lose one node and still have the quorum with 2 working nodes.
- With 2 controller nodes, the quorum is lost if one node fails and there's only one working node left.
- With 5 nodes it's possible to lose two nodes and still have the quorum with 3 working nodes.
For etcd high availability it's recommended to configure 3 or 5 controller nodes. For more information, refer to the [etcd documentation](https://etcd.io/docs/latest/faq/#why-an-odd-number-of-cluster-members).
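As a rule of thumb, quorum requires a majority of members (⌊n/2⌋ + 1): with 3 controllers you can lose one node and still have quorum with 2 working nodes, and with 5 controllers you can lose two nodes and still have quorum with 3.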

## Load Balancer

33 changes: 0 additions & 33 deletions docs/k0s-reset.md

This file was deleted.

63 changes: 63 additions & 0 deletions docs/reset.md
@@ -0,0 +1,63 @@
# Uninstall/Reset

k0s can be uninstalled locally with the `k0s reset` command and remotely with the `k0sctl reset` command. Both remove all k0s-related files from the host.

`reset` operates under the assumption that k0s is installed as a service on the host.
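A sketch of the setup `reset` expects, assuming k0s was registered as a host service via the k0s CLI:

```shell
# Typical prior setup: k0s installed and started as a service on the host.
sudo k0s install controller   # or: sudo k0s install worker
sudo k0s start
```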

## Uninstall a k0s node locally

To prevent accidental triggering, `k0s reset` will not run if the k0s service is running, so you must first stop the service:

1. Stop the service:

```shell
sudo k0s stop
```

2. Invoke the `reset` command:

```shell
$ sudo k0s reset
INFO[2021-06-29 13:08:39] * containers steps
INFO[2021-06-29 13:08:44] successfully removed k0s containers!
INFO[2021-06-29 13:08:44] no config file given, using defaults
INFO[2021-06-29 13:08:44] * remove k0s users step:
INFO[2021-06-29 13:08:44] no config file given, using defaults
INFO[2021-06-29 13:08:44] * uninstal service step
INFO[2021-06-29 13:08:44] Uninstalling the k0s service
INFO[2021-06-29 13:08:45] * remove directories step
INFO[2021-06-29 13:08:45] * CNI leftovers cleanup step
INFO k0s cleanup operations done. To ensure a full reset, a node reboot is recommended.
```

## Uninstall a k0s cluster using k0sctl

k0sctl can be used to connect to each node and remove all k0s-related files and processes from the hosts.

1. Invoke the `k0sctl reset` command:

```shell
$ k0sctl reset --config k0sctl.yaml
k0sctl v0.9.0 Copyright 2021, k0sctl authors.
? Going to reset all of the hosts, which will destroy all configuration and data, Are you sure? Yes
INFO ==> Running phase: Connect to hosts
INFO [ssh] 13.53.43.63:22: connected
INFO [ssh] 13.53.218.149:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] 13.53.43.63:22: is running Ubuntu 20.04.2 LTS
INFO [ssh] 13.53.218.149:22: is running Ubuntu 20.04.2 LTS
INFO ==> Running phase: Prepare hosts
INFO ==> Running phase: Gather k0s facts
INFO [ssh] 13.53.43.63:22: found existing configuration
INFO [ssh] 13.53.43.63:22: is running k0s controller version 1.21.2+k0s.0
INFO [ssh] 13.53.218.149:22: is running k0s worker version 1.21.2+k0s.0
INFO [ssh] 13.53.43.63:22: checking if worker has joined
INFO ==> Running phase: Reset hosts
INFO [ssh] 13.53.43.63:22: stopping k0s
INFO [ssh] 13.53.218.149:22: stopping k0s
INFO [ssh] 13.53.218.149:22: running k0s reset
INFO [ssh] 13.53.43.63:22: running k0s reset
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 8s
```
2 changes: 1 addition & 1 deletion mkdocs.yml
@@ -22,6 +22,7 @@ nav:
- Airgap Install: airgap-install.md
- Upgrade: upgrade.md
- Backup/Restore: backup.md
- Uninstall/Reset: reset.md
- System Requirements: system-requirements.md
- Usage:
- Configuration Options: configuration.md
@@ -35,7 +36,6 @@
- Control Plane High Availability: high-availability.md
- Shell Completion: shell-completion.md
- User Management: user-management.md
- Uninstall the k0s Cluster: k0s-reset.md
- Extensions:
- Manifest Deployer: manifests.md
- Helm Charts: helm-charts.md
