Practice Test - Backup and Restore Methods 2

Solutions to practice test - Backup and Restore Methods 2

In this test, we practice with both stacked and external etcd clusters.

  1. Information only

  2. Explore the student-node and the clusters it has access to.
    kubectl config get-contexts
  3. How many clusters are defined in the kubeconfig on the student-node?
    kubectl config get-contexts

    2

  4. How many nodes (both controlplane and worker) are part of cluster1?
    kubectl config use-context cluster1
    kubectl get nodes

    2

  5. What is the name of the controlplane node in cluster2?
    kubectl config use-context cluster2
    kubectl get nodes

    cluster2-controlplane

  6. Information only

  7. How is ETCD configured for cluster1?
    kubectl config use-context cluster1
    kubectl get pods -n kube-system

    From the output, we can see a pod for etcd; therefore the answer is Stacked ETCD.
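    A quicker way to filter for just the etcd pod (a convenience, not required by the question):

    kubectl get pods -n kube-system | grep etcd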

  8. How is ETCD configured for cluster2?
    kubectl config use-context cluster2
    kubectl get pods -n kube-system

    From the output, we can see no pod for etcd. Since running without etcd is not an option for a functioning cluster, the answer must be External ETCD.
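    To double-check, you can look at the static pod manifests directory on the control plane node (assuming the default kubeadm path /etc/kubernetes/manifests). The absence of an etcd manifest there confirms external etcd:

    ssh cluster2-controlplane
    ls /etc/kubernetes/manifests/
    exit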

  9. What is the IP address of the External ETCD datastore used in cluster2?

    For this, we need to examine the API server configuration.

    kubectl config use-context cluster2
    kubectl get pods -n kube-system kube-apiserver-cluster2-controlplane -o yaml | grep etcd

    From the output, locate --etcd-servers. The IP address in this line is the answer.
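    The relevant line will look similar to the following, where the placeholder stands for the IP address in your own output:

    - --etcd-servers=https://<etcd-server-ip>:2379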

  10. What is the default data directory used for the ETCD datastore in cluster1?

    For this, we need to examine the etcd manifest on the control plane node, and we need to find out the hostpath of its etcd-data volume.

    kubectl config use-context cluster1
    kubectl get pods -n kube-system etcd-cluster1-controlplane -o yaml

    In the output, find the volumes section. The host path of the volume named etcd-data is the answer.

    /var/lib/etcd
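    If you prefer not to scan the full YAML, a jsonpath query can pull the host path directly (the volume name etcd-data is taken from the manifest above):

    kubectl -n kube-system get pod etcd-cluster1-controlplane \
      -o jsonpath='{.spec.volumes[?(@.name=="etcd-data")].hostPath.path}'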

  11. Information only

  12. What is the default data directory used for the ETCD datastore in cluster2?

    For this, we need to examine the systemd unit file for the etcd service. Remember that external etcd runs as an operating system service, not a pod.

    ssh etcd-server
    # Verify the name of the service
    systemctl list-unit-files | grep etcd
    
    # Using the output from above command
    systemctl cat etcd.service

    Note the comment line in the output. This tells you where the service unit file is. We are going to need to edit this file in a later question!

    From the output, locate --data-dir

    /var/lib/etcd-data
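    As a cross-check, the running process shows the same flag; look for --data-dir in its command line:

    ps -ef | grep etcd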

    Return to the student node:

    exit
  13. How many other nodes are part of the ETCD cluster that etcd-server is a part of?

    This question is somewhat contentious. It ought not to contain the word other: the member list shows a single member (etcd-server itself), so the expected answer counts that node rather than any additional ones. The required answer is

    1
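    To see the membership for yourself, list the members from the etcd-server node. The certificate paths below are assumptions; substitute the ones shown in the service unit file from Q12:

    ssh etcd-server
    ETCDCTL_API=3 etcdctl member list \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/etcd/pki/ca.pem \
      --cert=/etc/etcd/pki/etcd.pem \
      --key=/etc/etcd/pki/etcd-key.pem
    exit

    A single member in the output means etcd-server is the only node in its cluster.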

  14. Take a backup of etcd on cluster1 and save it on the student-node at the path /opt/cluster1.db

    For this, we need to take the backup on the control plane node, then pull it back to the student node.

    ssh cluster1-controlplane
    ETCDCTL_API=3 etcdctl snapshot save \
      --cacert /etc/kubernetes/pki/etcd/ca.crt \
      --cert /etc/kubernetes/pki/etcd/server.crt \
      --key /etc/kubernetes/pki/etcd/server.key \
      cluster1.db
    
    # Return to student node
    exit
    scp cluster1-controlplane:~/cluster1.db /opt/
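    Optionally, verify the snapshot file from the student node (assuming etcdctl is installed there); snapshot status reports the hash, revision count, and size of a snapshot:

    ETCDCTL_API=3 etcdctl snapshot status /opt/cluster1.db --write-out=table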
  15. An ETCD backup for cluster2 is stored at /opt/cluster2.db. Use this snapshot file to carry out a restore on cluster2 to a new path /var/lib/etcd-data-new.

    As you recall, cluster2 is using external etcd. This means

    • etcd does not have to be on the control plane node of the cluster. In this case, it is not.
    • etcd runs as an operating system service, not a pod; therefore there is no manifest file to edit. Changes are instead made to the service unit file.

    There are several parts to this question. Let's go through them one at a time.

    1. Move the backup to the etcd-server node
      scp /opt/cluster2.db etcd-server:~/
    2. Log into etcd-server node
      ssh etcd-server
    3. Check the ownership of the current etcd-data directory

      We will need to ensure correct ownership of our restored data. We determined the location of the data directory in Q12.

      ls -ld /var/lib/etcd-data/

      Note that the owner and group are both etcd.

    4. Do the restore
      ETCDCTL_API=3 etcdctl snapshot restore \
          --data-dir /var/lib/etcd-data-new \
          cluster2.db
    5. Set ownership on the restored directory
      chown -R etcd:etcd /var/lib/etcd-data-new
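      # Verify the ownership change took effect
      ls -ld /var/lib/etcd-data-new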
    6. Reconfigure and restart etcd

      We will need the location of the service unit file, which we also determined in Q12.

      vi /etc/systemd/system/etcd.service

      Edit the --data-dir argument to point to the newly restored directory, and save.

      Finally, reload and restart the etcd service. Whenever you have edited a service unit file, a daemon-reload is required to reload the in-memory configuration of the systemd service.

      systemctl daemon-reload
      systemctl restart etcd.service
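      # Confirm that the service came back up cleanly
      systemctl status etcd.service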

      Return to the student node:

      exit
    7. Verify the restore
      kubectl config use-context cluster2
      kubectl get all -n critical
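      If the workloads in the critical namespace are listed and running, the restore was successful.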