OCI Instance

Resource Configuration
OS                  Ubuntu 20.04
Shape               VM.Standard.E2.2
OCPU                2
Memory              16 GB
Network bandwidth   1.4 Gbps

Kubernetes Cluster Setup

Build a single-node Kubernetes cluster on the OCI instance with kubeadm.

Swap Disable

sudo -i
swapoff -a
echo 0 > /proc/sys/vm/swappiness
sed -e '/swap/ s/^#*/#/' -i /etc/fstab
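
With swap disabled, every value in the Swap row of free should read zero; a quick check:

free -h | grep -i swap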

Install Docker

sudo apt update
sudo apt install -y docker.io
sudo systemctl start docker
sudo systemctl enable docker

cat << EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo systemctl daemon-reload
sudo systemctl restart docker
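
After the restart, confirm Docker picked up the systemd cgroup driver:

docker info | grep -i 'cgroup driver'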

Install K8S

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
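
To verify the installed version and that the packages are held back from upgrades:

kubeadm version -o short
apt-mark showhold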

kubeadm init

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=0.0.0.0

Generate the kubeconfig credentials

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

CNI

Calico install

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/tigera-operator.yaml
kubectl create -f custom-resources.yaml
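
custom-resources.yaml is the Installation manifest shipped with the same Calico release (https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/custom-resources.yaml). The default file looks roughly like the sketch below; note that its pod CIDR matches the --pod-network-cidr passed to kubeadm init above:

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    # The ipPools section cannot be modified after install.
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# Enables the Calico API server (the calico-apiserver pods seen below).
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}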

Check that the node and pods are Ready

kubectl get nodes
NAME     STATUS   ROLES           AGE    VERSION
master   Ready    control-plane   7m5s   v1.27.3
kubectl get pod -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-586d746ff6-hkvbb          1/1     Running   0          87s
calico-apiserver   calico-apiserver-586d746ff6-xww8q          1/1     Running   0          87s
calico-system      calico-kube-controllers-779fd96866-vf7pp   1/1     Running   0          119s
calico-system      calico-node-2gqc6                          1/1     Running   0          119s
calico-system      calico-typha-847f578c97-rfq6t              1/1     Running   0          119s
kube-system        coredns-5d78c9869d-r29tx                   1/1     Running   0          7m18s
kube-system        coredns-5d78c9869d-xpth5                   1/1     Running   0          7m18s
kube-system        etcd-master                                1/1     Running   3          7m32s
kube-system        kube-apiserver-master                      1/1     Running   3          7m33s
kube-system        kube-controller-manager-master             1/1     Running   2          7m32s
kube-system        kube-proxy-s52hj                           1/1     Running   0          7m18s
kube-system        kube-scheduler-master                      1/1     Running   7          7m32s
tigera-operator    tigera-operator-7977bb4f46-tdddw           1/1     Running   0          3m41s

Remove the control-plane taint → when the control plane and worker node share a single machine, the taint that prevents ordinary pods from being scheduled on the control-plane node has to be removed.

kubectl taint node master node-role.kubernetes.io/control-plane-
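
The command should report node/master untainted; it can be double-checked with:

kubectl describe node master | grep -i taint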

etcd/etcdctl

Install etcd

git clone -b v3.4.16 https://github.com/etcd-io/etcd.git
cd etcd
./build
export PATH="$PATH:`pwd`/bin"
export ETCDCTL_API=3
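
A quick check that the freshly built binaries are on PATH:

etcd --version
etcdctl version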

Test creating a snapshot locally. The certificate flags below use relative paths, so run the command from the directory that holds the etcd certificates (on a kubeadm cluster, typically /etc/kubernetes/pki/etcd).

sudo ETCDCTL_API=3 etcdctl snapshot save --endpoints=127.0.0.1:2379 --cacert=ca.crt --cert=server.crt --key=server.key etcd-backup --debug
INFO: 2023/07/17 17:58:15 parsed scheme: ""
INFO: 2023/07/17 17:58:15 scheme "" not registered, fallback to default scheme
INFO: 2023/07/17 17:58:15 ccResolverWrapper: sending update to cc: {[{127.0.0.1:2379 0  <nil>}] <nil>}
INFO: 2023/07/17 17:58:15 balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
INFO: 2023/07/17 17:58:15 clientv3/balancer: pin "127.0.0.1:2379"
INFO: 2023/07/17 17:58:15 balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
Snapshot saved at etcd-backup
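
The saved snapshot can be inspected with snapshot status, the same check the Job performs later:

sudo ETCDCTL_API=3 etcdctl snapshot status etcd-backup --write-out=table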

Job Snapshot

Snapshot scenario

  • Store the etcd certificate directory in a volume using a PersistentVolume.
  • Bind it with a PersistentVolumeClaim.
  • Mount the volume into the Job so the certificates are available inside the Job container.
  • In the Job, use the certificates to take the snapshot.

Create the PersistentVolume and PersistentVolumeClaim

kubectl apply -f pv-cert.yaml -f pvc-cert.yaml
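
The two manifests are not reproduced on this page. A minimal sketch of what pv-cert.yaml and pvc-cert.yaml could look like, assuming the certificates sit in /etc/kubernetes/pki/etcd on the node (names and sizes are illustrative):

# pv-cert.yaml: hostPath PV exposing the etcd certificate directory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-cert
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /etc/kubernetes/pki/etcd

# pvc-cert.yaml: claim statically bound to the PV above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-cert
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: pv-cert
  resources:
    requests:
      storage: 1Gi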

Create the ConfigMap and Job

kubectl apply -f etcd-backup-configmap-oci.yaml -f etcd-backup-job.yaml
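
These manifests are also not shown here. Judging from the environment dump in the log below, the ConfigMap supplies the ETCDCTL_* variables and the Job mounts the certificate volume at /cert. A rough sketch under those assumptions (the image, names, and script are illustrative, not the actual files):

# etcd-backup-configmap-oci.yaml: ETCDCTL_* settings matching the log below
apiVersion: v1
kind: ConfigMap
metadata:
  name: etcd-backup-config        # hypothetical name
data:
  ETCDCTL_API: "3"
  ETCDCTL_CACERT: /cert/ca.crt
  ETCDCTL_CERT: /cert/server.crt
  ETCDCTL_KEY: /cert/server.key
  ETCDCTL_ENDPOINTS: private_ip   # the node's private IP, elided on this page
  ETCDCTL_DEBUG: "true"

# etcd-backup-job.yaml: runs etcdctl snapshot save against the node's etcd
apiVersion: batch/v1
kind: Job
metadata:
  name: etcd-backup-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: etcd-backup
        image: bitnami/etcd:3.5    # assumption: any image with etcdctl and a shell
        securityContext:
          runAsUser: 0             # the hostPath-backed volume is root-owned
        envFrom:
        - configMapRef:
            name: etcd-backup-config
        command: ["/bin/sh", "-c"]
        args:
        - |
          echo "ENDPOINT=$ETCDCTL_ENDPOINTS"
          echo "ls result....................."
          ls -al /cert
          env | grep ^ETCDCTL | sort
          etcdctl snapshot save etcd-backup.db
          etcdctl snapshot status etcd-backup.db --write-out=table
        volumeMounts:
        - name: cert
          mountPath: /cert
      volumes:
      - name: cert
        persistentVolumeClaim:
          claimName: pvc-cert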

Check the Job pod logs

ubuntu@master:~$ kubectl logs etcd-backup-job-wbf8q
ENDPOINT=private_ip
ls result.....................
total 10616
drwxr-xr-x 2 root root    4096 Jul 17 18:16 .
drwxr-xr-x 1 root root    4096 Jul 17 19:13 ..
-rw-r--r-- 1 root root    1086 Jul 17 16:08 ca.crt
-rw------- 1 root root    1675 Jul 17 16:08 ca.key
-rw-r--r-- 1 root root 5410848 Jul 17 17:58 etcd-backup
-rw-r--r-- 1 root root 5410848 Jul 17 18:16 etcd-backup2
-rw-r--r-- 1 root root    1159 Jul 17 16:08 healthcheck-client.crt
-rw------- 1 root root    1679 Jul 17 16:08 healthcheck-client.key
-rw-r--r-- 1 root root    1192 Jul 17 16:08 peer.crt
-rw------- 1 root root    1675 Jul 17 16:08 peer.key
-rw-r--r-- 1 root root    1192 Jul 17 16:08 server.crt
-rw------- 1 root root    1679 Jul 17 16:08 server.key
ETCDCTL_CACERT=/cert/ca.crt
ETCDCTL_CERT=/cert/server.crt
ETCDCTL_COMMAND_TIMEOUT=5s
ETCDCTL_DEBUG=true
ETCDCTL_DIAL_TIMEOUT=2s
ETCDCTL_DISCOVERY_SRV=
ETCDCTL_DISCOVERY_SRV_NAME=
ETCDCTL_ENDPOINTS=private_ip
ETCDCTL_HEX=false
ETCDCTL_INSECURE_DISCOVERY=true
ETCDCTL_INSECURE_SKIP_TLS_VERIFY=false
ETCDCTL_INSECURE_TRANSPORT=true
ETCDCTL_KEEPALIVE_TIME=2s
ETCDCTL_KEEPALIVE_TIMEOUT=6s
ETCDCTL_KEY=/cert/server.key
ETCDCTL_PASSWORD=
ETCDCTL_USER=
ETCDCTL_WRITE_OUT=simple
WARNING: 2023/07/17 19:13:22 [core] Adjusting keepalive ping interval to minimum period of 10s
WARNING: 2023/07/17 19:13:22 [core] Adjusting keepalive ping interval to minimum period of 10s
INFO: 2023/07/17 19:13:22 [core] parsed scheme: "etcd-endpoints"
INFO: 2023/07/17 19:13:22 [core] ccResolverWrapper: sending update to cc: {[{private_ip:2379 private_ip <nil> 0 <nil>}] 0xc000043940 <nil>}
INFO: 2023/07/17 19:13:22 [core] ClientConn switching balancer to "round_robin"
INFO: 2023/07/17 19:13:22 [core] Channel switches to new LB policy "round_robin"
INFO: 2023/07/17 19:13:22 [balancer] base.baseBalancer: got new ClientConn state:  {{[{private_ip:2379 private_ip <nil> 0 <nil>}] 0xc000043940 <nil>} <nil>}
INFO: 2023/07/17 19:13:22 [core] Subchannel Connectivity change to CONNECTING
INFO: 2023/07/17 19:13:22 [core] Subchannel picks a new address "private_ip" to connect
{"level":"info","ts":"2023-07-17T19:13:22.934382Z","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"etcd-backup.db.part"}
INFO: 2023/07/17 19:13:22 [balancer] base.baseBalancer: handle SubConn state change: 0xc000319b70, CONNECTING
INFO: 2023/07/17 19:13:22 [core] Channel Connectivity change to CONNECTING
INFO: 2023/07/17 19:13:22 [core] Subchannel Connectivity change to READY
INFO: 2023/07/17 19:13:22 [balancer] base.baseBalancer: handle SubConn state change: 0xc000319b70, READY
INFO: 2023/07/17 19:13:22 [roundrobin] roundrobinPicker: Build called with info: {map[0xc000319b70:{{private_ip private_ip <nil> 0 <nil>}}]}
INFO: 2023/07/17 19:13:22 [core] Channel Connectivity change to READY
{"level":"info","ts":"2023-07-17T19:13:22.945281Z","logger":"client","caller":"[email protected]/maintenance.go:212","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2023-07-17T19:13:22.945322Z","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":".."}
{"level":"info","ts":"2023-07-17T19:13:23.016963Z","logger":"client","caller":"[email protected]/maintenance.go:220","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2023-07-17T19:13:23.040878Z","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"private_ip:2379","size":"5.4 MB","took":"now"}
{"level":"info","ts":"2023-07-17T19:13:23.041124Z","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"etcd-backup.db"}
INFO: 2023/07/17 19:13:23 [core] Channel Connectivity change to SHUTDOWN
INFO: 2023/07/17 19:13:23 [core] Subchannel Connectivity change to SHUTDOWN
Snapshot saved at etcd-backup.db
Deprecated: Use `etcdutl snapshot status` instead.


+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| 7983b5b4 |    20947 |       1783 |     5.4 MB |
+----------+----------+------------+------------+


Check the completed Job

kubectl get pods
NAME                    READY   STATUS      RESTARTS   AGE
etcd-backup-job-wbf8q   0/1     Completed   0          27m