Building a home server with a Raspberry Pi - Creating a highly available k8s cluster with kubeadm (master edition)


Introduction

I had been building a home server with a Raspberry Pi, and decided to set up k8s on it for study.
Building it the plain way would be no fun, though, so for future reference I am going with a multi-master k8s setup.

This series consists of four parts:
"Preparation", "haproxy", "master", and "worker".

This time is the "master" part.

Create the Master node on k8s-master1.

[ Target: k8s-master1 ]

Create the config used to set up the control plane node.

# cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "ras-proxy"
controlPlaneEndpoint: "k8s-proxy:6443"
networking:
  podSubnet: "10.244.0.0/16"

Migrate the config from the old format to the new one.

# kubeadm config migrate --old-config kubeadm-config.yaml --new-config kubeadm-config-new.yaml
W0812 12:44:46.041647    3612 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
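
For reference, the migrated kubeadm-config-new.yaml should look roughly like the following (a sketch only: kubeadm also fills in a number of defaults such as dns, etcd, and imageRepository that are omitted here, and may resolve kubernetesVersion to a concrete version):

# cat kubeadm-config-new.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "ras-proxy"
controlPlaneEndpoint: "k8s-proxy:6443"
networking:
  podSubnet: "10.244.0.0/16"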

Initialize the control plane node

# kubeadm init --config kubeadm-config-new.yaml

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
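
Since the error message mentions "required cgroups disabled", and Raspberry Pi images often ship with the memory cgroup turned off, it is worth a quick sanity check before going further (the boot parameter itself was covered in the preparation part):

# grep memory /proc/cgroups

The last column (enabled) should be 1; if it is 0, the memory cgroup has to be enabled on the kernel command line before kubeadm init can succeed.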

It did not work, so try restarting the kubelet.

# systemctl restart kubelet
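
If restarting the kubelet alone does not help, the half-initialized state left behind by the failed run may need to be cleaned up before retrying (note that kubeadm reset wipes /etc/kubernetes and the local etcd data on this node):

# kubeadm reset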

Run it again, and this time it succeeds.

# kubeadm init --config kubeadm-config-new.yaml

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-proxy:6443 --token j2cg1d.ehn590knj5kpmpkl \
    --discovery-token-ca-cert-hash sha256:97e55590f8e1d8e5b7b909089c55e2152b2532c3879ca0b5a0eac6fbc9f5ba25 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-proxy:6443 --token j2cg1d.ehn590knj5kpmpkl \
    --discovery-token-ca-cert-hash sha256:97e55590f8e1d8e5b7b909089c55e2152b2532c3879ca0b5a0eac6fbc9f5ba25 
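
Keep a copy of these join commands. The bootstrap token is only valid for 24 hours by default, so if it expires before a node joins, a fresh worker join command can be generated on an existing master with the following (for control-plane joins, the certificates copied in the next section are still needed in this manual setup):

# kubeadm token create --print-join-command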

Run the commands exactly as shown under "To start using your cluster, you need to run the following as a regular user:".

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install a Pod network add-on (flannel here).

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
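
To confirm that flannel actually came up, check the kube-system namespace for the flannel DaemonSet pod (pod names vary by flannel version, so this is just a rough check):

# kubectl get pods -n kube-system | grep flannel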

Check the status of the Master node we just created.

# kubectl get node
NAME          STATUS     ROLES    AGE    VERSION
k8s-master1   NotReady   master   6m9s   v1.18.6

Wait a little longer, check the status again, and confirm that the node is now Ready.

# kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    master   7m9s   v1.18.6

Transferring the certificates

[ Target: k8s-master1 ]

The certificate-related files created on k8s-master1 need to be transferred to the other master servers, but that prompts for a password on every copy, so set up key-based authentication first.

$ whoami
ubuntu

$ ssh-keygen -t rsa

$ scp .ssh/id_rsa.pub ubuntu@k8s-master2:

$ ssh ubuntu@k8s-master2 

$ ls
id_rsa.pub

$ mkdir .ssh

$ mv id_rsa.pub .ssh/authorized_keys

$ chmod 600 .ssh/authorized_keys

$ sudo vi /etc/ssh/sshd_config
PasswordAuthentication no

$ sudo systemctl restart sshd

$ exit

$ scp .ssh/id_rsa.pub ubuntu@k8s-master3:

$ ssh ubuntu@k8s-master3

$ ls
id_rsa.pub

$ mkdir .ssh

$ mv id_rsa.pub .ssh/authorized_keys

$ chmod 600 .ssh/authorized_keys

$ sudo vi /etc/ssh/sshd_config
PasswordAuthentication no

$ sudo systemctl restart sshd

$ exit
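
As a side note, the manual steps above can usually be replaced with ssh-copy-id, which creates ~/.ssh and authorized_keys with the right permissions in one go (run it while password authentication is still enabled):

$ ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@k8s-master2
$ ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@k8s-master3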

Transfer the certificate-related files created on k8s-master1 to the other master servers.

$ whoami
ubuntu

$ USER=ubuntu

$ NODE="k8s-master2"

$ for host in ${NODE}; do
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/ca.crt ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/ca.key ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/sa.key ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/sa.pub ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/front-proxy-ca.crt ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/front-proxy-ca.key ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/etcd/ca.crt ${USER}@$host:etcd-ca.crt
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/etcd/ca.key ${USER}@$host:etcd-ca.key
     sudo scp -i .ssh/id_rsa /etc/kubernetes/admin.conf ${USER}@$host:
   done

$ NODE="k8s-master3"

$ for host in ${NODE}; do
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/ca.crt ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/ca.key ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/sa.key ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/sa.pub ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/front-proxy-ca.crt ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/front-proxy-ca.key ${USER}@$host:
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/etcd/ca.crt ${USER}@$host:etcd-ca.crt
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/etcd/ca.key ${USER}@$host:etcd-ca.key
     sudo scp -i .ssh/id_rsa /etc/kubernetes/admin.conf ${USER}@$host:
   done
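
Since the two loops are identical, they can also be collapsed into a single run by listing both hosts in NODE (the same commands as above, just rearranged):

$ NODE="k8s-master2 k8s-master3"

$ for host in ${NODE}; do
     for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
       sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/$f ${USER}@$host:
     done
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/etcd/ca.crt ${USER}@$host:etcd-ca.crt
     sudo scp -i .ssh/id_rsa /etc/kubernetes/pki/etcd/ca.key ${USER}@$host:etcd-ca.key
     sudo scp -i .ssh/id_rsa /etc/kubernetes/admin.conf ${USER}@$host:
   done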

Join the other Master nodes.

[ Target: k8s-master2 / k8s-master3 ]

Place the transferred certificates in the locations kubeadm expects.

# mkdir -p /etc/kubernetes/pki/etcd
# mv /home/ubuntu/ca.crt /etc/kubernetes/pki/
# mv /home/ubuntu/ca.key /etc/kubernetes/pki/
# mv /home/ubuntu/sa.pub /etc/kubernetes/pki/
# mv /home/ubuntu/sa.key /etc/kubernetes/pki/
# mv /home/ubuntu/front-proxy-ca.crt /etc/kubernetes/pki/
# mv /home/ubuntu/front-proxy-ca.key /etc/kubernetes/pki/
# mv /home/ubuntu/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# mv /home/ubuntu/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
# mv /home/ubuntu/admin.conf /etc/kubernetes/admin.conf
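
A quick sanity check that everything ended up where kubeadm expects it before joining:

# ls /etc/kubernetes/admin.conf /etc/kubernetes/pki /etc/kubernetes/pki/etcd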

Run the join command that was printed on k8s-master1.
* Use the one that ends with "--control-plane".

# kubeadm join k8s-proxy:6443 --token j2cg1d.ehn590knj5kpmpkl --discovery-token-ca-cert-hash sha256:97e55590f8e1d8e5b7b909089c55e2152b2532c3879ca0b5a0eac6fbc9f5ba25 --control-plane 

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Run the commands exactly as shown under "To start administering your cluster from this node, you need to run the following as a regular user:".

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the Master node status and confirm that the new nodes have joined.

[ Target: k8s-master1 or k8s-master2 or k8s-master3 ]

# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   59m     v1.18.6
k8s-master2   Ready    master   20m     v1.18.6
k8s-master3   Ready    master   7m11s   v1.18.6
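
Since this is a stacked etcd topology, it is also worth confirming that an etcd pod is running on each master (kubeadm's static pods are named etcd-<node name>):

# kubectl get pods -n kube-system -o wide | grep etcd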

On the haproxy side, confirm that each Master is UP.

[ Target: k8s-proxy ]

# cat /var/log/haproxy.log | grep UP
Aug 12 14:01:55 ras-proxy haproxy[2647]: [WARNING] 224/140155 (2647) : Server k8s_backend/k8s-master1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Aug 12 14:40:48 ras-proxy haproxy[2866]: [WARNING] 224/144048 (2866) : Server k8s_backend/k8s-master2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Aug 12 14:55:02 ras-proxy haproxy[2866]: [WARNING] 224/145502 (2866) : Server k8s_backend/k8s-master3 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
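
As a final check that the control plane answers through the load balancer, you can hit the API server's /version endpoint via k8s-proxy (this assumes the default RBAC binding that exposes /version to unauthenticated clients; even a 403 here would still prove connectivity):

# curl -k https://k8s-proxy:6443/version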

Next time

The next article covers building the k8s Worker nodes.