Building a Kubernetes Environment with Ansible (Ubuntu 18.04)


This article explains how to set up a Kubernetes environment with Ansible.

Ubuntu is installed on four machines running on KVM (installed with Kickstart).
The IP addresses are as listed in the inventory file (192.168.1.40 is the master, the rest are nodes).


$ cat inventory/k8s_servers.ini
[nodes]
192.168.1.41
192.168.1.42
192.168.1.43

[all_servers]
192.168.1.40
192.168.1.41
192.168.1.42
192.168.1.43

Set up passwordless SSH from the host where Ansible is installed to every server in all_servers. This time I used a script like the following.


#!/bin/bash

# Append the public key to authorized_keys on each server.
# Replace <user> with the account name on the target hosts.
for i in $(seq 40 43); do
    cat ~/.ssh/id_rsa.pub | ssh <user>@192.168.1.${i} 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
done
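
Once key-based login works, connectivity can be checked from the Ansible host with an ad-hoc ping (a quick check, assuming the SSH user matches the one Ansible will use):

$ ansible -i ./inventory/k8s_servers.ini all_servers -m ping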

Use visudo to configure sudo so it can be run without a password.


%sudo   ALL=(ALL:ALL) NOPASSWD: ALL
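
To confirm that passwordless sudo also works through Ansible, an ad-hoc command with --become is enough (another quick one-off check):

$ ansible -i ./inventory/k8s_servers.ini all_servers -m command -a "whoami" --become

Every host should report root.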

As a quick test that Ansible is working, update the packages on every server.
$ ansible-playbook -i ./inventory/k8s_servers.ini ./apt_update_upgrade.yml


$ cat ./apt_update_upgrade.yml
---
- hosts: all_servers
  tasks:
    # Run apt-get update followed by apt-get dist-upgrade
    - name: apt-get update and dist-upgrade
      apt:
        upgrade: dist
        update_cache: yes
      changed_when: false
      become: yes

Install the packages required for Kubernetes and do the related setup.
$ ansible-playbook -i ./inventory/k8s_servers.ini ./install_k8s.yml


$ cat ./install_k8s.yml
---
- hosts: all_servers
  tasks:
    - name: add k8s apt-key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present
      become: yes

    - name: add k8s apt repository
      apt_repository:
        repo: 'deb https://apt.kubernetes.io/ kubernetes-xenial main'
        state: present
        filename: kubernetes
        update_cache: yes
      become: yes

    - name: apt-get install tools
      apt:
        name: "{{ item }}"
        update_cache: yes
      with_items:
        - apt-transport-https
        - curl
        - vim
      become: yes

    - name: apt-get install k8s
      apt:
        name: "{{ item }}"
        update_cache: yes
      with_items:
        - kubelet
        - kubeadm
        - kubectl
        - docker.io
      become: yes

    - name: hold packages
      dpkg_selections:
        name: "{{ item }}"
        selection: hold
      with_items:
        - kubelet
        - kubeadm
        - kubectl
        - docker.io
      become: yes

    - name: change sysctl
      sysctl:
        name: net.bridge.bridge-nf-call-iptables
        value: 1
        state: present
      become: yes

    - name: systemd unmask docker.service
      systemd:
        name: docker.service
        masked: no
      become: yes

    - name: systemd unmask docker.socket
      systemd:
        name: docker.socket
        masked: no
      become: yes

    - name: systemd enable
      systemd:
        name: docker.service
        enabled: yes
      become: yes

    - name: systemd start
      systemd:
        name: docker.service
        state: started
      become: yes

    - name: add modules
      modprobe:
        name: "{{ item }}"
        state: present
      with_items:
        - ip_vs_rr
        - ip_vs_wrr
        - ip_vs_sh
        - ip_vs
      become: yes

    - name: Remove swapfile from /etc/fstab
      mount:
        name: swap
        fstype: swap
        state: absent
      become: yes

    - name: Disable swap
      command: swapoff -a
      when: ansible_swaptotal_mb > 0
      become: yes
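
One thing to be aware of: the net.bridge.bridge-nf-call-iptables key only exists once the br_netfilter kernel module is loaded, so the sysctl task above can fail on a freshly installed host. A minimal sketch of a task that loads it, to be placed before the sysctl task:

    - name: load br_netfilter for the bridge sysctls
      modprobe:
        name: br_netfilter
        state: present
      become: yes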

After running this playbook, you may need to reboot all of the nodes once.
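
If a reboot does turn out to be necessary, it can be done from the Ansible host as well. A minimal sketch, assuming Ansible 2.7 or later (which provides the reboot module):

$ ansible -i ./inventory/k8s_servers.ini all_servers -m reboot --become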

Run kubeadm init on the master node, and install flannel as well.


$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
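
For kubectl to work as a regular user on the master (including the kubectl apply above), kubeadm init prints instructions for copying the admin kubeconfig; they look roughly like this:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config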

On the remaining nodes, run the join command that appears in the output of kubeadm init. You could do it one node at a time, but with Ansible it looks like this.


$ ansible -i inventory/k8s_servers.ini nodes -m shell -a "sudo kubeadm join 192.168.1.40:6443 --token XXXXXXXXXXXX --discovery-token-ca-cert-hash sha256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
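
If the token or hash from the kubeadm init output has been lost, or the token has expired, the join command can be regenerated on the master:

$ sudo kubeadm token create --print-join-command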

Check the state of the cluster with kubectl.


$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
k40h   Ready    master   70m   v1.13.3
k41h   Ready    <none>   69m   v1.13.3
k42h   Ready    <none>   69m   v1.13.3
k43h   Ready    <none>   69m   v1.13.3
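
As an extra check that flannel and the other system pods came up, it is worth looking at the kube-system namespace; every pod there should eventually reach the Running state:

$ kubectl get pods -n kube-system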

Everything seems to be working.