Deploying a k8s cluster with kubeadm, 01 - Initialization

2018/2/23
Node configuration
  • Master x 3
  • OS
  • version: centos 7
  • swap: off (commands for disabling swap are sketched after the hosts list below)
  • hosts
    ### Configure on every node:
    [root@tvm-00 ~]# cat /etc/hosts
    ### k8s master @envDev
    10.10.9.67 tvm-00
    10.10.9.68 tvm-01
    10.10.9.69 tvm-02
    
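    The node spec above requires swap to be off, but the post does not list the commands; a minimal sketch for CentOS 7 (assuming swap is configured through /etc/fstab):
    ```bash
    ### Turn swap off now and keep it off after reboot (kubelet refuses to start with swap enabled by default).
    swapoff -a
    ### Comment out any swap entries in /etc/fstab so swap stays off across reboots.
    sed -i '/\sswap\s/s/^/#/' /etc/fstab
    ```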
    Docker
  • version:latest(17.09.1-ce)
  • Install
    ### Install:
    [root@tvm-00 ~]# yum -y install yum-utils
    [root@tvm-00 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    [root@tvm-00 ~]# yum makecache fast
    ### Running yum -y install docker-ce would install the latest version; to pin a specific version, install it by its full package name, for example:
    [root@tvm-00 ~]# yum -y install docker-ce-17.09.1.ce-1.el7.centos.x86_64
    
    ### Configure the docker daemon (the daemon.json body below is a reconstruction: the data dir under /data/docker plus the accelerator address described in the next section)
    [root@tvm-00 ~]# mkdir -p /data/docker
    [root@tvm-00 ~]# mkdir -p /etc/docker; tee /etc/docker/daemon.json <<'EOF'
    {
        "graph": "/data/docker",
        "registry-mirrors": ["https://<your-accelerator-id>.mirror.aliyuncs.com"]
    }
    EOF
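    The post does not show starting the service; a minimal follow-up sketch, assuming docker is managed by systemd:
    ```bash
    ### Reload and restart docker so daemon.json takes effect, and start it on boot.
    systemctl daemon-reload
    systemctl enable docker
    systemctl restart docker
    ### Confirm the data dir and the registry mirror are active.
    docker info | grep -i 'root dir'
    docker info | grep -i -A1 'registry mirrors'
    ```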
    Images
    registry mirror
  • Once you enable the Container Registry service on Alibaba Cloud, you will find a dedicated accelerator address there; use it when configuring docker in the previous step.
    kubeadm needs the images listed below.
  • Pull them to local storage in advance; if the network is slow, you can distribute the images to each node with docker save & docker load (a pull-and-retag sketch follows the list below).
    
    ### Required images:
    gcr.io/google_containers/kube-apiserver-amd64:v1.9.0
    gcr.io/google_containers/kube-controller-manager-amd64:v1.9.0
    gcr.io/google_containers/kube-scheduler-amd64:v1.9.0
    gcr.io/google_containers/kube-proxy-amd64:v1.9.0
    gcr.io/google_containers/etcd-amd64:3.1.10
    gcr.io/google_containers/pause-amd64:3.0
    gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
    gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
    gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
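    If pulling straight from gcr.io is blocked, one common workaround is pull-and-retag through a reachable mirror; a hedged sketch, where MIRROR_PREFIX is a hypothetical repository (not something given in the original post):
    ```bash
    ### Placeholder mirror prefix; replace with a repository you can actually reach.
    MIRROR_PREFIX="registry.example.com/google_containers"
    for img in \
        kube-apiserver-amd64:v1.9.0 \
        kube-controller-manager-amd64:v1.9.0 \
        kube-scheduler-amd64:v1.9.0 \
        kube-proxy-amd64:v1.9.0 \
        etcd-amd64:3.1.10 \
        pause-amd64:3.0 \
        k8s-dns-sidecar-amd64:1.14.7 \
        k8s-dns-kube-dns-amd64:1.14.7 \
        k8s-dns-dnsmasq-nanny-amd64:1.14.7
    do
        ### Pull from the mirror, then retag to the gcr.io name kubeadm expects.
        docker pull "${MIRROR_PREFIX}/${img}"
        docker tag  "${MIRROR_PREFIX}/${img}" "gcr.io/google_containers/${img}"
    done
    ```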
    Build an image tarball for the master node.
    [root@tvm-00 ~]# cd ~/k8s_install/master/gcr.io
    [root@tvm-00 gcr.io]# docker save -o gcr.io-all.tar \
        gcr.io/google_containers/kube-apiserver-amd64:v1.9.0 \
        gcr.io/google_containers/kube-controller-manager-amd64:v1.9.0 \
        gcr.io/google_containers/kube-scheduler-amd64:v1.9.0 \
        gcr.io/google_containers/kube-proxy-amd64:v1.9.0 \
        gcr.io/google_containers/etcd-amd64:3.1.10 \
        gcr.io/google_containers/pause-amd64:3.0 \
        gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7 \
        gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7 \
        gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
    Build an image tarball for the worker nodes.
    [root@tvm-00 gcr.io]# docker save -o gcr.io-worker.tar \
        gcr.io/google_containers/kube-proxy-amd64:v1.9.0 \
        gcr.io/google_containers/pause-amd64:3.0
    [root@tvm-00 gcr.io]# ls
    gcr.io-all.tar  gcr.io-worker.tar
    After syncing the tarballs to the target nodes, import the images.
    [root@tvm-00 ~]# docker load -i gcr.io-all.tar
    [root@tvm-00 ~]# docker load -i gcr.io-worker.tar
    
    ##### private registry
      - Alternatively, the images can be distributed through a private registry.
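    A hedged sketch of that route, with registry.example.com:5000 standing in for a hypothetical private registry (not an address from the post):
    ```bash
    ### On a machine that already has the image: retag and push it to the private registry.
    docker tag gcr.io/google_containers/pause-amd64:3.0 registry.example.com:5000/google_containers/pause-amd64:3.0
    docker push registry.example.com:5000/google_containers/pause-amd64:3.0
    ### On every node: pull from the private registry and tag it back to the name kubeadm expects.
    docker pull registry.example.com:5000/google_containers/pause-amd64:3.0
    docker tag  registry.example.com:5000/google_containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
    ```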
    
    ### Install the k8s related packages on every node
      - version: 1.9.0
      - Install the 3 packages: kubelet, kubeadm, kubectl
        - Reference: #2 (see the references at the end)
    ##### System settings
    ```bash
    ### Check SELinux
    [root@tvm-00 ~]# getenforce
    Disabled
    ### If it is not Disabled, run:
    [root@tvm-00 ~]# setenforce 0
    
    ### Kernel parameters
    [root@tvm-00 ~]# cat <<_EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    _EOF
    [root@tvm-00 ~]# sysctl --system
    ```
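    If sysctl reports that the bridge-nf-call keys do not exist, the br_netfilter module is probably not loaded; a minimal sketch (an assumption about this host, not a step from the post):
    ```bash
    ### Load the bridge netfilter module so the bridge-nf-call sysctls above are available, and keep it across reboots.
    modprobe br_netfilter
    echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
    sysctl --system
    ```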
    Download the rpm packages and install them locally.
  • Because of the GFW, you know how it goes; it's best to run your own local yum repository and cache these packages there.
    
    ### Prepare:
    [root@tvm-00 ~]# cd ~/k8s_install/k8s_rpms_1.9
    [root@tvm-00 k8s_rpms_1.9]# ls
    k8s/kubeadm-1.9.0-0.x86_64.rpm  k8s/kubectl-1.9.0-0.x86_64.rpm  k8s/kubelet-1.9.0-0.x86_64.rpm  k8s/kubernetes-cni-0.6.0-0.x86_64.rpm  k8s/socat-1.7.3.2-2.el7.x86_64.rpm
  • Install:
    [root@tvm-00 k8s_rpms_1.9]# yum localinstall *.rpm -y
    [root@tvm-00 k8s_rpms_1.9]# systemctl enable kubelet
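    A quick check that the expected 1.9.0 versions were installed (a sketch, not part of the original post):
    ```bash
    ### Confirm the installed package versions and the client binaries.
    rpm -qa | grep -E 'kubeadm|kubelet|kubectl|kubernetes-cni'
    kubeadm version
    kubectl version --client
    ```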
    
    ##### cgroupfs vs systemd
      - Reference: #3 (see the references at the end)
    ```bash
    ### Change --cgroup-driver to match the driver docker actually uses (cgroupfs):
    [root@tvm-00 ~]# sed -i 's#--cgroup-driver=systemd#--cgroup-driver=cgroupfs#' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [root@tvm-00 ~]# systemctl daemon-reload
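    ### (Sketch, not from the original post) Confirm which cgroup driver docker reports, so the kubelet flag above matches it;
    ### expected on this setup: "Cgroup Driver: cgroupfs". If kubelet is already running, restart it after the edit.
    [root@tvm-00 ~]# docker info 2>/dev/null | grep -i 'cgroup driver'
    [root@tvm-00 ~]# systemctl restart kubelet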
    
    ###### Note: on centos7, keeping --cgroup-driver=systemd prevents kube-dns from starting, for example:
    ### (logs from the kubedns container)
    [root@tvm-00 ~]# kubectl logs -n kube-system --tail=20 kube-dns-6f4fd4bdf-ntcgn -c kubedns
    container_linux.go:265: starting container process caused "process_linux.go:284: applying cgroup configuration for process caused \"No such device or address\""
    ### (logs from the sidecar container)
    [root@tvm-00 ~]# kubectl logs -n kube-system --tail=1 kube-dns-6f4fd4bdf-ntcgn -c sidecar
    W1226 06:21:40.170896       1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:44903->127.0.0.1:53: read: connection refused
    ### (logs from the dnsmasq container)
    [root@tvm-00 ~]# kubectl logs -n kube-system --tail=20 kube-dns-6f4fd4bdf-ntcgn -c dnsmasq
    I1226 06:21:40.214148       1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
    I1226 06:21:40.214233       1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
    I1226 06:21:40.222440       1 nanny.go:119]
    W1226 06:21:40.222453       1 nanny.go:120] Got EOF from stdout
    I1226 06:21:40.222537       1 nanny.go:116] dnsmasq[9]: started, version 2.78 cachesize 1000
    ### (remaining output omitted)
    ```
    Initialize the k8s cluster
  • Before initializing
  • If an error occurs, use the reset operation
  • Run the initialization
  • Check the k8s cluster information
  • Add-on component: network plugin - calico
  • Pass --pod-network-cidr to kubeadm init up front
  • Configure calico's CALICO_IPV4POOL_CIDR network segment to match
    Before initializing
    
    ### Note 1: specify the version explicitly, otherwise kubeadm tries to look up the latest release over the network
    --kubernetes-version=v1.9.0
    ### Note 2: this pod CIDR must match the address pool configured later for calico (it is used when setting CALICO_IPV4POOL_CIDR)
    --pod-network-cidr=172.30.0.0/20
  • The following IP address pool is enough for a small cluster.
    Segment: 172.30.0.0/20
    Host range: 172.30.0.1 - 172.30.15.254 = 4094 addresses
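    A quick way to double-check that host count (a sketch, not in the original post):
    ```bash
    ### Usable host addresses in a /20 network: 2^(32-20) - 2 = 4094.
    echo $(( 2**(32-20) - 2 ))
    ```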
    
    ##### If an error occurs, use the reset operation
      - Reference: #4 (see the references at the end)
    ```bash
    [root@tvm-00 ~]# kubeadm reset
    [preflight] Running pre-flight checks.
    [reset] Stopping the kubelet service.
    [reset] Unmounting mounted directories in "/var/lib/kubelet"
    [reset] Removing kubernetes-managed containers.
    [reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
    [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    ```
    Run the initialization
    [root@tvm-00 ~]# kubeadm init --pod-network-cidr=172.30.0.0/20 --kubernetes-version=v1.9.0
    [init] Using Kubernetes version: v1.9.0
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks.
            [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.09.1-ce. Max validated version: 17.03
            [WARNING FileExisting-crictl]: crictl not found in system path
    [preflight] Starting the kubelet service
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [tvm-00 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.9.67]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
    ### (remaining output omitted)
    Your Kubernetes master has initialized successfully!
    Check the k8s cluster information
    ### After initialization, configure kubectl before you can use it:
    [root@tvm-00 ~]# mkdir -p $HOME/.kube
    [root@tvm-00 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    ### Check the nodes:
    [root@tvm-00 ~]# kubectl get nodes
    NAME     STATUS     ROLES     AGE       VERSION
    tvm-00   NotReady   master    19h       v1.9.0
    ### Check the kubelet logs:
    [root@tvm-00 ~]# journalctl -xeu kubelet
    ### Check the cluster info:
    [root@tvm-00 ~]# kubectl cluster-info
    Kubernetes master is running at https://10.10.9.67:6443
    KubeDNS is running at https://10.10.9.67:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    Add-on component: network plugin - calico
  • Prepare the images required by calico, listed below.
  • Pull them locally in advance; the worker nodes also need the node and cni images.
    
    ### Download the calico.yaml manifest
    [root@tvm-00 ~]# mkdir -p ~/k8s_install/master/network
    [root@tvm-00 ~]# cd !$
    [root@tvm-00 network]# curl -so calico-v2.6.yaml  https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
  • The images referenced by calico.yaml:
    [root@tvm-00 network]# sed -n -e 's#^.*image: ##p' calico-v2.6.yaml
    quay.io/coreos/etcd:v3.1.10
    quay.io/calico/node:v2.6.5
    quay.io/calico/cni:v1.11.2
    quay.io/calico/kube-controllers:v1.0.2
    Save the images and copy them to the other nodes; a simple docker load -i xx.tar there is enough.
    [root@tvm-00 network]# docker save -o calico-v2.6.tar \
        quay.io/coreos/etcd:v3.1.10 \
        quay.io/calico/node:v2.6.5 \
        quay.io/calico/cni:v1.11.2 \
        quay.io/calico/kube-controllers:v1.0.2
    [root@tvm-00 network]# ls
    calico-v2.6.tar  calico-v2.6.yaml
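    A minimal distribution sketch, assuming passwordless ssh from tvm-00 to the other nodes:
    ```bash
    ### Copy the calico image tarball to each of the other nodes and load it there.
    for h in tvm-01 tvm-02; do
        scp calico-v2.6.tar ${h}:/tmp/
        ssh ${h} 'docker load -i /tmp/calico-v2.6.tar'
    done
    ```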
    
    - Deploy calico
    ```bash
    ### Adjust the pod network segment in calico.yaml to match --pod-network-cidr
    [root@tvm-00 network]# sed -i 's#192.168.0.0/16#172.30.0.0/20#' calico-v2.6.yaml
    
    ### Deploy:
    [root@tvm-00 network]# kubectl apply -f calico-v2.6.yaml
    configmap "calico-config" created
    daemonset "calico-etcd" created
    service "calico-etcd" created
    daemonset "calico-node" created
    deployment "calico-kube-controllers" created
    deployment "calico-policy-controller" created
    clusterrolebinding "calico-cni-plugin" created
    clusterrole "calico-cni-plugin" created
    serviceaccount "calico-cni-plugin" created
    clusterrolebinding "calico-kube-controllers" created
    clusterrole "calico-kube-controllers" created
    serviceaccount "calico-kube-controllers" created
    
    ### Confirm the kube-dns pod is Running:
    [root@tvm-00 ~]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                             READY     STATUS    RESTARTS   AGE
    kube-system   calico-etcd-djrtb                                1/1       Running   1          1d
    kube-system   calico-kube-controllers-d6c6b9b8-7***n           1/1       Running   1          1d
    kube-system   calico-node-mff7x                                2/2       Running   3          1d
    kube-system   etcd-tvm-00                                      1/1       Running   1          4h
    kube-system   kube-apiserver-tvm-00                            1/1       Running   0          2m
    kube-system   kube-controller-manager-tvm-00                   1/1       Running   2          3d
    kube-system   kube-dns-6f4fd4bdf-ntcgn                         3/3       Running   7          3d
    kube-system   kube-proxy-pfmh8                                 1/1       Running   1          3d
    kube-system   kube-scheduler-tvm-00                            1/1       Running   2          3d
    
    ### The node status is now normal:
    [root@tvm-00 ~]# kubectl get nodes
    NAME     STATUS    ROLES     AGE       VERSION
    tvm-00   Ready     master    2d        v1.9.0
    ```
    Add the other two nodes to the k8s cluster.
  • kubeadm token
    
    ### Note: the token in the join command printed by kubeadm init expires after 24h by default; if it has expired, generate a new one like this:
    [root@tvm-00 ~]# kubeadm token create --print-join-command
    kubeadm join --token 84d7d1.e4ed7451c620436e 10.10.9.67:6443 --discovery-token-ca-cert-hash sha256:42cfdc412e731793ce2fa20aad1d8163ee8e6e5c05c30765f204ff086823c653
    [root@tvm-00 ~]# kubeadm token list
    TOKEN                     TTL       EXPIRES                  USAGES                   DESCRIPTION   EXTRA GROUPS
    84d7d1.e4ed7451c620436e   23h       2017-12-26T14:46+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
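    If the --discovery-token-ca-cert-hash value also needs to be recomputed, the standard openssl pipeline from the kubeadm documentation can be used; a sketch:
    ```bash
    ### Compute the sha256 hash of the cluster CA public key (the value passed to --discovery-token-ca-cert-hash).
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
        openssl rsa -pubin -outform der 2>/dev/null | \
        openssl dgst -sha256 -hex | sed 's/^.* //'
    ```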
    
    - kubeadm join
    ```bash
    [root@tvm-00 ~]# kubeadm join --token 84d7d1.e4ed7451c620436e 10.10.9.67:6443 --discovery-token-ca-cert-hash sha256:42cfdc412e731793ce2fa20aad1d8163ee8e6e5c05c30765f204ff086823c653
    ```
  • Check the cluster information
    
    [root@tvm-00 ~]# kubectl get nodes
    NAME     STATUS    ROLES     AGE       VERSION
    tvm-00   Ready     master    3d        v1.9.0
    tvm-01   Ready     <none>    2h        v1.9.0
    tvm-02   Ready     <none>    27s       v1.9.0
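    The ROLES column stays <none> for freshly joined workers; labeling them is optional. A sketch using the conventional node-role label (not a step from the post):
    ```bash
    ### Optionally mark the joined nodes as workers so kubectl get nodes shows a role.
    kubectl label node tvm-01 node-role.kubernetes.io/worker=
    kubectl label node tvm-02 node-role.kubernetes.io/worker=
    ```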
    [root@tvm-00 ~]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                             READY     STATUS    RESTARTS   AGE
    kube-system   calico-etcd-djrtb                                1/1       Running   1          1d
    kube-system   calico-kube-controllers-d6c6b9b8-7***n           1/1       Running   1          1d
    kube-system   calico-node-mff7x                                2/2       Running   3          1d
    kube-system   calico-node-mw96v                                2/2       Running   3          19h
    kube-system   etcd-tvm-00                                      1/1       Running   1          4h
    kube-system   kube-apiserver-tvm-00                            1/1       Running   0          2m
    kube-system   kube-controller-manager-tvm-00                   1/1       Running   2          3d
    kube-system   kube-dns-6f4fd4bdf-ntcgn                         3/3       Running   7          3d
    kube-system   kube-proxy-7xtv4                                 1/1       Running   1          19h
    kube-system   kube-proxy-pfmh8                                 1/1       Running   1          3d
    kube-system   kube-scheduler-tvm-00                            1/1       Running   2          3d
    ### (remaining rows omitted)
    As expected, there are 3 calico-node and 3 kube-proxy pods in the cluster.
    
    
### ZYXW, References
1. [Setting up an HA Kubernetes cluster with kubeadm - Part 1](http://tonybai.com/2017/05/15/setup-a-ha-kubernetes-cluster-based-on-kubeadm-part1/)
    2. [install docker for kubeadm](https://kubernetes.io/docs/setup/independent/install-kubeadm/#installing-docker)
    3. [kube-dns crashloopbackoff after flannel/weave install #54910](https://github.com/kubernetes/kubernetes/issues/54910)
    4. [kubeadm reset](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#tear-down)
Reposted from: https://blog.51cto.com/nosmoking/2062886