Quickly Building a K8s Master Node with kubeadm

  • Prerequisites
  • kubeadm init
  • Export the default configuration file
  • (Optional) Configure the image repository
  • (Optional) Configure kube-proxy in ipvs mode
  • Full configuration file reference
  • Run kubeadm init
  • Configure the network (flannel-based)
  • Verify the cluster by deploying an application
  • (Optional) Add Worker nodes
  • Prerequisites
    Install docker, kubeadm, and the other required commands. For that procedure, see the article on taking a machine from scratch to a K8s cluster Worker node.
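    As a quick reference, kubeadm's preflight checks expect swap to be disabled and bridged traffic to be visible to iptables. A minimal sketch of those host settings (assuming a systemd-based Linux, run as root; the article above covers the full procedure):
    # Disable swap -- the kubelet refuses to start while swap is enabled
    swapoff -a
    sed -i '/ swap / s/^/#/' /etc/fstab

    # Make bridged traffic visible to iptables and enable forwarding
    modprobe br_netfilter
    echo 'net.bridge.bridge-nf-call-iptables = 1' >> /etc/sysctl.d/k8s.conf
    echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
    sysctl --system
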
    kubeadm init
    Export the default configuration file
    kubeadm can be configured entirely through command-line parameters, but with this many settings a configuration file is recommended.
    Export the default configuration and name it kubeadm.yaml:
    kubeadm config print init-defaults > kubeadm.yaml
    

    (Optional) Configure the image repository
    Depending on your network environment, you can switch to a mirror such as registry.aliyuncs.com/google_containers by changing the relevant line in kubeadm.yaml:
    ...
    imageRepository: registry.aliyuncs.com/google_containers
    ...
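
    With the repository set, you can preview and pre-pull every image kubeadm will need; both subcommands exist in kubeadm v1.17:
    kubeadm config images list --config kubeadm.yaml
    sudo kubeadm config images pull --config kubeadm.yaml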
    

    (Optional) Configure kube-proxy in ipvs mode
    See the official documentation:
  • v1beta2 API reference
  • KubeProxyConfiguration
    Append the following to the end of kubeadm.yaml:
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
    ---
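
    ipvs mode requires the ipvs kernel modules; if they are missing at startup, kube-proxy falls back to iptables mode. A minimal check-and-load sketch (on kernels older than 4.19, use nf_conntrack_ipv4 instead of nf_conntrack):
    # Load the ipvs modules and verify they are present
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do sudo modprobe $m; done
    lsmod | grep ip_vs

    # After kubeadm init, the active mode can be read from kube-proxy's metrics endpoint
    curl -s 127.0.0.1:10249/proxyMode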
    

    Full configuration file reference
    The complete configuration file, for reference:
    apiVersion: kubeadm.k8s.io/v1beta2
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      name: hyper-sia
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        imageRepository: ""
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: v1.17.3
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      podSubnet: 10.240.0.0/16
    scheduler: {}
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
    ---
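
    As an alternative to appending the KubeProxyConfiguration block by hand, kubeadm can print it together with the defaults (the --component-configs flag is available in kubeadm v1.17):
    kubeadm config print init-defaults --component-configs KubeProxyConfiguration > kubeadm.yaml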
    

    Run kubeadm init
    sudo kubeadm init --config kubeadm.yaml
    

    Output:
    W0317 09:37:19.620034    7827 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W0317 09:37:19.620074    7827 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.3
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [hyper-sia kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.3.200]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [hyper-sia localhost] and IPs [192.168.3.200 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [hyper-sia localhost] and IPs [192.168.3.200 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    W0317 09:37:22.004796    7827 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    W0317 09:37:22.005387    7827 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 34.001805 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node hyper-sia as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node hyper-sia as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: abcdef.0123456789abcdef
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.3.200:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:9260f901a1702edbca5de31f8d19e4986a753827e12871a4529cc7ee6bb08c13 
    

    This is not the end yet. You still need to run the following commands:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
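
    Alternatively (not shown in the v1.17 output above), if you are working as root you can point kubectl at the admin kubeconfig directly:
    export KUBECONFIG=/etc/kubernetes/admin.conf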
    

    Verify the status:
    kubectl get nodes
    

    Output:
    NAME        STATUS     ROLES    AGE     VERSION
    hyper-sia   NotReady   master   2m45s   v1.17.3
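
    The reason for NotReady is recorded in the node's Ready condition; one way to read it (the exact wording varies by version, but it reports that the cni config is uninitialized):
    kubectl get node hyper-sia -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'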
    

    Configure the network (flannel-based)
    The cluster status is NotReady because no pod network plugin has been installed yet.
    Follow the instructions provided in the official repository on GitHub: https://github.com/coreos/flannel. Note that the manifest's default net-conf.json assumes Network: 10.244.0.0/16; if your podSubnet differs (10.240.0.0/16 in the configuration above), edit one of the two so they match.
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    If your network access is restricted, this may fail and you will need to switch the image source. First, download the manifest locally:
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    Change every amd64 image address (the variant is determined by your system architecture) from quay.io to quay-mirror.qiniu.com.
    quay.io/coreos/flannel:v0.12.0-amd64
    

    After the change:
    quay-mirror.qiniu.com/coreos/flannel:v0.12.0-amd64
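
    A one-liner that makes the substitution across the whole manifest:
    sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yml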
    

    Then run kubectl apply -f kube-flannel.yml.
    Afterwards, kubectl get nodes and kubectl get pods --all-namespaces show the node Ready and every pod Running:
    NAME        STATUS   ROLES    AGE   VERSION
    hyper-sia   Ready    master   22m   v1.17.3
    
    NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
    kube-system   coredns-9d85f5447-4tdc2             1/1     Running   0          21m
    kube-system   coredns-9d85f5447-gvtml             1/1     Running   0          21m
    kube-system   etcd-hyper-sia                      1/1     Running   0          21m
    kube-system   kube-apiserver-hyper-sia            1/1     Running   0          21m
    kube-system   kube-controller-manager-hyper-sia   1/1     Running   0          21m
    kube-system   kube-flannel-ds-amd64-bjnhn         1/1     Running   0          5m57s
    kube-system   kube-proxy-l8r8j                    1/1     Running   0          21m
    kube-system   kube-scheduler-hyper-sia            1/1     Running   0          21m
    

    If you are building a single-node K8s cluster, or need the master node to schedule pods, run:
    kubectl taint nodes --all node-role.kubernetes.io/master-
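
    To confirm the master's NoSchedule taint is gone:
    kubectl describe node hyper-sia | grep Taints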
    

    At this point, the K8s cluster is ready to run applications.
    Verify the cluster by deploying an application
    Deploy nginx with 3 replicas:
    kubectl run my-nginx --image=nginx --port=80 --expose --replicas=3
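
    Note that kubectl v1.18+ removed the Deployment generator behind kubectl run (it now creates a bare Pod, and --replicas/--expose are gone). A rough equivalent on modern clusters (--replicas for create deployment needs kubectl v1.19+):
    kubectl create deployment my-nginx --image=nginx --replicas=3
    kubectl expose deployment my-nginx --port=80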
    

    Get the nginx service's cluster IP:
    kubectl get svc/my-nginx
    

    Output:
    NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
    my-nginx   ClusterIP   10.102.31.20   <none>        80/TCP    81s
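
    The service load-balances across the replicas; the backing pod IPs can be listed with:
    kubectl get endpoints my-nginx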
    

    Test the connection with curl:
    sia@hyper-sia:~$ curl 10.102.31.20
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>

    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>

    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>

    Verification passed.
    (Optional) Add Worker nodes
    To add Worker nodes to this cluster, follow the article on taking a machine from scratch to a K8s cluster Worker node.
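
    The bootstrap token in the configuration above has a 24-hour TTL; once it expires, generate a fresh join command on the master with:
    kubeadm token create --print-join-command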