Kubeadm installation


Author / original article: https://www.jianshu.com/p/e3e6a66fcb97
Deploying Kubernetes natively with kubeadm
There are many ways to deploy K8s, but kubeadm is the method most often recommended in what I have read online. It can be used for a single machine or for a cluster.
Kubeadm
Official description: "If you already have a way to configure hosting resources, use kubeadm to easily bring up a cluster with a single command per machine." Having installed it myself, I find it easy to use.
  • Deploys components such as etcd as Docker containers (see the quick check after this list).
  • Simple to use, and the instructions are clear.
  • Can be deployed on a single machine, or on separate machines that then join the cluster.
  • Supports multiple operating systems (Ubuntu, CentOS). If you use virtual machines, the host operating system hardly matters.
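
    A quick way to see the container-based deployment (not from the original article, and it assumes the Docker runtime): the components started by kubeadm show up as ordinary containers whose names are prefixed with k8s_.

    # List the kubeadm-managed containers; the filter matches the k8s_ name prefix
    # that the kubelet gives them when Docker is the container runtime.
    docker ps --filter "name=k8s_" --format "table {{.Names}}\t{{.Status}}"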

  • Installation
    Installation is fairly straightforward: just follow the official guide at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/. I installed the master on Ubuntu, brought up workers in VMs, and tried out a cluster. If you only need local development, you can deploy just the master and run pods on it.
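
    As a rough sketch of the package installation on Ubuntu (these commands follow the official guide of that era and are not part of the original article; the apt repository has since moved, so check the current guide for up-to-date sources):

    # Add Docker and the Kubernetes apt repository, then install the kubeadm packages (run as root).
    apt-get update && apt-get install -y docker.io apt-transport-https curl
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
    echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
    apt-get update && apt-get install -y kubelet kubeadm kubectl
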
    Commands used
    #reset kubeadm
    kubeadm reset
    
    #init kubeadm (the --pod-network-cidr flag matches the flannel network applied below)
    kubeadm init --pod-network-cidr=10.244.0.0/16
    
    #Install network plugin
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
    
    #get all pods in k8s
    kubectl get pods --all-namespaces
    
    #get all nodes in k8s
    kubectl get nodes
    
    #Join a worker to k8s (tokens expire after 24 hours by default; see the note after this block)
    kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
    
    #Allow master to create pods
    kubectl taint nodes --all node-role.kubernetes.io/master-
    
    #Install a sample application (demo)
    kubectl create namespace sock-shop
    kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"
    
    #Check application
    kubectl -n sock-shop get svc front-end
    kubectl describe svc front-end -n sock-shop
    
    # Tear down
    kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
    kubectl delete node <node name>
    
    kubeadm reset
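
    The join token generated by kubeadm init expires after 24 hours by default (see the warning in the log further down). A sketch of how to issue a new token and recompute the CA certificate hash on the master (the openssl pipeline follows the kubeadm documentation of that era):

    # List existing bootstrap tokens and create a fresh one if the original expired.
    kubeadm token list
    kubeadm token create

    # Recompute the value to pass as --discovery-token-ca-cert-hash sha256:<hash>.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

    # Later kubeadm releases can print the full join command directly:
    # kubeadm token create --print-join-command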
    
    

    Problems encountered during installation, and tips
  • The first kubeadm init failed with an error that some directories were not empty; kubeadm reset clears them.
  • For other problems that can come up during installation, here is a good article: http://www.cnblogs.com/pinganzi/p/7239328.html

  • Drawbacks & limitations
  • Requires access to https://cloud.google.com/container-registry/ (see the check after this list)
  • Be sure to read the limitations
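
    A quick, illustrative check (not from the original article) that the node can reach Google's registry, plus a manual pre-pull of one control-plane image; the image name and tag follow the 1.8-era layout and will differ for other versions:

    # Any HTTP response (e.g. 401) means the registry is reachable from this node.
    curl -sI https://gcr.io/v2/ | head -n 1

    # Pre-pull one control-plane image by hand. Newer kubeadm releases can also do
    # this via `kubeadm config images list` and `kubeadm config images pull`.
    docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.8.1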

  • Directions for further study after installation
    With the installation in place, several directions for further learning open up:
  • Learn about Kubernetes network plugins and which application scenarios each one fits.
  • Learn about Pods and Services: deploy a pod and a service and understand how they relate (see the sketch after this list).
  • Build your own Docker images and deploy them as Pods and Services.
  • Learn about microservices.
  • PS: the kubeadm guide includes a small microservices demo that you can use as a reference.
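
    A minimal sketch of the Pod/Service relationship (names and the nginx image are illustrative, not from the original article): kubectl run of this era creates a Deployment labelled run=hello-nginx, and kubectl expose creates a Service whose selector matches that label.

    # Start nginx and expose it on a NodePort; on newer kubectl, `run` creates a bare
    # Pod instead of a Deployment, so expose the pod rather than a deployment there.
    kubectl run hello-nginx --image=nginx:1.13 --port=80
    kubectl expose deployment hello-nginx --port=80 --type=NodePort

    # The Service finds the pod through the run=hello-nginx label selector.
    kubectl get pods -l run=hello-nginx
    kubectl describe svc hello-nginx
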
    Installation log
    root@hostname:/home/username/.kube# kubeadm reset
    [preflight] Running pre-flight checks
    [reset] Stopping the kubelet service
    [reset] Unmounting mounted directories in "/var/lib/kubelet"
    [reset] Removing kubernetes-managed containers
    [reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
    [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    root@hostname:/home/username/.kube# kubeadm reset
    [preflight] Running pre-flight checks
    [reset] Stopping the kubelet service
    [reset] Unmounting mounted directories in "/var/lib/kubelet"
    [reset] Removing kubernetes-managed containers
    [reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml", assuming external etcd.
    [reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
    [reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
    [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
    root@hostname:/home/username/.kube# kubeadm init --pod-network-cidr=10.244.0.0/16
    [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
    [init] Using Kubernetes version: v1.8.1
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks
    [preflight] Starting the kubelet service
    [kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [hostname kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 146.11.23.8]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
    [init] This often takes around a minute; or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 27.502157 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node hostname as master by adding a label and a taint
    [markmaster] Master hostname tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: 331134.8601be46f05da602
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run (as a regular user):
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      http://kubernetes.io/docs/admin/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join --token 3311xxxxxxxxx2 146.11.23.8:6443 --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    
    root@hostname:/home/username/.kube# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    cp: overwrite '/root/.kube/config'? y
    root@hostname:/home/username/.kube# sudo chown $(id -u):$(id -g) $HOME/.kube/config
    root@hostname:/home/username/.kube# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml
    clusterrole "flannel" created
    clusterrolebinding "flannel" created
    serviceaccount "flannel" created
    configmap "kube-flannel-cfg" created
    daemonset "kube-flannel-ds" created
    root@hostname:/home/username/.kube# 
    root@hostname:/home/username/.kube# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
    kube-system   etcd-hostname                      1/1       Running   0          37s
    kube-system   kube-apiserver-hostname            1/1       Running   0          1m
    kube-system   kube-controller-manager-hostname   1/1       Running   0          42s
    kube-system   kube-dns-545bc4bfd4-7t4cr             0/3       Pending   0          1m
    kube-system   kube-flannel-ds-jbx9g                 1/1       Running   0          18s
    kube-system   kube-proxy-kdfxj                      1/1       Running   0          1m
    kube-system   kube-scheduler-hostname            1/1       Running   0          52s
    root@hostname:/home/username/.kube# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
    kube-system   etcd-hostname                      1/1       Running   0          20m
    kube-system   kube-apiserver-hostname            1/1       Running   0          20m
    kube-system   kube-controller-manager-hostname   1/1       Running   0          20m
    kube-system   kube-dns-545bc4bfd4-7t4cr             3/3       Running   0          21m
    kube-system   kube-flannel-ds-jbx9g                 1/1       Running   0          20m
    kube-system   kube-proxy-kdfxj                      1/1       Running   0          21m
    kube-system   kube-scheduler-hostname            1/1       Running   0          20m
    root@hostname:/home/username/.kube#