Building Kubernetes with kubeadm


This post records what has been tormenting me for the past few days: setting up Kubernetes to see how it works. The setup process really is painful, because there is little particularly reliable documentation to be found, and versions are often incompatible with each other.
1. Approaches
There are several ways to set up Kubernetes; here is a brief evaluation of each.
  • Run Kubernetes locally on top of Docker. Prerequisites: http://www.cnblogs.com/zhangeamon/p/5197655.html Reference: https://github.com/kubernetes/community/blob/master/contributors/devel/local-cluster/docker.md Install kubectl and shell auto-completion. Evaluation: I never got this approach to work; it kept failing with a "cannot connect 127.0.0.1:8080" error. I had some ideas about the cause later, but never retried it.
  • Use minikube, which is suited to single-machine environments. It creates a virtual machine; the Kubernetes project appears to have dropped support for running Kubernetes locally on Docker in favor of it. Reference: https://github.com/kubernetes/minikube The catch is that it prefers VirtualBox as its virtualization driver. My bare-metal host already had KVM installed, and when I tried it the two conflicted, so I did not install this way.
  • Use kubeadm, a tool that makes it easy to install a Kubernetes cluster. This is how I finally succeeded, and the approach is recorded in detail below.
  • Install every component step by step by hand. I have not tried this yet: https://github.com/opsnull/follow-me-install-kubernetes-cluster It is tedious. Personally I still recommend the third method; it is the most convenient way to get started. I intend to try all of these methods eventually.
  • 2. Setting up Kubernetes with kubeadm
    References: OpenStack: https://docs.openstack.org/developer/kolla-kubernetes/deployment-guide.html Kubernetes: https://kubernetes.io/docs/getting-started-guides/kubeadm/ Environment: a CentOS 7 virtual machine on KVM.
    1. Turn off SELinux
    sudo setenforce 0
    sudo sed -i 's/enforcing/permissive/g' /etc/selinux/config
    2. Turn off firewalld
    sudo systemctl stop firewalld
    sudo systemctl disable firewalld
    3. Write the Kubernetes repository file
    cat <<EOF > kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
    https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
    sudo mv kubernetes.repo /etc/yum.repos.d
    4. Install Kubernetes 1.6.1 or later and other dependencies
    sudo yum install -y docker ebtables kubeadm kubectl kubelet kubernetes-cni
    5. To enable the proper cgroup driver, start Docker and disable CRI
    sudo systemctl enable docker
    sudo systemctl start docker
    CGROUP_DRIVER=$(sudo docker info | grep "Cgroup Driver" | awk '{print $3}')
    sudo sed -i "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER --enable-cri=false |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo sed -i "s|\$KUBELET_NETWORK_ARGS| |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
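    To see what the first sed above actually does, here is the same substitution applied to a sample drop-in line. The sample line is an assumption for illustration; the real content of 10-kubeadm.conf depends on your kubeadm package version.

    ```shell
    # Sample line resembling the kubelet systemd drop-in (assumption, illustration only).
    line='Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf"'
    CGROUP_DRIVER=systemd   # pretend `docker info` reported the systemd cgroup driver
    # Same substitution as the sed command above: the extra flags are spliced in
    # right after "KUBELET_KUBECONFIG_ARGS=".
    printf '%s\n' "$line" | sed "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER --enable-cri=false |g"
    ```

    The point of the substitution is that kubelet must use the same cgroup driver as Docker, or it will refuse to start.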
    6. Set up the DNS server with the service CIDR:
    sudo sed -i 's/10.96.0.10/10.3.3.10/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    7. Reload kubelet
    sudo systemctl daemon-reload
    sudo systemctl stop kubelet
    sudo systemctl enable kubelet
    sudo systemctl start kubelet
    8. Deploy Kubernetes with kubeadm
    sudo kubeadm init --pod-network-cidr=10.1.0.0/16 --service-cidr=10.3.3.0/24
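    The CIDR flags in this command can be sanity-checked with a little arithmetic: carving the /16 pod network into per-node /24 subnets yields 2^(24-16) = 256 subnets, which (minus reserved ones) matches the roughly 255-node limit the Kolla developers mention.

    ```shell
    pod_prefix=16     # from --pod-network-cidr=10.1.0.0/16
    node_prefix=24    # each node consumes an entire /24
    # Number of /24 subnets that fit in a /16:
    node_subnets=$(( 1 << (node_prefix - pod_prefix) ))
    echo "per-node /24 subnets available in a /16: $node_subnets"
    ```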
    If you reach the internet through a corporate proxy, add your VM's address to no_proxy, otherwise kubeadm will hang at the next step. If the run fails, execute sudo kubeadm reset and try again:
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [apiclient] Created API client, waiting for the control plane to become ready
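    If you are behind a corporate proxy as mentioned above, the no_proxy setting can be sketched like this. The addresses are placeholders taken from this guide's environment; substitute your own VM IP and service CIDR.

    ```shell
    # Placeholders: 192.168.122.29 is this guide's master IP, 10.3.3.0/24 its service CIDR.
    export no_proxy="127.0.0.1,localhost,192.168.122.29,10.3.3.0/24"
    export NO_PROXY="$no_proxy"
    # Tools that honor no_proxy will now reach cluster-local addresses directly
    # instead of sending that traffic through the corporate proxy.
    echo "$no_proxy"
    ```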
    Note: pod-network-cidr is a network private to Kubernetes that the pods within Kubernetes communicate on. The service-cidr is where IP addresses for Kubernetes services are allocated. The Kolla developers have found through experience that each node consumes an entire /24 network, so this configuration would permit 255 Kubernetes nodes. After the run completes:
    [preflight] Starting the kubelet service
    [certificates] Generated CA certificate and key.
    [certificates] Generated API server certificate and key.
    [certificates] API Server serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.29]
    [certificates] Generated API server kubelet client certificate and key.
    [certificates] Generated service account token signing key and public key.
    [certificates] Generated front-proxy CA certificate and key.
    [certificates] Generated front-proxy client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [apiclient] Created API client, waiting for the control plane to become ready
    [apiclient] All control plane components are healthy after 23.768335 seconds
    [apiclient] Waiting for at least one node to register
    [apiclient] First node has registered after 4.022721 seconds
    [token] Using token: 5e0896.4cced9c43904d4d0
    [apiconfig] Created RBAC rules
    [addons] Created essential addon: kube-proxy
    [addons] Created essential addon: kube-dns
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run (as a regular user):
    
      sudo cp /etc/kubernetes/admin.conf $HOME/
      sudo chown $(id -u):$(id -g) $HOME/admin.conf
      export KUBECONFIG=$HOME/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      http://kubernetes.io/docs/admin/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join --token 5e0896.4cced9c43904d4d0 192.168.122.29:6443
    
    Remember that last line: kubeadm join is the CLI with which slave nodes join the Kubernetes cluster. Then:
      sudo cp /etc/kubernetes/admin.conf $HOME/
      sudo chown $(id -u):$(id -g) $HOME/admin.conf
      export KUBECONFIG=$HOME/admin.conf
    Load the kubeadm credentials into the system:
    mkdir -p $HOME/.kube
    sudo -H cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo -H chown $(id -u):$(id -g) $HOME/.kube/config
    After this finishes, check:
    kubectl get nodes
    kubectl get pods -n kube-system
    9. Deploy the CNI driver. CNI networking approaches: https://linux.cn/thread-15315-1-1.html I use Flannel. Flannel is based on VXLAN; because VXLAN encapsulation increases packet length, its efficiency is relatively low, but it is the approach recommended by Kubernetes.
    kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
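    As a rough illustration of the VXLAN overhead mentioned above: encapsulation wraps every pod packet in an outer Ethernet, IP, UDP and VXLAN header, which is why VXLAN backends typically run the pod interface at an MTU of 1450 on a standard 1500-byte network.

    ```shell
    # VXLAN encapsulation overhead per packet, in bytes:
    eth=14; ip=20; udp=8; vxlan=8
    overhead=$(( eth + ip + udp + vxlan ))
    echo "overhead: $overhead bytes, effective pod MTU on a 1500-byte link: $(( 1500 - overhead ))"
    ```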
    This approach did not succeed for me; the flannel pod kept restarting. Use Canal instead:
    wget http://docs.projectcalico.org/v2.1/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
    
    sed -i "s@192.168.0.0/16@10.1.0.0/16@" calico.yaml
    sed -i "s@10.96.232.136@10.3.3.100@" calico.yaml
    kubectl apply -f calico.yaml
    Finally, untaint the node (mark the master node as schedulable) so that pods can be scheduled to this all-in-one (AIO) deployment:
    kubectl taint nodes --all=true  node-role.kubernetes.io/master:NoSchedule-
    10. Restore $KUBELET_NETWORK_ARGS
    sudo sed -i "s|\$KUBELET_EXTRA_ARGS|\$KUBELET_EXTRA_ARGS \$KUBELET_NETWORK_ARGS|g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    
    OLD_DNS_POD=$(kubectl get pods -n kube-system |grep dns | awk '{print $1}')
    kubectl delete pod $OLD_DNS_POD -n kube-system
    Wait for the old DNS pod to be deleted and a new DNS pod to start automatically: kubectl get pods,svc,deploy,ds --all-namespaces
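    The waiting step above can be scripted with a small polling helper. The grep pattern in the usage line is an assumption; adjust it to your DNS pod's actual name.

    ```shell
    # Poll a command until it succeeds or the attempt budget runs out.
    wait_for() {   # usage: wait_for <attempts> <command...>
      attempts=$1; shift
      until "$@"; do
        attempts=$(( attempts - 1 ))
        [ "$attempts" -gt 0 ] || return 1
        sleep 1
      done
    }
    # Hypothetical usage against the cluster ("kube-dns.*Running" is an assumption):
    # wait_for 60 sh -c 'kubectl get pods -n kube-system | grep kube-dns | grep -q Running'
    ```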
    11. Set up a sample app. Ref: http://janetkuo.github.io/docs/getting-started-guides/kubeadm/ see the "Installing a sample application" section.
    Wrapping up
  • kubectl command auto-completion. Ref: https://kubernetes.io/docs/tasks/kubectl/install/
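    For the auto-completion item, the approach documented at that link is to load kubectl's generated bash completion script into the shell; appending the source line to ~/.bashrc makes it permanent. A minimal sketch (requires kubectl and the bash-completion package):

    ```shell
    # Load kubectl completion into the current bash session:
    source <(kubectl completion bash)
    # Persist it for future sessions:
    echo 'source <(kubectl completion bash)' >> ~/.bashrc
    ```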