Building a Kubernetes Cluster on CentOS 7.2

1. Environment Preparation
1.1 Machine Preparation
Prepare three CentOS 7.2 machines: one as the master node and the other two as worker nodes.
Change the hostname on each machine (on the master, for example):
hostnamectl set-hostname k8s-cns1-mst
Role          IP                   Hostname

Master        192.168.0.87         k8s-cns1-mst

Node          192.168.0.88         k8s-cns1-nod1

Node          192.168.0.89         k8s-cns1-nod2

Modify /etc/hosts on the master node and add the entries below (without them, kubectl commands on the master cannot operate on objects on the corresponding hosts):
192.168.0.87     k8s-cns1-mst

192.168.0.88     k8s-cns1-nod1

192.168.0.89     k8s-cns1-nod2

To avoid conflicts with Docker's iptables rules, disable the firewall on every node:
systemctl stop firewalld

systemctl disable firewalld

Keep the clocks consistent across nodes by installing NTP on all of them:
yum -y install ntp

systemctl start ntpd

systemctl enable ntpd

1.2 Installing Docker (the docker-ce edition is used here)
[root@k8s-cns1-nod2 home]# cat installdocker.sh

#!/bin/bash

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum-config-manager --enable docker-ce-edge

yum-config-manager --enable docker-ce-testing

yum makecache fast

yum install -y docker-ce

Start the Docker daemon:
systemctl start docker

2. Compiling the Source Code
2.1 Preparing the Go Environment
Following https://golang.org/doc/install, download the appropriate version and extract it to /usr/local. For example:
tar -C /usr/local -xzf go1.8.3.linux-amd64.tar.gz
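As the Go installation guide notes, the toolchain must be on PATH before `go get` and `make` will work. A minimal snippet, assuming the /usr/local/go location produced by the tar command above:

```shell
# Add the Go toolchain to PATH; put this line in ~/.bash_profile or
# /etc/profile.d/go.sh to make it persist across logins.
export PATH=$PATH:/usr/local/go/bin
```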

Install git, download the Kubernetes source code, and check out the required branch:
yum install git

go get -d k8s.io/kubernetes

cd /root/go/src/k8s.io/kubernetes

git checkout release-1.6.3      # switch to the release-1.6.3 branch

make

After a successful build, the executables are in:
/root/go/src/k8s.io/kubernetes/_output/bin

3. Master Configuration
3.1 Installing etcd (optional; skip if a separate etcd cluster is already available)
3.1.1 Install the software:
yum -y install etcd

3.1.2 Configure etcd by editing /etc/etcd/etcd.conf:
ETCD_NAME=default

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

3.1.3 Run etcd:
systemctl enable etcd

systemctl start etcd

3.1.4 Configure the subnet key in etcd:
etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'

3.2 Kubernetes Configuration
3.2.1 Copying the binaries
Copy kube-apiserver, kube-controller-manager, kube-scheduler, and kubectl from the /root/go/src/k8s.io/kubernetes/_output/bin/ directory to /usr/bin/ on the master node.
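A small sketch of that copy step, run on the master; BIN_SRC and BIN_DST are assumptions matching the build output path above and can be overridden:

```shell
#!/bin/bash
# Install the freshly built control-plane binaries into /usr/bin on the master.
BIN_SRC="${BIN_SRC:-/root/go/src/k8s.io/kubernetes/_output/bin}"
BIN_DST="${BIN_DST:-/usr/bin}"
for b in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
    if [ -f "$BIN_SRC/$b" ]; then
        install -m 0755 "$BIN_SRC/$b" "$BIN_DST/"   # install copies and sets permissions
    else
        echo "warning: $BIN_SRC/$b not found; run make first" >&2
    fi
done
```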
3.2.2 Creating the service configuration script (shell)
[root@k8s-cns1-mst home]# cat configmaster.sh 
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

MASTER_ADDRESS=${1:-"192.168.0.87"}
ETCD_SERVERS=${2:-"http://192.168.0.87:2379"}
SERVICE_CLUSTER_IP_RANGE=${3:-"10.254.0.0/16"}
ADMISSION_CONTROL=${4:-"NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"}

cat <<EOF >/etc/kubernetes/config
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=false"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=0"

# --allow-privileged=false: If true, allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
EOF

cat <<EOF >/etc/kubernetes/apiserver
# --insecure-bind-address=127.0.0.1: The IP address on which to serve the --insecure-port.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# --insecure-port=8080: The port on which to serve unsecured, unauthenticated access.
KUBE_API_PORT="--insecure-port=8080"

# --kubelet-port=10250: Kubelet port
NODE_PORT="--kubelet-port=10250"

# --etcd-servers=[]: List of etcd servers to watch (http://ip:port), 
# comma separated. Mutually exclusive with -etcd-config
KUBE_ETCD_SERVERS="--etcd-servers=${ETCD_SERVERS}"

# --advertise-address=: The IP address on which to advertise 
# the apiserver to members of the cluster.
KUBE_ADVERTISE_ADDR="--advertise-address=${MASTER_ADDRESS}"

# --service-cluster-ip-range=: A CIDR notation IP range from which to assign service cluster IPs. 
# This must not overlap with any IP ranges assigned to nodes for pods.
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE}"

# --admission-control="AlwaysAdmit": Ordered list of plug-ins 
# to do admission control of resources into cluster. 
# Comma-delimited list of: 
#   LimitRanger, AlwaysDeny, SecurityContextDeny, NamespaceExists, 
#   NamespaceLifecycle, NamespaceAutoProvision,
#   AlwaysAdmit, ServiceAccount, ResourceQuota, DefaultStorageClass
KUBE_ADMISSION_CONTROL="--admission-control=${ADMISSION_CONTROL}"

# Add your own!
KUBE_API_ARGS="--log-dir=/var/log/kubernetes/"
EOF

KUBE_APISERVER_OPTS="   \${KUBE_LOGTOSTDERR}         \\
                        \${KUBE_LOG_LEVEL}           \\
                        \${KUBE_ETCD_SERVERS}        \\
                        \${KUBE_API_ADDRESS}         \\
                        \${KUBE_API_PORT}            \\
                        \${NODE_PORT}                \\
                        \${KUBE_ADVERTISE_ADDR}      \\
                        \${KUBE_ALLOW_PRIV}          \\
                        \${KUBE_SERVICE_ADDRESSES}   \\
                        \${KUBE_ADMISSION_CONTROL}   \\
                        \${KUBE_API_ARGS}"


cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver ${KUBE_APISERVER_OPTS}
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF >/etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
EOF

KUBE_CONTROLLER_MANAGER_OPTS="  \${KUBE_LOGTOSTDERR} \\
                                \${KUBE_LOG_LEVEL}   \\
                                \${KUBE_MASTER}      \\
                                \${KUBE_CONTROLLER_MANAGER_ARGS}"

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager ${KUBE_CONTROLLER_MANAGER_OPTS}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF >/etc/kubernetes/scheduler
###
# kubernetes scheduler config

# Add your own!
KUBE_SCHEDULER_ARGS=""
EOF

KUBE_SCHEDULER_OPTS="   \${KUBE_LOGTOSTDERR}     \\
                        \${KUBE_LOG_LEVEL}       \\
                        \${KUBE_MASTER}          \\
                        \${KUBE_SCHEDULER_ARGS}"

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler ${KUBE_SCHEDULER_OPTS}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

3.2.3 Creating the service startup script:
for svc in kube-apiserver kube-controller-manager kube-scheduler; do 
    systemctl restart $svc
    systemctl enable $svc
    systemctl status $svc 
done

4. Node Configuration
4.1 Installing flanneld
4.1.1 Install the software:
yum -y install flannel

4.1.2 Modify flannel's configuration file /etc/sysconfig/flanneld:
FLANNEL_ETCD="http://192.168.0.87:2379"
FLANNEL_ETCD_KEY="/atomic.io/network"

4.1.3 Run flannel:
systemctl restart flanneld
systemctl enable flanneld
systemctl status flanneld

4.1.4 Upload the network configuration. Create a config.json file with the following content:
{
"Network": "172.17.0.0/16",
"SubnetLen": 24,
"Backend": {
     "Type": "vxlan",
     "VNI": 7890
     }
 }

Upload the configuration to the etcd server:
curl -L http://192.168.0.87:2379/v2/keys/atomic.io/network/config -XPUT --data-urlencode value@config.json

View the subnet information assigned by etcd:
[root@k8s-sz-0002 ~]# cat /run/flannel/subnet.env 
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.79.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false

4.2 Kubernetes Configuration
4.2.1 Copying the binaries
Copy kube-proxy and kubelet from the /root/go/src/k8s.io/kubernetes/_output/bin/ directory to /usr/bin/ on each node.
4.2.2 Creating the service configuration script
[root@k8s-cns1-nod1 home]# cat configslave.sh 
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

MASTER_ADDRESS=${1:-"192.168.0.87"}
NODE_HOSTNAME=${2:-"k8s-cns1-nod1"}

cat <<EOF >/etc/kubernetes/config
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=0"

# --allow-privileged=false: If true, allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"
EOF

cat <<EOF >/etc/kubernetes/proxy
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--log-dir=/var/log/kubernetes/"
EOF

KUBE_PROXY_OPTS="   \${KUBE_LOGTOSTDERR} \\
                    \${KUBE_LOG_LEVEL}   \\
                    \${KUBE_MASTER}    \\
                    \${KUBE_PROXY_ARGS}"

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy ${KUBE_PROXY_OPTS}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

cat <<EOF >/etc/kubernetes/kubelet
# --address=0.0.0.0: The IP address for the Kubelet to serve on (set to 0.0.0.0 for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# --port=10250: The port for the Kubelet to serve on. Note that "kubectl logs" will not work if you set this flag.
KUBELET_PORT="--port=10250"

# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
KUBELET_HOSTNAME="--hostname-override=${NODE_HOSTNAME}"

# --api-servers=[]: List of Kubernetes API servers for publishing events, 
# and reading pods and services. (ip:port), comma separated.
KUBELET_API_SERVER="--api-servers=http://${MASTER_ADDRESS}:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
EOF

KUBELET_OPTS="      \${KUBE_LOGTOSTDERR}            \\
                    \${KUBE_LOG_LEVEL}              \\
                    \${KUBELET_ADDRESS}             \\
                    \${KUBELET_PORT}                \\
                    \${KUBELET_HOSTNAME}            \\
                    \${KUBELET_API_SERVER}          \\
                    \${KUBE_ALLOW_PRIV}             \\
                    \${KUBELET_POD_INFRA_CONTAINER} \\
                    \${KUBELET_ARGS}"

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet ${KUBELET_OPTS}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload

4.2.3 Creating the service startup script
# cat /home/startslave.sh 
mkdir -p /etc/kubernetes/
mkdir -p /var/log/kubernetes/
mkdir -p /var/lib/kubelet

source /run/flannel/subnet.env

sed -i "s|--bip=.*|--bip=${FLANNEL_SUBNET} --ip-masq=true --mtu=${FLANNEL_MTU}|" /usr/lib/systemd/system/docker.service

systemctl daemon-reload

for svc in docker kubelet kube-proxy; do 
    systemctl restart $svc
    systemctl enable $svc
    systemctl status $svc 
done
/usr/lib/systemd/system/docker.service is Docker's unit file. There is a lease between each node and etcd: if a node stays offline for too long, etcd treats the lease as expired and discards the subnet information, and when the node comes back up flannel acquires a new subnet. The Docker container network segment (--bip=) should therefore be taken from flannel on every startup, so the two stay consistent.
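The effect of the sed line in startslave.sh can be sketched on its own: source the file flannel wrote and derive Docker's network options from it. SUBNET_ENV is parameterized here only so the logic can be tried outside a flannel host:

```shell
# Build Docker's network options from flannel's environment file so the
# container bridge always matches the subnet flannel currently leases.
SUBNET_ENV="${SUBNET_ENV:-/run/flannel/subnet.env}"
if [ -r "$SUBNET_ENV" ]; then
    . "$SUBNET_ENV"   # defines FLANNEL_SUBNET, FLANNEL_MTU, ...
    DOCKER_NETWORK_OPTIONS="--bip=${FLANNEL_SUBNET} --ip-masq=true --mtu=${FLANNEL_MTU}"
    echo "$DOCKER_NETWORK_OPTIONS"
fi
```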
Edit /etc/rc.local and add the startup script to it, so that the script runs automatically after a node reboots.
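A minimal /etc/rc.local addition for that; /home/startslave.sh is the path used in this article, and on CentOS 7 the file must also be made executable (chmod +x /etc/rc.d/rc.local):

```shell
# /etc/rc.local -- run the node startup script after every boot
bash /home/startslave.sh
```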
Verifying the configuration
Run the command kubectl get nodes on the master node; the output should look like this:
[root@k8s-cns1-mst home]# kubectl get node
NAME            STATUS    AGE       VERSION
k8s-cns1-nod1   Ready     1d        v1.6.4
k8s-cns1-nod2   Ready     1d        v1.6.4

Common Problems
1. Local connection error when using etcdctl
/usr/lib # etcdctl ls /
Error:  client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:2379: getsockopt: connection refused
; error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused

error #0: dial tcp 127.0.0.1:2379: getsockopt: connection refused
error #1: dial tcp 127.0.0.1:4001: getsockopt: connection refused

This happens because the ETCD_LISTEN_CLIENT_URLS parameter is not configured with http://127.0.0.1:2379; add the --endpoints option when invoking etcdctl:
/usr/lib # etcdctl --endpoints=192.168.0.87:2379 ls /