Installing K8s on PowerPC


Setting up a Kubernetes Cluster on OpenPower Servers
BY PRADIPTA KUMAR BANERJEE · FEBRUARY 1, 2016
Container orchestration software is required if your environment consists of multiple hosts running docker containers. There are numerous options available today – Docker Swarm, Mesos/Marathon, and Kubernetes, among others. This article will help you set up a Kubernetes cluster on OpenPower systems (e.g., Tyan) running RHEL LE (little endian). The same instructions also apply to other scale-out Power servers from IBM.
Kubernetes is an open-source orchestration engine for docker containers and follows a master-slave model. The following are the major components of a Kubernetes cluster:
  • Master: this is the cluster manager, which oversees one or more nodes (minions).
  • Node or Minion or Slave: these are cluster members and are responsible for starting containers.
  • Pod: this is the basic unit of operation in Kubernetes. It represents a group of one or more containers constituting an application (or part of one) that runs on a slave (minion); a minimal example follows below.

    Kubernetes includes many other concepts that are necessary for a real production deployment. Going into the details of all of them is beyond the scope of this article; a good reference is the following documentation – http://kubernetes.io/v1.1/docs/user-guide/walkthrough/k8s201.html.
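    To make the pod concept concrete, below is a minimal sketch of a pod definition, saved for instance as test-pod.yaml; the image name is a placeholder and must point to an image built for ppc64el:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - name: test
        # placeholder; use an image built for ppc64el
        image: <image-built-for-ppc64el>
        command: ["sleep", "3600"]

    Once the cluster described below is up, such a definition is submitted with "kubectl create -f test-pod.yaml".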
    Following are the instructions which should help you with Kubernetes setup. Special thanks to my colleague Peeyush Gupta for co-authoring this writeup.
    Installation and Setup of Kubernetes
    While you can build Kubernetes from source on the Power platform, for ease of use, packages for RHEL LE are provided on an as-is basis from the Unicamp repository.
    Add Unicamp Package Repository for RHEL
    Ensure that the following repositories are added to all the systems that are going to be part of the Kubernetes cluster:
    # cat > /etc/yum.repos.d/unicamp-docker.repo <<EOF
    ...
    EOF

    # cat > /etc/yum.repos.d/unicamp-misc.repo <<EOF
    ...
    EOF
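    Each .repo file follows the standard yum repository format. A minimal sketch is given below; the baseurl values are placeholders and must be replaced with the actual Unicamp repository URLs for the docker and miscellaneous ppc64el packages:

    [unicamp-docker]
    name=Unicamp docker packages for ppc64el
    # placeholder; replace with the actual Unicamp docker repository URL
    baseurl=<unicamp-docker-repo-url>
    enabled=1
    gpgcheck=0

    [unicamp-misc]
    name=Unicamp misc packages for ppc64el
    # placeholder; replace with the actual Unicamp misc repository URL
    baseurl=<unicamp-misc-repo-url>
    enabled=1
    gpgcheck=0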

    Add Unicamp Package Repository for Advance Toolchain

    You’ll need a Go compiler to build the docker image for the Kubernetes infrastructure container. Additionally, you’ll need the compiler if you would like to build Kubernetes from source.

    # cat > /etc/yum.repos.d/at9.repo <<EOF
    ...
    EOF
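    The at9.repo definition follows the same format; the baseurl below is a placeholder and must be replaced with the actual Advance Toolchain repository URL for your RHEL release:

    [at9]
    name=Advance Toolchain 9.0
    # placeholder; replace with the actual Advance Toolchain repository URL
    baseurl=<advance-toolchain-repo-url>
    enabled=1
    gpgcheck=0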

    Install the Advance Toolchain

    # yum install -y advance-toolchain-at9.0-runtime \
                   advance-toolchain-at9.0-devel \
                   advance-toolchain-at9.0-perf \
                   advance-toolchain-at9.0-mcore-libs
    
    # echo "export PATH=/opt/at9.0/bin:/opt/at9.0/sbin:$PATH" >> /etc/profile.d/at9.sh
    # source /etc/profile.d/at9.sh
    # /opt/at9.0/sbin/ldconfig
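    To confirm that the Advance Toolchain is picked up from the updated PATH, gcc should now resolve to the copy under /opt/at9.0 (the exact version string depends on the installed AT 9.0 release):

    # which gcc
    /opt/at9.0/bin/gcc
    # gcc --version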

    Building the Infra Container Image – pause
    This is a special type of container created by default for every pod. It just sleeps and is used to provide networking connectivity to the other containers in the pod. By default Kubernetes downloads it from ‘gcr.io/google_containers/pause’. However, since no ‘pause’ image exists for the Power platform yet in the Google registry, the only option is to build the ‘pause’ container image on Power and use that instead.
    A slightly modified version of the ‘pause’ code, allowing it to be built on Power platforms, can be downloaded from Peeyush’s GitHub tree. The instructions are described below:
    # git clone https://github.com/Pensu/pause.git
    # cd pause
    # make

    After make, run "docker images" to see the pod infra container image in the list.
    [root@localhost pause]# docker images
    REPOSITORY  TAG      IMAGE ID            CREATED             VIRTUAL SIZE
    pause       0.8.0    f149b36756ff        About an hour ago   7.994 MB

    Tag the image as appropriate and either push it to your local registry or to a public docker registry. The full image name then needs to be updated in the /etc/kubernetes/kubelet file on all the nodes (minions).
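    For example, assuming a hypothetical private registry at myregistry.example.com:5000 (substitute your own registry or Docker Hub namespace):

    # docker tag pause:0.8.0 myregistry.example.com:5000/pause:0.8.0
    # docker push myregistry.example.com:5000/pause:0.8.0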
    Installation and Setup of Kubernetes Master
    Install the required packages
    # yum install kubernetes-client kubernetes-master etcd

    Open Network Ports
    By default the kubernetes apiserver listens on port 8080 for kubelets. Ensure that it is not blocked by the local firewall. If using firewalld, the following commands can be used to open the TCP port for the ‘public’ zone:
    # firewall-cmd --zone=public --add-port=8080/tcp --permanent
    # firewall-cmd --reload

    Additionally, the etcd server listens on port 2379 by default. Use the following commands to open that port as well:
    # firewall-cmd --zone=public --add-port=2379/tcp --permanent
    # firewall-cmd --reload
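    To verify, the list of open ports for the ‘public’ zone should now include both 8080/tcp and 2379/tcp:

    # firewall-cmd --zone=public --list-ports
    8080/tcp 2379/tcp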

    Configure Kubernetes Master
    For the remainder of the configuration we’ll assume that the Kubernetes master has the following IP – 192.168.122.76, and the Kubernetes node has the following IP – 192.168.122.236.
    Modify the file /etc/kubernetes/config according to the environment. Based on the above info, the modified file will look like the following:
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=false"
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=http://192.168.122.76:8080"

    Modify /etc/kubernetes/apiserver according to the environment. Based on the above info, the modified file will look like the following:
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--address=0.0.0.0"
    # The port on the local server to listen on.
    # KUBE_API_PORT="--port=8080"
    # Port minions listen on
    # KUBELET_PORT="--kubelet-port=10250"
    #  Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.122.76:2379"
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    # Add your own!
    KUBE_API_ARGS=""

    Configure Etcd
    Modify the following two parameters in /etc/etcd/etcd.conf as shown below:
    ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

    Start the services
    # for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
        systemctl restart $SERVICES
        systemctl enable $SERVICES
        systemctl status $SERVICES
    done
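    Once the services are up, a quick sanity check from the master is to query the version endpoints of the apiserver and etcd (the output depends on the installed package versions):

    # curl http://192.168.122.76:8080/version
    # curl http://192.168.122.76:2379/version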

    Installation and Setup of Kubernetes Node (Minion)
    Install the required packages
    # yum install docker-io kubernetes-client kubernetes-node

    Configure Kubernetes Node
    Modify /etc/kubernetes/kubelet according to the environment. Based on the above info, the modified file will look like the following:
    # kubernetes kubelet (minion) config
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME=""
    # location of the api-server
    KUBELET_API_SERVER="--api-servers=http://192.168.122.76:8080"
    # Add your own!
    KUBELET_ARGS="--pod-infra-container-image="

    Make sure to update the "pod-infra-container-image" parameter with the full name of the infra container image built earlier.
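    For example, if the pause image was pushed to the hypothetical registry used earlier, the line would look like the following (substitute your actual image name):

    # the registry name below is a placeholder; use your own
    KUBELET_ARGS="--pod-infra-container-image=myregistry.example.com:5000/pause:0.8.0"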
    Start the services
    # for SERVICES in kube-proxy kubelet docker; do
      systemctl restart $SERVICES
      systemctl enable $SERVICES
      systemctl status $SERVICES
    done

    Verifying the setup
    Log in to the master and run "kubectl get nodes" to check the available nodes.
    [root@localhost ~]# kubectl get nodes
    NAME     LABELS                          STATUS AGE
    fed-node kubernetes.io/hostname=fed-node Ready  1h
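    As a further check, a simple pod (for example, the test-pod definition sketched at the beginning of this article) can be created from the master; it should reach the Running state once its image has been pulled on the node:

    # kubectl create -f test-pod.yaml
    # kubectl get pods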

    Hope this helps you use Kubernetes to manage multiple OpenPower-based docker hosts.
    Source: http://cloudgeekz.com/773/setting-up-a-kubernetes-cluster-on-power.html