Notes on an easy-to-try OpenShift 4 environment (CodeReady Containers)
Trying out CodeReady Containers
How to run CodeReady Containers
I used the write-up below as a reference; it is very easy to follow and carefully explains how to get a free Red Hat subscription.
https://qiita.com/zaki-lknr/items/ac2223152661886438da#インストール
The main work comes down to roughly these four steps:
・Obtain a free Red Hat subscription
・Download crc (a file of roughly 2 GB)
・Download the pull secret (pull-secret) that crc needs, then run crc
・The OpenShift environment gets built
Log in to the Red Hat OpenShift Cluster Manager, select "Download pull secret", and place the pull-secret file in a suitable directory.
Likewise, download the crc binary for your OS.
Everything below assumes Linux.
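As a minimal sketch of getting crc onto a Linux host (the URL and archive name here are assumptions; check the official download page for the current version):

# Download and install crc (version/URL assumed, adjust to the current release)
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
tar xvf crc-linux-amd64.tar.xz
sudo cp crc-linux-*-amd64/crc /usr/local/bin/
# One-time host setup (libvirt, NetworkManager, etc.)
crc setup

Then start the cluster, pointing crc at the pull secret: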
crc start -p pull-secret
The same secret is used every time, so an alias saves typing:
echo 'alias crcs="crc start -p pull-secret"' >> ~/.bashrc
. ~/.bashrc
Enable the oc command that crc provides, along with its configuration. Setting the following makes oc usable:
eval $(crc oc-env)
export KUBECONFIG=$HOME/.crc/machines/crc/kubeconfig
The crc start log prints the following, so you can either run the oc login command shown there, or run crc console to work in the OpenShift web console.
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p 7z6T5-qmTth-oxaoD-p3xQF https://api.crc.testing:6443'
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
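As a quick sanity check after startup (the kubeadmin password is generated per instance; use the one from your own log):

oc login -u kubeadmin -p 7z6T5-qmTth-oxaoD-p3xQF https://api.crc.testing:6443
oc whoami      # -> kube:admin
oc get nodes   # the single crc node should be Ready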
Also, as the warning below notes, several OpenShift Operators have been disabled to keep resource usage down.
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
Which Operators are running can be checked with the command below. Only the monitoring operator shows False in the AVAILABLE column; it appears to be stopped.
[openshift@base ~]$ oc get co
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication 4.3.0 True False False 22d
cloud-credential 4.3.0 True False False 22d
cluster-autoscaler 4.3.0 True False False 22d
console 4.3.0 True False False 22d
dns 4.3.0 True False False 21d
image-registry 4.3.0 True False False 22d
ingress 4.3.0 True False False 22d
insights 4.3.0 True False False 22d
kube-apiserver 4.3.0 True False False 22d
kube-controller-manager 4.3.0 True False False 22d
kube-scheduler 4.3.0 True False False 22d
machine-api 4.3.0 True False False 22d
machine-config 4.3.0 True False False 22d
marketplace 4.3.0 True False False 11m
monitoring 4.3.0 False True True 22d
network 4.3.0 True False False 22d
node-tuning 4.3.0 True False False 11m
openshift-apiserver 4.3.0 True False False 22d
openshift-controller-manager 4.3.0 True False False 21d
openshift-samples 4.3.0 True False False 22d
operator-lifecycle-manager 4.3.0 True False False 22d
operator-lifecycle-manager-catalog 4.3.0 True False False 22d
operator-lifecycle-manager-packageserver 4.3.0 True False False 11m
service-ca 4.3.0 True False False 22d
service-catalog-apiserver 4.3.0 True False False 22d
service-catalog-controller-manager 4.3.0 True False False 22d
storage 4.3.0 True False False 22d
Looking inside the namespace in question shows, as below, that the replica counts are defined as 0. The crc documentation mentions the same thing.
[openshift@base ~]$ oc get all -n openshift-monitoring
NAME READY STATUS RESTARTS AGE
pod/node-exporter-hffz9 2/2 Running 0 22d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-main ClusterIP 172.30.206.91 <none> 9094/TCP 22d
service/alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 22d
service/cluster-monitoring-operator ClusterIP None <none> 8080/TCP 22d
service/grafana ClusterIP 172.30.191.225 <none> 3000/TCP 22d
service/kube-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 22d
service/node-exporter ClusterIP None <none> 9100/TCP 22d
service/openshift-state-metrics ClusterIP None <none> 8443/TCP,9443/TCP 22d
service/prometheus-adapter ClusterIP 172.30.20.184 <none> 443/TCP 22d
service/prometheus-k8s ClusterIP 172.30.22.83 <none> 9091/TCP,9092/TCP 22d
service/prometheus-operated ClusterIP None <none> 9090/TCP,10901/TCP 22d
service/prometheus-operator ClusterIP None <none> 8080/TCP 22d
service/telemeter-client ClusterIP None <none> 8443/TCP 22d
service/thanos-querier ClusterIP 172.30.169.150 <none> 9091/TCP,9092/TCP 22d
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/node-exporter 1 1 1 1 1 kubernetes.io/os=linux 22d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cluster-monitoring-operator 0/0 0 0 22d
deployment.apps/grafana 0/0 0 0 22d
deployment.apps/kube-state-metrics 0/0 0 0 22d
deployment.apps/openshift-state-metrics 0/0 0 0 22d
deployment.apps/prometheus-adapter 0/0 0 0 22d
deployment.apps/prometheus-operator 0/0 0 0 22d
deployment.apps/telemeter-client 0/0 0 0 22d
deployment.apps/thanos-querier 0/0 0 0 22d
NAME DESIRED CURRENT READY AGE
replicaset.apps/cluster-monitoring-operator-7bbc9f9895 0 0 0 22d
replicaset.apps/grafana-687f7dfcf4 0 0 0 22d
replicaset.apps/grafana-7847db887 0 0 0 22d
replicaset.apps/kube-state-metrics-777f6bf798 0 0 0 22d
replicaset.apps/openshift-state-metrics-b6755756 0 0 0 22d
replicaset.apps/prometheus-adapter-79f9c99d67 0 0 0 22d
replicaset.apps/prometheus-adapter-7f9c5d699 0 0 0 22d
replicaset.apps/prometheus-operator-985bf8dd5 0 0 0 22d
replicaset.apps/telemeter-client-54dfc4d54c 0 0 0 22d
replicaset.apps/telemeter-client-7c87f56869 0 0 0 22d
replicaset.apps/thanos-querier-5856664597 0 0 0 22d
replicaset.apps/thanos-querier-7f9657d4f7 0 0 0 22d
NAME READY AGE
statefulset.apps/alertmanager-main 0/0 22d
statefulset.apps/prometheus-k8s 0/0 22d
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/alertmanager-main alertmanager-main-openshift-monitoring.apps-crc.testing alertmanager-main web reencrypt/Redirect None
route.route.openshift.io/grafana grafana-openshift-monitoring.apps-crc.testing grafana https reencrypt/Redirect None
route.route.openshift.io/prometheus-k8s prometheus-k8s-openshift-monitoring.apps-crc.testing prometheus-k8s web reencrypt/Redirect None
route.route.openshift.io/thanos-querier thanos-querier-openshift-monitoring.apps-crc.testing thanos-querier web reencrypt/Redirect None
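The replica counts can also be read straight out of the Deployment specs; a jsonpath query like the following (my own convenience, not from the original) prints name and spec.replicas pairs:

oc get deployment -n openshift-monitoring \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.replicas}{"\n"}{end}'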
The Pods running in the initial state look roughly like the following; just under 70 are up.
[openshift@base ~]$ oc get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-apiserver-operator openshift-apiserver-operator-7cc77d965f-4mcgm 1/1 Running 0 22d
openshift-apiserver apiserver-6jtsm 1/1 Running 0 10m
openshift-authentication-operator authentication-operator-57d4974d5d-mwdkl 1/1 Running 1 22d
openshift-authentication oauth-openshift-67585659c6-g8lxt 1/1 Running 0 4m45s
openshift-authentication oauth-openshift-67585659c6-shk64 1/1 Running 0 3m43s
openshift-cluster-machine-approver machine-approver-57dd49d7c5-mvdz2 2/2 Running 0 22d
openshift-cluster-node-tuning-operator cluster-node-tuning-operator-6986d4dff4-cn54n 1/1 Running 0 22d
openshift-cluster-node-tuning-operator tuned-tbljv 1/1 Running 0 9m41s
openshift-cluster-samples-operator cluster-samples-operator-889fb7599-zjblq 2/2 Running 0 22d
openshift-cluster-storage-operator cluster-storage-operator-5dc75b588c-mh9w6 1/1 Running 0 22d
openshift-console-operator console-operator-57f5bcc578-b59hx 1/1 Running 0 22d
openshift-console console-8c7b46fb4-68x4w 1/1 Running 0 22d
openshift-controller-manager-operator openshift-controller-manager-operator-68dcf95c47-bxbln 1/1 Running 0 22d
openshift-controller-manager controller-manager-c26bm 1/1 Running 0 21d
openshift-dns-operator dns-operator-7785d9f869-nqmh8 2/2 Running 0 22d
openshift-dns dns-default-s4r76 2/2 Running 0 22d
openshift-etcd etcd-member-crc-w6th5-master-0 2/2 Running 0 22d
openshift-image-registry cluster-image-registry-operator-f9697f69d-44484 2/2 Running 0 22d
openshift-image-registry image-registry-864894cbd5-8n5ff 1/1 Running 0 22d
openshift-image-registry node-ca-kp85n 1/1 Running 0 22d
openshift-ingress-operator ingress-operator-556dd68cb9-gfbwf 2/2 Running 0 22d
openshift-ingress router-default-77c77568f4-npdrs 1/1 Running 0 22d
openshift-kube-apiserver-operator kube-apiserver-operator-566b9798-fzvtd 1/1 Running 0 22d
openshift-kube-apiserver installer-10-crc-w6th5-master-0 0/1 Completed 0 21d
openshift-kube-apiserver installer-11-crc-w6th5-master-0 0/1 Completed 0 8m21s
openshift-kube-apiserver installer-12-crc-w6th5-master-0 0/1 OOMKilled 0 6m6s
openshift-kube-apiserver installer-9-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-apiserver kube-apiserver-crc-w6th5-master-0 3/3 Running 0 5m36s
openshift-kube-apiserver revision-pruner-10-crc-w6th5-master-0 0/1 Completed 0 21d
openshift-kube-apiserver revision-pruner-11-crc-w6th5-master-0 0/1 OOMKilled 0 6m12s
openshift-kube-apiserver revision-pruner-12-crc-w6th5-master-0 0/1 Completed 0 3m37s
openshift-kube-apiserver revision-pruner-8-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-apiserver revision-pruner-9-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-controller-manager-operator kube-controller-manager-operator-7c8b7465b-4mbkc 1/1 Running 0 22d
openshift-kube-controller-manager installer-7-crc-w6th5-master-0 0/1 Completed 0 8m29s
openshift-kube-controller-manager kube-controller-manager-crc-w6th5-master-0 3/3 Running 1 8m11s
openshift-kube-controller-manager revision-pruner-6-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-controller-manager revision-pruner-7-crc-w6th5-master-0 0/1 OOMKilled 0 6m14s
openshift-kube-scheduler-operator openshift-kube-scheduler-operator-557777c86b-zxqx7 1/1 Running 0 22d
openshift-kube-scheduler installer-7-crc-w6th5-master-0 0/1 Completed 0 8m19s
openshift-kube-scheduler openshift-kube-scheduler-crc-w6th5-master-0 1/1 Running 1 8m3s
openshift-kube-scheduler revision-pruner-6-crc-w6th5-master-0 0/1 Completed 0 22d
openshift-kube-scheduler revision-pruner-7-crc-w6th5-master-0 0/1 Completed 0 5m53s
openshift-machine-config-operator machine-config-daemon-xtdmj 2/2 Running 0 22d
openshift-machine-config-operator machine-config-server-pv6nm 1/1 Running 0 22d
openshift-marketplace certified-operators-5d6f745457-qkm8w 1/1 Running 0 9m47s
openshift-marketplace community-operators-55b7cc57bf-rcqwl 1/1 Running 0 9m43s
openshift-marketplace marketplace-operator-7fbcb88798-wxcdc 1/1 Running 0 22d
openshift-marketplace redhat-operators-65ffcdcd6-rjmzn 1/1 Running 0 9m39s
openshift-monitoring node-exporter-hffz9 2/2 Running 0 22d
openshift-multus multus-admission-controller-z6sx4 1/1 Running 0 22d
openshift-multus multus-vbjms 1/1 Running 0 22d
openshift-network-operator network-operator-5c7c7dc988-dt8qx 1/1 Running 0 22d
openshift-operator-lifecycle-manager catalog-operator-5d644f7b4b-zfhb6 1/1 Running 0 22d
openshift-operator-lifecycle-manager olm-operator-6d454db9dd-4sz4q 1/1 Running 0 22d
openshift-operator-lifecycle-manager packageserver-55b886b6db-fc64w 1/1 Running 0 9m34s
openshift-operator-lifecycle-manager packageserver-55b886b6db-klpq2 1/1 Running 0 10m
openshift-sdn ovs-m586r 1/1 Running 0 22d
openshift-sdn sdn-controller-7s5hg 1/1 Running 0 22d
openshift-sdn sdn-twbd8 1/1 Running 0 22d
openshift-service-ca-operator service-ca-operator-595657f77-rbmjs 1/1 Running 0 22d
openshift-service-ca apiservice-cabundle-injector-d84c98485-v787m 1/1 Running 0 22d
openshift-service-ca configmap-cabundle-injector-6cc5ccdd7f-tcl4m 1/1 Running 0 22d
openshift-service-ca service-serving-cert-signer-d59b877-thvch 1/1 Running 0 22d
openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-6cddfd76cc-pmzmw 1/1 Running 1 22d
openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-5886hlmm2 1/1 Running 1 22d
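The exact count can be confirmed with something like:

oc get po --all-namespaces --no-headers | wc -l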
How to SSH into the node built by crc
The SSH private key is under ~/.crc, and the node's IP address can be obtained with crc ip.
ssh -i ~/.crc/machines/crc/id_rsa core@`crc ip`
Separately, the method below should also give you a shell on the node, but for me it failed because the Docker image could not be pulled.
oc debug nodes/`oc get node -ojsonpath='{.items[0].metadata.name}'`
The debug pod's image is apparently pulled from the Red Hat registry after a docker login, but the following image could not be fetched:
sudo docker pull registry.redhat.io/rhel7/support-tools
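registry.redhat.io requires authentication, so logging in to the registry first may be the missing step (my assumption; this needs Red Hat customer portal credentials):

# Authenticate against the Red Hat registry, then retry the pull
sudo docker login registry.redhat.io
sudo docker pull registry.redhat.io/rhel7/support-tools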
Problems encountered
Name resolution failures
I hit an error like the following during crc start:
ERRO Failed to query DNS from host: lookup foo.apps-crc.testing on [240d:1a:4a3:1b00:e67e:66ff:fe43:9a43]:53: no such host
It can be worked around with the method described here:
https://medium.com/@trlogic/how-to-setup-local-openshift-4-cluster-with-red-hat-codeready-containers-6c5aefba72ad
Add the following to /etc/hosts:
192.168.130.11 api.crc.testing
192.168.130.11 oauth-openshift.apps-crc.testing
192.168.130.11 console-openshift-console.apps-crc.testing
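The right IP can vary per environment; crc ip prints the actual VM address, so appending the entries can be scripted like this (a sketch, adjust as needed):

CRC_IP=$(crc ip)
for h in api.crc.testing oauth-openshift.apps-crc.testing console-openshift-console.apps-crc.testing; do
  echo "$CRC_IP $h" | sudo tee -a /etc/hosts
done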
dnsmasq runs on the machine where crc is executed
If you run bind or another DNS server on the same host, port 53 conflicts and causes trouble.
I got bitten because I had bind set up for OpenShift 3.11.
Checking with systemctl, dnsmasq does not show up as a running unit, so it apparently gets started some other way.
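Whatever is actually bound to port 53 can be identified with, e.g.:

sudo ss -ltnup 'sport = :53'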
Other notes
etcd backup
This can be tried by logging in to the CodeReady master node:
[root@crc-w6th5-master-0 ~]# sh /usr/local/bin/etcd-snapshot-backup.sh .
Creating asset directory ./assets
Downloading etcdctl binary..
etcdctl version: 3.3.17
API version: 3.3
Trying to backup etcd client certs..
etcd client certs found in /etc/kubernetes/static-pod-resources/kube-apiserver-pod-3 backing up to ./assets/backup/
Backing up /etc/kubernetes/manifests/etcd-member.yaml to ./assets/backup/
Trying to backup latest static pod resources..
{"level":"warn","ts":"2020-03-07T10:05:45.648Z","caller":"clientv3/retry_interceptor.go:116","msg":"retry stream intercept"}
Snapshot saved at ./assets/tmp/snapshot.db
snapshot db and kube resources are successfully saved to ./snapshot_db_kuberesources_2020-03-07_100542.tar.gz!
[root@crc-w6th5-master-0 ~]# ls
assets snapshot_db_kuberesources_2020-03-07_100542.tar.gz
[root@crc-w6th5-master-0 ~]# tar xzvf snapshot_db_kuberesources_2020-03-07_100542.tar.gz
static-pod-resources/kube-apiserver-pod-10/
static-pod-resources/kube-apiserver-pod-10/secrets/
static-pod-resources/kube-apiserver-pod-10/secrets/etcd-client/
static-pod-resources/kube-apiserver-pod-10/secrets/etcd-client/tls.crt
static-pod-resources/kube-apiserver-pod-10/secrets/etcd-client/tls.key
static-pod-resources/kube-apiserver-pod-10/secrets/kube-apiserver-cert-syncer-client-cert-key/
static-pod-resources/kube-apiserver-pod-10/secrets/kube-apiserver-cert-syncer-client-cert-key/tls.key
static-pod-resources/kube-apiserver-pod-10/secrets/kube-apiserver-cert-syncer-client-cert-key/tls.crt
static-pod-resources/kube-apiserver-pod-10/secrets/kubelet-client/
static-pod-resources/kube-apiserver-pod-10/secrets/kubelet-client/tls.crt
static-pod-resources/kube-apiserver-pod-10/secrets/kubelet-client/tls.key
static-pod-resources/kube-apiserver-pod-10/configmaps/
static-pod-resources/kube-apiserver-pod-10/configmaps/config/
static-pod-resources/kube-apiserver-pod-10/configmaps/config/config.yaml
static-pod-resources/kube-apiserver-pod-10/configmaps/etcd-serving-ca/
static-pod-resources/kube-apiserver-pod-10/configmaps/etcd-serving-ca/ca-bundle.crt
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-cert-syncer-kubeconfig/
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-pod/
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-pod/forceRedeploymentReason
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-pod/pod.yaml
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-pod/version
static-pod-resources/kube-apiserver-pod-10/configmaps/kubelet-serving-ca/
static-pod-resources/kube-apiserver-pod-10/configmaps/kubelet-serving-ca/ca-bundle.crt
static-pod-resources/kube-apiserver-pod-10/configmaps/sa-token-signing-certs/
static-pod-resources/kube-apiserver-pod-10/configmaps/sa-token-signing-certs/service-account-001.pub
static-pod-resources/kube-apiserver-pod-10/configmaps/sa-token-signing-certs/service-account-002.pub
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-server-ca/
static-pod-resources/kube-apiserver-pod-10/configmaps/kube-apiserver-server-ca/ca-bundle.crt
static-pod-resources/kube-apiserver-pod-10/configmaps/oauth-metadata/
static-pod-resources/kube-apiserver-pod-10/configmaps/oauth-metadata/oauthMetadata
static-pod-resources/kube-apiserver-pod-10/kube-apiserver-pod.yaml
snapshot.db
[root@crc-w6th5-master-0 ~]#
Enabling Cluster Monitoring in crc
Cluster Monitoring does not appear to be enabled by default in crc, and every replica count is 0. The following scales everything back up:
oc scale --replicas=1 statefulset --all -n openshift-monitoring; oc scale --replicas=1 deployment --all -n openshift-monitoring
When started this way, however, the Pods fail to schedule: the combined memory requests exceed what the node can offer, so they sit unscheduled for lack of memory.
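The symptom is Pending pods with FailedScheduling events, which can be checked with (the pod name below is illustrative):

oc get po -n openshift-monitoring --field-selector=status.phase=Pending
oc describe po prometheus-k8s-0 -n openshift-monitoring | tail -n 5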
There is a related upstream issue. As a workaround, increase the memory allocated to the crc VM:
$ crc config set memory 16398
Changes to configuration property 'memory' are only applied when a new CRC instance is created.
If you already have a CRC instance, then for this configuration change to take effect, delete the CRC instance with 'crc delete' and start a new one with 'crc start'.
$ crc delete && crc start
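The configured value can be double-checked with:

crc config get memory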
Support status for monitoring
The OCP4 monitoring stack is documented, but the documentation says nothing about Thanos and the other components that crc brings in.
Separately, an OpenShift blog post notes that Thanos and standalone Prometheus instances are not supported as part of OCP.
Persisting metrics from multiple OpenShift clusters with Thanos and object storage
There is a write-up describing how to persist metrics using Prometheus, Thanos, and S3. Roughly, Thanos Receiver persists the data to S3, and Thanos Gateway queries against S3.
Author and source
This memo is based on the original article at https://qiita.com/iaoiui/items/6903e3996dd61ed095e6; copyright belongs to the original author.