Notes from KubeEdge in Practice


This article records some notes from hands-on practice with KubeEdge, including questions and their solutions. The document is updated from time to time.
Miscellaneous
- Compiling kubeedge fails with 2 GB of RAM; with 4 GB it compiles fine.
- Scaling up a pod that exposes a node port fails on the same node, because the port is already occupied by the first replica.
- Run the binary once to generate its config file, then edit it (see the sketch after this list). Mind the config file's location, and mind the platform architecture: on arm, an error occurs unless pause uses kubeedge/pause-arm:3.1.
- The hostname must be compliant (lowercase letters, digits, hyphen "-", dot "."); otherwise the node cannot register, only an err: message comes back, and the cause can be hard to spot.
- The edge system needs a default gateway, or errors occur at runtime. According to an issue this has been fixed, but it still happens.
- KubeEdge is not fully equivalent to k8s; some k8s commands are not implemented yet. There is no command to list containers or exec into them.
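A minimal shell sketch of those setup points. It assumes the cloudcore/edgecore binaries support the --minconfig flag for dumping a default config (present in KubeEdge releases of this period, but verify for yours), that the daemons read /etc/kubeedge/config/, and that hostnamectl is available; edge-node1 is just an example name.

# Generate the config file first, then edit it in place.
mkdir -p /etc/kubeedge/config
cloudcore --minconfig > /etc/kubeedge/config/cloudcore.yaml   # cloud host
edgecore --minconfig > /etc/kubeedge/config/edgecore.yaml     # edge host

# Set a compliant hostname (lowercase, digits, "-", ".") before registering:
hostnamectl set-hostname edge-node1

# On arm, point edgecore at the matching pause image in edgecore.yaml:
#   podSandboxImage: kubeedge/pause-arm:3.1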
Related bugs I have collected
Recorded 2020.3.19: kubectl exec and kubectl logs are not supported; officially, support is said to come in a later release. Waiting to see. Scheduling information is also lacking: kubectl describe only tells you that the pod was scheduled to some node, not whether it then started successfully or failed. The only option is to inspect the logs on the node machine with docker logs.
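Until kubectl logs is supported, the workaround is to log in to the edge node and read the logs straight from docker; a sketch (the led-light-mapper name is taken from the listing further down, and the container ID is a placeholder):

# On the edge node: locate the container, then follow its logs.
docker ps | grep led-light-mapper
docker logs -f <container-id>   # paste the ID printed by docker ps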
Questions
Cannot schedule pods
Environment: three hosts with k8s deployed; k8s was then cleaned up. Deployed a deployment through k8s and checked the pod: Pending. Deleted the pod: it stayed Terminating. On a second attempt, the pod ran on one node, but after scaling up, the pod on the other node stayed Pending. A whole night later, still the same. After forcibly stopping cloudcore and edgecore, the node shows NotReady in k8s, yet the containers on the node keep running.
Question: scheduling does not work; what can be done? Is there a graceful way to remove the pods and then stop cloudcore? No method found so far (a generic sketch follows).
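A generic sequence that approximates a graceful teardown on stock k8s; nothing in these notes confirms that KubeEdge honors it, which is precisely the open question. nginx-deployment matches the pod name in the cloud-side log below, and systemctl assumes cloudcore/edgecore are run as systemd services:

# Drain the workload first, while the edge is still connected.
kubectl scale deployment nginx-deployment --replicas=0
kubectl get pod -w              # wait until the pods are really gone

# Only then stop the KubeEdge daemons.
systemctl stop edgecore         # on the edge host
systemctl stop cloudcore        # on the cloud host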
Cloud side prints:
messagehandler.go:448] write error, connection for node edge-node2 will be closed, affected event id: dba8d7ec-ffa4-4c6f-ac6e-accfa527a366, parent_id: , group: resource, source: edgecontroller, resource: default/pod/nginx-deployment-77698bff7d-jdm8k, operation: update, reason tls: use of closed connection

Edge side prints:
process.go:130] failed to send message: tls: use of closed connection
process.go:196] websocket write error: failed to send message, error: tls: use of closed connection

Guess: the connection has been closed, yet checking the node shows it in the Ready state; the reason is unknown. Follow-up: deleted everything, waited a while, redeployed, and it worked.
Another time it connected normally and ran, but one night later the node was NotReady, with pods being destroyed and recreated over and over:
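A few checks that may narrow this down (my own additions, not from the incident itself); they assume cloudhub's default websocket port 10000 and systemd-managed daemons:

# Cloud side: what does the apiserver think of the node?
kubectl get node edge-node2 -o wide
kubectl describe node edge-node2

# Edge side: is the websocket to cloudcore actually established?
ss -tnp | grep 10000            # 10000 is cloudhub's default websocket port

# Restarting edgecore re-establishes the tunnel and clears stale connections.
systemctl restart edgecore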
# kubectl get pod
NAME                                         READY   STATUS        RESTARTS   AGE
led-light-mapper-deployment-94bbdf88-26h2d   0/1     Terminating   0          14h
led-light-mapper-deployment-94bbdf88-2hwxq   0/1     Terminating   0          90m
led-light-mapper-deployment-94bbdf88-4f8pd   0/1     Terminating   0          80m
led-light-mapper-deployment-94bbdf88-52p9w   0/1     Terminating   0          15m
led-light-mapper-deployment-94bbdf88-8t9cl   0/1     Terminating   0          30m
led-light-mapper-deployment-94bbdf88-9bpt7   0/1     Terminating   0          95m
led-light-mapper-deployment-94bbdf88-9nfk6   0/1     Terminating   0          65m
led-light-mapper-deployment-94bbdf88-c8wtb   0/1     Terminating   0          85m
led-light-mapper-deployment-94bbdf88-kpcx4   0/1     Terminating   0          75m
led-light-mapper-deployment-94bbdf88-kwgqs   0/1     Terminating   0          35m
led-light-mapper-deployment-94bbdf88-l6hn2   0/1     Terminating   0          55m
led-light-mapper-deployment-94bbdf88-pk6fx   0/1     Terminating   0          5m1s
led-light-mapper-deployment-94bbdf88-qk9gj   0/1     Terminating   0          60m
led-light-mapper-deployment-94bbdf88-sgns2   0/1     Terminating   0          100m
led-light-mapper-deployment-94bbdf88-sk8gf   0/1     Terminating   0          20m
led-light-mapper-deployment-94bbdf88-svkgr   0/1     Terminating   0          50m
led-light-mapper-deployment-94bbdf88-tjz7z   0/1     Terminating   0          45m
led-light-mapper-deployment-94bbdf88-vwx7w   0/1     Pending       0          1s
led-light-mapper-deployment-94bbdf88-xfsc8   0/1     Terminating   0          10m
led-light-mapper-deployment-94bbdf88-xpq8k   0/1     Terminating   0          40m
led-light-mapper-deployment-94bbdf88-zhj24   0/1     Terminating   0          25m
led-light-mapper-deployment-94bbdf88-zncjg   0/1     Terminating   0          70m
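To clear a backlog of pods stuck in Terminating like the above, stock kubectl offers a force delete that bypasses the grace period; this is a generic k8s escape hatch, not something these notes verify under KubeEdge:

# Force-delete every pod stuck in Terminating.
kubectl get pods --no-headers | awk '/Terminating/ {print $1}' \
    | xargs -r kubectl delete pod --grace-period=0 --force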

Checking the edge side:
I0319 09:17:05.425874    2147 communicate.go:151] has msg
I0319 09:17:05.426062    2147 communicate.go:155] redo task due to no recv
I0319 09:17:05.427233    2147 communicate.go:151] has msg
I0319 09:17:05.427416    2147 communicate.go:155] redo task due to no recv
I0319 09:17:05.428657    2147 dtcontext.go:69] CommModule is healthy 1584580625

context_channel.go:175] the message channel is full, message: {Header:{ID:5f072fe2-b8cf-411e-8aee-16e927f27433 ParentID: Timestamp:1584580605260 ResourceVersion:391570 Sync:false} Router:{Source:edgecontroller Group:resource Operation:update Resource:default/pod/led-light-mapper-deployment-94bbdf88-26h2d} Content:map[metadata:map[creationTimestamp:2020-03-18T10:23:50Z deletionGracePeriodSeconds:30 deletionTimestamp:2020-03-18T23:40:09Z generateName:led-light-mapper-deployment-94bbdf88- labels:map[app:led-light-mapper pod-template-hash:94bbdf88] name:led-light-mapper-deployment-94bbdf88-26h2d namespace:default ownerReferences:[map[apiVersion:apps/v1 blockOwnerDeletion:true controller:true kind:ReplicaSet name:led-light-mapper-deployment-94bbdf88 uid:52c44b48-1214-4b10-9007-23093a953a40]] resourceVersion:391570 selfLink:/api/v1/namespaces/default/pods/led-light-mapper-deployment-94bbdf88-26h2d uid:12002c7e-69fe-4a31-bf66-759d78380abe] spec:map[containers:[map[image:latelee/led-light-mapper:v1.1 imagePullPolicy:IfNotPresent name:led-light-mapper-container resources:map[] securityContext:map[privileged:true] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File volumeMounts:[map[mountPath:/opt/kubeedge/ name:config-volume] map[mountPath:/var/run/secrets/kubernetes.io/serviceaccount name:default-token-gb4kq readOnly:true]]]] dnsPolicy:ClusterFirst enableServiceLinks:true hostNetwork:true nodeName:latelee.org.ttucon-2142ec priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] serviceAccount:default serviceAccountName:default terminationGracePeriodSeconds:30 tolerations:[map[effect:NoExecute key:node.kubernetes.io/not-ready operator:Exists tolerationSeconds:300] map[effect:NoExecute key:node.kubernetes.io/unreachable operator:Exists tolerationSeconds:300]] volumes:[map[configMap:map[defaultMode:420 name:device-profile-config-edge-node2] name:config-volume] map[name:default-token-gb4kq secret:map[defaultMode:420 secretName:default-token-gb4kq]]]] status:map[phase:Pending qosClass:BestEffort]]}

DNS warnings:
I0319 16:25:18.563472   17947 record.go:24] Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
I0319 16:25:18.563724   17947 record.go:24] Warning MissingClusterDNS pod: "webgin-deployment-747c6887f5-dwmtb_default(1ceb1dd6-6dae-4aff-a2c6-d0de64373031)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
I0319 16:25:18.563902   17947 record.go:19] Warning DNSConfigForming Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
E0319 16:25:18.564035   17947 dns.go:135] Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 8.8.8.8 8.8.4.4 2001:4860:4860::8888
I0319 16:30:09.037479   17947 edged.go:808] consume added pod [webgin-deployment-7ccff86d8b-s227c] successfully
I0319 16:30:10.506631   17947 record.go:19] Normal Started Started container webgin
E0319 16:30:10.507199   17947 kuberuntime_container.go:172] Failed to create legacy symbolic link "/var/log/containers/webgin-deployment-747c6887f5-f6547_default_webgin-1772b70cd7725f77c30b9cf47e3ce57159d9fdccf47c0c19aed8edf779c52c16.log" to container "1772b70cd7725f77c30b9cf47e3ce57159d9fdccf47c0c19aed8edf779c52c16" log "/var/log/pods/default_webgin-deployment-747c6887f5-f6547_abc27c3c-50f1-49e9-9f2e-b00fa802dc7f/webgin/0.log": symlink /var/log/pods/default_webgin-deployment-747c6887f5-f6547_abc27c3c-50f1-49e9-9f2e-b00fa802dc7f/webgin/0.log /var/log/containers/webgin-deployment-747c6887f5-f6547_default_webgin-1772b70cd7725f77c30b9cf47e3ce57159d9fdccf47c0c19aed8edf779c52c16.log: no such file or directory
I0319 16:30:10.507557   17947 edged.go:808] consume added pod [webgin-deployment-747c6887f5-f6547] successfully
I0319 16:30:10.667156   17947 edged.go:648] sync loop ignore event: [ContainerDied], with pod [1ceb1dd6-6dae-4aff-a2c6-d0de64373031] not found
W0319 16:30:10.685178   17947 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/webgin-deployment-747c6887f5-f6547 through plugin: invalid network status for
W0319 16:30:10.871129   17947 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/webgin-deployment-747c6887f5-f6547 through plugin: invalid network status for
I0319 16:30:10.914857   17947 container_manager_linux.go:880] Found 44 PIDs in root, 44 of them are not to be moved
I0319 16:30:11.088286   17947 edged.go:645] sync loop get event [ContainerStarted], ignore it now.
I0319 16:30:11.327738   17947 edged.go:645] sync loop get event [ContainerStarted], ignore it now.
W0319 16:30:12.413498   17947 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/webgin-deployment-747c6887f5-f6547 through plugin: invalid network status for
W0319 16:30:12.543879   17947 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/webgin-deployment-747c6887f5-f6547 through plugin: invalid network status for
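Two of these warnings have known causes on stock kubelet, and edged evidently inherits the same limits. Nameserver lines beyond three are dropped, hence the "Nameserver limits were exceeded" message; MissingClusterDNS means edgecore has no cluster DNS configured, so "ClusterFirst" pods fall back to "Default". A sketch of the fixes; the clusterDNS/clusterDomain field names under the edged module are my assumption from the KubeEdge config of this era, so verify against your version:

# Trim /etc/resolv.conf on the edge node to at most 3 nameservers, e.g.:
#   nameserver 8.8.8.8
#   nameserver 8.8.4.4

# In edgecore.yaml (modules -> edged), give pods a cluster DNS:
#   clusterDNS: 10.96.0.10        # example address, adjust to your cluster
#   clusterDomain: cluster.local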

Pod deployed successfully:
I0319 16:25:18.564503   17947 edged.go:808] consume added pod [webgin-deployment-747c6887f5-dwmtb] successfully
I0319 16:25:18.564974   17947 proxy.go:318] [L4 Proxy] process other resource: kube-system/endpoints/kube-scheduler
I0319 16:25:18.688263   17947 edged_volumes.go:54] Using volume plugin "kubernetes.io/empty-dir" to mount wrapped_default-token-gb4kq