Kubernetes node doesn't get certificates when trying to join a cluster with `kubeadm`


I was able to bootstrap the master node of a kubernetes deployment with kubeadm, but I get errors during the kubelet start phase of kubeadm join:

Now, looking at the kubelet logs via journalctl -xeu kubelet:

Interestingly, kubelet-client-current.pem is nowhere to be found on the worker trying to join; in fact the only files in /var/lib/kubelet/pki are kubelet.{crt,key}

Running the following command on the node trying to join shows that all certificates are missing:

# kubeadm alpha certs check-expiration
W0119 00:06:35.088034   24017 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0119 00:06:35.088082   24017 validation.go:28] Cannot validate kubelet config - no validator is available
CERTIFICATE                          EXPIRES   RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
!MISSING! admin.conf                                                                   
!MISSING! apiserver                                                                    
!MISSING! apiserver-etcd-client                                                        
!MISSING! apiserver-kubelet-client                                                     
!MISSING! controller-manager.conf                                                      
!MISSING! etcd-healthcheck-client                                                      
!MISSING! etcd-peer                                                                    
!MISSING! etcd-server                                                                  
!MISSING! front-proxy-client                                                           
!MISSING! scheduler.conf
Error checking external CA condition for ca certificate authority: failure loading certificate for API server: failed to load certificate: couldn't load the certificate file /etc/kubernetes/pki/apiserver.crt: open /etc/kubernetes/pki/apiserver.crt: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
The only file in /etc/kubernetes/pki is ca.crt
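Worth noting: the certificates reported as !MISSING! above (apiserver, etcd, controller-manager.conf, and so on) only ever exist on control-plane nodes, so on a worker the only expected file really is ca.crt. What is worth verifying (a diagnostic sketch, not from the original post) is that the worker's ca.crt actually matches the master's:

```shell
# Print the cluster CA fingerprint; run on both master and worker and compare.
# (Guarded so it is a no-op on machines where the file does not exist.)
ca=/etc/kubernetes/pki/ca.crt
if [ -f "$ca" ]; then
  openssl x509 -in "$ca" -noout -fingerprint -sha256
fi
```

If the fingerprints differ, the worker is trying to bootstrap against a CA the API server does not trust, which would also leave the TLS bootstrap stuck.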

Both master and worker have kubeadm and kubelet at version 1.17.1, so a version mismatch is unlikely

Something that may be unrelated, but is also a common source of errors: both the worker and the master have docker set up with the cgroup driver systemd, yet for some reason --cgroup-driver=cgroupfs is being passed to kubelet
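A quick way to compare the two drivers on a node (a diagnostic sketch; /var/lib/kubelet/kubeadm-flags.env is the file kubeadm writes at init/join time, and the output falls back to "unknown" where a value cannot be read):

```shell
# Cgroup driver according to docker:
docker_driver=$(docker info --format '{{.CgroupDriver}}' 2>/dev/null || true)

# Cgroup driver kubeadm is passing to kubelet:
flags=/var/lib/kubelet/kubeadm-flags.env
kubelet_driver=$(grep -o 'cgroup-driver=[a-z]*' "$flags" 2>/dev/null | cut -d= -f2 || true)

echo "docker: ${docker_driver:-unknown}  kubelet: ${kubelet_driver:-unknown}"
```

If the two values disagree, kubelet will log cgroup-driver errors on startup, though that is a separate failure mode from the missing client certificate.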

What is causing this, and more importantly, how do I fix it so the node can successfully join the master?

Edit: more info

On the worker, the systemd drop-in file is:

~# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
The systemd service unit for kubelet (reproduced at the end of this post):

And the kubelet config.yaml (also at the end of this post):

Contents of /var/lib/kubelet/kubeadm-flags.env on the worker node vs. the master node:

Worker:

KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1

Master:

KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --resolv-conf=/run/systemd/resolve/resolv.conf

Master and worker have the same docker version, 18.09, with identical config files:

~$ cat /etc/docker/daemon.json
{
 "exec-opts": ["native.cgroupdriver=systemd"],
 "data-root": "/opt/var/docker/"
}

I believe the kubelet service on the worker node is failing to authenticate to the API server because the bootstrap token has expired. Could you regenerate the token on the master node and then try running the kubeadm join command on the worker?

CMD:  kubeadm token create --print-join-command
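For reference, the flow the answer describes (the values below are placeholders of mine, not from the post; kubeadm bootstrap tokens default to a 24h TTL, so a token from the original kubeadm init has likely expired):

```shell
# On the master (cluster-only commands, shown here as comments):
#   kubeadm token list                          # confirm the old token expired
#   kubeadm token create --print-join-command   # mint a new token + join command

# The printed join command has this shape (placeholder values):
endpoint="10.0.0.1:6443"                 # <master-ip>:6443
token="abcdef.0123456789abcdef"          # bootstrap token format: [a-z0-9]{6}.[a-z0-9]{16}
hash="sha256:0000000000000000000000000000000000000000000000000000000000000000"

# Run the printed command on the worker:
echo "kubeadm join ${endpoint} --token ${token} --discovery-token-ca-cert-hash ${hash}"
```

If the join still fails with the same certificate errors after a fresh token, the token was not the culprit, and the worker's kubelet logs during the TLS bootstrap are the next place to look.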

Comments:

Is the OS Ubuntu 18 or something else? And what is the container runtime, docker or something else?

@ArghyaSadhu It's docker 18.09 in both cases. The master node is ubuntu 18.04 and the worker is debian buster 10.

Can you provide the contents of the /var/lib/kubelet/kubeadm-flags.env file on the master and on the worker?

I don't know why you are running the join phase kubelet-start command… kubeadm init on the control plane should give you a join command for the worker node… what happens when you run that join command?

@ArghyaSadhu It fails with the same error; the point of showing the output of the kubelet-start phase was to call out the specific failing stage in the post. But I will add the whole output in a minute.
~# cat /etc/systemd/system/multi-user.target.wants/kubelet.service 
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
~# cat /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s