Kubernetes: failure loading apiserver-etcd-client certificate: the certificate has expired


I am unable to run any kubectl commands, and I think this is the result of the apiserver-etcd-client certificate having expired:

$ openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -text |grep ' Not '
            Not Before: Jun 25 17:28:17 2018 GMT
            Not After : Jun 25 17:28:18 2019 GMT 
The logs from the failing apiserver container show:

Unable to create storage backend: config (&{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true false 1000 0xc420363900 <nil> 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: getsockopt: connection refused)
Next, I tried:

kubeadm --config kubeadm.yaml alpha phase certs apiserver-etcd-client
where the kubeadm.yaml file is:

But it returns:

failure loading apiserver-etcd-client certificate: the certificate has expired
Furthermore, in the directory /etc/kubernetes/pki/etcd, every certificate and key except the CA certificate and key has expired.
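For reference, one way to confirm that is to run the same openssl check shown above in a loop over the directory (the glob below is my own sketch, not from the original post):

for c in /etc/kubernetes/pki/etcd/*.crt; do echo "$c"; openssl x509 -in "$c" -noout -enddate; done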

Is there a way to renew the expired certificates without rebuilding the cluster?

Logs from the etcd container:
$ sudo docker logs e4da061fc18f
2019-07-02 20:46:45.705743 I | etcdmain: etcd Version: 3.1.12
2019-07-02 20:46:45.705798 I | etcdmain: Git SHA: 918698add
2019-07-02 20:46:45.705803 I | etcdmain: Go Version: go1.8.7
2019-07-02 20:46:45.705809 I | etcdmain: Go OS/Arch: linux/amd64
2019-07-02 20:46:45.705816 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-07-02 20:46:45.705848 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-07-02 20:46:45.705871 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:45.705878 W | embed: The scheme of peer url http://localhost:2380 is HTTP while peer key/cert files are presented. Ignored peer key/cert files.
2019-07-02 20:46:45.705882 W | embed: The scheme of peer url http://localhost:2380 is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for this url.
2019-07-02 20:46:45.712218 I | embed: listening for peers on http://localhost:2380
2019-07-02 20:46:45.712267 I | embed: listening for client requests on 127.0.0.1:2379
2019-07-02 20:46:45.716737 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.718103 I | etcdserver: recovered store from snapshot at index 13621371
2019-07-02 20:46:45.718116 I | etcdserver: name = default
2019-07-02 20:46:45.718121 I | etcdserver: data dir = /var/lib/etcd
2019-07-02 20:46:45.718126 I | etcdserver: member dir = /var/lib/etcd/member
2019-07-02 20:46:45.718130 I | etcdserver: heartbeat = 100ms
2019-07-02 20:46:45.718133 I | etcdserver: election = 1000ms
2019-07-02 20:46:45.718136 I | etcdserver: snapshot count = 10000
2019-07-02 20:46:45.718144 I | etcdserver: advertise client URLs = https://127.0.0.1:2379
2019-07-02 20:46:45.842281 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 13629377
2019-07-02 20:46:45.842917 I | raft: 8e9e05c52164694d became follower at term 1601
2019-07-02 20:46:45.842940 I | raft: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 1601, commit: 13629377, applied: 13621371, lastindex: 13629377, lastterm: 1601]
2019-07-02 20:46:45.843071 I | etcdserver/api: enabled capabilities for version 3.1
2019-07-02 20:46:45.843086 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2019-07-02 20:46:45.843093 I | etcdserver/membership: set the cluster version to 3.1 from store
2019-07-02 20:46:45.846312 I | mvcc: restore compact to 13274147
2019-07-02 20:46:45.854822 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855232 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855267 I | etcdserver: starting server... [version: 3.1.12, cluster version: 3.1]
2019-07-02 20:46:45.855293 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:46.443331 I | raft: 8e9e05c52164694d is starting a new election at term 1601
2019-07-02 20:46:46.443388 I | raft: 8e9e05c52164694d became candidate at term 1602
2019-07-02 20:46:46.443405 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443419 I | raft: 8e9e05c52164694d became leader at term 1602
2019-07-02 20:46:46.443428 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443699 I | etcdserver: published {Name:default ClientURLs:[https://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2019-07-02 20:46:46.443768 I | embed: ready to serve client requests
2019-07-02 20:46:46.444012 I | embed: serving client requests on 127.0.0.1:2379
2019-07-02 20:48:05.528061 N | pkg/osutil: received terminated signal, shutting down...
2019-07-02 20:48:05.528103 I | etcdserver: skipped leadership transfer for single member cluster
The kubelet systemd service status:

sudo systemctl status -l kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2019-07-01 14:54:24 UTC; 1 day 23h ago
     Docs: http://kubernetes.io/docs/
 Main PID: 9422 (kubelet)
    Tasks: 13
   Memory: 47.0M
   CGroup: /system.slice/kubelet.service
           └─9422 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authentication-token-webhook=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cgroup-driver=cgroupfs --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki

Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.871276    9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.872444    9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.880422    9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.871913    9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.872948    9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.880792    9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.964989    9422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.966644    9422 kubelet_node_status.go:82] Attempting to register node ahub-k8s-m1.aws-intanalytic.com
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.967012    9422 kubelet_node_status.go:106] Unable to register node "ahub-k8s-m1.aws-intanalytic.com" with API server: Post https://172.31.22.241:6443/api/v1/nodes: dial tcp 172.31.22.241:6443: getsockopt: connection refused

Background: kubectl uses a file named config, located on CentOS at /$USER/.kube/config, to identify itself (i.e. its user) to the API server. This file is a copy of admin.conf, located on CentOS at /etc/kubernetes/admin.conf.
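For reference, a minimal sketch of refreshing that copy once admin.conf has been regenerated (these are the standard kubeadm post-install commands, not something taken from the original post):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config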

Now, upgrading from 1.10 to 1.14 and renewing the existing certificates in a cluster whose certificates have already expired should be treated as separate tasks, if only to keep the process from getting complicated.

FYI, you should upgrade one hop at a time: from 1.10 to 1.11, then to 1.12, and so on. Check the upgrade pages available for the different versions. It is also very important to review and run the mandatory commands before moving from one hop to the next.
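As a rough sketch of what a single hop looks like on a yum-based master (the package names and the 1.11.x versions here are placeholders, not taken from the post):

sudo yum install -y kubeadm-1.11.*                 # upgrade the kubeadm binary first
sudo kubeadm upgrade plan                          # shows which versions this kubeadm can upgrade the cluster to
sudo kubeadm upgrade apply v1.11.x                 # apply exactly one minor-version hop
sudo yum install -y kubelet-1.11.* kubectl-1.11.*  # then upgrade kubelet and kubectl
sudo systemctl daemon-reload && sudo systemctl restart kubelet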

Coming back to just renewing the certificates: kubeadm --config kubeadm.yaml alpha phase certs apiserver-etcd-client alone is not enough; run kubeadm --config kubeadm.yaml alpha phase certs all instead. Note: after the above, you need to run kubeadm --config kubeadm.yaml alpha phase kubeconfig all on all nodes, because ca.crt has now changed.
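Spelled out, using the same kubeadm.yaml referenced in the question:

kubeadm --config kubeadm.yaml alpha phase certs all
kubeadm --config kubeadm.yaml alpha phase kubeconfig all    # per the note above, repeat on every node because ca.crt changed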


In the future, you may want to consider:



In Kubernetes 1.14 and above, you can just run sudo kubeadm alpha certs renew all and reboot the master (a short sketch of that path follows the manual steps below). For older versions, the manual steps are:

Switch to root:

sudo -sE

Check the certs on the master to see the expiry dates:

echo -n /etc/kubernetes/pki/{apiserver,apiserver-kubelet-client,apiserver-etcd-client,front-proxy-client,etcd/healthcheck-client,etcd/peer,etcd/server}.crt | xargs -d ' ' -I {} bash -c "ls -hal {} && openssl x509 -in {} -noout -enddate"

Move the existing key/conf files so they can be recreated:

mv /etc/kubernetes/pki/apiserver.key{,.old}
mv /etc/kubernetes/pki/apiserver.crt{,.old}
mv /etc/kubernetes/pki/apiserver-kubelet-client.crt{,.old}
mv /etc/kubernetes/pki/apiserver-kubelet-client.key{,.old}
mv /etc/kubernetes/pki/apiserver-etcd-client.crt{,.old}
mv /etc/kubernetes/pki/apiserver-etcd-client.key{,.old}
mv /etc/kubernetes/pki/front-proxy-client.crt{,.old}
mv /etc/kubernetes/pki/front-proxy-client.key{,.old}
mv /etc/kubernetes/pki/etcd/healthcheck-client.crt{,.old}
mv /etc/kubernetes/pki/etcd/healthcheck-client.key{,.old}
mv /etc/kubernetes/pki/etcd/peer.key{,.old}
mv /etc/kubernetes/pki/etcd/peer.crt{,.old}
mv /etc/kubernetes/pki/etcd/server.crt{,.old}
mv /etc/kubernetes/pki/etcd/server.key{,.old}
mv /etc/kubernetes/kubelet.conf{,.old}
mv /etc/kubernetes/admin.conf{,.old}
mv /etc/kubernetes/controller-manager.conf{,.old}
mv /etc/kubernetes/scheduler.conf{,.old}

Regenerate the keys and conf files:

kubeadm alpha phase certs apiserver --config /etc/kubernetes/kubeadm.yaml
kubeadm alpha phase certs apiserver-etcd-client --config /etc/kubernetes/kubeadm.yaml
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client
kubeadm alpha phase certs etcd-healthcheck-client
kubeadm alpha phase certs etcd-peer
kubeadm alpha phase certs etcd-server
kubeadm alpha phase kubeconfig all --config /etc/kubernetes/kubeadm.yaml

You then need to restart the kubelet and the services, but for the master it is probably easiest to just reboot.
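For completeness, a minimal sketch of the 1.14-and-above path mentioned at the top of this answer (the check-expiration subcommand is my addition and only appeared in kubeadm 1.15; the renew command is the one stated above):

sudo kubeadm alpha certs check-expiration    # 1.15+: lists the remaining lifetime of each certificate
sudo kubeadm alpha certs renew all           # renews every certificate that kubeadm manages
sudo reboot                                  # so the static control-plane pods pick up the new certs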
Can you post the logs from the etcd container? I know you probably can't use kubectl, so use docker logs ETCD_CONTAINER_ID. I ask because your error is: dial tcp 127.0.0.1:2379: getsockopt: connection refused, and I just want to see whether etcd is running healthy.

Updated the post to include the logs from the etcd container.

Thanks, @garlicFrancium. I edited the post to show the effect of the cert rotation, which is why this is an unusual situation. I ended up rebuilding anyway, since that was the practical way to get from 1.10 to 1.15.
