kubectl get nodes - connection refused. Running Ubuntu 18.04.1 LTS in a VM. I seem to be hitting the same issue as reported.

I installed this a few days ago and everything was working fine; I could connect via kubectl with no problems. But now when I run:

$ kubectl get nodes
The connection to the server 192.168.40.101:6443 was refused - did you specify the right host or port?
Update: added the environment settings.

$ echo $KUBECONFIG

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.40.101:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get pods
Even when I explicitly set the variable to the config file in my home directory:

$ ls -l .kube/config
-rw------- 1 someuser someuser 5450 Oct 15 21:58 .kube/config
it makes no difference. "kubectl config view" still returns the same data (by default, with no KUBECONFIG variable set, kubectl looks for the config file in the location above).
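For reference, this is roughly what I mean by setting it explicitly (just pointing it at the default path shown above):

$ export KUBECONFIG=$HOME/.kube/config
$ kubectl config view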

The firewall is also off:

$ sudo ufw status
Status: inactive
I can tell the kubelet is fine:

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2018-10-15 21:46:55 AEDT; 1 weeks 1 days ago
The apiserver does not appear to be running:

$ ps aux | grep kube
root      10304  9.4  1.5 1380412 136776 ?      Ssl  Oct15 1093:57 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --resolv-conf=/run/systemd/resolve/resolv.conf
root      11104  0.7  0.3  43168 32476 ?        Ssl  Oct15  92:07 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
donovan   11757  0.0  0.0  14428  1044 pts/1    S+   22:39   0:00 grep --color=auto kube
root     159921  0.0  0.1  16252  8824 ?        Ssl  Oct19   5:02 /chart-repo sync --mongo-url=kubeapps-mongodb --mongo-user=root stable https://kubernetes-charts.storage.googleapis.com

~$ sudo lsof -i
COMMAND     PID            USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
systemd-r   516 systemd-resolve   12u  IPv4    28394      0t0  UDP localhost:domain
systemd-r   516 systemd-resolve   13u  IPv4    28395      0t0  TCP localhost:domain (LISTEN)
avahi-dae   627           avahi   12u  IPv4    31555      0t0  UDP *:mdns
avahi-dae   627           avahi   13u  IPv6    31556      0t0  UDP *:mdns
avahi-dae   627           avahi   14u  IPv4    31557      0t0  UDP *:47611
avahi-dae   627           avahi   15u  IPv6    31558      0t0  UDP *:35014
xrdp-sesm   750            root    7u  IPv6    33682      0t0  TCP ip6-localhost:3350 (LISTEN)
sshd       2018            root    3u  IPv4  8211858      0t0  TCP *:ssh (LISTEN)
sshd       2018            root    4u  IPv6  8211860      0t0  TCP *:ssh (LISTEN)
sshd       2161            root    3u  IPv4    44589      0t0  TCP KUBE-01:ssh->192.168.40.50:43835 (ESTABLISHED)
sshd       2254         donovan    3u  IPv4    44589      0t0  TCP KUBE-01:ssh->192.168.40.50:43835 (ESTABLISHED)
sshd       6348            root    3u  IPv4    57332      0t0  TCP KUBE-01:ssh->192.168.40.50:46583 (ESTABLISHED)
sshd       6429         donovan    3u  IPv4    57332      0t0  TCP KUBE-01:ssh->192.168.40.50:46583 (ESTABLISHED)
kubelet   10304            root    9u  IPv4    98081      0t0  TCP localhost:38077 (LISTEN)
kubelet   10304            root   19u  IPv4   118188      0t0  TCP localhost:10248 (LISTEN)
kubelet   10304            root   20u  IPv6   117597      0t0  TCP *:10250 (LISTEN)
cupsd     19145            root    6u  IPv6 21711266      0t0  TCP ip6-localhost:ipp (LISTEN)
cupsd     19145            root    7u  IPv4 21711267      0t0  TCP localhost:ipp (LISTEN)
cups-brow 19146            root    7u  IPv4 21710056      0t0  UDP *:ipp
But for the life of me I cannot work out how to check whether kube-apiserver is running (via a service check or similar), and I am guessing that is what is causing the problem.
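The closest I have come is checking the control-plane containers directly, since a kubeadm install runs the apiserver as a static pod under Docker. These are just my guesses, not a definitive check:

$ ls /etc/kubernetes/manifests/
$ sudo docker ps -a | grep kube-apiserver
$ curl -k https://localhost:6443/healthz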

Update: the API server appears to be failing because of etcd.

Digging into the Docker logs:

sudo less /var/log/containers/kube-apiserver-kube-01_kube-system_kube-apiserver-00c9e483c6f0f84520d0f6b41cfb8e6489ef030aac91c8d6ac30c88bde44e9f1.log
{"log":"Flag --insecure-port has been deprecated, This flag will be removed in a future version.\n","stream":"stderr","time":"2018-10-24T10:32:08.316846636Z"}
{"log":"I1024 10:32:08.316937       1 server.go:681] external host was not specified, using 192.168.40.101\n","stream":"stderr","time":"2018-10-24T10:32:08.317214326Z"}
{"log":"I1024 10:32:08.317252       1 server.go:152] Version: v1.12.1\n","stream":"stderr","time":"2018-10-24T10:32:08.317368622Z"}
{"log":"I1024 10:32:09.025904       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.\n","stream":"stderr","time":"2018-10-24T10:32:09.026105478Z"}
{"log":"I1024 10:32:09.025981       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.\n","stream":"stderr","time":"2018-10-24T10:32:09.026159677Z"}
{"log":"I1024 10:32:09.026595       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.\n","stream":"stderr","time":"2018-10-24T10:32:09.026704563Z"}
{"log":"I1024 10:32:09.026625       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.\n","stream":"stderr","time":"2018-10-24T10:32:09.026717163Z"}
{"log":"F1024 10:32:29.031135       1 storage_decorator.go:57] Unable to create storage backend: config (\u0026{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true true 1000 0xc420ba1cb0 \u003cnil\u003e 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: connect: connection refused)\n","stream":"stderr","time":"2018-10-24T10:32:29.032482723Z"}
So:

  • Why is etcd failing inside Docker?
  • On an Ubuntu machine, how can I tell whether all the k8s bits are running?
  • How do I troubleshoot this further, so I can get kubectl talking to the cluster again?

  • I had the same problem and was able to solve it.

    Temporarily disable swap with the commands below; however, if the system reboots, the problem will come back:

    sudo -i
    swapoff -a
    
    The permanent fix is to remove (or comment out) the swap entry in /etc/fstab.
    Edit it with: vim /etc/fstab
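    For example, one possible one-liner (it simply comments out any line mentioning swap and keeps a backup of the file):

    swapoff -a
    sed -i.bak '/\bswap\b/ s/^/#/' /etc/fstab

    After a reboot, "free -h" should then show swap as 0.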

    Is the env variable KUBECONFIG pointing to the right file? How did you originally install Kubernetes?
    @Rico @wilbeibi Added the KUBECONFIG info.
    Were you able to resolve your issue, or is it still outstanding?