How to remove the kube taint node.kubernetes.io/unreachable:NoSchedule from a worker node (kubernetes, kubectl, kubeadm)


I was able to remove the taint from the master node, but my two worker nodes, installed on bare metal with kubeadm, keep the unreachable taint even after I issue the command to remove it. It says removed, but the removal isn't permanent: when I check, the taint is still there. I also tried patching the taints and setting them to null, but that didn't work. The only things I've found on SO or elsewhere relate to the master, or assume these commands work.

Update: I checked the taint's timestamp, and it is added back again as soon as it is deleted. So in what sense is the node unreachable? I can ping it. Is there some Kubernetes diagnostic I can run to find out how it is unreachable? I checked that I can ping in both directions between the master and the worker nodes. So where would the logs show an error about which component cannot connect?
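To the update's question of where such an error would show up: the node conditions and the kubelet's own logs are the usual places, since "unreachable" here means the kubelet stopped posting status to the API server, not that the network path is down. A sketch of the standard checks (node name taken from this question; all stock kubectl/systemd commands):

```shell
# From the master: the Conditions block states why the node is NotReady,
# and the spec shows the taints currently set.
kubectl describe node k8s-node1
kubectl get node k8s-node1 -o jsonpath='{.spec.taints}'

# On the worker itself: the kubelet's logs show why it stopped reporting.
systemctl status kubelet
sudo journalctl -u kubelet --no-pager -n 50
```

Pinging only proves IP connectivity; these commands check the component that actually drives the unreachable taint.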

kubectl describe no k8s-node1 | grep -i taint 
Taints:             node.kubernetes.io/unreachable:NoSchedule
Tried:

kubectl patch node k8s-node1 -p '{"spec":{"Taints":[]}}'

The result is that both worker nodes are reported as untainted, but when I grep again the taints are back:

    kubectl describe no k8s-node1 | grep -i taint 
    Taints:             node.kubernetes.io/unreachable:NoSchedule
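Incidentally, the patch above is a silent no-op: the field in the Node spec is lowercase `taints`, and a strategic-merge patch simply drops unknown keys such as `Taints` while kubectl still reports the node as patched. The corrected form would be:

```shell
# Clear the whole taint list; note the lowercase field name.
kubectl patch node k8s-node1 -p '{"spec":{"taints":[]}}'
```

Even then, while the node is NotReady the node lifecycle controller re-adds the taint, so this only sticks once the kubelet is healthy again.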


$ k get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   10d   v1.14.2
k8s-node1    NotReady   <none>   10d   v1.14.2
k8s-node2    NotReady   <none>   10d   v1.14.2
I certainly hope I don't have to do this every time a worker node gets tainted.
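For reference, the usual way to remove a single taint by key (rather than patching the whole list) is `kubectl taint` with a trailing dash:

```shell
# The trailing "-" removes the taint with this key and effect.
kubectl taint nodes k8s-node1 node.kubernetes.io/unreachable:NoSchedule-
```

But while the node stays NotReady, the control plane re-adds node.kubernetes.io/unreachable almost immediately, which is why no removal command appears to stick.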

k describe node k8s-node2

Name:               k8s-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-node2
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"d2:xx:61:c3:xx:16"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.xx.1.xx
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
---------------------------------------------

MemoryPressure   Unknown   Fri, 14 Jun 2019 10:34:07 +0700   Fri, 14 Jun 2019 10:35:09 +0700   NodeStatusUnknown   Kubelet stopped posting node status.
DiskPressure     Unknown   Fri, 14 Jun 2019 10:34:07 +0700   Fri, 14 Jun 2019 10:35:09 +0700   NodeStatusUnknown   Kubelet stopped posting node status.
PIDPressure      Unknown   Fri, 14 Jun 2019 10:34:07 +0700   Fri, 14 Jun 2019 10:35:09 +0700   NodeStatusUnknown   Kubelet stopped posting node status.
Ready            Unknown   Fri, 14 Jun 2019 10:34:07 +0700   Fri, 14 Jun 2019 10:35:09 +0700   NodeStatusUnknown   Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.10.10.xx
  Hostname:    k8s-node2
Capacity:
  cpu:                2
  ephemeral-storage:  26704124Ki
  memory:             4096032Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  24610520638
  memory:             3993632Ki
  pods:               110
System Info:
  Machine ID:                 6e4e4e32972b3b2f27f021dadc61d21
  System UUID:                6e4e4ds972b3b2f27f0cdascf61d21
  Boot ID:                    abfa0780-3b0d-sda9-a664-df900627be14
  Kernel Version:             4.4.0-87-generic
  OS Image:                   Ubuntu 16.04.3 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://17.3.3
  Kubelet Version:            v1.14.2
  Kube-Proxy Version:         v1.14.2
PodCIDR:                     10.xxx.10.1/24
Non-terminated Pods:         (18 in total)
  Namespace           Name                                                       CPU Requests  CPU Limits    Memory Requests  Memory Limits  AGE
  ---------           ----                                                       ------------  ----------    ---------------  -------------  ---
  heptio-sonobuoy     sonobuoy-systemd-logs-daemon-set-6a8d92061c324451-hnnp9    0 (0%)        0 (0%)        0 (0%)           0 (0%)         2d1h
  istio-system        istio-pilot-7955cdff46-w648c                               110m (5%)     2100m (105%)  228Mi (5%)       1224Mi (31%)   6h55m
  istio-system        istio-telemetry-5c9cb76c56-twzf5                           150m (7%)     2100m (105%)  228Mi (5%)       1124Mi (28%)   6h55m
  istio-system        zipkin-8594bbfc6b-9p2qc                                    0 (0%)        0 (0%)        1000Mi (25%)     1000Mi (25%)   6h55m
  knative-eventing    webhook-576479cc56-wvpt6                                   0 (0%)        0 (0%)        1000Mi (25%)     1000Mi (25%)   6h45m
  knative-monitoring  elasticsearch-logging-0                                    100m (5%)     1 (50%)       0 (0%)           0 (0%)         3d20h
  knative-monitoring  grafana-5cdc94dbd-mc4jn                                    100m (5%)     200m (10%)    100Mi (2%)       200Mi (5%)     3d21h
  knative-monitoring  kibana-logging-7cb6b64bff-dh8nx                            100m (5%)     1 (50%)       0 (0%)           0 (0%)         3d20h
  knative-monitoring  kube-state-metrics-56f68467c9-vr5cx                        223m (11%)    243m (12%)    176Mi (4%)       216Mi (5%)     3d21h
  knative-monitoring  node-exporter-7jw59                                        110m (5%)     220m (11%)    50Mi (1%)        90Mi (2%)      3d22h
  knative-monitoring  prometheus-system-0                                        0 (0%)        0 (0%)        400Mi (10%)      1000Mi (25%)   3d20h
  knative-serving     activator-6cfb97bccf-bfc4w                                 120m (6%)     2200m (110%)  188Mi (4%)       1624Mi (41%)   6h45m
  knative-serving     autoscaler-85749b6c48-4wf6z                                130m (6%)     2300m (114%)  168Mi (4%)       1424Mi (36%)   6h45m
  knative-serving     controller-b49d69f4d-7j27s                                 100m (5%)     1 (50%)       100Mi (2%)       1000Mi (25%)   6h45m
  knative-serving     networking-certmanager-5b5d8f5dd8-qjh5q                    100m (5%)     1 (50%)       100Mi (2%)       1000Mi (25%)   6h45m
  knative-serving     networking-istio-7977b9bbdd-vrpl5                          100m (5%)     1 (50%)       100Mi (2%)       1000Mi (25%)   6h45m
  kube-system         canal-qbn67                                                250m (12%)    0 (0%)        0 (0%)           0 (0%)         10d
  kube-system         kube-proxy-phbf5                                           0 (0%)        0 (0%)        0 (0%)           0 (0%)         10d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1693m (84%)   14363m (718%)
  memory             3838Mi (98%)  11902Mi (305%)
  ephemeral-storage  0 (0%)        0 (0%)
Events:              <none>

The problem was that swap was turned on on the worker nodes, so the kubelet crashed and exited. This was evident in the syslog file under /var, so until it was fixed the taint kept being re-added. Perhaps someone can comment on the implications of letting the kubelet run with swap on:

kubelet[29207]: F0616 06:25:05.597536   29207 server.go:265] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename#011#011#011#011Type#011#011Size#011Used#011Priority /dev/xvda5                              partition#0114191228#0110#011-1]
Jun 16 06:25:05 k8s-node2 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jun 16 06:25:05 k8s-node2 systemd[1]: kubelet.service: Unit entered failed state.
Jun 16 06:25:05 k8s-node2 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 16 06:25:15 k8s-node2 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jun 16 06:25:15 k8s-node2 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jun 16 06:25:15 k8s-node2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
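Given that log line, the fix on each worker is to disable swap and restart the kubelet (standard commands; the sed assumes the swap entry lives in /etc/fstab, as is typical on Ubuntu 16.04):

```shell
# Turn swap off immediately...
sudo swapoff -a
# ...and keep it off across reboots by commenting out the swap line in fstab.
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
# Restart the kubelet; once it posts node status again, the node controller
# removes the unreachable taint on its own.
sudo systemctl restart kubelet
```

As for the log's alternative of setting `--fail-swap-on=false`: the kubelet's resource accounting and eviction logic assume memory is not swappable, so running with swap on makes its behavior unpredictable, which is why it was unsupported at the time.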

Please add the output of kubectl describe node for both worker nodes. — OK, one moment. — I see "Kubelet stopped posting node status." Checking the syslog on the worker nodes, I see the kubelet exited because swap was turned on. Probably not opti