Kubernetes kube-dns pod crashes after upgrading the host OS to Ubuntu 18

I am trying to upgrade my kube cluster from Ubuntu 16 to Ubuntu 18. After the upgrade, the kube-dns pod keeps crashing. The problem only occurs on U18; if I roll back to U16, everything works fine.

Kube version: "v1.10.11"

kube-dns pod events:

Events:
  Type     Reason                 Age                From                                   Message
  ----     ------                 ----               ----                                   -------
  Normal   Scheduled              28m                default-scheduler                      Successfully assigned kube-dns-75966d58fb-pqxz4 to 
  Normal   SuccessfulMountVolume  28m                kubelet,   MountVolume.SetUp succeeded for volume "kube-dns-config"
  Normal   SuccessfulMountVolume  28m                kubelet,   MountVolume.SetUp succeeded for volume "kube-dns-token-h4q66"
  Normal   Pulling                28m                kubelet,   pulling image "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10"
  Normal   Pulled                 28m                kubelet,   Successfully pulled image "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.10"
  Normal   Started                28m                kubelet,   Started container
  Normal   Created                28m                kubelet,   Created container
  Normal   Pulling                28m                kubelet,   pulling image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10"
  Normal   Pulling                28m                kubelet,   pulling image "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10"
  Normal   Pulled                 28m                kubelet,   Successfully pulled image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10"
  Normal   Created                28m                kubelet,   Created container
  Normal   Pulled                 28m                kubelet,   Successfully pulled image "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.10"
  Normal   Started                28m                kubelet,   Started container
  Normal   Created                25m (x2 over 28m)  kubelet,   Created container
  Normal   Started                25m (x2 over 28m)  kubelet,   Started container
  Normal   Killing                25m                kubelet,   Killing container with id docker://dnsmasq:Container failed liveness probe.. Container will be killed and recreated.
  Normal   Pulled                 25m                kubelet,   Container image "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.10" already present on machine
  Warning  Unhealthy              4m (x26 over 27m)  kubelet,   Liveness probe failed: HTTP probe failed with statuscode: 503
kube-dns sidecar container logs:

kubectl logs kube-dns-75966d58fb-pqxz4 -n kube-system -c sidecar
I0809 16:31:26.768964       1 main.go:51] Version v1.14.8.3
I0809 16:31:26.769049       1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
I0809 16:31:26.769079       1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
I0809 16:31:26.769117       1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
W0809 16:31:33.770594       1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:49305->127.0.0.1:53: i/o timeout
W0809 16:31:40.771166       1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:49655->127.0.0.1:53: i/o timeout
W0809 16:31:47.771773       1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:53322->127.0.0.1:53: i/o timeout
W0809 16:31:54.772386       1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:58999->127.0.0.1:53: i/o timeout
W0809 16:32:01.772972       1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:35034->127.0.0.1:53: i/o timeout
W0809 16:32:08.773540       1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:33250->127.0.0.1:53: i/o timeout
kube-dns dnsmasq container logs:

kubectl logs kube-dns-75966d58fb-pqxz4 -n kube-system -c dnsmasq
I0809 16:29:51.596517       1 main.go:74] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --dns-forward-max=150 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/in6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0809 16:29:51.596679       1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --dns-forward-max=150 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/in6.arpa/127.0.0.1#10053]
I0809 16:29:52.135179       1 nanny.go:119]
W0809 16:29:52.135211       1 nanny.go:120] Got EOF from stdout
I0809 16:29:52.135277       1 nanny.go:116] dnsmasq[20]: started, version 2.78 cachesize 1000
I0809 16:29:52.135293       1 nanny.go:116] dnsmasq[20]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0809 16:29:52.135303       1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in6.arpa
I0809 16:29:52.135314       1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0809 16:29:52.135323       1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0809 16:29:52.135329       1 nanny.go:116] dnsmasq[20]: reading /etc/resolv.conf
I0809 16:29:52.135334       1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in6.arpa
I0809 16:29:52.135343       1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0809 16:29:52.135348       1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0809 16:29:52.135353       1 nanny.go:116] dnsmasq[20]: using nameserver 127.0.0.53#53
I0809 16:29:52.135397       1 nanny.go:116] dnsmasq[20]: read /etc/hosts - 7 addresses
I0809 16:31:28.728897       1 nanny.go:116] dnsmasq[20]: Maximum number of concurrent DNS queries reached (max: 150)
I0809 16:31:38.746899       1 nanny.go:116] dnsmasq[20]: Maximum number of concurrent DNS queries reached (max: 150)
I have deleted the existing pods, but the newly created ones hit the same error after a while. I don't know why this only happens on Ubuntu 18. Is there a way to fix it?

Ubuntu 18 runs a local DNS stub (systemd-resolved) listening on 127.0.0.53. You can check the resolv.conf file: when /etc/resolv.conf is mapped into CoreDNS, the stub becomes the upstream DNS server, queries loop back to localhost, and the loop-detection plugin fails. You can take a look at the CoreDNS loop plugin documentation.
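A quick way to verify this on a node (nothing here is specific to the cluster; it just inspects a stock Ubuntu 18 install):

# Show where /etc/resolv.conf points and which nameserver it declares;
# a systemd-resolved stub setup shows "nameserver 127.0.0.53"
ls -l /etc/resolv.conf
cat /etc/resolv.conf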

In my Ubuntu 18 cluster, I disabled systemd-resolved.
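A minimal sketch of disabling it; the 8.8.8.8 upstream below is only a placeholder, not something from the answer, so substitute a resolver your nodes can actually reach:

# Stop the systemd-resolved stub and keep it from starting at boot
sudo systemctl disable --now systemd-resolved

# /etc/resolv.conf is normally a symlink to the stub file; replace it
# with a static file naming a reachable upstream resolver
sudo rm /etc/resolv.conf
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf  # placeholder upstream; use your own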

In my case, I found that on Ubuntu 18 resolv.conf points to:
/etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
which contains the entry
nameserver 127.0.0.53
Meanwhile, under /run/systemd/resolve you should have another resolv.conf:

/run/systemd/resolve$ ll
total 8
drwxr-xr-x  2 systemd-resolve systemd-resolve  80 Aug  12 13:24 ./
drwxr-xr-x 23 root            root            520 Aug  12 11:54 ../
-rw-r--r--  1 systemd-resolve systemd-resolve 607 Aug  12 13:24 resolv.conf
-rw-r--r--  1 systemd-resolve systemd-resolve 735 Aug  12 13:24 stub-resolv.conf
In my case, that resolv.conf contains the private-IP nameserver 172.27.0.2.
Just relink /etc/resolv.conf to ../run/systemd/resolve/resolv.conf on all cluster machines and restart the kube-dns pods.
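A sketch of that relink and restart; the k8s-app=kube-dns label selector below is the default label on kube-dns pods and is my assumption, not something stated in the answer:

# On every cluster machine: point /etc/resolv.conf at the real resolver
# list instead of the 127.0.0.53 stub
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# Delete the kube-dns pods so the deployment recreates them with the new file
kubectl -n kube-system delete pod -l k8s-app=kube-dns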

Hi, check the status of the containers inside the pod:
kubectl get pod kube-dns-75966d58fb-pqxz4 -n kube-system -o yaml
It may be caused by systemd-resolved. See my answer.
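If you only need each container's state and restart count rather than the full YAML, a jsonpath variant of the same command works; this is just a convenience, using the pod name from the question:

# Print one "name <tab> restartCount" line per container in the pod
kubectl -n kube-system get pod kube-dns-75966d58fb-pqxz4 \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\n"}{end}'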