Kubernetes DNS no longer resolves names
I have a cluster of 6 servers: 3 masters and 3 workers. Everything worked fine until this morning, when I removed two workers from the cluster. Now internal DNS no longer works: I cannot resolve internal names. google.com apparently still resolves, and I can ping it. My cluster runs Kubernetes v1.18.2 (with Calico for networking) and was installed with Kubespray. I can reach my services from outside, but they fail when connecting to each other (e.g. when the UI tries to connect to the database). Below I provide some output of the commands listed here:
kubectl exec -ti busybox-6899b748d7-pbdk4 -- cat /etc/resolv.conf
nameserver 10.233.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ovh.net
options ndots:5
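A note on the `options ndots:5` line above: a queried name with fewer than five dots (such as `kubernetes.default`) is first tried with each suffix from the `search` list appended, and only then as an absolute name. The helper below is purely illustrative (not a real resolver); it just builds the candidate list a resolver would try, using the search domains from this resolv.conf:

```shell
# Illustrative sketch: the lookup candidates implied by "options ndots:5"
# and the search list "default.svc.cluster.local svc.cluster.local cluster.local ovh.net".
candidates() {
  name=$1
  # Count the dots in the queried name.
  dots=$(printf '%s' "$name" | tr -cd '.' | wc -c)
  if [ "$dots" -lt 5 ]; then
    # Fewer dots than ndots: each search suffix is tried first.
    for suffix in default.svc.cluster.local svc.cluster.local cluster.local ovh.net; do
      echo "$name.$suffix"
    done
  fi
  # Finally the name is tried as-is.
  echo "$name"
}

candidates kubernetes.default
```

So `kubernetes.default` should normally be answered on the second candidate, `kubernetes.default.svc.cluster.local`, which is why the NXDOMAIN below points at the cluster DNS itself rather than at the query.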
kubectl exec -ti busybox-6899b748d7-pbdk4 -- nslookup kubernetes.default
Server: 10.233.0.10
Address: 10.233.0.10:53
** server can't find kubernetes.default: NXDOMAIN
*** Can't find kubernetes.default: No answer
command terminated with exit code 1
kubectl exec -ti busybox-6899b748d7-pbdk4 -- nslookup google.com
Server:    10.233.0.10
Address:   10.233.0.10:53
Non-authoritative answer:
Name:      google.com
Address:   172.217.22.142
*** Can't find google.com: No answer
kubectl get svc --namespace=kube-system
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 7d4h
dashboard-metrics-scraper ClusterIP 10.233.52.242 <none> 8000/TCP 7d4h
kubernetes-dashboard ClusterIP 10.233.63.42 <none> 443/TCP 7d4h
voyager-operator ClusterIP 10.233.31.206 <none> 443/TCP,56791/TCP 6d5h
The coredns endpoints:
NAME      ENDPOINTS                                        AGE
coredns 10.233.68.9:53,10.233.79.7:53,10.233.68.9:9153 + 3 more... 7d4h
kubectl get pods --namespace=kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5d9cfb4bfd-8h7jd 1/1 Running 0 3d14h
calico-node-6w8g6 1/1 Running 13 4d15h
calico-node-78thq 1/1 Running 6 7d19h
calico-node-cr4jl 1/1 Running 23 4d16h
calico-node-g5q99 1/1 Running 1 3d15h
calico-node-pmss2 1/1 Running 0 3d15h
calico-node-zw9fk 1/1 Running 18 4d19h
coredns-74b594f4c6-5k6kq 1/1 Running 2 6d22h
coredns-74b594f4c6-9ct8x 1/1 Running 0 15h
dns-autoscaler-7594b8c675-j5jfv 1/1 Running 0 15h
kube-apiserver-kub1 1/1 Running 42 7d20h
kube-apiserver-kub2 1/1 Running 1 7d19h
kube-apiserver-kub3 1/1 Running 33 7d19h
kube-controller-manager-kub1 1/1 Running 37 7d20h
kube-controller-manager-kub2 1/1 Running 4 3d15h
kube-controller-manager-kub3 1/1 Running 55 7d19h
kube-proxy-4dlf8 1/1 Running 4 4d15h
kube-proxy-4nlhf 1/1 Running 2 4d15h
kube-proxy-82kkz 1/1 Running 3 4d15h
kube-proxy-lvsfz 1/1 Running 0 3d15h
kube-proxy-pmhnx 1/1 Running 4 4d15h
kube-proxy-wpfnn 1/1 Running 10 4d15h
kube-scheduler-kub1 1/1 Running 34 7d20h
kube-scheduler-kub2 1/1 Running 3 7d19h
kube-scheduler-kub3 1/1 Running 51 7d19h
kubernetes-dashboard-7dbcd59666-79gxv 1/1 Running 0 3d14h
kubernetes-metrics-scraper-6858b8c44d-g9m9w 1/1 Running 1 5d22h
nginx-proxy-galaxy 1/1 Running 2 4d15h
nginx-proxy-kub4 1/1 Running 7 4d19h
nginx-proxy-kub5 1/1 Running 6 4d16h
nodelocaldns-2dv59 1/1 Running 0 3d15h
nodelocaldns-9skxm 1/1 Running 5 4d16h
nodelocaldns-dwg4z 1/1 Running 4 4d15h
nodelocaldns-nmwwz 1/1 Running 12 7d19h
nodelocaldns-qkq8n 1/1 Running 4 4d19h
nodelocaldns-v84jj 1/1 Running 8 7d19h
voyager-operator-5677998d47-psskf 1/1 Running 10 4d15h
kubectl exec -ti busybox-6899b748d7-pbdk4 -- ping google.com
PING google.com (172.217.22.142): 56 data bytes
64 bytes from 172.217.22.142: seq=0 ttl=52 time=4.409 ms
64 bytes from 172.217.22.142: seq=1 ttl=52 time=4.359 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 4.359/4.384/4.409 ms
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-74b594f4c6-5k6kq 1/1 Running 2 6d7h
coredns-74b594f4c6-9ct8x 1/1 Running 0 16m
When I fetch the logs of the DNS pods:
for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
they are full of:
E0522 11:56:22.613704 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.233.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: net/http: TLS handshake timeout
E0522 11:56:33.678487 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Service: Get https://10.233.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1667490&timeout=8m12s&timeoutSeconds=492&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
E0522 12:19:42.356157 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Namespace: Get https://10.233.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1667490&timeout=6m39s&timeoutSeconds=399&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
E0522 12:19:42.356327 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to watch *v1.Service: Get https://10.233.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1667490&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 10.233.0.1:443: connect: connection refused
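These errors all say the same thing: the CoreDNS pods cannot reach the API server through the `kubernetes` service VIP (10.233.0.1:443), which is exactly the kind of plumbing (kube-proxy rules, Calico routes) that can go stale after nodes are removed. A hypothetical little helper to bucket such log lines by failure mode (the patterns are taken from the excerpts above; it is a triage aid, not a diagnostic tool):

```shell
# Hypothetical triage helper: bucket CoreDNS error lines by failure mode.
classify() {
  case "$1" in
    *"TLS handshake timeout"*) echo "apiserver reachable but slow or overloaded" ;;
    *"connection refused"*)    echo "nothing listening behind the service VIP" ;;
    *"i/o timeout"*)           echo "packets to the VIP are being dropped" ;;
    *)                         echo "unclassified" ;;
  esac
}

classify "Get https://10.233.0.1:443/api/v1/endpoints: net/http: TLS handshake timeout"
classify "dial tcp 10.233.0.1:443: connect: connection refused"
```

The "connection refused" lines are the telling ones: the VIP forwarded the packet somewhere, but no API server answered there, suggesting stale endpoint/proxy state rather than a dead CoreDNS.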
The coredns service is up:
kubectl get svc --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 7d4h
dashboard-metrics-scraper ClusterIP 10.233.52.242 <none> 8000/TCP 7d4h
kubernetes-dashboard ClusterIP 10.233.63.42 <none> 443/TCP 7d4h
voyager-operator ClusterIP 10.233.31.206 <none> 443/TCP,56791/TCP 6d5h
What did I break, and how can I fix it?
EDIT:
More information was requested in the comments:
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5d9cfb4bfd-8h7jd 1/1 Running 0 3d14h
calico-node-6w8g6 1/1 Running 13 4d15h
calico-node-78thq 1/1 Running 6 7d19h
calico-node-cr4jl 1/1 Running 23 4d16h
calico-node-g5q99 1/1 Running 1 3d15h
calico-node-pmss2 1/1 Running 0 3d15h
calico-node-zw9fk 1/1 Running 18 4d19h
coredns-74b594f4c6-5k6kq 1/1 Running 2 6d22h
coredns-74b594f4c6-9ct8x 1/1 Running 0 15h
dns-autoscaler-7594b8c675-j5jfv 1/1 Running 0 15h
kube-apiserver-kub1 1/1 Running 42 7d20h
kube-apiserver-kub2 1/1 Running 1 7d19h
kube-apiserver-kub3 1/1 Running 33 7d19h
kube-controller-manager-kub1 1/1 Running 37 7d20h
kube-controller-manager-kub2 1/1 Running 4 3d15h
kube-controller-manager-kub3 1/1 Running 55 7d19h
kube-proxy-4dlf8 1/1 Running 4 4d15h
kube-proxy-4nlhf 1/1 Running 2 4d15h
kube-proxy-82kkz 1/1 Running 3 4d15h
kube-proxy-lvsfz 1/1 Running 0 3d15h
kube-proxy-pmhnx 1/1 Running 4 4d15h
kube-proxy-wpfnn 1/1 Running 10 4d15h
kube-scheduler-kub1 1/1 Running 34 7d20h
kube-scheduler-kub2 1/1 Running 3 7d19h
kube-scheduler-kub3 1/1 Running 51 7d19h
kubernetes-dashboard-7dbcd59666-79gxv 1/1 Running 0 3d14h
kubernetes-metrics-scraper-6858b8c44d-g9m9w 1/1 Running 1 5d22h
nginx-proxy-galaxy 1/1 Running 2 4d15h
nginx-proxy-kub4 1/1 Running 7 4d19h
nginx-proxy-kub5 1/1 Running 6 4d16h
nodelocaldns-2dv59 1/1 Running 0 3d15h
nodelocaldns-9skxm 1/1 Running 5 4d16h
nodelocaldns-dwg4z 1/1 Running 4 4d15h
nodelocaldns-nmwwz 1/1 Running 12 7d19h
nodelocaldns-qkq8n 1/1 Running 4 4d19h
nodelocaldns-v84jj 1/1 Running 8 7d19h
voyager-operator-5677998d47-psskf 1/1 Running 10 4d15h
I was able to reproduce this scenario:
$ kubectl exec -it busybox -n dev -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10:53
** server can't find kubernetes.default: NXDOMAIN
*** Can't find kubernetes.default: No answer
command terminated with exit code 1
$ kubectl exec -it busybox -n dev -- nslookup google.com
Server: 10.96.0.10
Address: 10.96.0.10:53
Non-authoritative answer:
Name: google.com
Address: 172.217.168.238
*** Can't find google.com: No answer
$ kubectl exec -it busybox -n dev -- ping google.com
PING google.com (172.217.168.238): 56 data bytes
64 bytes from 172.217.168.238: seq=0 ttl=52 time=18.425 ms
64 bytes from 172.217.168.238: seq=1 ttl=52 time=27.176 ms
64 bytes from 172.217.168.238: seq=2 ttl=52 time=18.603 ms
64 bytes from 172.217.168.238: seq=3 ttl=52 time=15.445 ms
64 bytes from 172.217.168.238: seq=4 ttl=52 time=16.492 ms
64 bytes from 172.217.168.238: seq=5 ttl=52 time=19.294 ms
^C
--- google.com ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 15.445/19.239/27.176 ms
But when I followed the same steps using the dnsutils image, which is mentioned in the Kubernetes documentation, it gave a positive response:
$ kubectl exec -ti dnsutils -n dev -- nslookup kubernetes.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
$ kubectl exec -ti dnsutils -n dev -- nslookup google.com
Server: 10.96.0.10
Address: 10.96.0.10#53
Non-authoritative answer:
Name: google.com
Address: 172.217.168.238
Name: google.com
Address: 2a00:1450:400e:80c::200e
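For reference, a dnsutils pod like the one used above can be created from the manifest in the Kubernetes DNS-debugging guide. A minimal sketch (the image path below is the one documented around the v1.18 era and may have since moved registries):

```shell
# Sketch: throwaway dnsutils pod for DNS debugging, per the Kubernetes
# "Debugging DNS Resolution" guide (image tag is an assumption for v1.18).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
    command: ["sleep", "3600"]
  restartPolicy: Always
EOF

# Then test resolution from it:
kubectl exec -ti dnsutils -- nslookup kubernetes.default
```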
As far as I can tell, the nslookup in the busybox container is the problem; that is why we get this DNS resolution error.

From the comments:
"Share the output of kubectl get pods -n kube-system. Does it work if you delete the pod and recreate it? The pod's /etc/resolv.conf file does not have the correct coredns service IP."
"How did you remove the nodes? Did you check the kubelet service logs? Kubelet is supposed to write the kube-dns ClusterIP into the
/etc/resolv.conf
file of every pod it creates, but here the wrong IP is configured."
"@ArghyaSadhu what is the correct IP? I added the pods in the kube-system namespace to the question. If I delete and recreate the pod, it does not work."
"@paltaa ansible-playbook -i inventory/../hosts.yaml -b -v remove-node.yml -e limit=kub6,kub7"
"I will mark this as the solution because, indeed, the busybox container kept causing problems even after I could reach services across containers again. But it was not the only culprit: the pod's /etc/resolv.conf had no nameserver IP configured either. I think everything sorted itself out after I deleted all the DNS-related pods and services in the cluster. I am still not sure, though."