Kubernetes: no internet connection inside K8s containers

I installed a clean K8s cluster on virtual machines (Debian 10). After the installation and the integration into my environment, I checked the connectivity inside my test Alpine image. The connection for outgoing traffic does not work, and there is no information in the CoreDNS logs. As a workaround I overwrote /etc/resolv.conf in the build image and replaced the DNS entry (e.g. set 1.1.1.1 as nameserver). After that quick "hack" the connection to the internet worked perfectly. But the workaround is not a long-term solution, and I want to use the official way. In the CoreDNS documentation for K8s I found the forward section and understood that flag as an option to forward queries to a predefined local resolver. I think the forwarding to the local resolv.conf, and with it the resolution process, is not working correctly. Can somebody help me with this issue?
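(For reference: the documented per-pod counterpart of that resolv.conf hack is the dnsPolicy/dnsConfig part of the pod spec. A minimal sketch, assuming an Alpine test pod; the pod name dns-test and the image tag are placeholders, and this only works around the cluster DNS rather than fixing it:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  containers:
  - name: alpine
    image: alpine:3.12
    command: ["sleep", "3600"]
  # dnsPolicy "None" tells the kubelet to ignore the cluster DNS and
  # use only what dnsConfig specifies below.
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.1.1.1
EOF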

Basic setup:

  • K8s version: 1.19.0
  • K8s setup: 1 master node + 2 worker nodes
  • Based on: Debian 10 virtual machines
  • CNI: Flannel
Status of the CoreDNS pods:

kube-system            coredns-xxxx 1/1     Running   1          26h
kube-system            coredns-yyyy 1/1     Running   1          26h
CoreDNS logs:

.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
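(These lines can be tailed from all CoreDNS replicas at once through the label a default kubeadm install puts on them; the label name is an assumption based on such an install:)

kubectl logs -n kube-system -l k8s-app=kube-dns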
CoreDNS configuration:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: ""
  name: coredns
  namespace: kube-system
  resourceVersion: "219"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
  uid: xxx
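(If forwarding to the node's /etc/resolv.conf is the suspect, the forward line above can be pointed at explicit upstreams instead. A sketch only, with 1.1.1.1 and 8.8.8.8 standing in for whatever resolvers are reachable from the nodes:)

# Open the Corefile for editing ...
kubectl -n kube-system edit configmap coredns

# ... and replace
#     forward . /etc/resolv.conf
# with explicit upstream resolvers:
#     forward . 1.1.1.1 8.8.8.8
# The reload plugin listed in the Corefile should pick the change up on
# its own; a rollout restart applies it immediately:
kubectl -n kube-system rollout restart deployment coredns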
Output from the test Alpine image:

/ # nslookup -debug google.de
;; connection timed out; no servers could be reached
Output of the pod's resolv.conf:

/ # cat /etc/resolv.conf 
nameserver 10.96.0.10
search development.svc.cluster.local svc.cluster.local cluster.local invalid
options ndots:5
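(The nameserver 10.96.0.10 should be the ClusterIP of the kube-dns Service, and that Service should list the CoreDNS pods as endpoints. A quick sanity check, assuming the default kubeadm Service name kube-dns:)

kubectl get svc -n kube-system kube-dns
kubectl get endpoints -n kube-system kube-dns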
Output of the host's resolv.conf:

cat /etc/resolv.conf 
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 213.136.95.11
nameserver 213.136.95.10
search invalid
Output of the host's /run/flannel/subnet.env:

cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
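(Cross-node pod traffic only flows if the interfaces on each node match this file. A sketch of per-node checks; cni0 and flannel.1 are Flannel's default interface names:)

# cni0 should carry an address from FLANNEL_SUBNET (here 10.244.0.1/24)
ip -4 addr show cni0

# flannel.1 is the VXLAN device; its MTU should match FLANNEL_MTU (1450)
ip link show flannel.1

# one route per remote node into its 10.244.x.0/24 subnet is expected
ip route | grep 10.244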
Output of kubectl get pods -n kube-system -o wide:

coredns-54694b8f47-4sm4t                 1/1     Running   0          14d   10.244.1.48    xxx3-node-1   <none>           <none>
coredns-54694b8f47-6c7zh                 1/1     Running   0          14d   10.244.0.43    xxx2-master   <none>           <none>
coredns-54694b8f47-lcthf                 1/1     Running   0          14d   10.244.2.88    xxx4-node-2   <none>           <none>
etcd-xxx2-master                      1/1     Running   7          27d   xxx.xx.xx.xxx   xxx2-master   <none>           <none>
kube-apiserver-xxx2-master            1/1     Running   7          27d   xxx.xx.xx.xxx   xxx2-master   <none>           <none>
kube-controller-manager-xxx2-master   1/1     Running   7          27d   xxx.xx.xx.xxx   xxx2-master   <none>           <none>
kube-flannel-ds-amd64-4w8zl              1/1     Running   8          28d   xxx.xx.xx.xxx   xxx2-master   <none>           <none>
kube-flannel-ds-amd64-w7m44              1/1     Running   7          28d   xxx.xx.xx.xxx   xxx3-node-1   <none>           <none>
kube-flannel-ds-amd64-xztqm              1/1     Running   6          28d   xxx.xx.xx.xxx   xxx4-node-2   <none>           <none>
kube-proxy-dfs85                         1/1     Running   4          28d   xxx.xx.xx.xxx   xxx4-node-2   <none>           <none>
kube-proxy-m4hl2                         1/1     Running   4          28d   xxx.xx.xx.xxx   xxx3-node-1   <none>           <none>
kube-proxy-s7p4s                         1/1     Running   8          28d   xxx.xx.xx.xxx   xxx2-master   <none>           <none>
kube-scheduler-xxx2-master            1/1     Running   7          27d   xxx.xx.xx.xxx   xxx2-master   <none>           <none>

Answer:

Problem:

The (two) CoreDNS pods were deployed on the master node only. You can check the placement with this command:

kubectl get pods -n kube-system -o wide | grep coredns
Solution:

I was able to solve the problem by scaling up the CoreDNS pods and editing the deployment configuration. The following commands have to be executed:

  • kubectl edit deployment coredns -n kube-system
  • Set the replicas value to the number of nodes, e.g. 3
  • kubectl patch deployment coredns -n kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"force-update/updated-at\":\"$(date +%s)\"}}}}}" (an equivalent sketch with stock subcommands follows this list)
  • kubectl get pods -n kube-system -o wide | grep coredns
  • Source
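(The same result can be sketched with stock kubectl subcommands; kubectl rollout restart exists since kubectl 1.15, so no hand-crafted patch annotation is needed:)

# one CoreDNS replica per node; this setup has 3 nodes
kubectl scale deployment coredns -n kube-system --replicas=3

# rolling restart instead of patching a dummy annotation
kubectl rollout restart deployment coredns -n kube-system

# verify that the replicas are now spread across the nodes
kubectl get pods -n kube-system -o wide | grep coredns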

Hint:

If you still have problems with your CoreDNS and your DNS resolution only works sporadically, take a look at this issue.
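(For that kind of intermittent failure, a minimal debugging sketch along the lines of the upstream "Debugging DNS Resolution" guide, whose test image is used below, is to run a throwaway client pod and query the cluster DNS directly:)

kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 \
  --restart=Never -- sleep infinity
# resolution via the cluster DNS configured in the pod
kubectl exec -it dnsutils -- nslookup kubernetes.default
# resolution directly against the kube-dns ClusterIP
kubectl exec -it dnsutils -- nslookup google.de 10.96.0.10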


Comments:

  • What are the contents of resolv.conf inside the pod? And what is the status of the coredns pods?
  • Hi @AbhiGadroo, I added the status and the contents of resolv.conf to the main post. As far as I can tell, I have not hacked anything in the default configuration so far. Everything seems fine there.
  • Can you hardcode 8.8.8.8 into /etc/resolv.conf on the host and then restart the machine?
  • @AbhiGadroo Sure, but I don't see the difference from the current configuration of my host's resolv.conf. I use my provider's DNS servers, you use Google's DNS servers. I have added the host's resolv.conf to the main post.
  • @SoftwareEngineer You are absolutely right! I have adapted it right away.