Missing cni0 interface in Kubernetes
The cni0 interface is missing entirely. Any guidance on how to get it back without tearing down the cluster would be greatly appreciated. Basically, internal container networking cannot recover from this. I noticed that the coredns pods are getting IPs from the docker0 interface instead of cni0, so if I can get cni0 back, everything should start working again. The command outputs are below; let me know if you need anything else.

Master:
ip ro
default via 10.123.0.1 dev ens160 proto static metric 100
10.123.0.0/19 dev ens160 proto kernel scope link src 10.123.24.103 metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
172.17.77.0/24 dev docker0 proto kernel scope link src 172.17.77.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
Worker node:

ip ro
default via 10.123.0.1 dev ens160 proto static metric 100
10.123.0.0/19 dev ens160 proto kernel scope link src 10.123.24.105 metric 100
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
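Before recreating anything, it may help to confirm which subnet flannel leased to each node; flannel writes it to /run/flannel/subnet.env, and the CNI bridge is expected to take the first address of that subnet. A quick check (a sketch; the path is flannel's default drop file):

```shell
# Show the subnet flannel leased to this node.
cat /run/flannel/subnet.env
# Given the master's route table above, something like:
#   FLANNEL_NETWORK=10.244.0.0/16
#   FLANNEL_SUBNET=10.244.0.1/24
#   FLANNEL_MTU=1450
#   FLANNEL_IPMASQ=true
```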
ifconfig -a
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:27ff:fe72:a287 prefixlen 64 scopeid 0x20<link>
ether 02:42:27:72:a2:87 txqueuelen 0 (Ethernet)
RX packets 3218 bytes 272206 (265.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 286 bytes 199673 (194.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system coredns-99b9bb8bd-j77zx 1/1 Running 1 20m 172.17.0.2 abc-sjkubenode02
kube-system coredns-99b9bb8bd-sjnhs 1/1 Running 1 20m 172.17.0.3 abc-xxxxxxxxxxxx02
kube-system elasticsearch-logging-0 1/1 Running 6 2d 172.17.0.2 abc-xxxxxxxxxxxx02
kube-system etcd-abc-xxxxxxxxxxxx01 1/1 Running 3 26d 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system fluentd-es-v2.0.3-6flxh 1/1 Running 5 2d 172.17.0.4 abc-xxxxxxxxxxxx02
kube-system fluentd-es-v2.0.3-7qdxl 1/1 Running 19 131d 172.17.0.2 abc-sjkubenode01
kube-system fluentd-es-v2.0.3-l5thl 1/1 Running 6 2d 172.17.0.3 abc-sjkubenode02
kube-system heapster-66bf5bd78f-twwd2 1/1 Running 4 2d 172.17.0.4 abc-sjkubenode01
kube-system kibana-logging-8b9699f9c-nrcpb 1/1 Running 3 2d 172.17.0.3 abc-sjkubenode01
kube-system kube-apiserver-abc-xxxxxxxxxxxx01 1/1 Running 2 2h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kube-controller-manager-abc-xxxxxxxxxxxx01 1/1 Running 3 2h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kube-flannel-ds-5lmmd 1/1 Running 3 3h 10.123.24.106 abc-sjkubenode02
kube-system kube-flannel-ds-92gd9 1/1 Running 2 3h 10.123.24.104 abc-xxxxxxxxxxxx02
kube-system kube-flannel-ds-nnxv6 1/1 Running 3 3h 10.123.24.105 abc-sjkubenode01
kube-system kube-flannel-ds-ns9ls 1/1 Running 2 3h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kube-proxy-7h54h 1/1 Running 3 3h 10.123.24.105 abc-sjkubenode01
kube-system kube-proxy-7hrln 1/1 Running 2 3h 10.123.24.104 abc-xxxxxxxxxxxx02
kube-system kube-proxy-s4rt7 1/1 Running 3 3h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kube-proxy-swmrc 1/1 Running 2 3h 10.123.24.106 abc-sjkubenode02
kube-system kube-scheduler-abc-xxxxxxxxxxxx01 1/1 Running 2 2h 10.123.24.103 abc-xxxxxxxxxxxx01
kube-system kubernetes-dashboard-58c479587f-bkqgf 1/1 Running 30 116d 10.244.0.56 abc-xxxxxxxxxxxx01
kube-system monitoring-influxdb-54bd58b4c9-4phxl 1/1 Running 3 2d 172.17.0.5 abc-sjkubenode01
kube-system nginx-ingress-5565bdd5fc-nc962 1/1 Running 2 2d 10.123.24.103 abc-xxxxxxxxxxxx01
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
abc-sjkubemaster01 Ready master 131d v1.11.2 10.123.24.103 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce
abc-sjkubemaster02 Ready <none> 131d v1.11.2 10.123.24.104 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce
abc-sjkubenode01 Ready <none> 131d v1.11.2 10.123.24.105 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce
abc-sjkubenode02 Ready <none> 131d v1.11.2 10.123.24.106 <none> CentOS Linux 7 (Core) 3.10.0-862.2.3.el7.x86_64 docker://17.12.1-ce
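One commonly suggested approach (a sketch, not from the original post; the bridge name and the 10.244.0.1/24 address are taken from the master's route table above, and `app=flannel` is the label the stock kube-flannel manifest uses) is to recreate the bridge by hand, then restart kubelet and the flannel pod on that node so new pod sandboxes attach to cni0 instead of docker0:

```shell
# Recreate the CNI bridge with the address the route table says it had.
sudo ip link add cni0 type bridge
sudo ip addr add 10.244.0.1/24 dev cni0
sudo ip link set cni0 up

# Restart kubelet and the flannel DaemonSet pod on this node so newly
# created pods are wired to cni0 rather than falling back to docker0.
sudo systemctl restart kubelet
kubectl -n kube-system delete pod -l app=flannel
```

Existing pods that already got 172.17.x.x addresses would still need to be deleted so they are recreated on the flannel network.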
EDIT:
One more thing I'd like to add: how do I delete the coredns pods and recreate them? I don't have a YAML file for them; they were created when I installed the Kubernetes cluster with kubeadm.
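No YAML is needed for this: coredns is managed by a Deployment, so deleting the pods makes the controller recreate them immediately. A sketch (assuming the `k8s-app=kube-dns` label, which kubeadm applies to coredns by default):

```shell
# The Deployment recreates these pods automatically after deletion.
kubectl -n kube-system delete pod -l k8s-app=kube-dns

# Watch the replacements come up; once cni0 is back they should get
# 10.244.x.x addresses instead of docker0's 172.17.x.x range.
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
```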
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:3fff:fe60:fea9 prefixlen 64 scopeid 0x20<link>
ether 02:42:3f:60:fe:a9 txqueuelen 0 (Ethernet)
RX packets 123051 bytes 8715267 (8.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 88559 bytes 33067497 (31.5 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.123.24.106 netmask 255.255.224.0 broadcast 10.123.31.255
inet6 fd0f:f1c3:ba53:6c01:5de2:b5af:362e:a9b2 prefixlen 64 scopeid 0x0<global>
inet6 fe80::ee61:b84b:bf18:93f2 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:91:75:d2 txqueuelen 1000 (Ethernet)
RX packets 1580516 bytes 534188729 (509.4 MiB)
RX errors 0 dropped 114794 overruns 0 frame 0
TX packets 303093 bytes 28327667 (27.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::4c0e:7dff:fe4b:12f2 prefixlen 64 scopeid 0x20<link>
ether 4e:0e:7d:4b:12:f2 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 40 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 5864 (5.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 5864 (5.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:fc:5b:de txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0-nic: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 52:54:00:fc:5b:de txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0