
Kubernetes: unable to ping pod IPs on other nodes

Tags: kubernetes, nodes, project-calico, bare-metal-server

A pod IP only responds to ping from the node the pod is running on.

When I try to ping a pod IP from any other node/worker, the ping fails.

master2@master2:~$ kubectl get pods --namespace=kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-6ff8cbb789-lxwqq   1/1     Running   0          6d21h   192.168.180.2     master2   <none>           <none>
calico-node-4mnfk                          1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
calico-node-c4rjb                          1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
calico-node-dgqwx                          1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
calico-node-fhtvz                          1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
calico-node-mhd7w                          1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
coredns-8b5d5b85f-fjq72                    1/1     Running   0          45m     192.168.135.11    node3     <none>           <none>
coredns-8b5d5b85f-hgg94                    1/1     Running   0          45m     192.168.166.136   node1     <none>           <none>
etcd-master1                               1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
etcd-master2                               1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-apiserver-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-apiserver-master2                     1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-controller-manager-master1            1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-controller-manager-master2            1/1     Running   2          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-66nxz                           1/1     Running   0          6d21h   10.10.41.159      master2   <none>           <none>
kube-proxy-fnrrz                           1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-proxy-lq5xp                           1/1     Running   0          6d21h   10.10.41.161      node2     <none>           <none>
kube-proxy-vxhwm                           1/1     Running   0          4d21h   10.10.41.155      node1     <none>           <none>
kube-proxy-zgwzq                           1/1     Running   0          4d20h   10.10.41.165      node3     <none>           <none>
kube-scheduler-master1                     1/1     Running   0          4d20h   10.10.41.153      master1   <none>           <none>
kube-scheduler-master2                     1/1     Running   1          6d21h   10.10.41.159      master2   <none>           <none>
When I try to ping the pod on node2 with IP 192.168.104.8 from node3, it fails with 100% packet loss.

master1@master1:~/cluster$ sudo kubectl get pods  -o wide
NAME                         READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES

contentms-cb475f569-t54c2    1/1     Running   0          6d21h   192.168.104.1    node2   <none>           <none>
nav-6f67d5bd79-9khmm         1/1     Running   0          6d8h    192.168.104.8    node2   <none>           <none>
react                        1/1     Running   0          7m24s   192.168.135.12   node3   <none>           <none>
statistics-5668cd7dd-thqdf   1/1     Running   0          6d15h   192.168.104.4    node2   <none>           <none>
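For reference, the failing check looks roughly like this (a sketch; the prompts and user names are illustrative, and the target is the nav pod IP listed above):

# From node2, where the pod runs, the ping succeeds:
user@node2:~$ ping -c 3 192.168.104.8

# From node3 (or any other node), the same ping reports 100% packet loss:
user@node3:~$ ping -c 3 192.168.104.8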

It turned out to be a routing issue.

I was using two IPs on each node, one on eth0 and one on eth1.

In the routing table it was using the eth1 IP instead of the eth0 IP.

I disabled the eth1 IP and everything worked.
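An alternative to disabling eth1 is to tell Calico explicitly which interface to use for node addresses. A minimal sketch, assuming the standard calico-node DaemonSet and that the 10.10.41.x addresses shown above live on eth0:

# Pin Calico's node IP autodetection to eth0 so node-to-node routes are built over that interface
kubectl -n kube-system set env daemonset/calico-node IP_AUTODETECTION_METHOD=interface=eth0

# After the calico-node pods restart, confirm that the routes to remote pod CIDRs
# (Calico's BGP routes are typically tagged "proto bird") now use the eth0 addresses
ip route | grep bird

This keeps the second interface usable for other traffic while ensuring the pod-network routes go via eth0.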

Is it just one specific pod from node2 to node3, or all pods from node2 to node3? Please share the output of kubectl describe ds calico-node -n kube-system. Can you also share the output of ip route on node3?

I just did a quick check on my cluster and I am able to ping pods across nodes, using Calico.

Hi, what infrastructure are you using? Are you on GCP?
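The diagnostics requested in the comments can be gathered roughly like this (run the second command on node3 itself):

# DaemonSet configuration, including the IP autodetection environment variables
kubectl describe ds calico-node -n kube-system

# Routing table on node3: shows which interface the routes to other nodes' pod CIDRs use
ip route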