Kubernetes kubeadm - unable to join node - request canceled while waiting for connection
Trying to configure a k8s cluster with kubeadm on 3 Debian 10 VMs. All VMs have two network interfaces: eth0 as the public interface with a static IP, and eth1 as the local interface with a static IP in 192.168.0.0/16:
- Master: 192.168.1.1
- Node 1: 192.168.2.1
- Node 2: 192.168.2.2
All nodes can reach each other. ip a from the master host:
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:52:70:53:d5:12 brd ff:ff:ff:ff:ff:ff
inet XXX.XXX.244.240/24 brd XXX.XXX.244.255 scope global dynamic eth0
valid_lft 257951sec preferred_lft 257951sec
inet6 2a01:367:c1f2::112/48 scope global
valid_lft forever preferred_lft forever
inet6 fe80::252:70ff:fe53:d512/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:95:af:b0:8c:c4 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.1/16 brd 192.168.255.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::295:afff:feb0:8cc4/64 scope link
valid_lft forever preferred_lft forever
But when I join a worker node, it cannot reach the kube API:
kubeadm join 192.168.1.1:6443 --token 7bl0in.s6o5kyqg27utklcl --discovery-token-ca-cert-hash sha256:7829b6c7580c0c0f66aa378c9f7e12433eb2d3b67858dd3900f7174ec99cda0e -v=5
Netstat from the master:
# netstat -tupn | grep :6443
tcp 0 0 192.168.1.1:43332 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:41774 192.168.1.1:6443 ESTABLISHED 5362/kube-proxy
tcp 0 0 192.168.1.1:41744 192.168.1.1:6443 ESTABLISHED 5236/kubelet
tcp 0 0 192.168.1.1:43376 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43398 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:41652 192.168.1.1:6443 ESTABLISHED 4914/kube-scheduler
tcp 0 0 192.168.1.1:43448 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43328 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43452 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43386 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43350 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:41758 192.168.1.1:6443 ESTABLISHED 5182/kube-controlle
tcp 0 0 192.168.1.1:43306 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43354 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43296 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:43408 192.168.1.1:6443 TIME_WAIT -
tcp 0 0 192.168.1.1:41730 192.168.1.1:6443 ESTABLISHED 5182/kube-controlle
tcp 0 0 192.168.1.1:41738 192.168.1.1:6443 ESTABLISHED 4914/kube-scheduler
tcp 0 0 192.168.1.1:43444 192.168.1.1:6443 TIME_WAIT -
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41730 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41744 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41738 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41652 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 ::1:6443 ::1:42862 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41758 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 ::1:42862 ::1:6443 ESTABLISHED 5094/kube-apiserver
tcp6 0 0 192.168.1.1:6443 192.168.1.1:41774 ESTABLISHED 5094/kube-apiserver
Pods on the master:
# kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-558bd4d5db-8qhhl 0/1 Pending 0 12m <none> <none> <none> <none>
coredns-558bd4d5db-9hj7z 0/1 Pending 0 12m <none> <none> <none> <none>
etcd-cloud604486.fastpipe.io 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
kube-apiserver-cloud604486.fastpipe.io 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
kube-controller-manager-cloud604486.fastpipe.io 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
kube-proxy-dzd42 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
kube-scheduler-cloud604486.fastpipe.io 1/1 Running 0 12m 2a01:367:c1f2::112 cloud604486.fastpipe.io <none> <none>
All VMs have the following kernel parameters set:
{name: 'vm.swappiness', value: '0'}
{name: 'net.bridge.bridge-nf-call-iptables', value: '1'}
{name: 'net.bridge.bridge-nf-call-ip6tables', value: '1'}
{name: 'net.ipv4.ip_forward', value: 1}
{name: 'net.ipv6.conf.all.forwarding', value: 1}
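For reference, the parameters in that list map onto a sysctl drop-in file; a minimal sketch (the /tmp path is only for illustration — on a real host the file would go to /etc/sysctl.d/ and be applied with sysctl --system as root):

```shell
# Write the kernel parameters from the list above in sysctl.conf syntax.
# /tmp is used for illustration; on a real host, install the file to
# /etc/sysctl.d/99-kubernetes.conf and apply it with `sysctl --system`.
cat <<'EOF' > /tmp/99-kubernetes.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
EOF
cat /tmp/99-kubernetes.conf
```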
Am I missing something?

The cause of your problem is that the TLS connections between components must be secure. From the kubelet's point of view, the connection is only considered secure if the API server certificate contains a Subject Alternative Name (SAN) for the IP of the server it is connecting to. You can notice that you only added one IP address to the SANs.

How can you solve this? There are two ways:
- pass the --discovery-token-unsafe-skip-ca-verification flag to kubeadm join
- add the IP address of the local NIC to the SANs of the API server certificate
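To see which SANs a certificate actually carries, openssl is enough. A sketch, using a throwaway self-signed certificate with two IP SANs as a stand-in (the /tmp file names are hypothetical; on a real master you would point the second command at /etc/kubernetes/pki/apiserver.crt, and -addext/-ext require OpenSSL 1.1.1+):

```shell
# Generate a short-lived self-signed cert with two IP SANs, mirroring the
# two addresses from the question (illustrative stand-in for the real
# kube-apiserver certificate; requires OpenSSL 1.1.1+).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver-test.key -out /tmp/apiserver-test.crt \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=IP:192.168.1.1,IP:192.168.2.1" 2>/dev/null

# Print the SANs; on a real master, use -in /etc/kubernetes/pki/apiserver.crt
openssl x509 -in /tmp/apiserver-test.crt -noout -ext subjectAltName
```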
For more information, check this directly related PR introduced in Kubernetes 1.19.

After a week of tinkering, the problem came down to a misconfigured service-provider network.
For anyone with the same problem: check your network's MTU. In my case it defaulted to 1500 instead of the recommended 1450.

Comments:
- I appreciate that master1 can connect to itself, but I would expect the debugging steps to be run from the complaining worker. If you include -k or --insecure in the curl for debugging, it lets you make sure you are reaching the Kubernetes apiserver and not something else. This really looks like an ordinary firewall problem unless you provide facts showing otherwise.
- The worker node can connect to the master on any port except 6443. I tried debugging with openssl s_client and it doesn't even connect; I reconfigured the local network to no avail, same symptoms. The kube API is reachable on the public interface, but not on the local one.
- Please try curling the API address from the failing node. Did you try to join the cluster as mdaniel suggested?
- I added verbose openssl and curl connections; both freeze on the SSL handshake. That by itself didn't help. I added the --node-ip flag to the kubelet on the master, but it didn't work. kubeadm init was: kubeadm init --upload-certs --apiserver-advertise-address=192.168.1.1 --apiserver-cert-extra-sans=192.168.1.1,XXX.XXX.244.240 --pod-network-cidr=10.40.0.0/16 and the error (Client.Timeout exceeded while awaiting headers) suggests no contact at all.
- Could you update/edit your question with the exact information/commands executed when initializing the cluster and the errors they produce (preferably with -v=5)? I would be happy to help you further with this, but clarifying the question is the first step forward, since it is essential for giving any advice over the internet.
- I have added the full command output via pastebin.
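The MTU mentioned in the answer is visible directly in the ip a / ip link output; a small sketch of extracting it (the sample line is copied from the question's master output — on a live host you would feed in `ip -o link show eth1` instead of a hard-coded string):

```shell
# The MTU shows up in `ip a` / `ip -o link show` output; extract the number.
# Sample line copied from the question's master; on a live host, replace the
# assignment with: line=$(ip -o link show eth1)
line='3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000'
mtu=$(echo "$line" | sed -n 's/.*mtu \([0-9]*\).*/\1/p')
echo "eth1 mtu: $mtu"
# Lowering it (requires root): ip link set dev eth1 mtu 1450
```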