Networking: Prometheus pod unable to call apiserver endpoints


I am trying to set up a monitoring stack (prometheus + alertmanager + node_exporter etc.) via helm install stable/prometheus on a raspberry pi k8s cluster I set up (1 master + 3 worker nodes).
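
For reference, the install was along these lines (a sketch in Helm 3 syntax; the release name pi-monitoring is inferred from the pod names below, and the repo URL is the archive location of the deprecated stable charts):

# Add the (now archived) stable chart repo and install the chart
helm repo add stable https://charts.helm.sh/stable
helm repo update
# Release name "pi-monitoring" matches the pod name prefix below
helm install pi-monitoring stable/prometheus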

Managed to get all the required pods running:

pi-monitoring-prometheus-alertmanager-767cd8bc65-89hxt   2/2     Running            0          131m    10.17.2.56      kube2   <none>           <none>
pi-monitoring-prometheus-node-exporter-h86gt             1/1     Running            0          131m    192.168.1.212   kube2   <none>           <none>
pi-monitoring-prometheus-node-exporter-kg957             1/1     Running            0          131m    192.168.1.211   kube1   <none>           <none>
pi-monitoring-prometheus-node-exporter-x9wgb             1/1     Running            0          131m    192.168.1.213   kube3   <none>           <none>
pi-monitoring-prometheus-pushgateway-799d4ff9d6-rdpkf    1/1     Running            0          131m    10.17.3.36      kube1   <none>           <none>
pi-monitoring-prometheus-server-5d989754b6-gp69j         2/2     Running            0          98m     10.17.1.60      kube3   <none>           <none>
Question: why is the Prometheus pod unable to call the apiserver endpoints? I'm not sure where the configuration went wrong.

Followed up and realised that individual nodes are unable to resolve services hosted on other nodes.
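
Note that both CoreDNS pods run on kube4 (see the kube-system listing below), so a pod on kube1-3 has to cross nodes even just to resolve DNS. A quick way to reproduce the symptom (a sketch; the busybox image and the pi-monitoring-prometheus-server service name are assumptions based on the pods above):

# One-shot lookup from a throwaway pod; this hangs/times out whenever the
# pod lands on a node that cannot reach CoreDNS on the master
kubectl run dnscheck --image=busybox --restart=Never --rm -it -- \
  nslookup pi-monitoring-prometheus-server.default.svc.cluster.local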

I have been reading up on various sources over the past day, but honestly I don't even know where to begin.

These are the pods running in the kube-system namespace. Hope this gives a better picture of how my system is set up:

pi@kube4:~ $ kubectl get pods -n kube-system -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
coredns-66bff467f8-nzvq8        1/1     Running   0          13d   10.17.0.2       kube4   <none>           <none>
coredns-66bff467f8-z7wdb        1/1     Running   0          13d   10.17.0.3       kube4   <none>           <none>
etcd-kube4                      1/1     Running   0          13d   192.168.1.214   kube4   <none>           <none>
kube-apiserver-kube4            1/1     Running   2          13d   192.168.1.214   kube4   <none>           <none>
kube-controller-manager-kube4   1/1     Running   2          13d   192.168.1.214   kube4   <none>           <none>
kube-flannel-ds-arm-8g9fb       1/1     Running   1          13d   192.168.1.212   kube2   <none>           <none>
kube-flannel-ds-arm-c5qt9       1/1     Running   0          13d   192.168.1.214   kube4   <none>           <none>
kube-flannel-ds-arm-q5pln       1/1     Running   1          13d   192.168.1.211   kube1   <none>           <none>
kube-flannel-ds-arm-tkmn6       1/1     Running   1          13d   192.168.1.213   kube3   <none>           <none>
kube-proxy-4zjjh                1/1     Running   0          13d   192.168.1.213   kube3   <none>           <none>
kube-proxy-6mk2z                1/1     Running   0          13d   192.168.1.211   kube1   <none>           <none>
kube-proxy-bbr8v                1/1     Running   0          13d   192.168.1.212   kube2   <none>           <none>
kube-proxy-wfsbm                1/1     Running   0          13d   192.168.1.214   kube4   <none>           <none>
kube-scheduler-kube4            1/1     Running   3          13d   192.168.1.214   kube4   <none>           <none>

I suspect there is a networking issue that prevents you from reaching the API server. "dial tcp 10.18.0.1:443: i/o timeout" generally reflects that you are not able to connect to or read from the server. You can use the following steps to troubleshoot:

1. Deploy a busybox pod using kubectl run busybox --image=busybox -n kube-system
2. Get into the pod using kubectl exec -n kube-system -it busybox sh
3. Try to telnet from the tty, e.g. telnet 10.18.0.1 443, to figure out the connection issue (see the consolidated session below)
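
Put together, a minimal session might look like this (a sketch; 10.18.0.1 is taken from the error above, and --restart=Never -- sleep 3600 is only there to keep the busybox pod alive for exec):

# Run a busybox pod that stays up (a bare busybox container exits immediately)
kubectl run busybox --image=busybox -n kube-system --restart=Never -- sleep 3600

# Open a shell inside it
kubectl exec -n kube-system -it busybox -- sh

# Inside the pod: test TCP connectivity to the kubernetes service VIP...
telnet 10.18.0.1 443
# ...and check that cluster DNS resolves the apiserver's service name
nslookup kubernetes.default.svc.cluster.local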


Let me know the output.


The flannel docs point out:

Note: If kubeadm is used, then pass --pod-network-cidr=10.244.0.0/16 to kubeadm init to ensure that the podCIDR is set.

This is because, by default, the flannel ConfigMap is configured to work on "Network": "10.244.0.0/16".
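
A quick way to confirm the mismatch is to compare the per-node podCIDR allocations against what flannel's net-conf.json says (a sketch; kube-flannel-cfg is the ConfigMap name used by the upstream manifest):

# Pod CIDRs the control plane actually allocated to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'

# The network flannel was told to use
kubectl -n kube-system get cm kube-flannel-cfg -o jsonpath='{.data.net-conf\.json}'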

You have configured kubeadm with --pod-network-cidr=10.17.0.0/16, so this now needs to be configured in the flannel ConfigMap kube-flannel-cfg as well, like this:

kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.17.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
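
To roll the change out, something like the following should work (a sketch; it assumes the flannel pods carry the app=flannel label from the upstream manifest and re-read net-conf.json when restarted):

# Apply the updated ConfigMap (saved locally as kube-flannel-cfg.yaml)
kubectl apply -f kube-flannel-cfg.yaml

# Restart the flannel DaemonSet pods so they pick up the new network config
kubectl -n kube-system delete pod -l app=flannel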
Thanks to him for the debugging help.

For reference, here is the iptables FORWARD chain from the node:
Chain FORWARD (policy DROP)
target     prot opt source               destination
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
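
Worth noting in this dump: the FORWARD chain's default policy is DROP (Docker sets this on startup), so any pod-to-pod traffic not matched by the KUBE-FORWARD/ACCEPT rules above is silently dropped. As a temporary diagnostic only (not a fix), forwarding can be opened up to rule iptables out:

# Temporarily allow all forwarded traffic; revert after testing
sudo iptables -P FORWARD ACCEPT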