Docker Kubernetes NodePort connection refused

I have a cluster with 3 nodes in a VirtualBox environment. I created the cluster with the flag

kubeadm init --pod-network-cidr=10.244.0.0/16
Then I installed flannel and joined the remaining two nodes to the cluster. After that, I created a new VM to host a private registry for the Docker images.
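For reference, a minimal sketch of those follow-up steps; the flannel manifest URL and the join token/hash are placeholder assumptions, not taken from the original post:

# flannel's default network matches --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# on each worker node (token and hash are placeholders):
kubeadm join 192.168.2.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Next, I created the deployment for my application using this .yaml: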

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gunicorn
spec:
  selector:
    matchLabels:
      app: gunicorn
  replicas: 1
  template:
    metadata:
      labels:
        app: gunicorn
    spec:
      imagePullSecrets:
      - name: my-registry-key
      containers:
      - name: ipcheck2
        image: 192.168.2.4:8083/ipcheck2:1
        imagePullPolicy: Always
        command:
        - sleep
        - "infinity"
        ports:
        - containerPort: 8080
          hostPort: 8080
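A note on this manifest: in Kubernetes, a container's command field overrides the image's ENTRYPOINT, and when command is set the image's CMD is ignored entirely. With sleep infinity here, the gunicorn CMD baked into the image (shown in the Dockerfile below) never runs, which turns out to matter later. A corrected container spec would simply drop the override, e.g.:

      containers:
      - name: ipcheck2
        image: 192.168.2.4:8083/ipcheck2:1
        imagePullPolicy: Always
        # no command: override, so the image's CMD (gunicorn) starts
        ports:
        - containerPort: 8080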
The image was built from the following Dockerfile and pushed to the repo:

FROM python:3

EXPOSE 8080

ADD /IP_check/ /

WORKDIR /

RUN pip install pip --upgrade

RUN pip install -r requirements.txt

CMD ["gunicorn", "IP_check.wsgi", "-b :8080"]
At this point I can tell you that if I run the container directly with the Docker engine, publishing this port, I am able to connect to the application.
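For example, something along these lines (the exact flags are an assumption):

docker run -d -p 8080:8080 192.168.2.4:8083/ipcheck2:1
curl http://localhost:8080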

Next, I created a NodePort service for my application:

apiVersion: v1
kind: Service
metadata:
  name: ipcheck
spec:
  selector:
    app: gunicorn
  ports:
  - port: 70
    targetPort: 8080
    nodePort: 30000
  type: NodePort
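To spell out the three ports: port 70 is the port on the service's ClusterIP, targetPort 8080 is the container port it forwards to, and nodePort 30000 is opened on every node. So the expected ways to reach the application would be:

# from inside the cluster, via the ClusterIP (IP taken from the service description further down)
curl http://10.111.7.129:70

# from inside or outside the cluster, via any node IP
curl http://192.168.2.3:30000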
And here is the problem. With kubectl describe pods I checked which node is running the pod with my application. Then I tried to reach the application with curl 192.168.2.3:30000, but it does not work:

curl: (7) Failed connect to 192.168.2.3:30000; Connection refused
I also installed a hello-world application and exposed it with a NodePort. That does not work either.

Does anyone know why I cannot reach the pod through the NodePort, either from inside or from outside the cluster?
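(A few generic checks for this symptom, sketched here as a checklist rather than taken from the original post:)

# does the service selector actually match a ready pod?
kubectl get svc,endpoints ipcheck

# is kube-proxy running and programming the NodePort rules?
kubectl -n kube-system get pods -l k8s-app=kube-proxy
sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 30000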

OS: CentOS 7

IP addresses:

Node1 192.168.2.1   -   Master
Node2 192.168.2.2   -   Worker
Node3 192.168.2.3   -   Worker
Node4 192.168.2.4   -   Private repo (outside of cluster)
Pod description:

Name:         gunicorn-5f7f485585-wjdnf
Namespace:    default
Priority:     0
Node:         node3/192.168.2.3
Start Time:   Thu, 16 Jul 2020 18:01:54 +0200
Labels:       app=gunicorn
              pod-template-hash=5f7f485585
Annotations:  <none>
Status:       Running
IP:           10.244.1.20
IPs:
  IP:           10.244.1.20
Controlled By:  ReplicaSet/gunicorn-5f7f485585
Containers:
  ipcheck2:
    Container ID:  docker://9aa18f3fff1d13dfc76355dde72554fd3af304435c9b7fc4f7365b4e6ac9059a
    Image:         192.168.2.4:8083/ipcheck2:1
    Image ID:      docker-pullable://192.168.2.4:8083/ipcheck2@sha256:e48469c6d1bec474b32cd04ca5ccbc057da0377dff60acc37e7fa786cbc39526
    Port:          8080/TCP
    Host Port:     8080/TCP
    Command:
      sleep
      infinity
    State:          Running
      Started:      Thu, 16 Jul 2020 18:01:55 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9q77c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-9q77c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9q77c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  40m   default-scheduler  Successfully assigned default/gunicorn-5f7f485585-wjdnf to node3
  Normal  Pulling    40m   kubelet, node3     Pulling image "192.168.2.4:8083/ipcheck2:1"
  Normal  Pulled     40m   kubelet, node3     Successfully pulled image "192.168.2.4:8083/ipcheck2:1"
  Normal  Created    40m   kubelet, node3     Created container ipcheck2
  Normal  Started    40m   kubelet, node3     Started container ipcheck2
Service description:

Name:                     ipcheck
Namespace:                default
Labels:                   <none>
Annotations:              Selector:  app=gunicorn
Type:                     NodePort
IP:                       10.111.7.129
Port:                     <unset>  70/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30000/TCP
Endpoints:                10.244.1.20:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
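Worth noting in the output above: Endpoints is populated (10.244.1.20:8080), so the selector matches and the pod is Ready; the service wiring itself looks fine. A way to test the pod directly, bypassing the service and NodePort entirely, is a port-forward (a sketch using the pod name from the description above):

kubectl port-forward gunicorn-5f7f485585-wjdnf 8080:8080
# in another terminal on the same machine:
curl http://127.0.0.1:8080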
Output of "ip a" on node3:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a4:1d:ff brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 86181sec preferred_lft 86181sec
    inet6 fe80::1272:64b5:b03b:2b75/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:14:7f:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::2704:2b92:cc02:e88/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a1:17:41:be brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 6e:c6:9c:0f:ab:55 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::6cc6:9cff:fe0f:ab55/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:66:88:71:56:6a brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::4866:88ff:fe71:566a/64 scope link
       valid_lft forever preferred_lft forever
7: veth0ded1d29@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 22:c2:6b:c7:cc:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::20c2:6bff:fec7:cc7a/64 scope link
       valid_lft forever preferred_lft forever
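Since this is CentOS 7, firewalld can also silently drop NodePort traffic (the comments below touch on this). A quick way to rule it out:

# either open the port...
sudo firewall-cmd --permanent --add-port=30000/tcp
sudo firewall-cmd --reload

# ...or temporarily stop firewalld entirely while testing
sudo systemctl stop firewalld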

Comments:

Commenter: I would expect you to at least be able to reach it internally via the ClusterIP.

OP: That does not work for me either. I tried it from all the nodes and it freezes the terminal. I have stopped firewalld, and I also tried adding a rule for the port instead, but still nothing.

Commenter: What do you mean by "if I run the container from the Docker engine side, exposing this port, I am able to connect to the application"? It works with docker run? Could you try deploying a ClusterIP service and check whether it works internally?

OP: Yes, when I use docker run I am able to reach the application. I tried deploying the example: I applied the yaml with the nginx deployment, and the pod IPs look fine too. Then I created the service from the given yaml; describe svc looks fine as well. I tried to curl <service-ip>:<port>, but it froze the terminal again and after a while I got a connection timeout.

Commenter: I would say it is a CentOS or VirtualBox networking problem. Look at how others configure CentOS and VirtualBox networking to make this work; maybe you will find something that is blocking you. As far as I can tell, all your pods are running fine?

OP: I just (accidentally) found out why the NodePort was not working, or rather why I could not reach the pods even though the service was fine. I simply forgot to remove the 'sleep' command from the deployment manifest, so the container was running sleep instead of gunicorn: the command field overrides the image's CMD, which means gunicorn was never started. Thank you all very much for the help anyway. (By the way, it is strange that I could not even reach the pod from the example.)
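A minimal sketch of verifying and fixing that root cause (the pod name is taken from the description above, ps is assumed to be available in the image, and the JSON patch is one way to drop the override):

# confirm the container is running sleep instead of gunicorn
kubectl exec gunicorn-5f7f485585-wjdnf -- ps aux

# remove the command override so the image's CMD (gunicorn) starts
kubectl patch deployment gunicorn --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'

# once the new pod is running, the NodePort should respond
curl http://192.168.2.3:30000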
Output of "kubectl get endpoints":

NAME               ENDPOINTS          AGE
ipcheck            10.244.1.21:8080   51m
kubernetes         192.168.2.1:6443   9d