Kubernetes DNS - letting a pod reach itself through its Service's DNS name


Pods in a Kubernetes cluster can be reached by sending a network request to the DNS name of the Service they belong to. The request must be sent to
[service].[namespace].svc.cluster.local
and is load-balanced across all members of that Service.

This works for one pod reaching another, but when a pod tries to reach itself through the Service it belongs to, it fails: the request always times out.
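The naming convention above can be captured in a small helper (a sketch; the service, namespace, and port values are the ones used in this question):

```shell
# Construct the in-cluster DNS URL for a Service from its name,
# namespace, and port:
service=message-service
namespace=message
port=9000
url="http://${service}.${namespace}.svc.cluster.local:${port}"
echo "$url"
```

This prints the URL that the curl commands below send requests to.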

Is this a bug in Kubernetes (minikube v0.35.0 in my case), or does it require some additional configuration?


Here is some debugging information:

Let's start by contacting the service from a different pod. This works fine:

daemon@auth-796d88df99-twj2t:/opt/docker$ curl -v -X POST -H "Accept: application/json" --data '{}' http://message-service.message.svc.cluster.local:9000/message/get-messages
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 10.107.209.9...
* TCP_NODELAY set
* Connected to message-service.message.svc.cluster.local (10.107.209.9) port 9000 (#0)
> POST /message/get-messages HTTP/1.1
> Host: message-service.message.svc.cluster.local:9000
> User-Agent: curl/7.52.1
> Accept: application/json
> Content-Length: 2
> Content-Type: application/x-www-form-urlencoded
> 
* upload completely sent off: 2 out of 2 bytes
< HTTP/1.1 401 Unauthorized
< Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Content-Security-Policy: default-src 'self'
< X-Permitted-Cross-Domain-Policies: master-only
< Date: Wed, 20 Mar 2019 04:36:51 GMT
< Content-Type: text/plain; charset=UTF-8
< Content-Length: 12
< 
* Curl_http_done: called premature == 0
* Connection #0 to host message-service.message.svc.cluster.local left intact
Unauthorized
If I am reading curl's debug log correctly, the DNS name resolves to the IP address 10.107.209.9. The pod is reachable from any other pod via that IP, but the pod cannot use that IP to reach itself.
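One way to confirm this split behaviour (a sketch; it requires kubectl access to a live cluster, the pod name is taken from the output below, and nslookup may not be installed in every image):

```shell
# Resolve the Service name from inside the pod itself:
kubectl exec -n message message-58466bbc45-lch9j -- \
  nslookup message-service.message.svc.cluster.local

# Then try the resolved ClusterIP directly from inside the same pod;
# with the problem described here, this hangs until --max-time expires:
kubectl exec -n message message-58466bbc45-lch9j -- \
  curl -sv --max-time 5 http://10.107.209.9:9000/
```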

Here are the network interfaces of the pod that is trying to reach itself:

daemon@message-58466bbc45-lch9j:/opt/docker$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
296: eth0@if297: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.9/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
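On a docker-bridge setup like minikube's, whether a packet may leave a pod and come straight back in through the same bridge port is controlled by the per-port hairpin flag. A way to inspect it (run inside the minikube VM via minikube ssh; the sysfs layout is the standard one for Linux bridges, and the loop simply prints nothing on a machine without a docker0 bridge):

```shell
# Each port of the docker0 bridge exposes a hairpin_mode flag;
# 0 means hairpinned packets are dropped, 1 means they are allowed.
for port in /sys/class/net/docker0/brif/*; do
  [ -e "$port" ] || continue   # bridge absent on this machine
  printf '%s: %s\n' "${port##*/}" "$(cat "$port/hairpin_mode")"
done
```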
This is a well-known issue. The discussion contains the following workarounds:

1) Enable promiscuous mode on the Docker bridge inside the minikube VM. The bridge otherwise drops hairpin traffic, i.e. packets that leave a pod and immediately come back in through the same bridge port, which is exactly what happens when a pod calls itself through its Service's ClusterIP:

minikube ssh
sudo ip link set docker0 promisc on

2) Use a headless service:

clusterIP: None
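For workaround 2, the Service manifest from the question would change like this (a sketch; with clusterIP: None the Service becomes headless, so its DNS name resolves directly to the pod IPs instead of a kube-proxy-managed virtual IP, avoiding the hairpin path entirely):

```yaml
# Headless variant of the Service defined later in the question
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  clusterIP: None   # headless: DNS returns pod IPs, no virtual ClusterIP
  ports:
    - port: 9000
      protocol: TCP
  selector:
    app: message
```

Note that a headless Service loses the ClusterIP load-balancing; clients get all pod IPs from DNS and pick one themselves.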

Comments:

- Just wondering, why do you want to implement the communication this way? Does plain localhost not work for you?
- I have a client implementation for every API of every microservice. The clients know the namespace and service name and use them to contact the corresponding API via DNS. It is not easy to tell these clients whether the service they are contacting contains their own container, because they are implemented as Scala singleton objects.
- That's strange, this works fine for me (running on a "real" cluster rather than minikube, though). How do you define the service? Is it headless?
- @PoweredByOrange I have attached the deployment .yml files to the question:
apiVersion: v1
kind: Namespace
metadata:
  name: message

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: message
  name: message
  namespace: message
spec:
  replicas: 1
  selector:
    matchLabels:
      app: message
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: message
    spec:
      containers:
        - name: message
          image: message-impl:0.1.0-SNAPSHOT
          imagePullPolicy: Never
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace    
            - name: KAFKA_KUBERNETES_NAMESPACE
              value: kafka
            - name: KAFKA_KUBERNETES_SERVICE
              value: kafka-svc
            - name: CASSANDRA_KUBERNETES_NAMESPACE
              value: cassandra
            - name: CASSANDRA_KUBERNETES_SERVICE
              value: cassandra
            - name: CASSANDRA_KEYSPACE
              value: service_message
---

# Service for discovery
apiVersion: v1
kind: Service
metadata:
  name: message-service
  namespace: message
spec:
  ports:
    - port: 9000
      protocol: TCP
  selector:
    app: message
---

# Expose this service to the api gateway
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: message
  namespace: message
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: api.fload.cf
      http:
        paths:
          - path: /message
            backend:
              serviceName: message-service
              servicePort: 9000