Kubernetes: health check in GCloud resets after I change it from HTTP to TCP in GKE

I'm working on a Kubernetes cluster where I direct traffic from a GCloud Ingress to my services. One of the service endpoints fails its health check when the check is HTTP, but passes it as TCP.

When I change the health check option inside GCloud to TCP, the check passes and my endpoint works, but a few minutes later the health check on GCloud resets that port back to HTTP, the check fails again, and my endpoint gives me a 502 response.

I don't know whether this is a bug inside Google Cloud or something I'm doing wrong in Kubernetes. I've pasted my YAML configuration here:

Namespace

apiVersion: v1
kind: Namespace
metadata:
  name: parity
  labels:
    name: parity

Service

apiVersion: v1
kind: Service
metadata:
  labels:
    app: parity
  name: parity
  namespace: parity
  annotations:
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
  selector:
    app: parity
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 80
  - name: rpc-endpoint
    port: 8545
    protocol: TCP
    targetPort: 8545
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  type: LoadBalancer
storageclass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: classic-ssd
  namespace: parity
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zones: us-central1-a
reclaimPolicy: Retain

Ingress

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
    name: ingress-parity
    namespace: parity
    annotations:
        #nginx.ingress.kubernetes.io/rewrite-target: /
        kubernetes.io/ingress.global-static-ip-name: cluster-1
spec:
    tls:
    - secretName: tls-classic
      hosts:
      - www.redacted.com
    rules:
    - host: www.redacted.com
      http:
        paths:
        - path: /
          backend:
            serviceName: web
            servicePort: 8080
        - path: /rpc
          backend:
            serviceName: parity 
            servicePort: 8545
Secret

apiVersion: v1
kind: Secret
metadata:
    name: tls-secret 
    namespace: ingress-nginx 
data:
    tls.crt: ./config/redacted.crt
    tls.key: ./config/redacted.key

StatefulSet

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: parity
  namespace: parity
  labels:
    app: parity
spec:
  replicas: 3 
  selector:
    matchLabels:
      app: parity
  serviceName: parity
  template:
    metadata:
      name: parity
      labels:
        app: parity
    spec:
      containers:
        - name: parity
          image: "etccoop/parity:latest"
          imagePullPolicy: Always
          args:
          - "--chain=classic"
          - "--jsonrpc-port=8545"
          - "--jsonrpc-interface=0.0.0.0"
          - "--jsonrpc-apis=web3,eth,net"
          - "--jsonrpc-hosts=all"
          ports:
            - containerPort: 8545
              protocol: TCP
              name: rpc-port
            - containerPort: 443
              protocol: TCP
              name: https
          readinessProbe:
            tcpSocket:
              port: 8545
            initialDelaySeconds: 650
          livenessProbe:
            tcpSocket:
              port: 8545
            initialDelaySeconds: 650
          volumeMounts:
            - name: parity-config
              mountPath: /parity-config
              readOnly: true
            - name: parity-data
              mountPath: /parity-data
      volumes:
      - name: parity-config
        secret:
          secretName: parity-config
  volumeClaimTemplates:
    - metadata:
        name: parity-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "classic-ssd"
        resources:
          requests:
            storage: 50Gi
Issue

I've redacted the hostnames and so on, but this is my basic configuration. I've also run the hello-app container from the following documentation for debugging:

That is what the Ingress path / points at: port 8080 of the hello-app (web) service. That part works fine and is not the problem; I only mention it to clarify the endpoint referenced above.
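
For completeness, that web backend was presumably stood up along the lines of the GKE hello-app sample; something like the commands below would match the serviceName: web / servicePort: 8080 backend in the Ingress above (the image and the web name are assumptions, not confirmed by the question):

kubectl run web --image=gcr.io/google-samples/hello-app:1.0 --port=8080
kubectl expose deployment web --target-port=8080 --type=NodePort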

So the issue here is: after creating my cluster on Google Cloud with GKE and my Ingress LoadBalancer (the cluster-1 global-static-ip-name in the Ingress file above), and then applying the Kubernetes configuration shown above, the health check for the /rpc endpoint fails on Google Cloud when I go to Google Compute Engine -> Health checks -> the specific health check for the /rpc endpoint.
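
Those controller-created checks can also be inspected from the CLI instead of the console; depending on the GKE version they appear under health-checks or the legacy http-health-checks, and the resource name below is only a placeholder for the k8s-be-<nodePort>--<hash> names the controller generates:

gcloud compute health-checks list
gcloud compute health-checks describe k8s-be-30545--0123456789abcdef
gcloud compute http-health-checks list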

When I edit that health check to use the TCP protocol instead of HTTP, it passes for the /rpc endpoint, and I can then curl the endpoint and get the correct response back.
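
A typical way to exercise that endpoint from the outside is a plain JSON-RPC call against the Ingress host; the call below is only an illustration, using net_version since net is one of the APIs enabled by --jsonrpc-apis=web3,eth,net above:

curl -s -X POST https://www.redacted.com/rpc \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"net_version","params":[],"id":1}'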

The problem is that a few minutes later the same health check reverts to the HTTP protocol, even though I edited it to TCP; the check then fails and I get a 502 response when I curl the endpoint again.

I'm not sure whether there is a way to attach a Google Cloud health check configuration to my Kubernetes Ingress before the Ingress is created. I also don't know why the check gets reset, or whether this is a bug on Google Cloud or something I'm doing wrong in Kubernetes. If you look at my StatefulSet deployment, you'll notice I've specified a livenessProbe and a readinessProbe that check port 8545 over TCP.
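
One plausible reading, not something confirmed in the question: the GKE ingress controller owns the health checks it creates and periodically reconciles them back to what it derives from the Pod spec, and it only copies a readinessProbe into the GCE check when that probe is an HTTP GET on the serving port; a tcpSocket probe is ignored and the check falls back to HTTP GET on /. Under that assumption, a probe shaped roughly like this (the /healthz path is hypothetical and the container would have to actually serve it) is the kind the controller would pick up:

          readinessProbe:
            httpGet:
              path: /healthz      # the ingress controller copies this path into the GCE health check
              port: 8545          # must be the port the Service/Ingress routes traffic to
            initialDelaySeconds: 650
            periodSeconds: 10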

The 650-second delay is because of an issue in the ticket linked here, which was resolved by increasing the delay to over 600 seconds (to avoid the race condition mentioned there):


I really don't know why the Google Cloud health check resets back to HTTP after I set it to TCP. Any help would be appreciated.

I found a solution: I added a new container to the StatefulSet that serves a /healthz endpoint for health checking, and configured the Ingress health check to hit that endpoint as an HTTP-type check on the Kubernetes-assigned port 8080. That made it work.
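
A minimal sketch of that fix, assuming a small sidecar image that answers /healthz with 200 on port 8080 (the defaultbackend image and the healthz container name here are placeholders for whatever was actually used), appended to the StatefulSet's containers list, with a matching port added to the parity Service so the Ingress can route to it:

        - name: healthz
          image: gcr.io/google-containers/defaultbackend:1.4   # placeholder: any HTTP server returning 200 on /healthz
          ports:
            - containerPort: 8080
              name: healthz
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5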

It's still not clear why the reset happens when the check is set to TCP.