Kubernetes NEG says pods are 'unhealthy', but the pods are actually healthy


I am trying to set up gRPC load balancing and Ingress on GCP, and for this I referred to an example. The example shows gRPC load balancing working in two ways (one with an Envoy sidecar, and the other with an HTTP mux that handles gRPC/HTTP health checks on the same pod). However, the Envoy proxy example does not work.
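For reference, the HTTP-mux variant mentioned above serves the gRPC service and the HTTP health check from a single TLS port, so one endpoint satisfies both the load balancer probes and gRPC clients. A minimal Go sketch of that pattern (illustrative names, not the example's actual code; it assumes tls.crt/tls.key are present and that the echo service gets registered where the comment indicates):

package main

import (
	"net/http"
	"strings"

	"google.golang.org/grpc"
)

// mixedHandler routes HTTP/2 gRPC traffic to the gRPC server and
// everything else (e.g. the /_ah/health probe) to the plain HTTP mux.
func mixedHandler(grpcServer *grpc.Server, httpMux *http.ServeMux) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.ProtoMajor == 2 && strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
			grpcServer.ServeHTTP(w, r)
			return
		}
		httpMux.ServeHTTP(w, r)
	})
}

func main() {
	grpcServer := grpc.NewServer()
	// pb.RegisterEchoServerServer(grpcServer, &echoServer{}) // register the echo service here

	httpMux := http.NewServeMux()
	httpMux.HandleFunc("/_ah/health", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok")) // plain HTTP 200 for load balancer / kubelet probes
	})

	// TLS is required so the connection negotiates HTTP/2 end to end.
	if err := http.ListenAndServeTLS(":8080", "tls.crt", "tls.key", mixedHandler(grpcServer, httpMux)); err != nil {
		panic(err)
	}
}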

What confuses me is that the pods are running and healthy (confirmed by kubectl describe and kubectl logs):

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
fe-deployment-757ffcbd57-4w446   2/2     Running   0          4m22s
fe-deployment-757ffcbd57-xrrm9   2/2     Running   0          4m22s
$ kubectl describe pod fe-deployment-757ffcbd57-4w446
Name:               fe-deployment-757ffcbd57-4w446
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc/10.128.0.64
Start Time:         Thu, 26 Sep 2019 16:15:18 +0900
Labels:             app=fe
                    pod-template-hash=757ffcbd57
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container fe-envoy; cpu request for container fe-container
Status:             Running
IP:                 10.56.1.29
Controlled By:      ReplicaSet/fe-deployment-757ffcbd57
Containers:
  fe-envoy:
    Container ID:  docker://b4789909494f7eeb8d3af66cb59168e009c582d412d8ca683a7f435559989421
    Image:         envoyproxy/envoy:latest
    Image ID:      docker-pullable://envoyproxy/envoy@sha256:9ef9c4fd6189fdb903929dc5aa0492a51d6783777de65e567382ac7d9a28106b
    Port:          8080/TCP
    Host Port:     0/TCP
    Command:
      /usr/local/bin/envoy
    Args:
      -c
      /data/config/envoy.yaml
    State:          Running
      Started:      Thu, 26 Sep 2019 16:15:19 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     http-get https://:fe/_ah/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:fe/_ah/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /data/certs from certs-volume (rw)
      /data/config from envoy-config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c7nqc (ro)
  fe-container:
    Container ID:  docker://a533224d3ea8b5e4d5e268a616d73762b37df69f434342459f35caa8fac32dab
    Image:         salrashid123/grpc_only_backend
    Image ID:      docker-pullable://salrashid123/grpc_only_backend@sha256:ebfac594116445dd67aff7c9e7a619d73222b60947e46ef65ee6d918db3e1f4b
    Port:          50051/TCP
    Host Port:     0/TCP
    Command:
      /grpc_server
    Args:
      --grpcport
      :50051
      --insecure
    State:          Running
      Started:      Thu, 26 Sep 2019 16:15:20 +0900
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c7nqc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  certs-volume:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  fe-secret
    Optional:    false
  envoy-config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      envoy-configmap
    Optional:  false
  default-token-c7nqc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-c7nqc
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                                                          Message
  ----     ------     ----                   ----                                                          -------
  Normal   Scheduled  4m25s                  default-scheduler                                             Successfully assigned default/fe-deployment-757ffcbd57-4w446 to gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc
  Normal   Pulled     4m25s                  kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  Container image "envoyproxy/envoy:latest" already present on machine
  Normal   Created    4m24s                  kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  Created container
  Normal   Started    4m24s                  kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  Started container
  Normal   Pulling    4m24s                  kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  pulling image "salrashid123/grpc_only_backend"
  Normal   Pulled     4m24s                  kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  Successfully pulled image "salrashid123/grpc_only_backend"
  Normal   Created    4m24s                  kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  Created container
  Normal   Started    4m23s                  kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  Started container
  Warning  Unhealthy  4m10s (x2 over 4m20s)  kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy  4m9s (x2 over 4m19s)   kubelet, gke-ingress-grpc-loadbal-default-pool-92d3aed5-l7vc  Liveness probe failed: HTTP probe failed with statuscode: 503
$ kubectl describe pod fe-deployment-757ffcbd57-xrrm9
Name:               fe-deployment-757ffcbd57-xrrm9
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-ingress-grpc-loadbal-default-pool-92d3aed5-52l9/10.128.0.22
Start Time:         Thu, 26 Sep 2019 16:15:18 +0900
Labels:             app=fe
(output truncated)

These are the YAML files I used:

apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-configmap
  labels:
    app: fe
data:
  config: |-
    ---
    admin:
      access_log_path: /dev/null
      address:
        socket_address:
          address: 127.0.0.1
          port_value: 9000
    node:
      cluster: service_greeter
      id: test-id
    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address: { address: 0.0.0.0, port_value: 8080 }
        filter_chains:
        - filters:
          - name: envoy.http_connection_manager
            config:
              stat_prefix: ingress_http
              codec_type: AUTO
              route_config:
                name: local_route
                virtual_hosts:
                - name: local_service
                  domains: ["*"]
                  routes:
                  - match:
                      path: "/echo.EchoServer/SayHello"
                    route: { cluster: local_grpc_endpoint  }
              http_filters:
              - name: envoy.lua
                config:
                  inline_code: |
                    package.path = "/etc/envoy/lua/?.lua;/usr/share/lua/5.1/nginx/?.lua;/etc/envoy/lua/" .. package.path
                    function envoy_on_request(request_handle)

                      if request_handle:headers():get(":path") == "/_ah/health" then
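                        -- Ask Envoy's own admin interface (the "local_admin"
                        -- cluster, i.e. 127.0.0.1:9000) for the status of all
                        -- clusters, including the gRPC backend's health flags.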
                        local headers, body = request_handle:httpCall(
                        "local_admin",
                        {
                          [":method"] = "GET",
                          [":path"] = "/clusters",
                          [":authority"] = "local_admin"
                        },"", 50)


                        str = "local_grpc_endpoint::127.0.0.1:50051::health_flags::healthy"
                        if string.match(body, str) then
                          request_handle:respond({[":status"] = "200"},"ok")
                        else
                          request_handle:logWarn("Envoy healthcheck failed")     
                          request_handle:respond({[":status"] = "503"},"unavailable")
                        end
                      end
                    end              
              - name: envoy.router
                typed_config: {}
          tls_context:
            common_tls_context:
              tls_certificates:
                - certificate_chain:
                    filename: "/data/certs/tls.crt"
                  private_key:
                    filename: "/data/certs/tls.key"
      clusters:
      - name: local_grpc_endpoint
        connect_timeout: 0.05s
        type:  STATIC
        http2_protocol_options: {}
        lb_policy: ROUND_ROBIN
        common_lb_config:
          healthy_panic_threshold:
            value: 50.0   
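        # Envoy actively health-checks the local gRPC server via the standard
        # gRPC health checking protocol (grpc.health.v1.Health); the result
        # sets the health_flags that the Lua filter inspects above.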
        health_checks:
          - timeout: 1s
            interval: 5s
            interval_jitter: 1s
            no_traffic_interval: 5s
            unhealthy_threshold: 1
            healthy_threshold: 3
            grpc_health_check:
              service_name: "echo.EchoServer"
              authority: "server.domain.com"
        hosts:
        - socket_address:
            address: 127.0.0.1
            port_value: 50051
      - name: local_admin
        connect_timeout: 0.05s
        type:  STATIC
        lb_policy: ROUND_ROBIN
        hosts:
        - socket_address:
            address: 127.0.0.1
            port_value: 9000
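
To spell out what the Lua filter above does: on /_ah/health it calls Envoy's admin interface (the local_admin cluster, 127.0.0.1:9000) for /clusters, and responds 200 only if the local_grpc_endpoint entry carries the healthy flag, 503 otherwise. The same check can be reproduced from outside the pod while debugging. A small Go sketch, assuming the admin port has been forwarded to localhost first (e.g. kubectl port-forward <pod> 9000:9000); the program itself is hypothetical, not part of the original setup:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

func main() {
	// Query Envoy's admin /clusters endpoint, exactly as the Lua filter does in-pod.
	resp, err := http.Get("http://127.0.0.1:9000/clusters")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The same marker string the Lua filter matches on.
	marker := "local_grpc_endpoint::127.0.0.1:50051::health_flags::healthy"
	fmt.Println("gRPC backend healthy according to Envoy:", strings.Contains(string(body), marker))
}
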
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fe-deployment
  labels:
    app: fe
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: fe
    spec:
      containers:

      - name: fe-envoy
        image: envoyproxy/envoy:latest
        imagePullPolicy: IfNotPresent
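        # Both probes hit the Envoy listener (named port "fe", i.e. 8080) over
        # HTTPS; the Lua filter answers /_ah/health with 200 or 503 depending
        # on the gRPC backend's health.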
        livenessProbe:
          httpGet:
            path: /_ah/health
            scheme: HTTPS
            port: fe
        readinessProbe:
          httpGet:
            path: /_ah/health
            scheme: HTTPS
            port: fe
        ports:
        - name: fe
          containerPort: 8080
          protocol: TCP               
        command: ["/usr/local/bin/envoy"]
        args: ["-c", "/data/config/envoy.yaml"]
        volumeMounts:
        - name: certs-volume
          mountPath: /data/certs
        - name: envoy-config-volume
          mountPath: /data/config

      - name: fe-container
        image: salrashid123/grpc_only_backend  # Runs a gRPC secure/insecure server on the port given by --grpcport (:50051); port 50051 is also exposed in the Dockerfile.
        imagePullPolicy: Always         
        ports:
        - containerPort: 50051
          protocol: TCP                 
        command: ["/grpc_server"]
        args: ["--grpcport", ":50051", "--insecure"]

      volumes:
        - name: certs-volume
          secret:
            secretName: fe-secret
        - name: envoy-config-volume
          configMap:
             name: envoy-configmap
             items:
              - key: config
                path: envoy.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: fe-srv-ingress
  labels:
    type: fe-srv
  annotations:
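    # Serve this named port to the Google load balancer over HTTP/2 (required for gRPC).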
    service.alpha.kubernetes.io/app-protocols: '{"fe":"HTTP2"}'
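    # Container-native load balancing: GCP creates Network Endpoint Groups (NEGs)
    # and health-checks pod IPs directly instead of routing through node ports.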
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort 
  ports:
  - name: fe
    port: 8080
    protocol: TCP
    targetPort: 8080       
  selector:
    app: fe
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fe-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - hosts:
    - server.domain.com
    secretName: fe-secret
  rules:
  - host: server.domain.com  
    http:
      paths:
      - path: /echo.EchoServer/*
        backend:
          serviceName: fe-srv-ingress
          servicePort: 8080