Kubernetes on GCP - no minimum availability / MinimumReplicasUnavailable error

Tags: kubernetes, deployment, containers

I'm deploying a stateless application workload to a Kubernetes cluster on GCP. It is intended to run a series of batch jobs, so it needs I/O with Google Storage and temporary disk space for computing output.

When the containers deploy, they fail with a MinimumReplicasUnavailable error (last part of the logs below).

I've varied the pods' CPU, disk, and memory sizes and the number of pods, and tried enabling autoscaling - no effect so far. It is not a quota issue.

Am I missing a setting?

Which specific logs or settings should I share to help diagnose the problem?
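If pod-level detail would help, I can also pull it with something like the commands below (assuming the re1 namespace and the app=risk-engine label that appear in the deployment output further down; the exact selector may differ):

$kubectl get pods -n re1 -l app=risk-engine
$kubectl describe pods -n re1 -l app=risk-engine
$kubectl logs -n re1 -l app=risk-engine --all-containers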

$kubectl get events


LAST SEEN   TYPE      REASON                    OBJECT                                                    MESSAGE
30m         Normal    Starting                  node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Starting kubelet.
30m         Normal    NodeHasSufficientMemory   node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx status is now: NodeHasSufficientMemory
30m         Normal    NodeHasNoDiskPressure     node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx status is now: NodeHasNoDiskPressure
30m         Normal    NodeHasSufficientPID      node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx status is now: NodeHasSufficientPID
30m         Normal    NodeAllocatableEnforced   node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Updated Node Allocatable limit across pods
30m         Normal    NodeReady                 node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx status is now: NodeReady
30m         Normal    RegisteredNode            node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx event: Registered Node gke-risk-engine-cluster-default-pool-f8851fa1-0xvx in Controller
30m         Normal    Starting                  node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Starting kube-proxy.
30m         Warning   ContainerdStart           node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Starting containerd container runtime...
30m         Warning   DockerStart               node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Starting Docker Application Container Engine...
30m         Warning   KubeletStart              node/gke-risk-engine-cluster-default-pool-f8851fa1-0xvx   Started Kubernetes kubelet.
30m         Normal    Starting                  node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Starting kubelet.
30m         Normal    NodeHasSufficientMemory   node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 status is now: NodeHasSufficientMemory
30m         Normal    NodeHasNoDiskPressure     node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 status is now: NodeHasNoDiskPressure
30m         Normal    NodeHasSufficientPID      node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 status is now: NodeHasSufficientPID
30m         Normal    NodeAllocatableEnforced   node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Updated Node Allocatable limit across pods
30m         Normal    NodeReady                 node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 status is now: NodeReady
30m         Normal    Starting                  node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Starting kube-proxy.
30m         Normal    RegisteredNode            node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 event: Registered Node gke-risk-engine-cluster-default-pool-f8851fa1-cwm2 in Controller
30m         Warning   ContainerdStart           node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Starting containerd container runtime...
30m         Warning   DockerStart               node/gke-risk-engine-cluster-default-pool-f8851fa1-cwm2   Starting Docker Application Container Engine...

$kubectl describe deployments -A

Name:                   event-exporter-gke
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=event-exporter
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=event-exporter
                    version=v0.3.1
  Annotations:      components.gke.io/component-name: event-exporter
                    components.gke.io/component-version: 1.0.7
  Service Account:  event-exporter-sa
  Containers:
   event-exporter:
    Image:      gke.gcr.io/event-exporter:v0.3.3-gke.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /event-exporter
      -sink-opts=-stackdriver-resource-model=new -endpoint=https://logging.googleapis.com
    Environment:  <none>
    Mounts:       <none>
   prometheus-to-sd-exporter:
    Image:      gke.gcr.io/prometheus-to-sd:v0.10.0-gke.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /monitor
      --stackdriver-prefix=container.googleapis.com/internal/addons
      --api-override=https://monitoring.googleapis.com/
      --source=event_exporter:http://localhost:80?whitelisted=stackdriver_sink_received_entry_count,stackdriver_sink_request_count,stackdriver_sink_successfully_sent_entry_count
      --pod-id=$(POD_NAME)
      --namespace-id=$(POD_NAMESPACE)
      --node-name=$(NODE_NAME)
    Environment:
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
      NODE_NAME:       (v1:spec.nodeName)
    Mounts:           <none>
  Volumes:
   ssl-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   event-exporter-gke-8489df9489 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set event-exporter-gke-8489df9489 to 1
Name:                   fluentd-gke-scaler
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:37 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=fluentd-gke-scaler
Annotations:            components.gke.io/component-name: fluentd-scaler
                        components.gke.io/component-version: 1.0.1
                        deployment.kubernetes.io/revision: 1
Selector:               k8s-app=fluentd-gke-scaler
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=fluentd-gke-scaler
  Annotations:      components.gke.io/component-name: fluentd-scaler
                    components.gke.io/component-version: 1.0.1
  Service Account:  fluentd-gke-scaler
  Containers:
   fluentd-gke-scaler:
    Image:      k8s.gcr.io/fluentd-gcp-scaler:0.5.2
    Port:       <none>
    Host Port:  <none>
    Command:
      /scaler.sh
      --ds-name=fluentd-gke
      --scaling-policy=fluentd-gcp-scaling-policy
    Environment:
      CPU_REQUEST:     100m
      MEMORY_REQUEST:  200Mi
      CPU_LIMIT:       1
      MEMORY_LIMIT:    500Mi
    Mounts:            <none>
  Volumes:             <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   fluentd-gke-scaler-cd4d654d7 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set fluentd-gke-scaler-cd4d654d7 to 1
Name:                   kube-dns
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=kube-dns
                        kubernetes.io/cluster-service=true
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kube-dns
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 10% max surge
Pod Template:
  Labels:           k8s-app=kube-dns
  Annotations:      components.gke.io/component-name: kubedns
                    components.gke.io/component-version: 1.0.3
                    scheduler.alpha.kubernetes.io/critical-pod:
                    seccomp.security.alpha.kubernetes.io/pod: runtime/default
  Service Account:  kube-dns
  Containers:
   kubedns:
    Image:       gke.gcr.io/k8s-dns-kube-dns-amd64:1.15.13
    Ports:       10053/UDP, 10053/TCP, 10055/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    Limits:
      memory:  210Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
   dnsmasq:
    Image:       gke.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.15.13
    Ports:       53/UDP, 53/TCP
    Host Ports:  0/UDP, 0/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --dns-forward-max=1500
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
   sidecar:
    Image:      gke.gcr.io/k8s-dns-sidecar-amd64:1.15.13
    Port:       10054/TCP
    Host Port:  0/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:       <none>
   prometheus-to-sd:
    Image:      gke.gcr.io/prometheus-to-sd:v0.4.2
    Port:       <none>
    Host Port:  <none>
    Command:
      /monitor
      --source=kubedns:http://localhost:10054?whitelisted=probe_kubedns_latency_ms,probe_kubedns_errors,dnsmasq_misses,dnsmasq_hits
      --stackdriver-prefix=container.googleapis.com/internal/addons
      --api-override=https://monitoring.googleapis.com/
      --pod-id=$(POD_NAME)
      --namespace-id=$(POD_NAMESPACE)
      --v=2
    Environment:
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:           <none>
  Volumes:
   kube-dns-config:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               kube-dns
    Optional:           true
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kube-dns-7c976ddbdb (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set kube-dns-7c976ddbdb to 1
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set kube-dns-7c976ddbdb to 2
Name:                   kube-dns-autoscaler
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=kube-dns-autoscaler
                        kubernetes.io/cluster-service=true
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kube-dns-autoscaler
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kube-dns-autoscaler
  Annotations:      seccomp.security.alpha.kubernetes.io/pod: docker/default
  Service Account:  kube-dns-autoscaler
  Containers:
   autoscaler:
    Image:      gke.gcr.io/cluster-proportional-autoscaler-amd64:1.7.1-gke.0
    Port:       <none>
    Host Port:  <none>
    Command:
      /cluster-proportional-autoscaler
      --namespace=kube-system
      --configmap=kube-dns-autoscaler
      --target=Deployment/kube-dns
      --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}}
      --logtostderr=true
      --v=2
    Requests:
      cpu:              20m
      memory:           10Mi
    Environment:        <none>
    Mounts:             <none>
  Volumes:              <none>
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   kube-dns-autoscaler-645f7d66cf (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set kube-dns-autoscaler-645f7d66cf to 1
Name:                   l7-default-backend
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=glbc
                        kubernetes.io/cluster-service=true
                        kubernetes.io/name=GLBC
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=glbc
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:       k8s-app=glbc
                name=glbc
  Annotations:  seccomp.security.alpha.kubernetes.io/pod: docker/default
  Containers:
   default-http-backend:
    Image:      k8s.gcr.io/ingress-gce-404-server-with-metrics-amd64:v1.6.0
    Port:       8080/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     10m
      memory:  20Mi
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:8080/healthz delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   l7-default-backend-678889f899 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set l7-default-backend-678889f899 to 1
Name:                   metrics-server-v0.3.6
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:35 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        k8s-app=metrics-server
                        kubernetes.io/cluster-service=true
                        version=v0.3.6
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               k8s-app=metrics-server,version=v0.3.6
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=metrics-server
                    version=v0.3.6
  Annotations:      seccomp.security.alpha.kubernetes.io/pod: docker/default
  Service Account:  metrics-server
  Containers:
   metrics-server:
    Image:      k8s.gcr.io/metrics-server-amd64:v0.3.6
    Port:       443/TCP
    Host Port:  0/TCP
    Command:
      /metrics-server
      --metric-resolution=30s
      --kubelet-port=10255
      --deprecated-kubelet-completely-insecure=true
      --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
    Limits:
      cpu:     43m
      memory:  55Mi
    Requests:
      cpu:        43m
      memory:     55Mi
    Environment:  <none>
    Mounts:       <none>
   metrics-server-nanny:
    Image:      gke.gcr.io/addon-resizer:1.8.8-gke.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /pod_nanny
      --config-dir=/etc/config
      --cpu=40m
      --extra-cpu=0.5m
      --memory=35Mi
      --extra-memory=4Mi
      --threshold=5
      --deployment=metrics-server-v0.3.6
      --container=metrics-server
      --poll-period=300000
      --estimator=exponential
      --scale-down-delay=24h
      --minClusterSize=5
    Limits:
      cpu:     100m
      memory:  300Mi
    Requests:
      cpu:     5m
      memory:  50Mi
    Environment:
      MY_POD_NAME:        (v1:metadata.name)
      MY_POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:
      /etc/config from metrics-server-config-volume (rw)
  Volumes:
   metrics-server-config-volume:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               metrics-server-config
    Optional:           false
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   metrics-server-v0.3.6-64655c969 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set metrics-server-v0.3.6-69fbfcd8b9 to 1
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set metrics-server-v0.3.6-64655c969 to 1
  Normal  ScalingReplicaSet  40m   deployment-controller  Scaled down replica set metrics-server-v0.3.6-69fbfcd8b9 to 0
Name:                   stackdriver-metadata-agent-cluster-level
Namespace:              kube-system
CreationTimestamp:      Sun, 01 Nov 2020 00:03:34 +0000
Labels:                 addonmanager.kubernetes.io/mode=Reconcile
                        app=stackdriver-metadata-agent
                        kubernetes.io/cluster-service=true
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=stackdriver-metadata-agent,cluster-level=true
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 25% max surge
Pod Template:
  Labels:           app=stackdriver-metadata-agent
                    cluster-level=true
  Annotations:      components.gke.io/component-name: stackdriver-metadata-agent
                    components.gke.io/component-version: 1.1.3
  Service Account:  metadata-agent
  Containers:
   metadata-agent:
    Image:      gcr.io/stackdriver-agents/metadata-agent-go:1.2.0
    Port:       <none>
    Host Port:  <none>
    Args:
      -logtostderr
      -v=1
    Limits:
      cpu:     48m
      memory:  112Mi
    Requests:
      cpu:     48m
      memory:  112Mi
    Environment:
      CLUSTER_NAME:       risk-engine-cluster
      CLUSTER_LOCATION:   us-central1-a
      IGNORED_RESOURCES:  replicasets.v1.apps,replicasets.v1beta1.extensions
    Mounts:
      /etc/ssl/certs from ssl-certs (rw)
   metadata-agent-nanny:
    Image:      gke.gcr.io/addon-resizer:1.8.11-gke.1
    Port:       <none>
    Host Port:  <none>
    Command:
      /pod_nanny
      --config-dir=/etc/config
      --cpu=40m
      --extra-cpu=0.5m
      --memory=80Mi
      --extra-memory=2Mi
      --threshold=5
      --deployment=stackdriver-metadata-agent-cluster-level
      --container=metadata-agent
      --poll-period=300000
      --estimator=exponential
      --minClusterSize=16
      --use-metrics=true
    Limits:
      memory:  90Mi
    Requests:
      cpu:     50m
      memory:  90Mi
    Environment:
      MY_POD_NAME:        (v1:metadata.name)
      MY_POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:
      /etc/config from metadata-agent-config-volume (rw)
  Volumes:
   ssl-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  Directory
   metadata-agent-config-volume:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               metadata-agent-config
    Optional:           false
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   stackdriver-metadata-agent-cluster-level-5d547598f (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  42m   deployment-controller  Scaled up replica set stackdriver-metadata-agent-cluster-level-77cc557f5c to 1
  Normal  ScalingReplicaSet  37m   deployment-controller  Scaled up replica set stackdriver-metadata-agent-cluster-level-5d547598f to 1
  Normal  ScalingReplicaSet  37m   deployment-controller  Scaled down replica set stackdriver-metadata-agent-cluster-level-77cc557f5c to 0

Name:                   risk-engine
Namespace:              re1
CreationTimestamp:      Sun, 01 Nov 2020 00:03:56 +0000
Labels:                 app=risk-engine
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=risk-engine
Replicas:               3 desired | 3 updated | 3 total | 0 available | 3 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=risk-engine
  Containers:
   risk-engine-1:
    Image:        fordesmi/risk-engine:latest
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   risk-engine-5b6cb4fb9d (3/3 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  41m   deployment-controller  Scaled up replica set risk-engine-5b6cb4fb9d to 3
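
The risk-engine deployment above is the only one reporting Available=False with reason MinimumReplicasUnavailable (0 of 3 replicas available), and its deployment-level events only show the scale-up. The underlying cause is presumably visible on the pods themselves (for example an image pull or scheduling failure), so a minimal sketch of the follow-up checks, assuming the re1 namespace shown above, would be:

$kubectl get pods -n re1 -o wide
$kubectl describe pods -n re1
$kubectl rollout status deployment/risk-engine -n re1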