Pods unschedulable error when deploying a TensorFlow Serving model to Kubernetes with GPU

Tags: tensorflow, kubernetes, google-cloud-platform, nvidia, google-kubernetes-engine

After deploying an object detection model for prediction using GPUs, I am getting two errors:

1. PodUnschedulable: Cannot schedule pods: Insufficient nvidia

2. PodUnschedulable: Cannot schedule pods: com/gpu

I have two node pools. One of them is configured with Tesla K80 GPUs and autoscaling enabled. This happens when I deploy the serving component using the ksonnet app (described here:

Here is the output of the
kubectl describe pods
command:

  Name:           xyz-v1-5c5b57cf9c-kvjxn
  Namespace:      default
  Node:           <none>
  Labels:         app=xyz
                  pod-template-hash=1716137957
                  version=v1
  Annotations:    <none>
  Status:         Pending
  IP:             
  Controlled By:  ReplicaSet/xyz-v1-5c5b57cf9c
  Containers:
    aadhar:
      Image:      tensorflow/serving:1.11.1-gpu
      Port:       9000/TCP
      Host Port:  0/TCP
      Command:
        /usr/bin/tensorflow_model_server
      Args:
        --port=9000
        --model_name=xyz
        --model_base_path=gs://xyz_kuber_app-xyz-identification/export/
      Limits:
        cpu:             4
        memory:          4Gi
        nvidia.com/gpu:  1
      Requests:
        cpu:             1
        memory:          1Gi
        nvidia.com/gpu:  1
      Environment:       <none>
      Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
    aadhar-http-proxy:
      Image:      gcr.io/kubeflow-images-public/tf-model-server-http-proxy:v20180606-9dfda4f2
      Port:       8000/TCP
      Host Port:  0/TCP
      Command:
        python
        /usr/src/app/server.py
        --port=8000
        --rpc_port=9000
        --rpc_timeout=10.0
      Limits:
        cpu:     1
        memory:  1Gi
      Requests:
        cpu:        500m
        memory:     500Mi
      Environment:  <none>
      Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
  Conditions:
    Type           Status
    PodScheduled   False 
  Volumes:
    default-token-b6dpn:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  default-token-b6dpn
      Optional:    false
  QoS Class:       Burstable
  Node-Selectors:  <none>
  Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                   node.kubernetes.io/unreachable:NoExecute for 300s
                   nvidia.com/gpu:NoSchedule
  Events:
    Type     Reason             Age                   From                Message
    ----     ------             ----                  ----                -------
    Warning  FailedScheduling   20m (x5 over 21m)     default-scheduler   0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were unschedulable.
    Warning  FailedScheduling   20m (x2 over 20m)     default-scheduler   0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were not ready, 1 node(s) were out of disk space, 1 node(s) were unschedulable.
    Warning  FailedScheduling   16m (x9 over 19m)     default-scheduler   0/1 nodes are available: 1 Insufficient nvidia.com/gpu.
    Normal   NotTriggerScaleUp  15m (x26 over 20m)    cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added)
    Warning  FailedScheduling   2m42s (x54 over 23m)  default-scheduler   0/2 nodes are available: 2 Insufficient nvidia.com/gpu.
    Normal   TriggeredScaleUp   13s                   cluster-autoscaler  pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/xyz-identification/zones/us-central1-a/instanceGroups/gke-kuberflow-xyz-pool-1-9753107b-grp 1->2 (max: 10)}]


  Name:           mnist-deploy-gcp-b4dd579bf-sjwj7
  Namespace:      default
  Node:           gke-kuberflow-xyz-default-pool-ab1fa086-w6q3/10.128.0.8
  Start Time:     Thu, 14 Feb 2019 14:44:08 +0530
  Labels:         app=xyz-object
                  pod-template-hash=608813569
                  version=v1
  Annotations:    sidecar.istio.io/inject: 
  Status:         Running
  IP:             10.36.4.18
  Controlled By:  ReplicaSet/mnist-deploy-gcp-b4dd579bf
  Containers:
    xyz-object:
      Container ID:  docker://921717d82b547a023034e7c8be78216493beeb55dca57f4eddb5968122e36c16
      Image:         tensorflow/serving:1.11.1
      Image ID:      docker-pullable://tensorflow/serving@sha256:a01c6475c69055c583aeda185a274942ced458d178aaeb84b4b842ae6917a0bc
      Ports:         9000/TCP, 8500/TCP
      Host Ports:    0/TCP, 0/TCP
      Command:
        /usr/bin/tensorflow_model_server
      Args:
        --port=9000
        --rest_api_port=8500
        --model_name=xyz-object
        --model_base_path=gs://xyz_kuber_app-xyz-identification/export
        --monitoring_config_file=/var/config/monitoring_config.txt
      State:          Running
        Started:      Thu, 14 Feb 2019 14:48:21 +0530
      Last State:     Terminated
        Reason:       Error
        Exit Code:    137
        Started:      Thu, 14 Feb 2019 14:45:58 +0530
        Finished:     Thu, 14 Feb 2019 14:48:21 +0530
      Ready:          True
      Restart Count:  1
      Limits:
        cpu:     4
        memory:  4Gi
      Requests:
        cpu:     1
        memory:  1Gi
      Liveness:  tcp-socket :9000 delay=30s timeout=1s period=30s #success=1 #failure=3
      Environment:
        GOOGLE_APPLICATION_CREDENTIALS:  /secret/gcp-credentials/user-gcp-sa.json
      Mounts:
        /secret/gcp-credentials from gcp-credentials (rw)
        /var/config/ from config-volume (rw)
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
  Conditions:
    Type           Status
    Initialized    True 
    Ready          True 
    PodScheduled   True 
  Volumes:
    config-volume:
      Type:      ConfigMap (a volume populated by a ConfigMap)
      Name:      mnist-deploy-gcp-config
      Optional:  false
    gcp-credentials:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  user-gcp-sa
      Optional:    false
    default-token-b6dpn:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  default-token-b6dpn
      Optional:    false
  QoS Class:       Burstable
  Node-Selectors:  <none>
  Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                   node.kubernetes.io/unreachable:NoExecute for 300s
  Events:          <none>
I am new to Kubernetes and cannot understand what is going wrong here.

Update: I did have an extra pod running, which I shut down after @Paul Annett pointed it out, but I still get the same error.

Name:           aadhar-v1-5c5b57cf9c-q8cd8
Namespace:      default
Node:           <none>
Labels:         app=aadhar
                pod-template-hash=1716137957
                version=v1
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/aadhar-v1-5c5b57cf9c
Containers:
  aadhar:
    Image:      tensorflow/serving:1.11.1-gpu
    Port:       9000/TCP
    Host Port:  0/TCP
    Command:
      /usr/bin/tensorflow_model_server
    Args:
      --port=9000
      --model_name=aadhar
      --model_base_path=gs://xyz_kuber_app-xyz-identification/export/
    Limits:
      cpu:             4
      memory:          4Gi
      nvidia.com/gpu:  1
    Requests:
      cpu:             1
      memory:          1Gi
      nvidia.com/gpu:  1
    Environment:       <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
  aadhar-http-proxy:
    Image:      gcr.io/kubeflow-images-public/tf-model-server-http-proxy:v20180606-9dfda4f2
    Port:       8000/TCP
    Host Port:  0/TCP
    Command:
      python
      /usr/src/app/server.py
      --port=8000
      --rpc_port=9000
      --rpc_timeout=10.0
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        500m
      memory:     500Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b6dpn (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-b6dpn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-b6dpn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
                 nvidia.com/gpu:NoSchedule
Events:
  Type     Reason            Age                    From                Message
  ----     ------            ----                   ----                -------
  Normal   TriggeredScaleUp  3m3s                   cluster-autoscaler  pod triggered scale-up: [{https://content.googleapis.com/compute/v1/projects/xyz-identification/zones/us-central1-a/instanceGroups/gke-kuberflow-xyz-pool-1-9753107b-grp 0->1 (max: 10)}]
  Warning  FailedScheduling  2m42s (x2 over 2m42s)  default-scheduler   0/2 nodes are available: 1 Insufficient nvidia.com/gpu, 1 node(s) were not ready, 1 node(s) were out of disk space.
  Warning  FailedScheduling  42s (x10 over 3m45s)   default-scheduler   0/2 nodes are available: 2 Insufficient nvidia.com/gpu.
It looks like there is a problem with the nvidia driver installer.

Update 3: Added the nvidia driver installer logs. Describe pod:
kubectl describe pod nvidia-driver-installer-p8qqj -n=kube-system

Name:           nvidia-driver-installer-p8qqj
Namespace:      kube-system
Node:           gke-kuberflow-aadhaar-pool-2-10d7e787-66n3/10.128.0.30
Start Time:     Fri, 15 Feb 2019 11:22:42 +0530
Labels:         controller-revision-hash=1137413470
                k8s-app=nvidia-driver-installer
                name=nvidia-driver-installer
                pod-template-generation=1
Annotations:    <none>
Status:         Pending
IP:             10.36.5.4
Controlled By:  DaemonSet/nvidia-driver-installer
Init Containers:
  nvidia-driver-installer:
    Container ID:   docker://a0b18bc13dad0d470b601ad2cafdf558a192b3a5d9ace264fd22d5b3e6130241
    Image:          gke-nvidia-installer:fixed
    Image ID:       docker-pullable://gcr.io/cos-cloud/cos-gpu-installer@sha256:e7bf3b4c77ef0d43fedaf4a244bd6009e8f524d0af4828a0996559b7f5dca091
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    32
      Started:      Fri, 15 Feb 2019 13:06:04 +0530
      Finished:     Fri, 15 Feb 2019 13:06:33 +0530
    Ready:          False
    Restart Count:  23
    Requests:
      cpu:        150m
    Environment:  <none>
    Mounts:
      /boot from boot (rw)
      /dev from dev (rw)
      /root from root-mount (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n5t8z (ro)
Containers:
  pause:
    Container ID:   
    Image:          gcr.io/google-containers/pause:2.0
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n5t8z (ro)
Conditions:
  Type           Status
  Initialized    False 
  Ready          False 
  PodScheduled   True 
Volumes:
  dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  
  boot:
    Type:          HostPath (bare host directory volume)
    Path:          /boot
    HostPathType:  
  root-mount:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
  default-token-n5t8z:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-n5t8z
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason   Age                     From                                                 Message
  ----     ------   ----                    ----                                                 -------
  Warning  BackOff  3m36s (x437 over 107m)  kubelet, gke-kuberflow-aadhaar-pool-2-10d7e787-66n3  Back-off restarting failed container

The problem seems to be that the resources required to run the pod are not available. The pod contains two containers, which together need a minimum of 1.5Gi memory and 1.5 CPU, and a maximum of 5Gi memory and 5 CPU.

The scheduler cannot identify a node that satisfies the resources required to run the pod, and therefore cannot schedule it.

See if you can reduce the resource limits so that the pod fits one of the nodes. I also see from the logs that one node is out of disk space. Check the issues reported by kubectl describe po and act on those items.

    Limits:
      cpu:             4
      memory:          4Gi
      nvidia.com/gpu:  1
    Requests:
      cpu:             1
      memory:          1Gi
      nvidia.com/gpu:  1
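As a rough sketch, the CPU and memory requests/limits on the TF Serving container could be lowered without editing the manifest by using `kubectl set resources` (the deployment and container names below are taken from the `kubectl describe` output above; the new values are illustrative only, and the `nvidia.com/gpu` limit is left as-is):

```shell
# Lower the CPU/memory limits and requests on the TF Serving container
# so the pod can fit on an existing node (values are examples only).
kubectl set resources deployment xyz-v1 \
  --containers=aadhar \
  --limits=cpu=2,memory=2Gi \
  --requests=cpu=500m,memory=500Mi
```

After the change, the Deployment rolls out a new ReplicaSet, so the pending pod is replaced rather than updated in place.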
I see the pod is using node affinity:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/gke-accelerator
                operator: Exists
Can you check whether the node the pod is deployed to has the label below?

cloud.google.com/gke-accelerator

Can you remove the nodeAffinity section and see if the pod gets deployed and shows Running?
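To try that without editing the manifest files, a JSON patch can drop the affinity stanza from the pod template (the deployment name is assumed from the describe output above):

```shell
# Remove the pod template's affinity stanza from the Deployment;
# the new ReplicaSet rolls out pods without the nodeAffinity requirement.
kubectl patch deployment xyz-v1 --type=json \
  -p='[{"op": "remove", "path": "/spec/template/spec/affinity"}]'
```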

Deleting and recreating the node after deleting the nvidia pod installed the Nvidia drivers and plugins. It did not happen on the first attempt.

Comments:

- It looks like you already have 2 other pods running which have claimed the GPUs, and now there are none left. If you have autoscaling enabled, the scaler could be the reason. What else is running in your cluster?
- I had another pod (mnist-deploy-gcp-b4dd579bf-sjwj7) running for an experiment I was working on. I have stopped it, but I still get the same error. How could autoscaling cause a problem in this case?
- Autoscaling could cause a problem: if it scales the pod to 2 replicas, 2 GPUs will be needed, and so on.
- I tried this, but it did not solve the problem. However, I used the
kubectl get pods -n=kube-system
command, and the nvidia driver seems to have an initialization problem. I have added the output to the question.
- I have not used the nvidia drivers and may not be able to help you. Can you share the logs of the nvidia-driver-installer pod? Someone familiar with the nvidia drivers might be able to help.
- I have edited the question and added the nvidia driver installer logs.
- It looks like the init container is failing repeatedly. From the logs I see (Image: gke-nvidia-installer:fixed). Is this correct? Where is it pulled from?
- It shows the image name and tag.
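For the driver-installer question in the comments above, the failing init container's logs can be pulled directly with `-c` (the pod name is taken from the question; `--previous` shows the crashed attempt when the container is in CrashLoopBackOff):

```shell
# Logs of the nvidia-driver-installer init container inside the installer pod.
kubectl logs -n kube-system nvidia-driver-installer-p8qqj -c nvidia-driver-installer

# If the container has already restarted, the previous attempt's logs:
kubectl logs -n kube-system nvidia-driver-installer-p8qqj -c nvidia-driver-installer --previous
```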