Kubernetes local-storage PVC stuck in Pending state - how to fix?


I'm looking for help on how to use local-storage PVCs correctly in Kubernetes.

We have a kubespray-provisioned cluster on Ubuntu with the local volume provisioner enabled.

We tried to deploy a StatefulSet that uses the local-storage provisioner, like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: ps
  name: ps-r
spec:
  selector:
    matchLabels:
      infrastructure: ps
      application: redis
      environment: staging
  serviceName: hl-ps-redis
  replicas: 1
  template:
    metadata:
      namespace: ps
      labels:
        infrastructure: ps
        application: redis
        environment: staging
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: ps-redis
          image: 1234567890.dkr.ecr.us-west-2.amazonaws.com/redis:latest
          ports:
            - containerPort: 6379
              protocol: TCP
              name: redis
          volumeMounts:
            - name: ps-redis-redis
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        namespace: project-stock
        name: ps-redis-redis
        labels:
          infrastructure: ps
          application: redis
          environment: staging
      spec:
        storageClassName: local-storage
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
The PVC is created, but stays in the Pending state:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ps-redis-redis-ps-r-0
  namespace: project-stock
  selfLink: >-
    /api/v1/namespaces/project-stock/persistentvolumeclaims/ps-redis-redis-ps-r-0
  uid: 2fac22e3-c3dc-4cbf-aeed-491f12b430e8
  resourceVersion: '384774'
  creationTimestamp: '2020-11-10T08:25:39Z'
  labels:
    application: redis
    environment: staging
    infrastructure: ps
  finalizers:
    - kubernetes.io/pvc-protection
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: v1
      time: '2020-11-10T08:25:39Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:labels':
            .: {}
            'f:application': {}
            'f:environment': {}
            'f:infrastructure': {}
        'f:spec':
          'f:accessModes': {}
          'f:resources':
            'f:requests':
              .: {}
              'f:storage': {}
          'f:storageClassName': {}
          'f:volumeMode': {}
        'f:status':
          'f:phase': {}
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
  volumeMode: Filesystem
status:
  phase: Pending
The StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
  selfLink: /apis/storage.k8s.io/v1/storageclasses/local-storage
  uid: c29adff6-a8a2-4705-bb3b-155e1f7c13a3
  resourceVersion: '1892'
  creationTimestamp: '2020-11-09T12:09:56Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-storage"},"provisioner":"kubernetes.io/no-provisioner","volumeBindingMode":"WaitForFirstConsumer"}
  managedFields:
    - manager: kubectl
      operation: Update
      apiVersion: storage.k8s.io/v1
      time: '2020-11-09T12:09:56Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kubectl.kubernetes.io/last-applied-configuration': {}
        'f:provisioner': {}
        'f:reclaimPolicy': {}
        'f:volumeBindingMode': {}
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
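Note that a StorageClass with `provisioner: kubernetes.io/no-provisioner` does not create volumes dynamically: a local PersistentVolume has to be pre-created by hand (or by an external local-volume provisioner) and pinned to the node that owns the disk. A minimal sketch of such a PV, assuming a hypothetical worker node named `worker-1` and an already-mounted path `/mnt/disks/vol1` (both are placeholders, not values from our cluster):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ps-redis-local-pv        # placeholder name
spec:
  capacity:
    storage: 1Gi                 # must cover the PVC's 1Gi request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage  # must match the PVC's storageClassName
  local:
    path: /mnt/disks/vol1        # must already exist and be mounted on the node
  nodeAffinity:                  # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1       # pin the PV to the node that owns the disk
```

With `volumeBindingMode: WaitForFirstConsumer`, the PVC stays Pending until a pod that uses it is scheduled; if no matching local PV exists on a node the pod can actually run on, both stay Pending.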
The pod does not start, with this scheduling error: `0/2 nodes are available: 1 Insufficient cpu, 1 node(s) didn't find available persistent volumes to bind.`


What are we doing wrong?

We eventually solved this by switching to the local-path provisioner, which makes getting the configuration right much easier.

Because you have only one replica, the scheduler picked one node to create everything on. The PVC was created for that node, but, as the error says, the first node has no more CPU left to schedule the pod, and the pod cannot run on the second node because the PVC is not there. If the second node has more resources, try deploying everything there; you can use nodeAffinity for that. Alternatively, add more CPU to the first node and it should work. Let me know if this answers your question.

The node with insufficient CPU is the master node. The PV was never created at all; the PVC was created, but it is Pending and not assigned to any node.

That is why you have to either add more CPU to the master node or deploy all of these dependencies (pod, PV, PVC) on the worker node. With insufficient CPU the pod cannot be scheduled on the master, and because of taints it could not be deployed on the master anyway.

Adding node affinity to deploy on the worker did not help either; the result was exactly the same. I think we would have had to manually mount the OS volume used by local storage. We switched to the local-path provisioner and it now works correctly.
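For reference, switching to Rancher's local-path provisioner amounts to installing it (it ships a StorageClass named `local-path` that provisions host-path volumes dynamically on whichever node the pod lands on) and pointing the claim template at that class. A sketch of the adjusted `volumeClaimTemplates`, assuming the provisioner is already installed in the cluster:

```yaml
volumeClaimTemplates:
  - metadata:
      name: ps-redis-redis
    spec:
      storageClassName: local-path   # StorageClass created by local-path-provisioner
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```

Unlike `kubernetes.io/no-provisioner`, the local-path provisioner creates the PV automatically when the pod is scheduled, so no manually pre-created PV or mount is needed.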