Kubernetes Deployment.apps is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: "data"


I am deploying an application named soa-illidan-hub-service with a persistent volume on Kubernetes v1.16.0. When I apply the yaml, the following error appears:

Deployment.apps "soa-illidan-hub-service" is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: "data"
This is my yaml file:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: soa-illidan-hub-service
  namespace: dabai-pro
  selfLink: /apis/apps/v1/namespaces/dabai-pro/deployments/soa-illidan-hub-service
  uid: 01a06200-f8d4-4d60-bd79-a7acf76d0a30
  resourceVersion: '6232127'
  generation: 62
  creationTimestamp: '2020-06-08T01:42:11Z'
  labels:
    k8s-app: soa-illidan-hub-service
  annotations:
    deployment.kubernetes.io/revision: '52'
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: soa-illidan-hub-service
  template:
    metadata:
      name: soa-illidan-hub-service
      creationTimestamp: null
      labels:
        k8s-app: soa-illidan-hub-service
      annotations:
        kubectl.kubernetes.io/restartedAt: '2020-07-09T17:41:29+08:00'
    spec:
      volumes:
        - name: agent
          emptyDir: {}
      initContainers:
        - name: init-agent
          image: 'harbor.google.net/miaoyou/dabai-pro/skywalking-agent:6.5.0'
          command:
            - sh
            - '-c'
            - >-
              set -ex;mkdir -p /skywalking/agent;cp -r /opt/skywalking/agent/*
              /skywalking/agent;
          resources: {}
          volumeMounts:
            - name: agent
              mountPath: /skywalking/agent
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      containers:
        - name: soa-illidan-hub-service
          image: >-
            harbor.google.net/miaoyou/dabai-pro/soa-illidan-hub-service@sha256:4ac4c6ddceac3fde05e95219b20414fb39ad81a4f789df0fbf97196b72c9e6f0
          env:
            - name: SKYWALKING_ADDR
              value: 'dabai-skywalking-skywalking-oap.apm.svc.cluster.local:11800'
            - name: APOLLO_META
              valueFrom:
                configMapKeyRef:
                  name: pro-config
                  key: apollo.meta
            - name: ENV
              valueFrom:
                configMapKeyRef:
                  name: pro-config
                  key: env
          resources: {}
          volumeMounts:
            - name: agent
              mountPath: /opt/skywalking/agent
            - name: data
              mountPath: /var/export/data
          livenessProbe:
            httpGet:
              path: /actuator/liveness
              port: 11024
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 60
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 11024
              scheme: HTTP
            initialDelaySeconds: 120
            timeoutSeconds: 60
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
          securityContext:
            privileged: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      imagePullSecrets:
        - name: harbor-regcred
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  volumeClaimTemplates:
    - metadata:
        name: data
        creationTimestamp: null
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        volumeMode: Filesystem
  progressDeadlineSeconds: 600
To add the PV, I added this volumeClaimTemplates configuration:

 volumeClaimTemplates:
        - metadata:
            name: data
            creationTimestamp: null
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            volumeMode: Filesystem
And I use this volume in my pod like this:

volumeMounts:
  - name: data
    mountPath: /var/export/data

Am I missing something? How should I fix this problem?

volumeClaimTemplates only applies to a StatefulSet:

kubectl explain statefulset.spec.volumeClaimTemplates
KIND:     StatefulSet
VERSION:  apps/v1

RESOURCE: volumeClaimTemplates <[]Object>

DESCRIPTION:
     volumeClaimTemplates is a list of claims that pods are allowed to
     reference. The StatefulSet controller is responsible for mapping network
     identities to claims in a way that maintains the identity of a pod. Every
     claim in this list must have at least one matching (by name) volumeMount in
     one container in the template. A claim in this list takes precedence over
     any volumes in the template, with the same name.

     PersistentVolumeClaim is a user's request for and claim to a persistent
     volume

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     Spec defines the desired characteristics of a volume requested by a pod
     author. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

   status   <Object>
     Status represents the current information/status of a persistent volume
     claim. Read-only. More info:
     https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims

So you cannot use volumeClaimTemplates in a Deployment, and I believe that is exactly what is wrong with your definition:

kubectl explain deployment.spec.volumeClaimTemplates
error: field "volumeClaimTemplates" does not exist
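If you really need one persistent claim per pod, the workload can be converted to a StatefulSet, where volumeClaimTemplates is valid. A minimal sketch built from the names in the question (the serviceName and its headless Service are assumptions, since the question does not show a Service):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: soa-illidan-hub-service
  namespace: dabai-pro
spec:
  serviceName: soa-illidan-hub-service  # headless Service name, assumed
  replicas: 1
  selector:
    matchLabels:
      k8s-app: soa-illidan-hub-service
  template:
    metadata:
      labels:
        k8s-app: soa-illidan-hub-service
    spec:
      containers:
        - name: soa-illidan-hub-service
          image: harbor.google.net/miaoyou/dabai-pro/soa-illidan-hub-service@sha256:4ac4c6ddceac3fde05e95219b20414fb39ad81a4f789df0fbf97196b72c9e6f0
          volumeMounts:
            - name: data
              mountPath: /var/export/data
  # Valid here: the StatefulSet controller creates one PVC named
  # data-<pod-name> per replica and binds it to the matching volumeMount.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```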
Looking at the k8s docs, I found:

Basically, you need to define volumeMounts under the container, and each volumeMount must reference a valid volume under the volumes section.

To highlight: the names must match, otherwise it will also fail:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
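Since the questioner is using a Deployment, the usual alternative is to create the PersistentVolumeClaim as a standalone object and reference it from the pod template by name. A minimal sketch using the names from the question, assuming the cluster's default StorageClass can satisfy a 10Gi ReadWriteOnce claim:

```yaml
# Standalone PVC; its name must match the "data" volumeMount name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
  namespace: dabai-pro
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# Deployment: remove volumeClaimTemplates entirely and instead declare
# the claim under spec.template.spec.volumes so the mount can resolve.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: soa-illidan-hub-service
  namespace: dabai-pro
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: soa-illidan-hub-service
  template:
    metadata:
      labels:
        k8s-app: soa-illidan-hub-service
    spec:
      containers:
        - name: soa-illidan-hub-service
          image: harbor.google.net/miaoyou/dabai-pro/soa-illidan-hub-service@sha256:4ac4c6ddceac3fde05e95219b20414fb39ad81a4f789df0fbf97196b72c9e6f0
          volumeMounts:
            - name: agent
              mountPath: /opt/skywalking/agent
            - name: data
              mountPath: /var/export/data
      volumes:
        - name: agent
          emptyDir: {}
        - name: data
          persistentVolumeClaim:
            claimName: data
```

Note that with this approach all replicas share the same claim, and a ReadWriteOnce volume forces them onto one node; per-pod claims are exactly what volumeClaimTemplates provides, which is why that field only exists on StatefulSet.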