Kubernetes: why do I get "pod has unbound immediate PersistentVolumeClaims" in Minikube?


I am getting "pod has unbound immediate PersistentVolumeClaims" and I don't know why. I am running minikube v0.34.1 on macOS. Here are the configuration files:

es-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch
spec:
  capacity:
    storage: 400Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/elasticsearch/"
es-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.zen.ping.unicast.hosts
              value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
            - name: discovery.zen.minimum_master_nodes
              value: "2"
            - name: ES_JAVA_OPTS
              value: "-Xms256m -Xmx256m"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "standard"
        resources:
          requests:
            storage: 100Mi
es-svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node

For the volume to be accessible by multiple pods, accessModes has to be "ReadWriteMany". In addition, if each pod is supposed to get its own directory, you need to use a subPath, as sketched below.
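A minimal sketch of what the PersistentVolume from the question could look like with that change, assuming a single hostPath volume shared by all three replicas (capacity and path are carried over from the question and are only illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch
spec:
  capacity:
    storage: 400Mi
  accessModes:
    - ReadWriteMany   # allow the volume to be mounted read-write by all pods
  hostPath:
    path: "/data/elasticsearch/"

For the claims to match this volume, the accessModes requested in volumeClaimTemplates would have to be ReadWriteMany as well.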

The problem was solved by @Michael Böckling in the comments section. Here is some further information.


You can use an environment variable as the subPath name, like this:

env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(POD_NAME)
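Applied to the StatefulSet from the question, the relevant part of the elasticsearch container could look roughly like the sketch below (POD_NAME is just an illustrative variable name; the data volume and mount path come from the original manifest). Note that subPathExpr only exists in newer Kubernetes releases, so it should be checked against the cluster version in use:

env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
volumeMounts:
  - name: data
    mountPath: /usr/share/elasticsearch/data
    subPathExpr: $(POD_NAME)   # each pod writes to its own sub-directory of the shared volume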

Hi, could you run kubectl describe pvc $PVC_NAME?

Here is the output. kubectl get po shows:

NAME           READY   STATUS             RESTARTS   AGE
es-cluster-0   0/1     CrashLoopBackOff   6          20m
es-cluster-1   1/1     Running            6          19m
es-cluster-2   0/1     CrashLoopBackOff   6          19m

So one pod comes up and the others do not. The PVCs appear to be bound to the PV. I would be curious to see the output of kubectl describe pod $POD_NAME and kubectl get pod $POD_NAME -o yaml to check how the volume is mounted inside the pod.
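For reference, the diagnostic commands asked for above, with hypothetical resource names (a claim template named data on this StatefulSet would typically produce PVCs such as data-es-cluster-0):

kubectl get pv
kubectl describe pvc data-es-cluster-0
kubectl get po
kubectl describe pod es-cluster-0
kubectl get pod es-cluster-0 -o yaml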