Kubernetes GKE persistent volume does not persist data
I have created a persistent volume and volume claim for an application I'm developing in GKE. The claim and storage appear to be set up correctly; however, the data does not persist if the pod is restarted. I can save data initially and can see the files in the pod, but after a restart the files are gone.

I asked this question before without including my .yaml files and received a rather generic answer, so I decided to re-post with the .yaml files in the hope that someone can look at them and tell me where I'm going wrong. From everything I can see, the problem appears to be with the persistent volume, since the claim is identical to everyone else's.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-api-meta-uploads-k8s
  namespace: default
  resourceVersion: "4500192"
  selfLink: /apis/apps/v1/namespaces/default/deployments/prod-api-meta-uploads-k8s
  uid: *******
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prod-api-meta-uploads-k8s
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        gcb-build-id: *****
        gcb-trigger-id: ****
      creationTimestamp: null
      labels:
        app: prod-api-meta-uploads-k8s
        app.kubernetes.io/managed-by: gcp-cloud-build-deploy
        app.kubernetes.io/name: prod-api-meta-uploads-k8s
        app.kubernetes.io/version: becdb864864f25d2dcde2e62a2f70501cfd09f19
    spec:
      containers:
      - image: bitbucket.org/api-meta-uploads-k8s@sha256:7766413c0d
        imagePullPolicy: IfNotPresent
        name: prod-api-meta-uploads-k8s-sha256-1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /uploads/profileImages
          name: uploads-volume-prod
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: uploads-volume-prod
        persistentVolumeClaim:
          claimName: my-disk-claim-1
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-09-08T21:00:40Z"
    lastUpdateTime: "2020-09-10T04:54:27Z"
    message: ReplicaSet "prod-api-meta-uploads-k8s-5c8f66f886" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-09-10T06:49:41Z"
    lastUpdateTime: "2020-09-10T06:49:41Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 36
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
**Volume claim**
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2020-09-09T16:12:51Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: uploads-volume-prod
  namespace: default
  resourceVersion: "4157429"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/uploads-volume-prod
  uid: f93e6134
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: standard
  volumeMode: Filesystem
  volumeName: pvc-f93e6
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  phase: Bound
**PVC**
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: my-disk-claim-1
  namespace: default
  resourceVersion: "4452471"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/my-disk-claim-1
  uid: d533702b
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: fast
  volumeMode: Filesystem
  volumeName: pvc-d533702b
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  phase: Bound
When using GKE, you don't need to manually prepare a PersistentVolume and a PersistentVolumeClaim in a 1:1 relationship, as GKE can use dynamic provisioning. This is well described in the documentation:

When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC.

In GKE there is at least one storageclass from the start, named standard, with (default) next to its name:
$ kubectl get sc
NAME PROVISIONER AGE
standard (default) kubernetes.io/gce-pd 110m
This means that if you don't specify a storageClassName in your PersistentVolumeClaim, it will use the storageclass that is set as the default. In your YAML I can see that you used storageClassName: standard. If you check this storageclass, you will see that its reclaim policy is set to Delete. See the output below:
$ kubectl describe sc standard
Name: standard
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/gce-pd
Parameters: type=pd-standard
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
In addition, since you specified revisionHistoryLimit: 10, the pod will be recreated after 10 restarts, and in that case the pod, pv, and pvc will be deleted when the reclaim policy is set to Delete.
Solution

As the simplest solution, you should create a new StorageClass with a reclaim policy other than Delete, and use it in your PVC:
$ kubectl get sc,pv,pvc -A
NAME PROVISIONER AGE
storageclass.storage.k8s.io/another-storageclass kubernetes.io/gce-pd 53s
storageclass.storage.k8s.io/standard (default) kubernetes.io/gce-pd 130m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-67c35c06-3f38-4f55-98c8-6b2b41ae5313 1Gi RWO Retain Bound tst-dev/pvc-1 another-storageclass 43s
persistentvolume/pvc-be30a43f-e96c-4c9f-8863-464823899a8f 1Gi RWO Retain Bound tst-stage/pvc-2 another-storageclass 42s
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
tst-dev persistentvolumeclaim/pvc-1 Bound pvc-67c35c06-3f38-4f55-98c8-6b2b41ae5313 1Gi RWO another-storageclass 46s
tst-stage persistentvolumeclaim/pvc-2 Bound pvc-be30a43f-e96c-4c9f-8863-464823899a8f 1Gi RWO another-storageclass 45s
$ kubectl delete pvc pvc-1 -n tst-dev
persistentvolumeclaim "pvc-1" deleted
user@cloudshell:~ (project)$ kubectl delete pvc pvc-2 -n tst-stage
persistentvolumeclaim "pvc-2" deleted
user@cloudshell:~ (project)$ kubectl get sc,pv,pvc -A
NAME PROVISIONER AGE
storageclass.storage.k8s.io/another-storageclass kubernetes.io/gce-pd 7m49s
storageclass.storage.k8s.io/standard (default) kubernetes.io/gce-pd 137m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-67c35c06-3f38-4f55-98c8-6b2b41ae5313 1Gi RWO Retain Released tst-dev/pvc-1 another-storageclass 7m38s
persistentvolume/pvc-be30a43f-e96c-4c9f-8863-464823899a8f 1Gi RWO Retain Released tst-stage/pvc-2 another-storageclass 7m37s
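A StorageClass like the one shown in the output above can be created with a manifest along these lines. This is a sketch: the name another-storageclass matches the output above, and the provisioner/parameters assume a GKE cluster using GCE persistent disks; the PVC shown with it is illustrative.

```yaml
# StorageClass whose dynamically provisioned PVs survive PVC deletion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: another-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
reclaimPolicy: Retain        # PVs become Released instead of being deleted
volumeBindingMode: Immediate
---
# A PVC that uses it (claim name and size are illustrative).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-disk-claim-1
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: another-storageclass
```

Note that reclaimPolicy on a StorageClass is immutable once created and only applies to PVs provisioned after the class exists; for an already-bound PV you would patch the PV itself.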
This is caused by the reclaim policy being set to Delete. I will prepare a detailed answer for you in a while; I have already tried a few scenarios based on your YAMLs. How many nodes are you using? ReadWriteOnce means that all pods from one node are able to mount the volume. Can you provide the output of kubectl get po,pv,pvc -A? Can you provide your storageclass YAML?

Using the command kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' (substituting the name of my pv), I changed the policy to Retain. Unfortunately, after reading more about reclaim policies, I think this may be the wrong track. Those policies govern what happens to the data when the claim is deleted. My problem is that I can't persist any data in the first place. I'm not worried about what happens if the pvc is deleted, only about saving data initially. I appreciate your detailed answer, but I don't think it applies here. As I mentioned in the original question, I can save data initially and see the files in the pod, but after a restart the files are gone.
Can you please provide the output of kubectl get pv,pvc -A? One PVC's storageclass is standard and the second is fast; which one should be bound? You are not using a PV; it was set up dynamically and you only created the PVC?

Sorry for the poor description. To be clearer: I can insert data into the pod, but it is not restored when the pod restarts. I really appreciate your detailed answer and hope it helps others.