Kubernetes PersistentVolume not using local host path
I want to (temporarily) use local host bind directories to persist SonarQube's application state. Below I describe how I set this up in a self-hosted Kubernetes (1.11.3) cluster.

The problem I'm running into is that, although everything works, Kubernetes does not appear to be using the host path (/opt/sonarqube/postgresql) to persist the data. When I docker-inspect the SonarQube container, it shows the binds below.

How can I get it to mount using the host mount path?
"Binds": [
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/0:/opt/sonarqube/conf",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volumes/kubernetes.io~configmap/startup:/tmp-script/:ro",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/2:/opt/sonarqube/data",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volume-subpaths/sonarqube-pv-postgresql/sonarqube/3:/opt/sonarqube/extensions",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/volumes/kubernetes.io~secret/default-token-zrjdj:/var/run/secrets/kubernetes.io/serviceaccount:ro",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/etc-hosts:/etc/hosts",
"/var/lib/kubelet/pods/49666f05-dad0-11e8-95cd-666c474c0e54/containers/sonarqube/95053a5c:/dev/termination-log"
]
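Those `/var/lib/kubelet/pods/<uid>/volume-subpaths/...` binds are what kubelet generates for `subPath` volume mounts: each subpath directory is itself a bind mount into the volume's backing directory, so writes should still end up under the PV's `hostPath`. A container spec that mounts one volume at several paths via `subPath` (which produces exactly this kind of bind) looks roughly like this sketch; the volume, subpath, and claim names are illustrative, not taken from the actual chart:

```yaml
# Illustrative fragment: one PV-backed volume mounted several times
# via subPath, which kubelet materializes under
# /var/lib/kubelet/pods/<pod-uid>/volume-subpaths/<volume>/<container>/<n>
volumeMounts:
  - name: sonarqube
    mountPath: /opt/sonarqube/conf
    subPath: conf
  - name: sonarqube
    mountPath: /opt/sonarqube/data
    subPath: data
volumes:
  - name: sonarqube
    persistentVolumeClaim:
      claimName: sonarqube     # hypothetical claim name
```

This is why `docker inspect` shows kubelet-internal paths rather than the hostPath directly: the hostPath is one more bind-mount level below the paths docker sees.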
Here are the steps I took to set up the application. I created a StorageClass for the PVs that mount local paths:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage-nowait
provisioner: kubernetes.io/no-provisioner
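For local volumes, the Kubernetes documentation recommends delaying PVC binding until a consuming pod is scheduled, so that the scheduler can take the PV's node affinity into account. A sketch of such a StorageClass (the name here is an assumption, not from the original setup):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# Delay binding until a pod using the claim is scheduled, so the
# scheduler can honor the PV's nodeAffinity.
volumeBindingMode: WaitForFirstConsumer
```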
Then I created two PVs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sonarqube-pv-postgresql
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  hostPath:
    path: /opt/sonarqube/postgresql
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - myhost
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  labels:
    vol: myvolume
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node
I deployed the SonarQube Helm chart with this extra configuration so it would use the PVs I just created:
image:
  tag: 7.1
persistence:
  enabled: true
  storageClass: local-storage
  accessMode: ReadWriteOnce
  size: 10Gi
postgresql:
  persistence:
    enabled: true
    storageClass: local-storage
    accessMode: ReadWriteOnce
    size: 10Gi
If you look at the docs:

- hostPath (single-node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)

Use `local` instead:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  labels:
    vol: myvolume
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - example-node
Then you have to create the corresponding PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-storage
  selector:
    matchLabels:
      vol: "myvolume"
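As an alternative to the label selector, a PVC can be pinned to one specific PV by name via `spec.volumeName`. A minimal sketch, reusing the claim and volume names from the example above:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: local-storage
  # Bind this claim directly to the named PV instead of matching
  # on labels.
  volumeName: example-pv
```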
Then in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: myclaim
If you don't care which node the pod lands on, and don't mind each node holding different data, you can also use hostPath directly in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-pd
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /data
        # this field is optional
        type: DirectoryOrCreate
Hi @Rico, I know this is a bit old, but I'm struggling to understand it and to find good docs. When using local volumes we need to add node affinity with the hostname. So how do local volumes scale to a multi-node cluster? Do we need to create as many local PVs as there are nodes, statically? Or is node affinity not required?

It does not work multi-node. Basically, the local volumes on each node are all different.

OK. So just to confirm, I need to create n local volumes for n nodes?

Yes, but they will be different volumes.

Thanks. I was hoping they could all be linked to the same storage class, since pod scheduling would then pick the volume whose node affinity names the same node the pod runs on.
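The pattern discussed in these comments — one local PV per node, all sharing a single storage class — can be sketched like this; the node names `node-1`/`node-2` and the PV names are hypothetical:

```yaml
# One local PV per node. Both share storageClassName, and with a
# WaitForFirstConsumer storage class the scheduler binds a claim to
# whichever PV's nodeAffinity matches the node the pod lands on.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node-1        # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1       # hypothetical node name
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-node-2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-2
```

Each PV still holds independent data on its own node; the shared storage class only lets scheduling choose among them.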