Elasticsearch Kubernetes persistent volume not working on GCE
I am trying to make my Elasticsearch pod persistent so that data is preserved when the pod is redeployed or recreated. Elasticsearch is part of a Graylog2 setup. After setting everything up, I sent some logs to Graylog and could see them appear on the dashboard. However, I then deleted the elasticsearch pod, and after it was recreated all the data was gone from the Graylog dashboard. I am using GCE. Here is my persistent volume configuration:
elasticsearch, docker, kubernetes, google-cloud-platform, persistent-storage
kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-pv
  labels:
    type: gcePD
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    fsType: ext4
    pdName: elastic-pv-disk
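A gcePersistentDisk PV does not create the disk for you; pdName must refer to a disk that already exists in the same zone as the cluster's nodes. As a hedged sketch (the zone here is a placeholder, not taken from the question), the disk could be created with:

```shell
# Create the GCE disk that the PV's pdName refers to; it must exist
# before the PV can attach, and the zone must match the cluster's nodes.
gcloud compute disks create elastic-pv-disk \
    --size=200GB \
    --zone=us-central1-a
```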
Persistent volume claim configuration:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pvc
  labels:
    type: gcePD
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
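Note that this claim has no selector, so it binds to any available 200Gi ReadWriteOnce volume, not necessarily elastic-pv. A hedged sketch of pinning the claim to the labeled PV, reusing the type: gcePD label already defined on elastic-pv above:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  # Bind only to PVs carrying the label set on elastic-pv above.
  selector:
    matchLabels:
      type: gcePD
```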
Here is my elasticsearch deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elastic-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        type: elasticsearch
    spec:
      containers:
        - name: elastic-container
          image: gcr.io/project/myelasticsearch:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 9300
              name: first-port
              protocol: TCP
            - containerPort: 9200
              name: second-port
              protocol: TCP
          volumeMounts:
            - name: elastic-pd
              mountPath: /data/db
      volumes:
        - name: elastic-pd
          persistentVolumeClaim:
            claimName: elastic-pvc
Output of kubectl describe pod:
Name:           elastic-deployment-1423685295-jt6x5
Namespace:      default
Node:           gke-sd-logger-default-pool-2b3affc0-299k/10.128.0.6
Start Time:     Tue, 09 May 2017 22:59:59 +0500
Labels:         pod-template-hash=1423685295
                type=elasticsearch
Status:         Running
IP:             10.12.0.11
Controllers:    ReplicaSet/elastic-deployment-1423685295
Containers:
  elastic-container:
    Container ID:   docker://8774c747e2a56363f657a583bf5c2234ed2cff64dc21b6319fc53fdc5c1a6b2b
    Image:          gcr.io/thematic-flash-786/myelasticsearch:v1
    Image ID:       docker://sha256:7c25be62dbad39c07c413888e275ae419a66070d37e0d98bf5008e15d7720eec
    Ports:          9300/TCP, 9200/TCP
    Requests:
      cpu:          100m
    State:          Running
      Started:      Tue, 09 May 2017 23:02:11 +0500
    Ready:          True
    Restart Count:  0
    Volume Mounts:
      /data/db from elastic-pd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qtdbb (ro)
    Environment Variables:  <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         True
  PodScheduled  True
Volumes:
  elastic-pd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elastic-pvc
    ReadOnly:   false
  default-token-qtdbb:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-qtdbb
QoS Class:      Burstable
Tolerations:    <none>
No events.
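To check whether Elasticsearch is actually writing under the mounted path, one can exec into the pod shown above (a diagnostic sketch; the pod name is the one from this describe output):

```shell
# List the mounted data directory inside the running container;
# if Elasticsearch is not writing here, the PV has nothing to persist.
kubectl exec -it elastic-deployment-1423685295-jt6x5 -- ls -la /data/db
```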
Output of kubectl describe pvc:
Name: elastic-pvc
Namespace: default
StorageClass:
Status: Bound
Volume: elastic-pv
Labels: type=gcePD
Capacity: 200Gi
Access Modes: RWO
No events.
Confirmation that the actual disk exists:
What could be the reason the persistent volume is not persistent?

Answer: In the official image, Elasticsearch stores its data in /usr/share/elasticsearch/data rather than /data/db. It seems you need to update the mount to /usr/share/elasticsearch/data for the data to land on the persistent volume.

Comments:

Two things come to mind. Does the elastic-pv-disk already exist in GCP? Is there another volume that could match the claim's selector (200Gi)? Showing the kubectl describe output might help. – AndyShinn

@AndyShinn Please see the update. I have been poking around and there is only one PV.

Is /data/db the correct place where ES actually stores its data? Have you tried kubectl exec -it bash into the container to confirm that the ES data is landing there? – AndyShinn

When I use /usr/share/elasticsearch/data as the mount path, the container does not run for some reason. This is the error I get from kubectl describe pod: Warning BackOff 4s kubelet, gke-standard-cluster-1-default-pool-8e52b876-r0xq Back-off restarting failed container
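The Back-off error after switching the mount path is commonly a permissions problem: the official elasticsearch image runs as the elasticsearch user (uid/gid 1000), which cannot write to a freshly formatted disk owned by root. A hedged sketch of the corrected deployment fragment, assuming the official image's uid/gid:

```yaml
spec:
  template:
    spec:
      # Make the volume group-writable by gid 1000 (the elasticsearch
      # user in the official image) so the data directory is writable.
      securityContext:
        fsGroup: 1000
      containers:
        - name: elastic-container
          image: gcr.io/project/myelasticsearch:v1
          volumeMounts:
            - name: elastic-pd
              # The path the official image actually stores data under.
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: elastic-pd
          persistentVolumeClaim:
            claimName: elastic-pvc
```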