
Volume binding error when deploying the ELK stack on Kubernetes with Helm


I'm trying to deploy the ELK stack on a Kubernetes cluster using Helm charts. When I run

helm install elk-stack stable/elastic-stack

I get the following output:

NAME: elk-stack
LAST DEPLOYED: Mon Aug 24 07:30:31 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The elasticsearch cluster and associated extras have been installed.
Kibana can be accessed:

  * Within your cluster, at the following DNS name at port 9200:

    elk-stack-elastic-stack.default.svc.cluster.local

  * From outside the cluster, run these commands in the same shell:

    export POD_NAME=$(kubectl get pods --namespace default -l "app=elastic-stack,release=elk-stack" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:5601 to use Kibana"
    kubectl port-forward --namespace default $POD_NAME 5601:5601
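(Side note for anyone reproducing this: stable/elastic-stack comes from the old, now archived stable chart repository, so it has to be registered with Helm before the install command above will resolve. A minimal setup sketch, not part of the original question:

helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install elk-stack stable/elastic-stack
)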

For the logstash pod I see:

Warning  FailedScheduling  7m53s  default-scheduler  running "VolumeBinding" filter plugin for pod "elk-stack-logstash-0": pod has unbound immediate PersistentVolumeClaims

The storage class slow and the persistent volume claim claim1 are my own experiments: I created them with kubectl create and a YAML file. The other objects were created automatically by Helm (I think).
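For reference, the kubectl get sc output further down shows that slow is backed by the kubernetes.io/gce-pd provisioner, uses Immediate binding, and is marked as the default class. The manifests behind these experiments would therefore have looked roughly like the following reconstruction (the 1Gi size is a guess; the question does not show the original files):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # shown as (default) in the output below
provisioner: kubernetes.io/gce-pd   # provisions GCE persistent disks; only works on Google Cloud
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  storageClassName: slow
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi              # assumed size; not given in the question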

Output of kubectl get pvc data-elk-stack-elasticsearch-master-0 -o yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2020-08-24T07:30:38Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: elasticsearch
    release: elk-stack
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
          f:release: {}
      f:spec:
        f:accessModes: {}
        f:resources:
          f:requests:
            .: {}
            f:storage: {}
        f:volumeMode: {}
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-08-24T07:30:38Z"
  name: data-elk-stack-elasticsearch-master-0
  namespace: default
  resourceVersion: "201123"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-elk-stack-elasticsearch-master-0
  uid: de58f769-f9a7-41ad-a449-ef16d4b72bc6
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  volumeMode: Filesystem
status:
  phase: Pending
Can someone help me solve this problem? Thanks in advance.

The pods are stuck in Pending because the PVCs below are Pending: their corresponding PVs have not been created.

data-elk-stack-elasticsearch-master-0
data-elk-stack-logstash-0
data-elk-stack-elasticsearch-data-0
Since you mentioned this is for local development, you can use hostPath PVs. So, using the sample manifests below, create one PV for each pending PVC; you will create 3 PVs in total. After applying them, verify the binding as sketched after the manifests.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-master
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi              # matches the 4Gi request of data-elk-stack-elasticsearch-master-0
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"         # in practice, give each PV its own directory, e.g. /mnt/data/elk-master
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-logstash
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"         # e.g. /mnt/data/elk-logstash
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-data
  labels:
    type: local
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"         # e.g. /mnt/data/elk-data

I think this is a volume-related problem, but I don't know how to solve it.
Please edit the question to add the output of kubectl get pv,pvc,sc -A.
OK, I will, thanks.
The problem is that the PVCs are Pending. Are you on GKE and intending to use GCE-PD for storage, or do you want to use a hostPath volume?
This is a local cluster for development and testing purposes, so I want to use hostPath.
OK. I created the three PVs with the correct sizes, but the pods are still Pending. I also tried deleting the Helm release and reinstalling it, but nothing changed. The elk-data PV is Bound to a PVC named task-pv-claim (a PVC I created earlier); elk-logstash and elk-master are Available.
No, I have the same problem, the pods are still Pending.
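A note on the elk-data PV ending up Bound to the unrelated task-pv-claim: a PV without a storageClassName can be claimed by any compatible PVC. To reserve a PV for one specific claim, it can be pre-bound with spec.claimRef; a sketch for the data volume (the claim name is the chart-created PVC from the question):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-data
  labels:
    type: local
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  claimRef:                     # pre-binds this PV to exactly one PVC
    namespace: default
    name: data-elk-stack-elasticsearch-data-0
  hostPath:
    path: "/mnt/data"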
Are all the pods and PVCs still Pending? What is the current output of kubectl get pv,pvc,sc -A?
I have posted the output of the command as an answer:

NAME                                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/elasticsearch-data   10Gi       RWO            Retain           Bound    default/elasticsearch-data   manual                  16d

NAMESPACE   NAME                                                                STATUS    VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/claim1                                        Pending                                                  slow           64m
default     persistentvolumeclaim/data-elk-stack-elasticsearch-data-0           Pending                                                                 120m
default     persistentvolumeclaim/data-elk-stack-elasticsearch-master-0         Pending                                                                 120m
default     persistentvolumeclaim/data-elk-stack-logstash-0                     Pending                                                                 120m
default     persistentvolumeclaim/elasticsearch-data                            Bound     elasticsearch-data   10Gi       RWO            manual         16d
default     persistentvolumeclaim/elasticsearch-data-elasticsearch-data-0       Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-data-elasticsearch-data-1       Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0    Pending                                                                 16d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0   Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1   Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2   Pending                                                                 16d

NAMESPACE   NAME                                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
            storageclass.storage.k8s.io/slow (default)   kubernetes.io/gce-pd   Delete          Immediate           false                  66m
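Two things stand out in this output: the default slow storage class uses the kubernetes.io/gce-pd provisioner, which can only create disks on Google Cloud and will never satisfy a claim on a local cluster, and the chart's data-elk-stack-* claims have an empty STORAGECLASS column, so they are waiting for statically created PVs. The claim events confirm what the binding is blocked on; the usual checks:

kubectl describe pvc data-elk-stack-elasticsearch-master-0 -n default
kubectl get events -n default --field-selector involvedObject.kind=PersistentVolumeClaim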