
K8s Elasticsearch with filebeat keeps 'not ready' after rebooting


I'm running into a situation that I can't quite make sense of.

  • Environment
    • Two dedicated nodes on Azure CentOS 8.2 (2 vCPU, 16 GB RAM), not AKS
    • 1 master node, 1 worker node
    • kubernetes v1.19.3
    • helm v2.16.12
    • elastic Helm chart ()
The first time around, it worked fine with the following installation:

## elasticsearch, filebeat
# kubectl apply -f pv.yaml
# helm install -f values.yaml --name elasticsearch elastic/elasticsearch
# helm install --name filebeat --version 7.9.3 elastic/filebeat
curl <elastic-ip>:9200 and curl <elastic-ip>:9200/_cat/indices show the correct values.
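
The pv.yaml applied above is only linked, not reproduced. As a hedged sketch only: a hostPath PersistentVolume over the /mnt/data path mentioned below might look roughly like this (the name, capacity, and everything not taken from the question are assumptions):

# Sketch of a possible pv.yaml; names and sizes are assumptions, not the actual file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-master-pv        # hypothetical name
spec:
  capacity:
    storage: 30Gi                      # hypothetical size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # 'Retain' per the comments below
  hostPath:
    path: /mnt/data                    # path mentioned in the question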

But after rebooting the worker node, the pods just stay at 0/1 Ready and never recover:

NAME                      READY   STATUS    RESTARTS   AGE
elasticsearch-master-0    0/1     Running   10         71m
filebeat-filebeat-67qm2   0/1     Running              40m

In this state, deleting /mnt/data/nodes and rebooting gets it working again.
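
Concretely, the recovery described above amounts to something like this on the worker node (a sketch; assumes the hostPath volume lives on that node):

# Run on the worker node (assumption: the PV is a hostPath on this node)
sudo rm -rf /mnt/data/nodes   # wipe the Elasticsearch data directory
sudo reboot                   # after the reboot the pod comes back 1/1 Ready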

I don't think there's anything special about the elasticsearch pod itself:

#logs
{"type": "server", "timestamp": "2020-10-26T07:49:49,708Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0", "message": "Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-7.9.3-2020.10.26-000001][0]]]).", "cluster.uuid": "sWUAXJG9QaKyZDe0BLqwSw", "node.id": "ztb35hToRf-2Ahr7olympw"  }

#describe
  Normal   SandboxChanged          4m4s (x3 over 4m9s)   kubelet          Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled                  4m3s                  kubelet          Container image "docker.elastic.co/elasticsearch/elasticsearch:7.9.3" already present on machine
  Normal   Created                 4m1s                  kubelet          Created container configure-sysctl
  Normal   Started                 4m1s                  kubelet          Started container configure-sysctl
  Normal   Pulled                  3m58s                 kubelet          Container image "docker.elastic.co/elasticsearch/elasticsearch:7.9.3" already present on machine
  Normal   Created                 3m58s                 kubelet          Created container elasticsearch
  Normal   Started                 3m57s                 kubelet          Started container elasticsearch
  Warning  Unhealthy               91s (x14 over 3m42s)  kubelet          Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )

#events
6m1s        Normal    Pulled                    pod/elasticsearch-master-0                     Container image "docker.elastic.co/elasticsearch/elasticsearch:7.9.3" already present on machine
6m1s        Normal    Pulled                    pod/filebeat-filebeat-67qm2                    Container image "docker.elastic.co/beats/filebeat:7.9.3" already present on machine
5m59s       Normal    Started                   pod/elasticsearch-master-0                     Started container configure-sysctl
5m59s       Normal    Created                   pod/elasticsearch-master-0                     Created container configure-sysctl
5m59s       Normal    Created                   pod/filebeat-filebeat-67qm2                    Created container filebeat
5m58s       Normal    Started                   pod/filebeat-filebeat-67qm2                    Started container filebeat
5m56s       Normal    Created                   pod/elasticsearch-master-0                     Created container elasticsearch
5m56s       Normal    Pulled                    pod/elasticsearch-master-0                     Container image "docker.elastic.co/elasticsearch/elasticsearch:7.9.3" already present on machine
5m55s       Normal    Started                   pod/elasticsearch-master-0                     Started container elasticsearch
61s         Warning   Unhealthy                 pod/filebeat-filebeat-67qm2                    Readiness probe failed: elasticsearch: http://elasticsearch-master:9200...
  parse url... OK
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 10.97.133.135
    dial up... ERROR dial tcp 10.97.133.135:9200: connect: connection refused
59s         Warning   Unhealthy                 pod/elasticsearch-master-0                     Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
The /mnt/data path is owned by 1000:1000.
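
For reference, that ownership can be checked or restored like so:

ls -ld /mnt/data                   # should show uid:gid 1000:1000 (the elasticsearch user in the container)
sudo chown -R 1000:1000 /mnt/data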

With only elasticsearch and no filebeat, rebooting causes no problem.

I'm completely confused by this :(

What am I missing?


  • pv.yaml
  • values.yaml (a hedged sketch of its likely contents follows)
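
values.yaml is only linked; judging from the comments and the answer below, the relevant parts were presumably along these lines (a sketch reconstructed from the discussion, not the actual file):

# Sketch of the values likely involved; reconstructed from the comments below.
replicas: 1
minimumMasterNodes: 1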
Issue

    There is a problem when you run elasticsearch as a single-replica cluster:

    Warning  Unhealthy               91s (x14 over 3m42s)  kubelet          Readiness probe failed: Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )
    Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )
    
    Solution

    As @adinhodovic mentioned, if you run a single-replica cluster, add the following helm value:

    clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"

    With a single replica your cluster status never turns green, so the default readiness probe shown above (wait_for_status=green) can never succeed.

    Altogether, the following values should work:

    replicas: 1
    minimumMasterNodes: 1
    clusterHealthCheckParams: 'wait_for_status=yellow&timeout=1s'

    Comments

    Jakub: Can you check whether there is anything of note in the filebeat and elasticsearch pods with kubectl logs? Could you also add the output of kubectl describe for the filebeat pod? And could you check whether it works if you change persistentVolumeReclaimPolicy: Retain to persistentVolumeReclaimPolicy: Recycle?

    OP: @Jakub Hi, thanks for the reply. The Recycle value gives the same result :( I've attached some information from kubectl describe, kubectl logs, and the events; these are the results with the 'Retain' PV. Files suffixed ready0 show ready 0/1, status Running after the reboot; the others show it working normally (ready 1/1, status Running). In the filebeat logs there is a flannel CNI problem: networkPlugin cni failed to set up pod xxx network: open /run/flannel/subnet.env: no such file or directory.

    Jakub: Could you tell me whether your flannel pods are up and running? Could you also check the kubelet logs with journalctl -u kubelet? The second thing is the readiness probe; there is a workaround for it. Could you try it and see whether it helps?

    OP: The networkPlugin error comes from the reboot itself. Following the github issue you linked, I tried clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s" and it works!! This symptom can appear with replicas: 1 and minimumMasterNodes: 1. Thank you so much @Jakub :)

    Jakub: Glad to help. I've posted an answer with that information. If this or any other answer solved your issue, please mark it as accepted or upvote it.
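
    One way to apply the workaround to the existing release (a sketch; assumes the values above were added to the values.yaml from the question, and uses the Helm v2 syntax and release/chart names from the install commands):

    # After adding clusterHealthCheckParams to values.yaml, upgrade the release
    helm upgrade elasticsearch elastic/elasticsearch -f values.yaml

    The reason yellow is acceptable here: with one data node the replica shards can never be allocated, so green is unreachable by design, while yellow means all primary shards are up and the single-node cluster is in fact usable.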