elasticsearch - "Kibana server is not ready yet" when running from the OpenDistro docker images

I am running an elasticsearch cluster and Kibana with the following docker-compose:

services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.3.0
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node1
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  odfe-node2:
    image: amazon/opendistro-for-elasticsearch:1.3.0
    container_name: odfe-node2
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node2
      - discovery.seed_hosts=odfe-node1,odfe-node2
      - cluster.initial_master_nodes=odfe-node1,odfe-node2
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - odfe-data2:/usr/share/elasticsearch/data
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.3.0
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
      LOGGING_VERBOSE: "true"
    networks:
      - odfe-net

volumes:
  odfe-data1:
  odfe-data2:

networks:
  odfe-net:
There are no errors in the logs, and the elastic cluster runs fine: I can query it and submit documents. However, when I try to load Kibana by navigating to it in the browser, both the browser and the logs show the message:

Kibana server is not ready yet
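When Kibana reports "not ready", it usually means its health check against Elasticsearch has not succeeded yet, so it helps to probe the same endpoint yourself. A minimal sketch, assuming the OpenDistro demo security defaults (self-signed certs, admin/admin credentials) and the local port mapping from the compose file above; adjust host, port, and credentials for your setup:

```python
# Probe the Elasticsearch HTTPS endpoint the way Kibana does, to tell
# whether "Kibana server is not ready yet" means ES itself is unreachable.
import base64
import json
import ssl
import urllib.request


def cluster_health(host="localhost", port=9200, user="admin",
                   password="admin", timeout=5):
    """Return the cluster health status ("green"/"yellow"/"red"),
    or an error description if the cluster cannot be reached."""
    # OpenDistro's demo config uses self-signed certs, so skip verification.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"https://{host}:{port}/_cluster/health",
        headers={"Authorization": f"Basic {creds}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout, context=ctx) as resp:
            return json.load(resp).get("status", "unknown")
    except OSError as exc:  # covers refused connections, timeouts, TLS errors
        return f"unreachable: {exc}"


if __name__ == "__main__":
    print(cluster_health())
```

If this reports "green" or "yellow" while Kibana still refuses to come up, the problem is on the Kibana side (startup optimization, memory, or its connection settings) rather than the cluster itself.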


What could be the problem?

It turns out I had to allocate more memory to the Docker service (Settings -> Advanced), and Kibana now starts as expected.
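The compose file itself cannot raise Docker's total memory allocation, but it can make under-provisioning fail loudly instead of silently starving the containers. A sketch of per-service limits that could be added to each service above; the `mem_limit` values are assumptions for illustration (with Xms/Xmx at 512m, roughly 1g per ES node plus headroom for Kibana), not tuned recommendations, and `mem_limit` applies to the legacy/v2 compose format used here:

```yaml
# Hypothetical per-service additions to the compose file above.
# Docker itself still needs enough total memory for all three
# containers (4GB worked in this case).
  odfe-node1:
    mem_limit: 1g
  odfe-node2:
    mem_limit: 1g
  kibana:
    mem_limit: 1g
```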

How long have you waited? – IanGabes

@IanGabes It has been running for 50 minutes now and still gives the same message I asked about.

I only ask because I have definitely seen Kibana take 10 to 15 minutes to "optimize" itself on first startup, but 50 minutes is (probably) too long. I ran into a similar issue on mainline kibana that I talk about here:

@IanGabes Turns out Docker did not have enough memory allocated to it. I gave it 4GB of RAM and things started working.

Glad you found it!