Docker Compose: Kibana can't talk to Elasticsearch over the Docker network when using docker-compose


I have a docker-compose configuration in which Kibana can't reach Elasticsearch:

{"type":"log","@timestamp":"2019-09-09T22:34:32Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
{"type":"log","@timestamp":"2019-09-09T22:34:32Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2019-09-09T22:34:34Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
{"type":"log","@timestamp":"2019-09-09T22:34:34Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2019-09-09T22:34:34Z","tags":["warning","task_manager"],"pid":6,"message":"PollError No Living connections"}
{"type":"log","@timestamp":"2019-09-09T22:34:34Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://0.0.0.0:9200/"}
{"type":"log","@timestamp":"2019-09-09T22:34:34Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
Here is my docker-compose.yml file:

version: '2.2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.1
    environment:
      ELASTICSEARCH_HOSTS: http://0.0.0.0:9200
    networks:
      - esnet
    ports:
      - 5601:5601
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    container_name: es01
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - esnet
  es02:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
    container_name: es02
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    networks:
      - esnet

volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local

networks:
  esnet:
Does anyone know why Kibana can't talk to Elasticsearch? Maybe I should use this:

ELASTICSEARCH_HOSTS: http://esnet:9200
instead of this:

ELASTICSEARCH_HOSTS: http://0.0.0.0:9200
?


Any help is much appreciated; I have to add some text to make the question complete. tyvm
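
One thing worth noting: esnet is the name of the Compose network, not a hostname, and 0.0.0.0 is a wildcard bind address rather than something another container can route to. On a user-defined Compose network, containers resolve each other by service name, so a minimal sketch of the kibana service, assuming the es01 service name from the file above, would be:

  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.1
    environment:
      # address Elasticsearch by its Compose service name
      ELASTICSEARCH_HOSTS: http://es01:9200
    networks:
      - esnet
    ports:
      - 5601:5601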

OK, one reason they can't connect is that the ES containers die with the following error:

{"type": "server", "timestamp": "2019-09-09T22:42:15,440+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "initialized"  }
{"type": "server", "timestamp": "2019-09-09T22:42:15,440+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "starting ..."  }
{"type": "server", "timestamp": "2019-09-09T22:42:15,667+0000", "level": "INFO", "component": "o.e.t.TransportService", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "publish_address {172.21.0.2:9300}, bound_addresses {0.0.0.0:9300}"  }
{"type": "server", "timestamp": "2019-09-09T22:42:15,675+0000", "level": "INFO", "component": "o.e.b.BootstrapChecks", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "bound or publishing to a non-loopback address, enforcing bootstrap checks"  }
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
{"type": "server", "timestamp": "2019-09-09T22:42:15,727+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "stopping ..."  }
{"type": "server", "timestamp": "2019-09-09T22:42:15,869+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "stopped"  }
{"type": "server", "timestamp": "2019-09-09T22:42:15,869+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "closing ..."  }
{"type": "server", "timestamp": "2019-09-09T22:42:15,896+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "closed"  }
{"type": "server", "timestamp": "2019-09-09T22:42:15,904+0000", "level": "INFO", "component": "o.e.x.m.p.NativeController", "cluster.name": "docker-cluster", "node.name": "es01",  "message": "Native controller process has stopped - no new native processes can be started"  }
So that at least explains the first place to look in solving the problem.

It looks like the answer is the following:

sudo sysctl -w vm.max_map_count=262144
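
Note that sysctl -w only lasts until the next reboot. A minimal sketch for making the setting persistent, assuming a Linux host that reads /etc/sysctl.d/ at boot (the file name here is arbitrary):

# apply immediately
sudo sysctl -w vm.max_map_count=262144
# persist across reboots
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf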