Logstash and Kibana cannot see the Elasticsearch container in Docker
I am using the repo here:

CentOS 8
ELK version 7.4.0
docker-compose version 1.24.1
Docker version 18.06.3-ce

When I bring the containers up, Elasticsearch loads fine. After it loads, the Kibana and Logstash containers start, but once loaded they cannot see the Elasticsearch container and produce the following messages.

Logstash:
[2019-10-22T18:32:57,321][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elastic:xxxxxx@elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elastic:xxxxxx@elasticsearch:9200/][Manticore::SocketException] No route to host (Host unreachable)"}
Kibana:
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["license","warning","xpack"],"pid":6,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2019-10-22T18:41:22Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
When I check what the Elasticsearch hostname is, I get the hostname auto-generated by Docker:
# docker-compose exec elasticsearch hostname
7d50d6a75028
I was under the impression that if the containers are on the same network, Docker should map elasticsearch:9200 to the container's correct IP address.
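That is how Docker's embedded DNS is supposed to work on a user-defined network: service names resolve to container IPs. One quick way to check the resolution is from inside another container on the same network. This is a sketch, assuming the compose project from this post is up and the service is named elasticsearch; it skips the check when no project is running:

```shell
# Ask the resolver inside the kibana container what "elasticsearch" maps to.
# Docker's embedded DNS listens at 127.0.0.11 inside each container.
if command -v docker-compose >/dev/null 2>&1 && docker-compose ps >/dev/null 2>&1; then
  docker-compose exec kibana getent hosts elasticsearch
else
  echo "compose project not running; skipping live DNS check"
fi
```

If the name resolves but curl still fails with "No route to host", the problem is packet routing between containers, not DNS itself.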
I tried setting the hostname in the docker-compose file like this:
...
services:
  elasticsearch:
    hostname: elasticsearch
...
The change is reflected in the container:
# docker-compose exec elasticsearch hostname
elasticsearch
But Kibana and Logstash still cannot see it.
I am also unable to reach that host from the Kibana container:
# docker-compose exec kibana curl http://elasticsearch:9200
curl: (7) Failed connect to elasticsearch:9200; No route to host
Checking the logs in the ES container, it appears to be running fine:
# docker logs 9ef8
{"type": "server", "timestamp": "2019-10-22T18:50:55,870Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "docker-cluster", "node.name": "elasticsearch", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-7-2019.10.22][0]]]).", "cluster.uuid": "O7t3UC1tSFibbjkwjqbX6A", "node.id": "qbzcFdQpR2KHBad-s8U1Vw" }
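Since port 9200 is published to the host, the cluster can also be sanity-checked from the host itself, independently of container-to-container routing. A sketch, assuming the default port mapping from the compose file; it prints a message instead of failing when Elasticsearch is unreachable:

```shell
# Hit the published port on the host; this path does not go through
# Docker's internal network, so it isolates the container-to-container issue.
if curl -s --max-time 2 http://localhost:9200/ >/dev/null 2>&1; then
  curl -s http://localhost:9200/_cluster/health?pretty
else
  echo "Elasticsearch is not reachable on localhost:9200"
fi
```

When this succeeds while the in-container curl fails, the Elasticsearch service itself is healthy and the problem lies between the containers.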
I must be missing something, but I cannot figure out what.
I searched for this error, and it seems the way I have set things up should work.
Can anyone help?
My docker-compose file looks like this:
version: '3.7'
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
I was able to get this working on CentOS 7.7 without any problems.
It seems to be related to CentOS 8. Even when I run Elasticsearch and Kibana inside Docker on CentOS 8, I have the same problem. Something appears to be wrong, because Kibana cannot talk to Elasticsearch inside the Docker network. curl from inside Kibana to Elasticsearch:
curl -X GET http://elasticsearch:9200
throws the error: Failed connect to elasticsearch:9200; No route to host
The same docker-compose file works fine with Docker for Windows, Docker on Ubuntu, and so on.
EDIT:
After some research, I finally figured out a fix.
The cause was CentOS's firewalld:
it blocks DNS inside the Docker container network, so it needs to be bypassed or disabled entirely.
However, rather than disabling firewalld completely,
I found a way to bypass it: after performing the steps to let Docker's DNS through the firewall, Kibana was able to connect to Elasticsearch inside Docker. Thanks :)
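For reference, the commonly cited firewalld-side workaround on CentOS 8 is to trust the Docker bridge interface and enable masquerading. This is a sketch, not necessarily the exact steps the author followed; the interface name docker0 and the zone names are assumptions that depend on the host's setup:

```shell
# Commonly cited firewalld workaround on CentOS 8 (run as root).
# NOTE: "docker0" is the default bridge; compose projects create their own
# bridges named br-<network-id>, which you can list with: ip link show
firewall-cmd --permanent --zone=trusted --add-interface=docker0

# Allow NAT for container traffic leaving the host
firewall-cmd --permanent --zone=public --add-masquerade

# Apply the rules, then restart Docker so its iptables chains are recreated
firewall-cmd --reload
systemctl restart docker
```

For a compose network, the br-<network-id> bridge would need to be added to the trusted zone instead of (or in addition to) docker0.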