
Docker Filebeat not sending logs to Logstash


So here is the big picture: my goal is to index large amounts of (.txt) data using the ELK stack + Filebeat.

Basically, my problem is that Filebeat does not seem to be sending the logs to Logstash. My guess is that some Docker networking configuration is off.

The code for my project can be found at [link].

The ELK container

For this, I have a docker-compose.yml that runs a container from the sebp/elk image, as follows:

version: '2'

services:
  elk:
    container_name: elk
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5045:5044"
    volumes:
      - /path/to/volumed-folder:/logstash
    networks:
      - elk_net

networks:
  elk_net:
    driver: bridge
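Worth noting about the ports mapping above (my own reading, not something stated in the original question): the host side and the container side of the beats port differ, which affects which address other processes should use.

```yaml
ports:
  - "5045:5044"   # "host:container"
# From the host machine, the beats listener is reached at localhost:5045.
# From another container on the same docker network, it is elk:5044,
# since containers connect to each other on container ports directly.
```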
After creating the container, I go to the container's bash terminal and run the following command:

/opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /logstash/config/filebeat-config.conf
When I run this command, I get the following logs, and then it just starts waiting, printing nothing else:

$ /opt/logstash/bin/logstash --path.data /tmp/logstash/data -f /logstash/config/filebeat-config.conf                                                                                             
Sending Logstash's logs to /opt/logstash/logs which is now configured via log4j2.properties
[2018-08-14T11:51:11,693][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/tmp/logstash/data/queue"}
[2018-08-14T11:51:11,701][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/tmp/logstash/data/dead_letter_queue"}
[2018-08-14T11:51:12,194][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-08-14T11:51:12,410][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"3646b6e4-d540-4c9c-a38d-2769aef5a05e", :path=>"/tmp/logstash/data/uuid"}
[2018-08-14T11:51:13,089][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-14T11:51:15,554][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-14T11:51:16,088][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-14T11:51:16,101][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-14T11:51:16,291][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-14T11:51:16,391][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-14T11:51:16,398][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-14T11:51:16,460][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-14T11:51:16,515][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-14T11:51:16,559][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-14T11:51:16,688][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-08-14T11:51:16,899][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5045"}
[2018-08-14T11:51:16,925][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x54ab986e run>"}
[2018-08-14T11:51:17,170][INFO ][org.logstash.beats.Server] Starting server on port: 5045
[2018-08-14T11:51:17,187][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-14T11:51:17,637][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}
The Filebeat container

My filebeat container is created with the docker-compose.yml file below:

version: "2"

services:
  filebeat:
    container_name: filebeat
    hostname: filebeat
    image: docker.elastic.co/beats/filebeat:6.3.0
    user: root
    # command: ./filebeat -c /usr/share/filebeat-volume/config/filebeat.yml -E name=mybeat
    volumes:
      # "volumed-folder" lies under ${PROJECT_DIR}/filebeat or could be anywhere else you wish
      - /path/to/volumed-folder:/usr/share/filebeat/filebeat-volume:ro
    networks:
      - filebeat_net

networks:
  filebeat_net:
    external: true
After creating the container, I go to the container's bash terminal, replace the existing filebeat.yml under /usr/share/filebeat with the one from my mounted volume, and run the following command:

./filebeat -e -c ./filebeat.yml -E name="mybeat"
The terminal immediately shows the following logs:

root@filebeat filebeat]# ./filebeat -e -c ./filebeat.yml -E name="mybeat"
2018-08-14T12:13:16.325Z        INFO    instance/beat.go:492    Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2018-08-14T12:13:16.325Z        INFO    instance/beat.go:499    Beat UUID: 3b4b3897-ef77-43ad-b982-89e8f690a96e
2018-08-14T12:13:16.325Z        INFO    [beat]  instance/beat.go:716    Beat info       {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "3b4b3897-ef77-43ad-b982-89e8f690a96e"}}}
2018-08-14T12:13:16.325Z        INFO    [beat]  instance/beat.go:725    Build info      {"system_info": {"build": {"commit": "a04cb664d5fbd4b1aab485d1766f3979c138fd38", "libbeat": "6.3.0", "time": "2018-06-11T22:34:44.000Z", "version": "6.3.0"}}}
2018-08-14T12:13:16.325Z        INFO    [beat]  instance/beat.go:728    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":6,"version":"go1.9.4"}}}
2018-08-14T12:13:16.327Z        INFO    [beat]  instance/beat.go:732    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2018-08-04T17:34:15Z","containerized":true,"hostname":"filebeat","ips":["127.0.0.1/8","172.28.0.2/16"],"kernel_version":"4.4.0-116-generic","mac_addresses":["02:42:ac:1c:00:02"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":5,"patch":1804,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2018-08-14T12:13:16.328Z        INFO    [beat]  instance/beat.go:761    Process info    {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 93, "ppid": 28, "seccomp": {"mode":"filter"}, "start_time": "2018-08-14T12:13:15.530Z"}}}
2018-08-14T12:13:16.328Z        INFO    instance/beat.go:225    Setup Beat: filebeat; Version: 6.3.0
2018-08-14T12:13:16.329Z        INFO    pipeline/module.go:81   Beat name: mybeat
2018-08-14T12:13:16.329Z        WARN    [cfgwarn]       beater/filebeat.go:61   DEPRECATED: prospectors are deprecated, Use `inputs` instead. Will be removed in version: 7.0.0
2018-08-14T12:13:16.330Z        INFO    [monitoring]    log/log.go:97   Starting metrics logging every 30s
2018-08-14T12:13:16.330Z        INFO    instance/beat.go:315    filebeat start running.
2018-08-14T12:13:16.330Z        INFO    registrar/registrar.go:112      Loading registrar data from /usr/share/filebeat/data/registry
2018-08-14T12:13:16.330Z        INFO    registrar/registrar.go:123      States Loaded from registrar: 0
2018-08-14T12:13:16.331Z        WARN    beater/filebeat.go:354  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-08-14T12:13:16.331Z        INFO    crawler/crawler.go:48   Loading Inputs: 1
2018-08-14T12:13:16.331Z        INFO    log/input.go:111        Configured paths: [/usr/share/filebeat-volume/data/Shakespeare.txt]
2018-08-14T12:13:16.331Z        INFO    input/input.go:87       Starting input of type: log; ID: 1899165251698784346 
2018-08-14T12:13:16.331Z        INFO    crawler/crawler.go:82   Loading and starting Inputs completed. Enabled inputs: 1
And every 30 seconds, it prints the following:

2018-08-14T12:13:46.334Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":24}},"total":{"ticks":30,"time":{"ms":36},"value":30},"user":{"ticks":10,"time":{"ms":12}}},"info":{"ephemeral_id":"16c484f0-0cf8-4c10-838d-b39755284af9","uptime":{"ms":30017}},"memstats":{"gc_next":4473924,"memory_alloc":3040104,"memory_total":3040104,"rss":21061632}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":6},"load":{"1":1.46,"15":1.52,"5":1.66,"norm":{"1":0.2433,"15":0.2533,"5":0.2767}}}}}}
UPDATE (Aug 15, 2018): The container connectivity issue was resolved by pointing filebeat_net at the ELK stack's actual (project-prefixed) network name:

networks:
  filebeat_net:
    external:
      name: elk_elk_net

Still, no index pattern is created in Kibana.

This is what my filebeat-config.conf (the Logstash pipeline loaded above) looks like:

input {
  beats {
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
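To tell "Filebeat never delivers" apart from "Elasticsearch indexing fails", one option (my own suggestion, not part of the original setup) is to add a stdout output so Logstash prints every event it receives; rubydebug is a standard Logstash codec:

```
input {
  beats {
    port => "5044"
  }
}

output {
  # Prints each received event to the terminal running logstash,
  # so any beat that actually arrives is visible immediately.
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
```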
And this is my filebeat.yml:

filebeat.inputs:
- type: log
  paths:
    - /path/to/a/log/file

output.logstash:
  hosts: ["elk:5044"]

setup.kibana:
  host: "localhost:5601"
I have defined the networks sections in my docker-compose files so that my containers can communicate with each other using their container names.

So, when I do the following:

output.logstash:
  hosts: ["elk:5044"]
I expect Filebeat to send the logs to port 5044 of the elk container, where Logstash is listening for incoming messages.
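Because the containers only share a network (not a localhost), a plain TCP probe from inside the filebeat container separates routing/DNS failures from beats-protocol failures. A minimal sketch, assuming bash is available in the container and using the elk/5044 names from this setup:

```shell
# Minimal TCP reachability probe using bash's built-in /dev/tcp
# (no curl or nc needed inside the slim filebeat image).
check_port() {
  local host="$1" port="$2"
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# From inside the filebeat container, this prints "reachable" only once
# the two containers share a network and logstash is listening:
check_port elk 5044
# filebeat also ships its own check: ./filebeat test output -c ./filebeat.yml
```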

After running Filebeat, I do see the following logs in the terminal where I had run docker-compose up elk:

elk    | 
elk    | ==> /var/log/elasticsearch/elasticsearch.log <==
elk    | [2018-08-14T11:51:16,974][INFO ][o.e.c.m.MetaDataIndexTemplateService] [fZr_LDR] adding template [logstash] for index patterns [logstash-*]
This seems to work, since when I ping elk, I do get a connection.

Even with the networking issue resolved (I can ping!), the connection between Logstash and Filebeat remains troublesome, and I keep getting the following message every 30 seconds:

2018-08-14T12:13:46.334Z        INFO    [monitoring]    log/log.go:124  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":{"ms":24}},"total":{"ticks":30,"time":{"ms":36},"value":30},"user":{"ticks":10,"time":{"ms":12}}},"info":{"ephemeral_id":"16c484f0-0cf8-4c10-838d-b39755284af9","uptime":{"ms":30017}},"memstats":{"gc_next":4473924,"memory_alloc":3040104,"memory_total":3040104,"rss":21061632}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":6},"load":{"1":1.46,"15":1.52,"5":1.66,"norm":{"1":0.2433,"15":0.2533,"5":0.2767}}}}}}
When running the filebeat command in verbose mode inside the filebeat container's terminal, I also periodically get the following logs:

2018-08-15T16:26:41.986Z        DEBUG   [input] input/input.go:124      Run input
2018-08-15T16:26:41.986Z        DEBUG   [input] log/input.go:147        Start next scan
2018-08-15T16:26:41.986Z        DEBUG   [input] log/input.go:168        input states cleaned up. Before: 0, After: 0, Pending: 0

By default, networking in containers is namespaced, which means each container gets its own private IP, and localhost inside a container is local to that container alone.

This means you need to point your config files at a DNS entry for the Elasticsearch server rather than localhost. With compose and with swarm mode, the service name automatically gets a DNS entry that points at your container:

input {
  beats {
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => [ "elk:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
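Before digging into the pipeline itself, it can also help to confirm that Docker's embedded DNS actually resolves the service name. A small sketch, assuming getent is available in the image (elk is the service name from the question; substitute your own):

```shell
# Print the first IP a hostname resolves to, or nothing on failure.
resolve_name() {
  getent hosts "$1" | awk '{print $1; exit}'
}

ip="$(resolve_name elk)"
if [ -n "$ip" ]; then
  echo "elk resolves to ${ip}"
else
  echo "elk does not resolve; the containers likely do not share a network"
fi
```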
This also requires a common network between the containers. You get that by default when everything is created from the same compose file. When you deploy multiple stacks/projects, you need to define a common external network in at least one of the files. Since I cannot tell your elk project name to know the full network name, here is a change you can make on the elk side to connect it to filebeat_net:

version: "2"

services:
  elk:
    container_name: elk
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5045:5044"
    volumes:
      - /path/to/volumed-folder:/logstash
    networks:
      - elk_net
      - filebeat_net

networks:
  elk_net:
    driver: bridge
  filebeat_net:
    external: true

I was finally able to solve my problem. First, as mentioned in the UPDATE (Aug 15, 2018) section of my question, the container connectivity issue was resolved.

The reason Filebeat was not sending logs to Logstash was that I had not explicitly specified my input/output configurations to be enabled (which was frustrating to discover, since the docs do not mention it explicitly). So, changing my filebeat.yml to the following did the fix:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${PWD}/filebeat-volume/data/*.txt

output.logstash:
  enabled: true
  hosts: ["elk:5044"]
  index: "your_custom_index"

setup.kibana:
  host: "elk:5601"
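One aside that can make a fixed setup look broken (my addition, not part of the original answer): Filebeat records per-file read offsets in a registry, so files harvested before a config fix are not re-sent on restart. In the 6.x container the data path is /usr/share/filebeat/data, as shown in the startup logs, so forcing a clean re-read of already-harvested files looks like:

```shell
# Filebeat's registry stores per-file read offsets; removing it makes
# filebeat re-harvest matching files from the beginning on next start.
# Path assumed from the 6.x container's "Data path" startup log line:
REGISTRY="/usr/share/filebeat/data/registry"
rm -f "$REGISTRY"
```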

I had a similar problem, but what happened to me was that my port was not exposed to applications outside the container. All I did was expose the port to other applications: I used the -p 5044:5044 option when running docker, 5044 being the port listening for requests:

docker run -d --name logstash \
  -p 5044:5044 \
  --restart=always \
  -e "XPACK.MONITORING.ELASTICSEARCH.URL=http://ELASTIC_IP:9200" \
  docker.elastic.co/logstash/logstash:7.0.0

Thanks for your reply. I was able to resolve the network issue mentioned in the update section. As for your first point, I think localhost resolves fine, since Logstash and Elasticsearch live in the same container. In fact, if I run a different logstash.conf (indexing the contents of a file rather than a filebeat input, for instance), it works and everything shows up fine in Kibana.

@mhyousefi It isn't clear whether you are still having an issue. If so, please make sure to specify what isn't working in your update. If not, you should accept the answer so the question doesn't remain unresolved. Thanks.

Sorry about that; my issue is now explicitly specified in my update.