elasticsearch logstash error: Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer

I am using Filebeat to push logs to Elasticsearch through Logstash, and this setup was working fine for me earlier. Now I am getting a "Failed to publish events" error:
filebeat | 2020-06-20T06:26:03.832969730Z 2020-06-20T06:26:03.832Z INFO log/harvester.go:254 Harvester started for file: /logs/app-service.log
filebeat | 2020-06-20T06:26:04.837664519Z 2020-06-20T06:26:04.837Z ERROR logstash/async.go:256 Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970506599Z 2020-06-20T06:26:05.970Z ERROR pipeline/output.go:121 Failed to publish events: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat | 2020-06-20T06:26:05.970749223Z 2020-06-20T06:26:05.970Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://xx.com:5044))
filebeat | 2020-06-20T06:26:05.972790871Z 2020-06-20T06:26:05.972Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://xx.com:5044)) established
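For context, "connection reset by peer" means the remote side (Logstash here) sent a TCP RST while Filebeat was still writing. A minimal local sketch of that symptom, using only the Python standard library (the port and payload are hypothetical stand-ins, not part of the original setup):

```python
import socket
import struct
import threading
import time

def rst_server(srv: socket.socket) -> None:
    """Accept one connection and abort it with a TCP RST."""
    conn, _ = srv.accept()
    # SO_LINGER with a zero timeout makes close() send RST instead of FIN.
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # any free port; 5044 in the real setup
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=rst_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(("127.0.0.1", port))
reset_seen = False
try:
    for _ in range(50):               # keep writing until the RST surfaces
        cli.sendall(b"log event\n")
        time.sleep(0.05)
except ConnectionResetError as exc:
    reset_seen = True
    print("write:", exc)              # e.g. "Connection reset by peer"
finally:
    cli.close()
```

The point is that the error is reported on the *writer's* side, so the actual cause (a crash, a restart, a TLS mismatch) lives on the Logstash side.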
Logstash pipeline
02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
10-syslog-filter.conf
filter {
  json {
    source => "message"
  }
}
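The json filter above parses the event's message field as JSON and merges the resulting keys into the event; lines that are not valid JSON get a _jsonparsefailure tag instead. A rough Python sketch of that behavior (the event dicts are illustrative, not Logstash's internal representation):

```python
import json

def apply_json_filter(event: dict, source: str = "message") -> dict:
    """Mimic Logstash's json filter: parse `source`, merge its keys."""
    try:
        parsed = json.loads(event[source])
        if isinstance(parsed, dict):
            event.update(parsed)      # merged at the event's top level
        else:
            event.setdefault("tags", []).append("_jsonparsefailure")
    except (json.JSONDecodeError, KeyError, TypeError):
        event.setdefault("tags", []).append("_jsonparsefailure")
    return event

ok = apply_json_filter({"message": '{"level": "INFO", "svc": "app"}'})
bad = apply_json_filter({"message": "plain text line"})
print(ok["level"])   # INFO
print(bad["tags"])   # ['_jsonparsefailure']
```

So if /logs/app-service.log mixes JSON and plain-text lines, the plain ones still flow through, just tagged.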
30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}"
  }
}
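The %{+YYYY.MM.dd} in the index setting is a Joda-style date pattern, so each day's events land in their own index. A small sketch of the equivalent name in Python (assuming the event timestamp is a datetime):

```python
from datetime import datetime, timezone

def daily_index(ts: datetime, prefix: str = "index") -> str:
    """Build a daily index name like Logstash's index-%{+YYYY.MM.dd}."""
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

ts = datetime(2020, 6, 20, 6, 26, 3, tzinfo=timezone.utc)
print(daily_index(ts))   # index-2020.06.20
```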
Filebeat configuration
Sharing my Filebeat configuration from /usr/share/filebeat/filebeat.yml:
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /logs/*

#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xx.com:5044"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
When I run telnet xx.xx 5044, this is what I see in the terminal:
Trying X.X.X.X...
Connected to xx.xx.
Escape character is '^]'.
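That telnet output only proves the TCP port is reachable; it says nothing about whether Logstash keeps the connection alive under load. A scripted equivalent of the same reachability check (the hostname below is a placeholder, like xx.com in the question):

```python
import socket

def beats_port_reachable(host: str, port: int = 5044,
                         timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the Beats port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host, as in the question):
# print(beats_port_reachable("xx.com"))
```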
I had the same problem. Here are some steps that may help you find the core of the issue.
First, I tested it this way: filebeat (localhost) -> logstash (localhost) -> elastic -> kibana, with every service on the same machine.
My /etc/logstash/conf.d/config.conf:
input {
  beats {
    port => 5044
    ssl => false
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
Here I explicitly disabled ssl (in my case this was the main cause of the problem, even though the certificates were correct).
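An ssl mismatch is a classic source of this reset: if one side speaks TLS and the other expects plain TCP, the handshake dies and surfaces as a reset or EOF on the client. A sketch of the client-side symptom, attempting a TLS handshake against a plain TCP listener (a local stand-in for a beats input with ssl => false):

```python
import socket
import ssl
import threading

def plain_server(srv: socket.socket) -> None:
    """Accept one connection and close it without ever speaking TLS."""
    conn, _ = srv.accept()
    conn.recv(4096)       # reads the ClientHello as opaque bytes
    conn.close()          # then hangs up: no ServerHello ever comes

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=plain_server, args=(srv,), daemon=True).start()

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
handshake_failed = False
try:
    with socket.create_connection(("127.0.0.1", port)) as raw:
        with ctx.wrap_socket(raw, server_hostname="localhost"):
            pass
except OSError:           # SSLError and ConnectionResetError both derive from OSError
    handshake_failed = True
print("handshake failed:", handshake_failed)
```

The reverse mismatch (plain Filebeat against a TLS-only Logstash) fails the same way, which is why toggling ssl on exactly one side is worth testing.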
After that, don't forget to restart Logstash and test with the sudo filebeat -e command.
If everything works, you will no longer see the "connection reset by peer" error.

I am facing the same issue. My Elasticsearch, Logstash, and Kibana are running fine. But when logs are pushed from Filebeat to Logstash, something goes wrong and it stops my Logstash and Elasticsearch instances. Did you find any way to solve your problem?