
Controlling Filebeat's in-memory buffer (Elasticsearch / Logstash)


I am running Filebeat on a machine shipping 2 GB of data. While streaming, Filebeat's memory usage slowly grows from about 100 MB to 2000 MB, eventually filling all available memory.

filebeat.yml

filebeat.inputs:
- type: log
  tags: ["gunicorn"]
  paths:
    - "/home/ubuntu/data/gunicorn.log"

- type: log
  tags: ["apache"]
  paths:
    - "/home/ubuntu/data/access.log"

queue.mem:
  events: 8000
  flush.min_events: 512
  flush.timeout: 2s

output.logstash:
  hosts: ["xxx.xx.xxx.86:5044"]
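To see why the configured queue should not account for gigabytes of memory, one can estimate its worst-case footprint as the event cap times the average event size. The average size below (1 KB per log line) is an assumption for illustration, not a measured value:

```python
def queue_memory_estimate(events: int, avg_event_bytes: int) -> int:
    """Rough upper bound on queue.mem footprint: max in-flight
    events multiplied by the assumed average event size."""
    return events * avg_event_bytes

# queue.mem.events: 8000, assuming ~1 KB per event
estimate = queue_memory_estimate(8000, 1024)
print(f"{estimate / (1024 * 1024):.1f} MB")  # prints "7.8 MB"
```

Under that assumption the queue alone is bounded at well under 10 MB, which suggests the growth to 2000 MB comes from somewhere other than `queue.mem`.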

What is the problem here?

@ibexit Can we pin a maximum buffer/queue size (in memory) for Filebeat?

Comment (ibexit): Perhaps you are hitting a memory leak. Which version of Filebeat are you running? Please check the recently fixed issues.
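If the goal is simply to bound the memory Filebeat itself buffers, the settings already present in the question are the right knobs to tighten. A sketch with smaller, purely illustrative values (the numbers are assumptions, not recommendations):

```yaml
# Illustrative tightening of the original filebeat.yml; tune for your workload.
queue.mem:
  events: 2048          # fewer in-flight events -> smaller worst-case queue
  flush.min_events: 512
  flush.timeout: 2s

output.logstash:
  hosts: ["xxx.xx.xxx.86:5044"]
  bulk_max_size: 1024   # cap the number of events per batch sent to Logstash
```

Note that `queue.mem.events` caps the event count, not bytes, so the actual memory bound still depends on event size; if memory keeps growing past any such bound, a leak in the Filebeat version in use is the more likely explanation.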