
Docker Logstash not processing files sent by Filebeat


I have set up an ELK stack infrastructure with Docker, but I can't see any files being processed by Logstash.

Filebeat is configured to send .csv files to Logstash, and from Logstash on to Elasticsearch. I can see that the Logstash Filebeat listener has started. The Logstash-to-Elasticsearch pipeline is working, but no documents/indices are being written.

Please advise.

filebeat.yml

    filebeat.prospectors:
    - input_type: log
      paths:
         - logs/sms/*.csv
      document_type: sms
      paths:
         - logs/voip/*.csv
      document_type: voip

    output.logstash:
      enabled: true
      hosts: ["logstash:5044"]

    logging.to_files: true
    logging.files:
logstash.conf

input {
  beats {
    port => "5044"
  }
}

filter {
  if [document_type] == "sms" {
    csv {
      columns => ['Date', 'Time', 'PLAN', 'CALL_TYPE', 'MSIDN', 'IMSI', 'IMEI']
      separator => " "
      skip_empty_columns => true
      quote_char => "'"
    }
  }
  if [document_type] == "voip" {
    csv {
      columns => ['Date', 'Time', 'PostDialDelay', 'Disconnect-Cause', 'Sip-Status', 'Session-Disposition', 'Calling-RTP-Packets-Lost', 'Called-RTP-Packets-Lost', 'Calling-RTP-Avg-Jitter', 'Called-RTP-Avg-Jitter', 'Calling-R-Factor', 'Called-R-Factor', 'Calling-MOS', 'Called-MOS', 'Ingress-SBC', 'Egress-SBC', 'Originating-Trunk-Group', 'Terminating-Trunk-Group']
      separator => " "
      skip_empty_columns => true
      quote_char => "'"
    }
  }
}

output {
  if [document_type] == "sms" {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "smscdr_index"
    }
    stdout {
      codec => rubydebug
    }
  }
  if [document_type] == "voip" {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "voipcdr_index"
    }
    stdout {
      codec => rubydebug
    }
  }
}
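As a side note, the csv filter settings above (separator of a single space, quote_char of a single quote) split lines the same way Python's standard csv module does with the equivalent options. This sketch uses a made-up sms record purely for illustration; the field values are not from the question:

```python
import csv
import io

# Hypothetical sample line mimicking the assumed sms CSV format:
# space-separated fields, each wrapped in single quotes
# (matches separator => " " and quote_char => "'" above).
sample = "'2019-12-05' '12:48:38' 'PREPAID' 'MO' '33612345678' '208011234567890' '359876543210987'"

columns = ["Date", "Time", "PLAN", "CALL_TYPE", "MSIDN", "IMSI", "IMEI"]

reader = csv.reader(io.StringIO(sample), delimiter=" ", quotechar="'")
row = next(reader)

# Map the parsed values onto the configured column names,
# as the csv filter would do for the event.
event = dict(zip(columns, row))
print(event["CALL_TYPE"])  # MO
```

If a line parses correctly here, the csv filter settings themselves are unlikely to be the problem, which points back at the events never reaching the filter in the first place.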
Logstash logs (partial)

[2019-12-05T12:48:38,227][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-12-05T12:48:38,411][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4ffc5251 run>"}
[2019-12-05T12:48:38,949][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-12-05T12:48:39,077][INFO ][org.logstash.beats.Server] Starting server on port: 5044
==========================================================================================
[2019-12-05T12:48:43,518][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
[2019-12-05T12:48:43,745][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x46e8e60c run>"}
[2019-12-05T12:48:43,780][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2019-12-05T12:48:45,770][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
Docker

docker-compose -f docker-compose_stash.yml ps
The system cannot find the path specified.
      Name                     Command               State                            Ports
---------------------------------------------------------------------------------------------------------------------
elasticsearch_cdr   /usr/local/bin/docker-entr ...   Up      0.0.0.0:9200->9200/tcp, 9300/tcp
filebeat_cdr        /usr/local/bin/docker-entr ...   Up
kibana_cdr          /usr/local/bin/kibana-docker     Up      0.0.0.0:5601->5601/tcp
logstash_cdr        /usr/local/bin/docker-entr ...   Up      0.0.0.0:5000->5000/tcp, 0.0.0.0:5044->5044/tcp, 9600/tcp

In Logstash you have a conditional check on the field document_type, but this field is not generated by Filebeat; you need to correct your Filebeat configuration.

Try this configuration for your inputs:

filebeat.prospectors:
- input_type: log
  paths:
     - logs/sms/*.csv
  fields:
    document_type: sms
- input_type: log
  paths:
     - logs/voip/*.csv
  fields:
    document_type: voip
This will create a field named fields with a nested field named document_type, as in the example below.

{ "fields" : { "document_type" : "voip" } }

You will then need to change your Logstash conditionals to check the field [fields][document_type], as in the example below.

if [fields][document_type] == "sms" {
  your filters
}
If you want, you can use the option fields_under_root: true in Filebeat to create document_type in the root of the document, so you won't need to change your Logstash conditionals.

filebeat.prospectors:
- input_type: log
  paths:
     - logs/sms/*.csv
  fields:
    document_type: sms
  fields_under_root: true
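To illustrate the difference between the two modes, here is a small sketch using made-up event fields (this is not Filebeat code, just a model of how the custom fields end up in the event):

```python
# Sketch of how filebeat places custom fields into the shipped event,
# under the two configurations discussed above. The base event keys
# here are illustrative, not an exact filebeat event.
base_event = {"message": "'2019-12-05' ...", "source": "logs/sms/x.csv"}
custom = {"document_type": "sms"}

# Default behaviour: custom fields are nested under "fields",
# so Logstash must test [fields][document_type].
nested = {**base_event, "fields": custom}

# With fields_under_root: true the custom fields are merged at the
# top level, so the original [document_type] conditionals keep working.
under_root = {**base_event, **custom}

print(nested["fields"]["document_type"])  # sms
print(under_root["document_type"])        # sms
```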

Thanks, that made it partially work. The SMS CDRs are not being picked up by Filebeat, not sure why; I may open another question on Stack Overflow about that.