
Security Logstash: multiple plugins in the Logstash input


I am currently using Logstash with VulnWhisperer (which extracts OpenVAS reports in JSON format into a directory). That part is working fine.

Now I have a problem with my Logstash configuration file. Originally it only took input from the folder directory, but I also need to parse information that I can get by querying Elasticsearch, so I am trying to use two plugins in the input section of the configuration file.

As you can see below, Logstash is not working correctly: it keeps starting and shutting down because of an error in the configuration file.

Below you can see the Logstash status and logs. I am new to Logstash, so any help is much appreciated. Thanks, everyone!

The IPs are masked with "X" for this post.

Logstash configuration file:

# Author: Austin Taylor and Justin Henderson
# Email: austin@hasecuritysolutions.com
# Last Update: 03/04/2018
# Version 0.3
# Description: Take in qualys web scan reports from vulnWhisperer and pumps into logstash

input {
  file {
    path => "/opt/VulnWhisperer/data/openvas/*.json"
    type => json
    codec => json
    start_position => "beginning"
    tags => [ "openvas_scan", "openvas" ]
  }
  elasticsearch {
    hosts => "http://XX.XXX.XXX.XXX:9200" (http://XX.XXX.XXX.XXX:9200') 
    index => "metricbeat-*"
    query => { "query": { "match": {"host.name" : "%{asset}" } } }
    size => 1
    docinfo => false
    sort => "sort": [ { "@timestamp": { "order": "desc"} } ]
  }
}

filter {
  if "openvas_scan" in [tags] {
    mutate {
      replace => [ "message", "%{message}" ]
      gsub => [
        "message", "\|\|\|", " ",
        "message", "\t\t", " ",
        "message", "    ", " ",
        "message", "   ", " ",
        "message", "  ", " ",
        "message", "nan", " ",
        "message",'\n',''
      ]
    }

    grok {
        match => { "path" => "openvas_scan_%{DATA:scan_id}_%{INT:last_updated}.json$" }
     tag_on_failure => []
    }

    mutate {
      add_field => { "risk_score" => "%{cvss}" }
    }

    if [risk] == "1" {
      mutate { add_field => { "risk_number" => 0 }}
      mutate { replace => { "risk" => "info" }}
    }
    if [risk] == "2" {
      mutate { add_field => { "risk_number" => 1 }}
      mutate { replace => { "risk" => "low" }}
    }
    if [risk] == "3" {
      mutate { add_field => { "risk_number" => 2 }}
      mutate { replace => { "risk" => "medium" }}
    }
    if [risk] == "4" {
      mutate { add_field => { "risk_number" => 3 }}
      mutate { replace => { "risk" => "high" }}
    }
    if [risk] == "5" {
      mutate { add_field => { "risk_number" => 4 }}
      mutate { replace => { "risk" => "critical" }}
    }

    mutate {
      remove_field => "message"
    }

    if [first_time_detected] {
      date {
        match => [ "first_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
        target => "first_time_detected"
      }
    }
    if [first_time_tested] {
      date {
        match => [ "first_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
        target => "first_time_tested"
      }
    }
    if [last_time_detected] {
      date {
        match => [ "last_time_detected", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
        target => "last_time_detected"
      }
    }
    if [last_time_tested] {
      date {
        match => [ "last_time_tested", "dd MMM yyyy HH:mma 'GMT'ZZ", "dd MMM yyyy HH:mma 'GMT'" ]
        target => "last_time_tested"
      }
    }
    date {
      match => [ "last_updated", "UNIX" ]
      target => "@timestamp"
      remove_field => "last_updated"
    }
    mutate {
      convert => { "plugin_id" => "integer"}
      convert => { "id" => "integer"}
      convert => { "risk_number" => "integer"}
      convert => { "risk_score" => "float"}
      convert => { "total_times_detected" => "integer"}
      convert => { "cvss_temporal" => "float"}
      convert => { "cvss" => "float"}
    }
    if [risk_score] == 0 {
      mutate {
        add_field => { "risk_score_name" => "info" }
      }
    }
    if [risk_score] > 0 and [risk_score] < 3 {
      mutate {
        add_field => { "risk_score_name" => "low" }
      }
    }
    if [risk_score] >= 3 and [risk_score] < 6 {
      mutate {
        add_field => { "risk_score_name" => "medium" }
      }
    }
    if [risk_score] >=6 and [risk_score] < 9 {
      mutate {
        add_field => { "risk_score_name" => "high" }
      }
    }
    if [risk_score] >= 9 {
      mutate {
        add_field => { "risk_score_name" => "critical" }
      }
    }
    # Add your critical assets by subnet or by hostname. Comment this field out if you don't want to tag any, but the asset panel will break.
    if [asset] =~ "^10\.0\.100\." {
      mutate {
        add_tag => [ "critical_asset" ]
      }
    }
  }
}
output {
  if "openvas" in [tags] {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => [ "XX.XXX.XXX.XXX:XXXX" ]
      index => "logstash-vulnwhisperer-%{+YYYY.MM}"
    }
  }
}
root@logstash:/etc/logstash/conf.d# service logstash status
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-11-23 12:17:29 WET; 9s ago
 Main PID: 7041 (java)
    Tasks: 17 (limit: 4915)
   CGroup: /system.slice/logstash.service
           └─7041 /usr/bin/java -Xms1g -Xmx1g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedyna

Nov 23 12:17:29 logstash systemd[1]: logstash.service: Service hold-off time over, scheduling restart.
Nov 23 12:17:29 logstash systemd[1]: Stopped logstash.
Nov 23 12:17:29 logstash systemd[1]: Started logstash.
[2018-11-23T16:16:57,156][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-11-23T16:17:27,133][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2018-11-23T16:17:28,380][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, \", ', } at line 31, column 43 (byte 643) after input {\n  file {\n    path => \"/opt/VulnWhisperer/data/openvas/*.json\"\n    type => json\n    codec => json\n    start_position => \"beginning\"\n    tags => [ \"openvas_scan\", \"openvas\" ]\n  }\n  elasticsearch {\n    hosts => \"http://XX.XXX.XXX.XXX:9200\" ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:149:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:309:in `block in converge_state'"]}
[2018-11-23T16:17:28,801][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-11-23T16:17:58,602][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2018-11-23T16:17:59,808][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, \", ', } at line 31, column 43 (byte 643) after input {\n  file {\n    path => \"/opt/VulnWhisperer/data/openvas/*.json\"\n    type => json\n    codec => json\n    start_position => \"beginning\"\n    tags => [ \"openvas_scan\", \"openvas\" ]\n  }\n  elasticsearch {\n    hosts => \"http://XX.XXX.XXX.XXX:XXXX\" ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:149:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:22:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:309:in `block in converge_state'"]}
[2018-11-23T16:18:00,174][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Please change the settings as follows:

elasticsearch {
    hosts => "localhost" 
    index => "metricbeat-*"
    query => '{ "query": { "match": {"host.name" : "%{asset}" } } }'
    size => 1
    docinfo => false
    #sort => "sort": [ { "@timestamp": { "order": "desc"} } ]
  }

Angel H's answer is correct, except that it leaves out the sort. Below is a detailed explanation of why your logstash.conf throws the error.

  • hosts => "http://XX.XXX.XXX.XXX:9200" (http://XX.XXX.XXX.XXX:9200')
    should be
    hosts => "http://XX.XXX.XXX.XXX:9200"
    if you have only one host. For multiple hosts, use
    hosts => ["http://XX.XXX.XXX.XXX:9200","http://XX.XXX.XXX.XXX:9200"]

  • The value of query must be wrapped in quotes. So:
    query => '{ "query": { "match": { "host.name" : "%{asset}" } }, "sort": [ { "@timestamp": { "order": "desc" } } ] }'

  • The sort clause should go inside the query string itself.

  • Here is a working modified version:

      elasticsearch {
        hosts => ["http://XX.XXX.XXX.XXX:9200","http://XX.XXX.XXX.XXX:9200"]
        index => "metricbeat-*"
        query => '{ "query": { "match": {"host.name" : "%{asset}" } }, "sort": [ { "@timestamp": { "order": "desc"} } ] }'
        size => 1
        docinfo => false
        #sort => '"sort": [ { "@timestamp": { "order": "desc"} } ]'
      }
    
    You can easily test the logstash.conf file without actually running Logstash by using the --config.test_and_exit option. It is like a dry run:

    bin sandeep_kanabar$ ./logstash -f ../config/logstash.conf --config.test_and_exit
    ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
    Sending Logstash's logs to /<logstash_dir>/logstash-5.5.1/logs which is now configured via log4j2.properties
    Configuration OK
    [2019-10-25T13:19:32,018][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
    bin sandeep_kanabar$
    

    Line 16 is the line with the hosts setting.

    Why is there a value in parentheses on the hosts line of the elasticsearch input? In the examples I found, they used it like that, but I'm not sure about the elasticsearch input syntax or about the way I'm using two plugins.

    You need to remove that parenthesized value and keep only
    hosts => "http://XX.XXX.XXX.XXX:9200"
    and it will work better.

    I have made those changes; unfortunately, I am getting the same error in the status and logs.

    Since you shared a new error, there may be another problem.