
Advice on setting up Suricata logging on top of an ELK stack?


I have created an ELK stack for log collection, using logstash-forwarder installed on a range of different Linux machines, and this works really well.

I am now looking at installing Suricata onto the main ELK stack so I can start using the IDS/IPS functionality.

My first question: do I just need to install Suricata on the main ELK box and change the conf files on that box (plus the logstash-forwarder configs), so that Suricata only has to be installed on one box?

Secondly, I have been trying to change the conf files to allow for Suricata, so I have listed my conf files for Logstash and logstash-forwarder below.

The file 13-suricata.conf is my attempt at a Logstash conf file for it, but I am not sure whether this is the right approach, or even what to do with the logstash-forwarder conf.

Any help would be amazing.

/etc/logstash/conf.d$ ls 
01-lumberjack-input.conf  11-sshlog.conf  13-suricata.conf
10-syslog.conf            12-apache.conf  30-lumberjack-output.conf
01-lumberjack-input.conf

input   {
  lumberjack    {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }

}
10-syslog.conf

filter {
  if [type] == "syslog" {

  }
}

11-sshlog.conf

filter {
  if [type] == "sshlog" {
    grok {
      match => { "message" => "Failed password for (invalid user |)%{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2" }
      add_tag => "ssh_brute_force_attack"
    }

    grok {
      match => { "message" => "Accepted password for %{USERNAME:username} from %{IP:src_ip} port %{BASE10NUM:port} ssh2" }
      add_tag => "ssh_successful_login"
    }

    geoip {
      source => "src_ip"
    }
  }
}
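For reference, the two grok patterns above target OpenSSH lines in auth.log of roughly this shape (illustrative samples, not taken from the original post):

```
Jan 12 10:15:01 host sshd[1234]: Failed password for invalid user admin from 203.0.113.5 port 51234 ssh2
Jan 12 10:16:42 host sshd[1240]: Accepted password for bob from 198.51.100.7 port 50522 ssh2
```

The first line matches the "Failed password" pattern (extracting username, src_ip and port); the second matches the "Accepted password" pattern. Note that whichever grok does not match a given event will add a `_grokparsefailure` tag to it; grok's `tag_on_failure` option can be set to `[]` to suppress that.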
12-apache.conf

filter {
  if [type] == "apache-access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}
13-suricata.conf

filter {
  if [type] == "SuricataIDPS" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    ruby {
      code => "if event['event_type'] == 'fileinfo'; event['fileinfo']['type']=event['fileinfo']['magic'].to_s.split(',')[0]; end;"
    }
  }

  if [src_ip] {
    geoip {
      source => "src_ip"
      target => "geoip"
      #database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
    if ![geoip][ip] {
      if [dest_ip] {
        geoip {
          source => "dest_ip"
          target => "geoip"
          #database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
          add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
          add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }
        mutate {
          convert => [ "[geoip][coordinates]", "float" ]
        }
      }
    }
  }
}
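For context, the filter above operates on Suricata's EVE JSON events (one JSON object per line in eve.json). The values below are made up, but `timestamp`, `event_type`, `src_ip` and `dest_ip` are standard EVE fields:

```json
{
  "timestamp": "2015-01-01T12:00:00.000000+0000",
  "event_type": "alert",
  "src_ip": "203.0.113.5",
  "src_port": 44321,
  "dest_ip": "192.0.2.10",
  "dest_port": 80,
  "proto": "TCP",
  "alert": { "signature": "EXAMPLE rule", "severity": 2 }
}
```

One thing to watch: this conditional fires on `[type] == "SuricataIDPS"`, so the `type` field set for the eve.json entry in the logstash-forwarder config has to be that same string.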
30-lumberjack-output.conf

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
logstash-forwarder config

{
  "files": [
    {
      "paths": [
        "/var/log/syslog",
        "/var/log/auth.log"
      ],
      "fields": { "type": "syslog" }
    },
    # An array of hashes. Each hash tells what paths to watch and
    # what fields to annotate on events from those paths.
    #{
      #"paths": [
        # single paths are fine
        #"/var/log/messages",
        # globs are fine too, they will be periodically evaluated
        # to see if any new files match the wildcard.
        #"/var/log/*.log"
      #],

      # A dictionary of fields to annotate on each event.
      #"fields": { "type": "syslog" }
    #}, {
      # A path of "-" means stdin.
      #"paths": [ "-" ],
      #"fields": { "type": "stdin" }
    #},
    {
      "paths": [
        "/var/log/apache2/*.log"
      ],
      "fields": { "type": "apache-access" }
    },
    {
      "paths": [
        "/var/log/auth*.log"
      ],
      "fields": { "type": "sshlog" }
    },
    {
      "paths": [ "/var/log/suricata/eve.json" ],
      "fields": { "type": "suricata" }
    }
  ]
}

Suricata has to be installed on both servers, with some configuration changes so that it sends its data out as JSON.

Apart from that, everything posted above is what is needed.
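On the Suricata side, the JSON output is enabled in suricata.yaml via the eve-log output; a minimal sketch (the filename and the list of event types shown are illustrative):

```yaml
outputs:
  - eve-log:
      enabled: yes
      filetype: regular
      filename: /var/log/suricata/eve.json
      types:
        - alert
        - http
        - dns
        - fileinfo
```

With that in place, logstash-forwarder simply ships /var/log/suricata/eve.json like any other log file.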
