
elasticsearch grok filter to pick out specific words from the message field


Here is the problem I am running into:

I just want to pick out a few words, but I have not been able to get it right.

All I want is to get the timestamp that sits inside the message string, together with another word, say OrderCreated.

Is it possible to pick out specific strings/words from the message field like this?
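
For reference, one way to pick out only a timestamp and a literal keyword with grok is to let the pattern match anywhere in the line and name just the pieces you want. This is only a sketch: the field names (log_ts, event) and the assumption that the keyword is OrderCreated and the timestamp is in an ISO8601-style format are illustrative, not taken from a real log line in this question.

filter {
  grok {
    # capture the first ISO8601-style timestamp and the literal word OrderCreated,
    # ignoring everything else in the message
    match => { "message" => "%{TIMESTAMP_ISO8601:log_ts}.*(?<event>OrderCreated)" }
  }
}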

Dissect works fine, but now I have run into a problem I did not have before.

input {
  file {
    path => "C:\Data\data.log"
    start_position => "beginning"
    sincedb_path => "NUL"

  }
}
filter {
    if [type] == "apache" {
        grok {
            match => ["message", "%{COMBINEDAPACHELOG} "]
        }
    }
    mutate {
        remove_field => ["@timestamp", "host", "@version", "path"]
    }
}

output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "logdata2"
        document_type => "logs"
    }
    stdout { codec => rubydebug }
}
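
One thing to note about the grok config above, as a side point: the file input never sets a type, so the if [type] == "apache" condition is never true and the grok block is skipped entirely. A minimal sketch of the same input with the type set (path taken from the question):

input {
  file {
    path => "C:\Data\data.log"
    start_position => "beginning"
    sincedb_path => "NUL"
    # tag the events so that the if [type] == "apache" filter condition matches
    type => "apache"
  }
}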
The output I get is shown below, after the dissect filter config. This is new to me, the "\r" part was not there before. Does this look familiar to anyone? How do I fix this part?

dissect filter 
input {
  file {
    path => "C:\Data\Logs\testrunning.log"
    start_position => "beginning"
    sincedb_path => "NUL"

  }
}
  filter {
    dissect {
      mapping => {
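        # %{ts} %{+ts} %{+ts} collects the first three space-separated tokens into ts,
        # %{} matches a token but discards it, and %{msg} keeps everything after " : "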
        "message" => "%{ts} %{+ts} %{+ts} %{src} %{} : %{msg}"
      }
    }
    mutate {
      remove_field => ["@timestamp", "pid", "prog", "@version", "host", "path", "src"]
    }
  }
output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "logdata12"
        document_type => "logs"
    }
    stdout { codec => rubydebug }
}

The output:

{
    "message" => "General 2018-05-17 15:47:33.149 : StatusInformationSomeData.Unsubscribe() \r",
        "msg" => "StatusInformationSomeData.Unsubscribe() \r",
         "ts" => "General 2018-05-17 15:47:33.149"
}
{
    "message" => "\r",
        "msg" => "\r",
         "ts" => "  "
}

You can use dissect{}. For example, if you have the following log line:

198.41.30.203 - - [21/May/2018:14:36:35 -0500] "GET /tag/eclipse/feed/ HTTP/1.1" 404 5232 "-" "UniversalFeedParser/4.2-pre-308-svn +http://feedparser.org/"

your dissect could look like the one below, and you can even convert data types. dissect performs much better than grok:

dissect {
    mapping => {
        "message" => '%{source_ip} %{} %{username} [%{raw_timestamp}] "%{http_verb} %{http_path} %{http_version}" %{http_response} %{http_bytes} "%{site}" "%{useragent}"'
    }
    convert_datatype => {
        "http_bytes" => "int"
    }
}

Daniel, maybe you should try the dissect{} filter, it is friendlier than grok and faster too.

Thanks a lot, I have just started using it and it looks more promising than grok!

Can you show us a real log line?

@SufiyanGhori, problem solved. The way I understand it, the length of the message in the log determines how it gets split automatically during parsing. I might be wrong. Either way, the problem is gone, I went back and changed the shape of the log so it is cleaner and easier to handle. Dissect works better than grok, part of the problem is solved now, thanks a lot!
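
About the trailing "\r" in the output shown earlier: it usually appears when Logstash reads a file with Windows CRLF line endings, because the lines are split on \n and the carriage return stays at the end of the message. A common way to deal with it, sketched here rather than taken from the original thread, is to strip it with a mutate gsub before the dissect runs:

filter {
  mutate {
    # remove a trailing carriage return left over from CRLF line endings
    gsub => ["message", "\r$", ""]
  }
  dissect {
    mapping => {
      "message" => "%{ts} %{+ts} %{+ts} %{src} %{} : %{msg}"
    }
  }
}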