Grok parse failure when the filter parses multiline messages

Tags: filter, elastic-stack, logstash-grok, grok

I am trying to figure out a grok pattern to parse multiline messages such as exception traces. Below is one such log:

2017-03-30 14:57:41 [12345] [qtp1533780180-12] ERROR com.app.XYZ - Exception occurred while processing
java.lang.NullPointerException: null
        at spark.webserver.MatcherFilter.doFilter(MatcherFilter.java:162)
        at spark.webserver.JettyHandler.doHandle(JettyHandler.java:61)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119)
        at org.eclipse.jetty.server.Server.handle(Server.java:517)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:302)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:245)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
        at java.lang.Thread.run(Thread.java:745)
Here is my logstash.conf:

input {
  file {
    path => ["/debug.log"]
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
  }
}

filter {

  mutate {
    gsub => ["message", "r", ""]
  }
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:uid}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:loglevel} %{DATA:class}\-%{GREEDYDATA:message}" ]
    overwrite => [ "message" ]
  }
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
  }
}


output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}
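The multiline codec's behavior here (`negate => true`, `what => previous`) means any line that does not start with an ISO8601 timestamp is appended to the previous event. A minimal Python sketch of that joining logic, assuming a simplified timestamp regex in place of %{TIMESTAMP_ISO8601} (illustrative only, not the codec's actual implementation):

```python
import re

# Rough stand-in for %{TIMESTAMP_ISO8601} anchored at the start of a line.
TS = re.compile(r"^\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}")

def join_multiline(lines):
    """Mimic codec => multiline with negate => true, what => previous:
    any line NOT matching the pattern is glued onto the previous event."""
    events = []
    for line in lines:
        if TS.match(line) or not events:
            events.append(line)           # a new event starts here
        else:
            events[-1] += "\n" + line     # continuation line (e.g. a stack frame)
    return events

events = join_multiline([
    "2017-03-30 14:57:41 [12345] [qtp1533780180-12] ERROR com.app.XYZ - Exception occurred while processing",
    "java.lang.NullPointerException: null",
    "        at spark.webserver.MatcherFilter.doFilter(MatcherFilter.java:162)",
    "2017-03-30 14:57:42 [12345] [qtp1533780180-12] INFO com.app.XYZ - next request",
])
```

The four input lines collapse into two events: the exception trace is folded into the first one, and the second timestamped line starts a new event.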
This works fine for parsing single-line logs, but it fails with

    "tags" => [
        [0] "_grokparsefailure"
    ]

for the multiline exception traces.


Can anyone tell me the correct filter pattern for parsing multiline logs?

If you are dealing with multiline logs, use the multiline filter that Logstash provides. You first need to tell the multiline filter how to recognize the start of a new record; from your log I can see that a new record starts with a timestamp.

Usage example:

filter {
  multiline {
    type => "/debug.log"
    pattern => "^%{TIMESTAMP}"
    what => "previous"
  }
}

You can then use gsub to strip the "\n" and "\r" characters that the multiline filter adds to the record, and run grok after that.

The Logstash configuration above worked fine after removing

    mutate { gsub => ["message", "r", ""] }
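Note that the pattern in that gsub is the literal letter `r`, not the carriage-return escape `\r` the answer describes: a gsub like that deletes every lowercase "r" in the message rather than stripping carriage returns. A quick Python illustration of the difference (the sample string is hypothetical):

```python
# Hypothetical event: a Windows-style log line joined by the multiline codec,
# so it still carries carriage returns ("\r").
event = ("2017-03-30 14:57:41 [12345] ERROR com.app.XYZ - failure\r\n"
         "java.lang.NullPointerException: null\r")

clean = event.replace("\r", "")  # analogous to gsub => ["message", "\r", ""]
wrong = event.replace("r", "")   # a bare "r" deletes the letter r, not carriage returns
```

After `clean`, the carriage returns are gone and the text is intact; after `wrong`, the carriage returns survive and words like "failure" are mangled instead.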

So, the working Logstash configuration for parsing both single-line and multiline input in the log format above is:

input {
  file {
    path => ["./debug.log"]
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
  }
}

filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{NOTSPACE:uid}\] \[%{NOTSPACE:thread}\] %{LOGLEVEL:loglevel} %{DATA:class}\-%{GREEDYDATA:message}" ]
    overwrite => [ "message" ]
  }
  date {
    match => [ "timestamp" , "yyyy-MM-dd HH:mm:ss" ]
  }
}


output {
  elasticsearch { hosts => localhost }
  stdout { codec => rubydebug }
}
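As a sanity check, the grok expression in this config can be approximated with a Python regex, using simplified character classes as stand-ins for the grok patterns (illustrative, not the exact Oniguruma definitions). Applied to a joined multiline event, it matches the first line, and `message` stops at the first newline because `.` does not match `\n` by default:

```python
import re

# Rough Python equivalent of the grok expression above.
GROK_RE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "  # %{TIMESTAMP_ISO8601:timestamp}
    r"\[(?P<uid>\S+)\] \[(?P<thread>\S+)\] "                # %{NOTSPACE:uid} / %{NOTSPACE:thread}
    r"(?P<loglevel>TRACE|DEBUG|INFO|WARN|ERROR|FATAL) "     # %{LOGLEVEL:loglevel}
    r"(?P<cls>.*?)-(?P<message>.*)"                         # %{DATA:class}\-%{GREEDYDATA:message}
)

event = ("2017-03-30 14:57:41 [12345] [qtp1533780180-12] ERROR com.app.XYZ"
         " - Exception occurred while processing\n"
         "java.lang.NullPointerException: null")

m = GROK_RE.match(event)
```

In this approximation the match succeeds on the multiline event; note that %{DATA} is non-greedy, so `class` captures up to the first literal "-" (here "com.app.XYZ ", including the trailing space).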

I tried this and I now get the following error: "Failed to load an invalid configuration {:reason=>\"Couldn't find any filter plugin named 'multiline'. Are you sure this is correct? Trying to load the multiline filter plugin resulted in this error: Problems loading the requested plugin named multiline of type filter. Error: NameError NameError\"}"

@Aarish: that depends on which Elastic/Logstash version you are using. For recent versions you need to use multiline in the input. Reference link:

I fixed it by installing the multiline codec, and my logstash.conf is as shown above. Even so, I still get a grok parse failure. Could you take a look and tell me what the error is?