Custom Grok pattern for Logstash logs

Here is a sample of my logs:

23:28:32.226 WARN  [MsgParser:ListProc-Q0:I5]   Parsing error
Error mapping the fieldAdditional Information: 

    at com.authentic.mapper.parsing.LengthVar.readBytes(LengthVar.java:178)
    at com.authentic.mapper.parsing.GrpLengthVar.read(GrpLengthVar.java:96)
    at com.authentic.mapper.parsing.GrpLengthVar.read(GrpLengthVar.java:119)
    at com.authentic.mapper.parsing.MsgParser.processReadEnumeration(MsgParser.java:339)
    at com.authentic.mapper.parsing.MsgParser.parseIncomingMessageBody(MsgParser.java:295)
    at com.authentic.mapper.MapperMgr.parseMsg(MapperMgr.java:1033)
    at com.authentic.architecture.interchange.accesspoint.AbstractConnectionHandler.parseMessage(AbstractConnectionHandler.java:4408)
    at com.authentic.architecture.interchange.accesspoint.AbstractConnectionHandler.plainMessageReceivedEvent(AbstractConnectionHandler.java:2031)
    at com.authentic.architecture.interchange.accesspoint.AbstractConnectionHandler.messageReceivedEvent(AbstractConnectionHandler.java:1911)
    at com.authentic.architecture.interchange.accesspoint.SocketConnectionHandler.messageReceivedEvent(SocketConnectionHandler.java:801)
    at com.authentic.architecture.interchange.accesspoint.SocketConnectionHandler.messageReceivedEvent(SocketConnectionHandler.java:282)
    at com.authentic.architecture.interchange.accesspoint.SocketConnectionHandler.messageReceivedEvent(SocketConnectionHandler.java:261)
    at com.authentic.architecture.interchange.accesspoint.AbstractConnectionHandler.processEventQueue(AbstractConnectionHandler.java:4110)
    at com.authentic.architecture.interchange.accesspoint.AbstractConnectionHandler.access$100(AbstractConnectionHandler.java:320)
    at com.authentic.architecture.interchange.accesspoint.AbstractConnectionHandler$ConnectionHandlerRunner.execute(AbstractConnectionHandler.java:416)
    at com.authentic.architecture.actions.ListProcessor.suspend(ListProcessor.java:1130)
    at com.authentic.architecture.actions.ListProcessor.run(ListProcessor.java:775)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NumberFormatException: For input string: "^123"
    at java.lang.NumberFormatException.forInputString(Unknown Source)
    at java.lang.Integer.parseInt(Unknown Source)
    at java.lang.Integer.parseInt(Unknown Source)
    at com.authentic.mapper.parsing.LengthVar.readBytes(LengthVar.java:170)
    ... 17 more
I have to parse these logs into the following fields: timestamp, loglevel, logger, message, stacktrace.

I used a multiline filter:

multiline {
    pattern => "%{TIME:timestamp}"
    negate => true
    what => "previous"
}
And the pattern I used in the grok filter:

match=>{"message"=>"%{TIME:timestamp} %{LOGLEVEL:loglevel} \s*\[%{DATA:logger}\]\s*%{GREEDYDATA:msg}\n*(?<stacktrace>(.|\r|\n)*)"}
I have checked it with an online Grok debugger, but there is a matching error in the stacktrace field.

Please suggest something. Thanks in advance.

If you want to match the whole stacktrace, you need a multiline filter. The following multiline codec should work for you:

codec => multiline {
    pattern => "^%{TIME} "
    negate => true
    what => previous
}
Explanation: every line that does not start with a timestamp (like 23:28:32.226) is merged into the previous event. See also the Logstash documentation on handling multiline events.
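
To see the folding in action, here is a minimal, self-contained sketch (assuming only that Logstash is installed): paste the sample log into stdin and the merged events are printed to stdout.

input {
  stdin {
    # Fold every line that does NOT start with a timestamp into the previous event
    codec => multiline {
        pattern => "^%{TIME} "
        negate => true
        what => previous
    }
  }
}
output {
  # Print each assembled event for inspection
  stdout { codec => rubydebug }
}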

Now to your pattern. Here is what works for me:

%{TIME:timestamp} %{LOGLEVEL:loglevel}  \[%{DATA:logger}\]   %{GREEDYDATA:message}\n(?<stacktrace>(.|\r|\n)*)

The result looks like this:
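Roughly, with field values taken from the sample log above, the parsed event comes out as follows (a sketch of rubydebug-style output, not an actual capture):

{
    "timestamp"  => "23:28:32.226",
    "loglevel"   => "WARN",
    "logger"     => "MsgParser:ListProc-Q0:I5",
    "message"    => "Parsing error",
    "stacktrace" => "Error mapping the fieldAdditional Information: \n\n    at com.authentic.mapper.parsing.LengthVar.readBytes(LengthVar.java:178)\n    at ..."
}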

hi @Phonolog, please check my answer.

Please update your original question instead of posting another answer.

hi @Phonolog, I have edited the question, please provide some solution now.

I think you are not using the multiline field on the right side. You have to check the "negate the multiline regex" checkbox; this corresponds to negate => true in the config.

OK, now the online tool also shows a match result for "stacktrace", but when I visualize it in Kibana this parsing does not happen: Kibana does not show the field name stacktrace in the Available Fields tab. One more thing: in the online Grok Constructor tool, if I paste two log entries, the first one with a stacktrace and the second a normal log line, it does not identify the second line as a separate multiline event and simply appends it to the first entry's stacktrace.

Putting it all together, the full pipeline configuration:
input {
  file {
    path => "/var/log/yourlog.log"
    # Read the file from the start instead of only tailing new lines
    start_position => "beginning"
    # Merge every line that does not start with a timestamp into the previous event
    codec => multiline {
        pattern => "^%{TIME} "
        negate => true
        what => previous
    }
  }
}
filter {
  grok {
    # Split each event into timestamp, loglevel, logger, message and stacktrace
    match => [ "message", "%{TIME:timestamp} %{LOGLEVEL:loglevel}  \[%{DATA:logger}\]   %{GREEDYDATA:message}\n(?<stacktrace>(.|\r|\n)*)" ]
  }
}
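
One possible refinement, not part of the original answer: because the pattern captures into message, a field the event already carries, grok will by default add the capture as a second value rather than replace the original. The grok filter's documented overwrite option makes the capture replace the field instead:

filter {
  grok {
    match => [ "message", "%{TIME:timestamp} %{LOGLEVEL:loglevel}  \[%{DATA:logger}\]   %{GREEDYDATA:message}\n(?<stacktrace>(.|\r|\n)*)" ]
    # Replace the original multiline message with the captured one-line message
    # instead of storing both values in an array
    overwrite => [ "message" ]
  }
}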