Logstash grok filter - dynamically named fields

I have log lines in the following format and want to extract the fields:
[field1: content1] [field2: content2] [field3: content3] ...
I know neither the field names nor the number of fields.

I tried backreferences and the sprintf format, but with no result:
match => [ "message", "(?:\[(\w+): %{DATA:\k<-1>}\])+" ] # not working
match => [ "message", "(?:\[%{WORD:fieldname}: %{DATA:%{fieldname}}\])+" ] # not working
The kv filter is not suitable either, because the field contents may contain spaces.

Is there any plugin/strategy to fix this problem?

The Logstash ruby plugin can help you. Here is the configuration:
input {
  stdin {}
}

filter {
  ruby {
    code => "
      # Split on the '] [' separators, strip the remaining brackets,
      # then store each 'name: value' pair as an event field.
      fieldArray = event['message'].split('] [')
      for field in fieldArray
        field = field.delete '['
        field = field.delete ']'
        result = field.split(': ')
        event[result[0]] = result[1]
      end
    "
  }
}

output {
  stdout {
    codec => rubydebug
  }
}
With your log line:
[field1: content1] [field2: content2] [field3: content3]
this is the output:
{
       "message" => "[field1: content1] [field2: content2] [field3: content3]",
      "@version" => "1",
    "@timestamp" => "2014-07-07T08:49:28.543Z",
          "host" => "abc",
        "field1" => "content1",
        "field2" => "content2",
        "field3" => "content3"
}
I tried it with 4 fields and it also works.

Note that event in the ruby code is the Logstash event. You can use it to get any event field, such as message, @timestamp, etc.
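The parsing logic itself can be checked outside Logstash with plain Ruby. This is a hypothetical standalone sketch: the Logstash event is replaced by a plain Hash, and note that on Logstash 5+ the event API changed, so you would use event.get('message') / event.set(key, value) instead of the event['message'] hash syntax shown in the filter above.

```ruby
# Standalone sketch of the same parsing logic (hypothetical:
# a plain Hash stands in for the Logstash event object).
message = '[field1: content1] [field2: content2] [field3: content3]'
event = {}

# Split on the '] [' separators, then strip the leftover brackets.
fieldArray = message.split('] [')
for field in fieldArray
  field = field.delete('[').delete(']')
  key, value = field.split(': ')
  event[key] = value
end
```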
Enjoy!

I found another way, using a regular expression:
ruby {
  code => "
    fields = event['message'].scan(/(?<=\[)\w+: .*?(?=\](?: |$))/)
    for field in fields
      field = field.split(': ')
      event[field[0]] = field[1]
    end
  "
}
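The scan-based variant can also be exercised in plain Ruby (sketch; again a plain Hash stands in for the Logstash event). The lookbehind (?<=\[) and lookahead (?=\](?: |$)) require the surrounding brackets without capturing them, which is why values may contain spaces:

```ruby
# Standalone check of the scan-based approach (no Logstash needed).
message = '[field1: content with spaces] [field2: content2]'
event = {}

# Each match is a bare 'name: value' string with the brackets excluded.
message.scan(/(?<=\[)\w+: .*?(?=\](?: |$))/).each do |field|
  key, value = field.split(': ')
  event[key] = value
end
```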
I know this is an old post, but I just came across it today, so I thought I'd offer an alternate method. Note that, as a rule, I would almost always use a ruby filter, as suggested in either of the two previous answers. However, I thought I would offer this as an alternative.

If there is a fixed number of fields, or a maximum number of fields (i.e., there may be fewer than three fields, but there will never be more than three), this can also be done with a combination of grok and mutate filters:
# Test message is: `[fieldname: value]`
# Store values in [@metadata] so we don't have to explicitly delete them.
grok {
  match => {
    "[message]" => [
      "\[%{DATA:[@metadata][_field_name_01]}:\s+%{DATA:[@metadata][_field_value_01]}\]( \[%{DATA:[@metadata][_field_name_02]}:\s+%{DATA:[@metadata][_field_value_02]}\])?( \[%{DATA:[@metadata][_field_name_03]}:\s+%{DATA:[@metadata][_field_value_03]}\])?"
    ]
  }
}

# Rename the fieldname, value combinations. I.e., if the following data is in the message:
#
# [foo: bar]
#
# It will be saved in the elasticsearch output as:
#
# {"foo":"bar"}
#
mutate {
  rename => {
    "[@metadata][_field_value_01]" => "[%{[@metadata][_field_name_01]}]"
    "[@metadata][_field_value_02]" => "[%{[@metadata][_field_name_02]}]"
    "[@metadata][_field_value_03]" => "[%{[@metadata][_field_name_03]}]"
  }
  tag_on_failure => []
}
For those who may not be as familiar with regular expressions, the captures in ()? are optional regex matches, meaning that the expression will not fail if there is no match. The tag_on_failure => [] option in the mutate filter ensures that no error is appended to tags if one of the renames fails because there was no data to capture and, therefore, no field to rename.
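What the grok + mutate combination does can be approximated in plain Ruby (a sketch, not the real grok engine: grok's %{DATA} corresponds to a non-greedy .*?, and the optional groups mirror the optional second and third bracket pairs):

```ruby
# Rough plain-Ruby approximation of the grok pattern above.
# Non-capturing (?: ...)? groups wrap the optional bracket pairs so the
# captures come out as flat name/value pairs, like the [@metadata] fields.
pattern = /\[(.*?):\s+(.*?)\](?: \[(.*?):\s+(.*?)\])?(?: \[(.*?):\s+(.*?)\])?/
m = '[foo: bar] [baz: qux]'.match(pattern)

# Equivalent of the mutate/rename step: promote each name/value pair
# that actually matched into a top-level field; unmatched pairs are nil.
event = {}
m.captures.each_slice(2) { |name, value| event[name] = value if name }
```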