
Logstash: sending different fields to different Elasticsearch indices

I have a Filebeat instance that ships Apache access logs to Logstash. The Logstash pipeline transforms the log lines and loads the processed fields, e.g. field1, field2 and field3, into Elasticsearch, into an index indexA. The flow is simple and it works. Here is my pipeline.conf:

input{
    beats{
        port => "5043"
    }
}
filter 
{

    grok 
    {
        patterns_dir => ["/usr/share/logstash/patterns"]
        match =>{   "message" => ["%{IPORHOST:[client_ip]} - %{DATA:[user_name]} \[%{HTTPDATE:[access_time]}\] \"%{WORD:[method]} %{DATA:[url]} HTTP/%{NUMBER:[http_version]}\" %{NUMBER:[response_code]} %{NUMBER:[bytes]}( \"%{DATA:[referrer]}\")?( \"%{DATA:[user_agent]}\")?",
                    "%{IPORHOST:[remote_ip]} - %{DATA:[user_name]} \\[%{HTTPDATE:[time]}\\] \"-\" %{NUMBER:[response_code]} -" ] 
                }
        remove_field => "@version"
        remove_field => "beat"
        remove_field => "input_type"
        remove_field => "source"
        remove_field => "type"
        remove_field => "tags"
        remove_field => "http_version"
        remove_field => "@timestamp"
        remove_field => "message"
    }
    mutate
    {
        add_field => { "field1" => "%{access_time}" }
        add_field => { "field2" => "%{host}" }
        add_field => { "field3" => "%{read_timestamp}" }
    }
}
output {
    elasticsearch{
        hosts => ["localhost:9200"]
        index => "indexA"
    }
}
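A side note on the grok block above (a style suggestion only, assuming standard Logstash config syntax): the repeated `remove_field` lines can be written once with an array value, which is the form the documentation uses:

```
grok {
    patterns_dir => ["/usr/share/logstash/patterns"]
    # match patterns unchanged from above
    remove_field => ["@version", "beat", "input_type", "source",
                     "type", "tags", "http_version", "@timestamp", "message"]
}
```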
Now what I want to do is add two more fields, field4 and field5, and put them into a separate index named indexB. So in the end, indexA holds field1, field2 and field3, while indexB holds field4 and field5.

Here is the modified pipeline.conf so far, which does not seem to work:

input{
    beats{
        port => "5043"
    }
}
filter 
{

    grok 
    {
        patterns_dir => ["/usr/share/logstash/patterns"]
        match =>{   "message" => ["%{IPORHOST:[client_ip]} - %{DATA:[user_name]} \[%{HTTPDATE:[access_time]}\] \"%{WORD:[method]} %{DATA:[url]} HTTP/%{NUMBER:[http_version]}\" %{NUMBER:[response_code]} %{NUMBER:[bytes]}( \"%{DATA:[referrer]}\")?( \"%{DATA:[user_agent]}\")?",
                    "%{IPORHOST:[remote_ip]} - %{DATA:[user_name]} \\[%{HTTPDATE:[time]}\\] \"-\" %{NUMBER:[response_code]} -" ] 
                }
        remove_field => "@version"
        remove_field => "beat"
        remove_field => "input_type"
        remove_field => "type"
        remove_field => "http_version"
        remove_field => "@timestamp"
        remove_field => "message"
    }
    mutate
    {
        add_field => { "field1" => "%{access_time}" }
        add_field => { "field2" => "%{host}" }
        add_field => { "field3" => "%{read_timestamp}" }
    }   
}
output {
    elasticsearch{
        hosts => ["localhost:9200"]
        index => "indexA"
    }
}
filter
{
    mutate
    {
        add_field => { "field4" => "%{source}" }
        add_field => { "field5" => "%{tags}" }
        remove_field => "field1"
        remove_field => "field2"
        remove_field => "field3"
    }
}
output {
    elasticsearch{
        hosts => ["localhost:9200"]
        index => "indexB"
    }
}   

Could someone please point out where my mistake is, or suggest an alternative approach?
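For context on why the second attempt misbehaves (this is my understanding of how Logstash compiles a pipeline, not something stated in the question): Logstash concatenates every filter block and every output block in file order into a single pipeline, so each event passes through all filters combined and is then sent to all outputs. The config above therefore behaves roughly as if it were written:

```
filter {
    grok   { ... }   # first filter block
    mutate { ... }   # adds field1/field2/field3
    mutate { ... }   # second filter block still runs on the same event:
                     # adds field4/field5, removes field1-field3
}
output {
    elasticsearch { hosts => ["localhost:9200"] index => "indexA" }
    elasticsearch { hosts => ["localhost:9200"] index => "indexB" }
}
```

Every event ends up with only field4 and field5 and is written to both indices, rather than being split.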

You need to use the clone filter to duplicate the events. Then you can add the desired fields to each respective event and sink them into two different ES indices:

input{
    beats{
        port => "5043"
    }
}
filter 
{

    grok 
    {
        patterns_dir => ["/usr/share/logstash/patterns"]
        match =>{   "message" => ["%{IPORHOST:[client_ip]} - %{DATA:[user_name]} \[%{HTTPDATE:[access_time]}\] \"%{WORD:[method]} %{DATA:[url]} HTTP/%{NUMBER:[http_version]}\" %{NUMBER:[response_code]} %{NUMBER:[bytes]}( \"%{DATA:[referrer]}\")?( \"%{DATA:[user_agent]}\")?",
                    "%{IPORHOST:[remote_ip]} - %{DATA:[user_name]} \\[%{HTTPDATE:[time]}\\] \"-\" %{NUMBER:[response_code]} -" ] 
                }
        remove_field => "@version"
        remove_field => "beat"
        remove_field => "input_type"
        remove_field => "type"
        remove_field => "http_version"
        remove_field => "@timestamp"
        remove_field => "message"
    }
    clone {
        clones => ["log1", "log2"]
    }
    if [type] == "log1" {
        mutate
        {
            add_field => { "field1" => "%{access_time}" }
            add_field => { "field2" => "%{host}" }
            add_field => { "field3" => "%{read_timestamp}" }
        }
    } else {   
        mutate
        {
            add_field => { "field4" => "%{source}" }
            add_field => { "field5" => "%{tags}" }
        }
    }
}
output {
    if [type] == "log1" {
        elasticsearch{
            hosts => ["localhost:9200"]
            index => "indexA"
        }
    } else {   
        elasticsearch{
            hosts => ["localhost:9200"]
            index => "indexB"
        }
    }
}   
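One caveat with the clone filter worth knowing (my understanding of its behavior, not covered above): clone emits the original event unchanged in addition to one copy per entry in `clones`, and only the copies get their `type` field set to the clone name. Since the grok block removes `type`, the untouched original would also fall into the `else` branch and land in indexB as a duplicate. If that matters, the original can be dropped explicitly, for example:

```
filter {
    clone {
        clones => ["log1", "log2"]
    }
    # the original event has no [type] after grok's remove_field,
    # so keep only the two tagged clones
    if ![type] {
        drop { }
    }
}
```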

This was really helpful for my case, special thanks for adapting the code.

Awesome, glad it helped!