
elasticsearch / logstash performance degradation


I'm noticing some very strange behavior in Logstash. If my config file is set up like this:

input { 
        kafka { 
                topics => ["kafka-jmx"]
                bootstrap_servers => "kafka1.com:9092"
                consumer_threads => 1
        }

}
output {
                stdout {}
}
my consumption is about 20k messages per second from Kafka. I can see this because I started Logstash with an RMI listener, so I can watch what is happening inside the JVM through jconsole.
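(For reference, exposing the JVM like this is typically done with the standard JMX system properties passed through LS_JAVA_OPTS; the port and config file name below are placeholders, not the actual values used:)

export LS_JAVA_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=3000 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
bin/logstash -f kafka-jmx.conf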

As soon as I add a filter, like this:

input { 
        kafka { 
                topics => ["kafka-jmx"]
                bootstrap_servers => "kafka1.com:9092"
                consumer_threads => 1
        }

}
output {
                stdout {}
}
filter {
        json {  
                source => "message"
        }
        grok {  
                patterns_dir => "/home/ec2-user/logstash-5.2.0/bin/patterns/"
                match => {"metric_path" => [
                                "%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{WORD:kafka_metric_name},topic=%{KTOPIC:kafka_topic},partition=%{KPARTITION:topic_partition}\.%{GREEDYDATA:attr_type}",
                                "%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{WORD:kafka_metric_name},topic=%{KTOPIC:kafka_topic}\.%{GREEDYDATA:attr_type}",
                                "%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{GREEDYDATA:kafka_metric_name}\.%{GREEDYDATA:attr_type}"
                                ]
                         }
        }
        ruby {
                code => "event.set('time', event.get('@timestamp').to_f * 1000 )"
        }
        mutate {
                remove_field => ["message"]
                convert => {"time" => "integer"
                            "metric_value_number" => "integer"
                }
        }
}
it drops from 20k/sec to around 1,500/sec.

Then when I add the output, like this:

input { 
        kafka { 
                topics => ["kafka-jmx"]
                bootstrap_servers => "kafka1.com:9092"
                consumer_threads => 1
        }

}
output {
                stdout {}
}
filter {
        json {  
                source => "message"
        }
        grok {  
                patterns_dir => "/home/ec2-user/logstash-5.2.0/bin/patterns/"
                match => {"metric_path" => [
                                "%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{WORD:kafka_metric_name},topic=%{KTOPIC:kafka_topic},partition=%{KPARTITION:topic_partition}\.%{GREEDYDATA:attr_type}",
                                "%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{WORD:kafka_metric_name},topic=%{KTOPIC:kafka_topic}\.%{GREEDYDATA:attr_type}",
                                "%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{GREEDYDATA:kafka_metric_name}\.%{GREEDYDATA:attr_type}"
                                ]
                         }
        }
        ruby {
                code => "event.set('time', event.get('@timestamp').to_f * 1000 )"
        }
        mutate {
                remove_field => ["message"]
                convert => {"time" => "integer"
                            "metric_value_number" => "integer"
                }
        }
}
output {
        influxdb {
                host => "10.204.95.88"
                db => "monitoring"
                measurement => "BrokerMetrics"
                retention_policy => "one_week"
                allow_time_override => "true"
                exclude_fields => ["@timestamp", "@version", "path"]
                data_points => {
                                "time" => "%{time}"
                                "cluster_field" => "%{cluster}"
                                "kafka_host_field" => "%{kafka_host}"
                                "kafka_metric_group_field" => "%{kafka_metric_group}"
                                "kafka_metric_type_field" => "%{kafka_metric_type}"
                                "kafka_metric_name_field" => "%{kafka_metric_name}"
                                "kafka_topic_field" => "%{kafka_topic}"
                                "attr_type_field" => "%{attr_type}"
                                "cluster" => "%{[cluster]}"
                                "kafka_host" => "%{[kafka_host]}"
                                "kafka_metric_group" => "%{[kafka_metric_group]}"
                                "kafka_metric_type" => "%{[kafka_metric_type]}"
                                "kafka_metric_name" => "%{[kafka_metric_name]}"
                                "kafka_topic" => "%{[kafka_topic]}"
                                "attr_type" => "%{[attr_type]}"
                                "metric_value_number" => "%{metric_value_number}"
                                "metric_value_string" => "%{metric_value_string}"
                                "topic_partition_field" => "%{topic_partition}"
                                "topic_partition" => "%{[topic_partition]}"
                        }
                coerce_values => {"metric_value_number" => "integer"}
                send_as_tags => [ "kafka_host", "kafka_metric_group","cluster", "kafka_metric_type", "kafka_metric_name", "attr_type", "kafka_topic", "topic_partition" ]
                }
       }
consumption drops from 1,500/sec to about 300/sec. All in all, my rate has gone from 20,000/sec down to 300/sec.

I haven't changed any settings in the logstash.yml file, and I've set the heap size to 2g (the JVM shows there is plenty of free heap space). CPU usage is also only around 60%.

Why is this happening? I've also tried starting Logstash with -w 2, and gone all the way up to -w 4, but that doesn't seem to make any difference....
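(For reference, the heap and worker settings described above map roughly to the following; the config file name is a placeholder, and in Logstash 5.x the heap is normally set in config/jvm.options:)

# config/jvm.options -- heap pinned at 2g
-Xms2g
-Xmx2g

# start Logstash with more pipeline workers
bin/logstash -f kafka-jmx.conf -w 4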

A few things:

Your regular expressions need tuning.

One of the easiest wins for making grok run faster is to anchor your regular expressions. That means putting ^ at the front and $ at the end. This gives the regex engine some significant clues for finding matches and will reduce substring searching.

Use %{DATA} instead of %{GREEDYDATA}, unless it is the last field in the match.

Greediness hurts the performance of misses. The third match in your grok dictionary has two GREEDYDATAs in it. Change the first one to DATA and you may find your performance improves as a result. This is because GREEDYDATA tells the regex engine to match to the end of the string; if that doesn't match, it chops one character off the end and tries again, until the pattern either matches or is rejected. DATA works the other way around, starting with one character and expanding.
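Applied to the patterns above, the anchored versions would look roughly like this (same capture names; the only other change is the first GREEDYDATA of the third pattern tightened to DATA):

match => {"metric_path" => [
                "^%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{WORD:kafka_metric_name},topic=%{KTOPIC:kafka_topic},partition=%{KPARTITION:topic_partition}\.%{GREEDYDATA:attr_type}$",
                "^%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{WORD:kafka_metric_name},topic=%{KTOPIC:kafka_topic}\.%{GREEDYDATA:attr_type}$",
                "^%{DATA:kafka_host}\.%{DATA:kafka_metric_group}:type=%{DATA:kafka_metric_type},name=%{DATA:kafka_metric_name}\.%{GREEDYDATA:attr_type}$"
                ]
         }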



As for why the influxdb output slows things down, I don't have a definite answer. I do know that some outputs are less sophisticated than others, and that some outputs open a TCP connection for every event.
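One hedged thing to check on that front: if the installed logstash-output-influxdb version supports the buffered-flush options flush_size and idle_flush_time (treat these option names as an assumption and verify them against the plugin's documentation for your version), batching more points per write can cut per-event overhead:

influxdb {
        # ...existing settings unchanged...
        flush_size => 500        # assumed option: buffer up to 500 points per write
        idle_flush_time => 5     # assumed option: flush at least every 5 seconds
}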

There are also reports of the grok filter causing performance problems on Logstash 5.6.3.

Depending on the version you're running, you may be affected by that as well.

I'd suggest upgrading to the latest version of Logstash, or at the very least upgrading the grok plugin.
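A minimal sketch of that upgrade path, using the bundled plugin manager from the Logstash install directory:

bin/logstash-plugin list --verbose logstash-filter-grok   # check the installed version
bin/logstash-plugin update logstash-filter-grok           # pull the latest grok filter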