
Is there a way to parse a file from the last parsed position in Logstash?

Tags: elasticsearch, logstash, elastic-stack, logstash-grok

Here is my Logstash configuration (modified from a previously linked answer):

Here are my questions:

Slowness: my file is 50 MB, and Logstash takes a very long time to parse it. Is something in this configuration causing the slowness, is there another cause, or is Logstash simply slow on files of this size? (A note on grok performance follows the discussion at the end.)

Resume parsing the log from the last parsed position, since I will be shipping these parsed events to ELK. (A sincedb sketch follows the configuration below.)

Handle a multiline event when it is the last entry in the log. (An auto_flush_interval sketch follows the sample log below.)

input {
  file {
    path => "/u/bansalp/activemq_primary_plugin.stats.log.0"
### For testing and continual reprocessing of the same file; remove these before production
    start_position => "beginning"
    sincedb_path => "/dev/null"
### Let's read the log file and recombine multiline entries
    codec => multiline {
      # Grok pattern names are valid! :)
      pattern => "^\[%{YEAR}%{MONTHNUM}%{MONTHDAY}\s*%{TIME}"
      negate => true
      what => "previous"
    }
  }
}
filter {
    ### Let's get some high level data before we split the line (note: anything you grab before the split gets copied)
    if [message] =~ "logPerDestinationStats" {
        grok {
            match => { 
                "message" => "^\[%{YEAR:yr}%{MONTHNUM:mnt}%{MONTHDAY:daynum}\s*%{TIME:time}\s*%{TZ:timezone}\s*(%{DATA:thread_name})\s*%{JAVACLASS:javaclass}#%{WORD:method}\s*%{LOGLEVEL}\]\s*%{DATA}:%{DATA:msg}"
            }
        }
        ### Split the event back into individual lines now. (The separator may be \r or \n; test which one.)
        split { 
            "field" => "message"
        }
        ### OK, the lines should now be independent; let's add another grok here to extract the fields shown in your example [fieldA: str | field2: 0 ...] etc.
        ### Note: you should look to change the grok pattern to better suit your requirements, I used DATA here to quickly capture your content
        if [message] =~ "^\[destName" {
            grok {
                break_on_match => false
                match => { "message" => "^\[%{DATA}:\s*%{DATA:destName}\s*\|\s*%{DATA}:\s*%{NUMBER:enqueueCount}\s*\|\s*%{DATA}:\s*%{NUMBER:dequeueCount}\s*\|\s*%{DATA}:\s*%{NUMBER:dispatchCount}\s*\|\s*%{DATA}:\s*%{NUMBER:expiredCount}\s*\|\s*%{DATA}:\s*%{NUMBER:inflightCount}\s*\|\s*%{DATA}:\s*%{NUMBER:msgsHeld}\s*\|\s*%{DATA}:\s*%{NUMBER:msgsCached}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryPercentUsage}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryUsage}\s*\|\s*%{DATA}:\s*%{NUMBER:memoryLimit}\s*\|\s*%{DATA}:\s*%{NUMBER:avgEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:maxEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:minEnqueueTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:currentConsumers}\s*\|\s*%{DATA}:\s*%{NUMBER:currentProducers}\s*\|\s*%{DATA}:\s*%{NUMBER:blockedSendsCount}\s*\|\s*%{DATA}:\s*%{NUMBER:blockedSendsTimeMs}\s*\|\s*%{DATA}:\s*%{NUMBER:minMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:maxMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:avgMsgSize}\s*\|\s*%{DATA}:\s*%{NUMBER:totalMsgSize}\]$" }
            }
        }
        mutate {
            convert => { "message" => "string" }
            add_field => {
                "session_timestamp" => "%{yr}-%{mnt}-%{daynum} %{time} %{timezone}"
                "load_timestamp" => "%{@timestamp}"
            }
            remove_field => ["yr","mnt", "daynum", "time", "timezone"]
        }
    }
}
output {
  stdout { codec => rubydebug }
}
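
A note on question 2: the file input tracks its read offset in a sincedb file, and the sincedb_path => "/dev/null" line above deliberately discards that state on every restart. A minimal sketch that persists the offset instead (the sincedb path shown is illustrative; any writable location works):

input {
  file {
    path => "/u/bansalp/activemq_primary_plugin.stats.log.0"
    ### Persist the read offset so Logstash resumes where it left off after a restart
    sincedb_path => "/u/bansalp/.sincedb_activemq_stats"
    ### With a persisted sincedb, start_position only applies to files not yet seen;
    ### keep start_position => "beginning" only if you want the historical contents once
    codec => multiline {
      pattern => "^\[%{YEAR}%{MONTHNUM}%{MONTHDAY}\s*%{TIME}"
      negate => true
      what => "previous"
    }
  }
}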
A sample of the same log:

[20170513 06:08:29.734 EDT (StatsCollector-1) bansalp.tools.jms.ActiveMQLoggingPlugin$ActiveMQDestinationStatsCollector#logPerDestinationStats INFO] ActiveMQ Destination Stats (97 destinations):
[destName: topic://topic1 | enqueueCount: 1 | dequeueCount: 1 | dispatchCount: 1 | expiredCount: 0 | inflightCount: 0 | msgsHeld: 0 | msgsCached: 0 | memoryPercentUsage: 0 | memoryUsage: 0 | memoryLimit: 536870912 | avgEnqueueTimeMs: 0.0 | maxEnqueueTimeMs: 0 | minEnqueueTimeMs: 0 | currentConsumers: 1 | currentProducers: 0 | blockedSendsCount: 0 | blockedSendsTimeMs: 0 | minMsgSize: 2392 | maxMsgSize: 2392 | avgMsgSize: 2392.0 | totalMsgSize: 2392]
[destName: topic://topic2 | enqueueCount: 0 | dequeueCount: 0 | dispatchCount: 0 | expiredCount: 0 | inflightCount: 0 | msgsHeld: 0 | msgsCached: 0 | memoryPercentUsage: 0 | memoryUsage: 0 | memoryLimit: 536870912 | avgEnqueueTimeMs: 0.0 | maxEnqueueTimeMs: 0 | minEnqueueTimeMs: 0 | currentConsumers: 3 | currentProducers: 0 | blockedSendsCount: 0 | blockedSendsTimeMs: 0 | minMsgSize: 0 | maxMsgSize: 0 | avgMsgSize: 0.0 | totalMsgSize: 0]
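
A note on question 3: the multiline codec only emits a buffered event when the next non-matching line arrives, so the final multiline block in a file stays buffered indefinitely. Recent versions of the multiline codec support auto_flush_interval, which flushes the pending event after a period of inactivity; a sketch (the 5-second value is an arbitrary choice):

    codec => multiline {
      pattern => "^\[%{YEAR}%{MONTHNUM}%{MONTHDAY}\s*%{TIME}"
      negate => true
      what => "previous"
      ### Flush the buffered event if no new line arrives within 5 seconds,
      ### so the last block of the file is not held back indefinitely
      auto_flush_interval => 5
    }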

Comments on the question:

"How slow is it, exactly? How many cores does the machine running Logstash have?"

"It is an 8-core machine (though not entirely idle). I have observed that parsing this data takes more than an hour. Do you think something is wrong with the configuration? It prints 'Settings: Default pipeline workers: 20' and 'Pipeline main started' and then sits there for over an hour; eventually I have to kill the process with Ctrl-C."

"What do you see in Logstash's logs? If you are on Logstash > 5, run this command to check the pipeline stats: curl -XGET 'logstashIp:9600/_node/stats/pipeline?pretty'"

"I found this in logstash.log-20170508: {:timestamp=>\"2017-05-06T03:02:34.974000-0400\", :message=>\"translation missing: en.logstash.runner.configuration.file-not-found\", :level=>:error}, with the same error repeated at 03:08:05 and 05:12:35."
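A note on question 1 (an untested suggestion): long chains of %{DATA} captures, as in the per-destination grok above, force heavy regex backtracking on a 50 MB input. Matching the literal field names and using %{NOTSPACE} instead of %{DATA} anchors the pattern and usually speeds it up considerably. A sketch showing only the first three fields (extend the same way for the rest):

        grok {
            ### Literal keys ("destName:", "enqueueCount:") anchor the regex and avoid
            ### the backtracking that consecutive %{DATA} captures cause on long lines
            match => { "message" => "^\[destName:\s*%{NOTSPACE:destName}\s*\|\s*enqueueCount:\s*%{NUMBER:enqueueCount}\s*\|\s*dequeueCount:\s*%{NUMBER:dequeueCount}" }
        }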