Statsd output does not work in my Logstash pipeline


Configuration file:

# input are the kafka messages
input
{
    kafka
    {
        topic_id => 'test2'
    }
}

# Try to match sensor info
filter
{
    json { source => "message" }
}

# StatsD and stdout output
output
{
    stdout
    {
        codec => line
        {
            format => "%{[testmessage][0][key]}"
        }
    }

    stdout { codec => rubydebug }

    statsd
    {
        host => "localhost"
        port => 8125
        increment => ["test.%{[testmessage][0][key]}"]
    }
}
Input Kafka message:

{"testmessage":[{"key":"key-1234"}]}
Output:

key-1234
{
    "testmessage" => [
        [0] {
            "key" => "key-1234"
        }
    ],
       "@version" => "1",
     "@timestamp" => "2015-11-09T20:11:52.374Z"
}
Log:

{:timestamp=>"2015-11-09T20:29:03.562000+0000", :message=>"Done running kafka input", :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.563000+0000", :message=>"Plugin is finished", :plugin=><LogStash::Outputs::Stdout codec=><LogStash::Codecs::Line format=>"%{[testmessage][0][key]}", charset=>"UTF-8">, workers=>1>, :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.564000+0000", :message=>"Plugin is finished", :plugin=><LogStash::Outputs::Statsd increment=>["test1.test", "test.%{[testmessage][0][key]}"], codec=><LogStash::Codecs::Plain charset=>"UTF-8">, workers=>1, host=>"localhost", port=>8125, namespace=>"logstash", sender=>"%{host}", sample_rate=>1, debug=>false>, :level=>:info}
{:timestamp=>"2015-11-09T20:29:03.564000+0000", :message=>"Pipeline shutdown complete.", :level=>:info}

Why doesn't statsd work in my Logstash pipeline? I have looked at many examples on Google and cannot figure out why. Any suggestions are welcome. Thanks.

I found the reason: the Logstash statsd output sends metrics over UDP by default, but my statsd server was configured to use TCP.