
Sockets: Logstash TCP input crashing


We have a Logstash (v2.3) setup with one queue server running RabbitMQ, ten Elasticsearch nodes, and one web node for Kibana. Everything "works", and we have a large number of servers sending logs to the queue server. Most logs make it in, but we've noticed that many never show up.

Looking at the logstash.log file, we see the following start to appear:

{:timestamp=>"2016-07-15T16:21:34.638000+0000", :message=>"A plugin had an unrecoverable error. Will restart this plugin.\n  Plugin: <LogStash::Inputs::Tcp type=>\"syslog\", port=>5544, codec=><LogStash::Codecs::JSONLines charset=>\"UTF-8\", delimiter=>\"\\n\">, add_field=>{\"deleteme\"=>\"\"}, host=>\"0.0.0.0\", data_timeout=>-1, mode=>\"server\", ssl_enable=>false, ssl_verify=>true, ssl_key_passphrase=><password>>\n  Error: closed stream", :level=>:error}
We recently added UDP to the conf above for testing, but logs aren't reliably making it in through that either.

In case the Elasticsearch cluster configuration is relevant, here is the queue server's conf:

input {
  tcp {
    type => "syslog"
    port => "5544"
  }
  udp {
    type => "syslog"
    port => "5543"
  }
}

output {
  rabbitmq {
    key => "thekey"
    exchange => "theexchange"
    exchange_type => "direct"
    user => "username"
    password => "password"
    host => "127.0.0.1"
    port => 5672
    durable => true
    persistent => true
  }
}
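Per the error log above, the tcp input runs a json_lines codec, which expects exactly one JSON document per newline-terminated line; events without the trailing `\n` sit in the buffer and never flush. A quick way to sanity-check that framing outside the full pipeline is a small sender script. This is a minimal sketch (not the poster's setup): a throwaway local server stands in for the Logstash tcp input so the round trip can be checked end-to-end.

```python
import json
import socket
import socketserver
import threading

received = []
done = threading.Event()

class LineHandler(socketserver.StreamRequestHandler):
    """Stand-in for the Logstash tcp input with a json_lines codec:
    reads a single newline-delimited line from the client."""
    def handle(self):
        received.append(self.rfile.readline().rstrip(b"\n"))
        done.set()

# Bind an ephemeral local port; in a real test you would point the
# sender at the Logstash host on port 5544 instead.
server = socketserver.TCPServer(("127.0.0.1", 0), LineHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

event = {"type": "syslog", "message": "test event"}
with socket.create_connection(server.server_address) as sock:
    # json_lines framing: one JSON document terminated by "\n".
    sock.sendall((json.dumps(event) + "\n").encode("utf-8"))

done.wait(5)
server.shutdown()
```

If events sent this way arrive but production traffic does not, the problem is more likely in the senders' framing or connection handling than in the input plugin itself.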
We have a 10-node Elasticsearch cluster set up to pull from the queue server; it works as expected and runs the same version of Logstash as the queue server. The nodes pull from the RabbitMQ server with this conf:

input {
  rabbitmq {
    durable => "true"
    host => "***.**.**.**"
    key => "thekey"
    exchange => "theexchange"
    queue => "thequeue"
    user => "username"
    password => "password"
  }
}
Can anyone help us figure out what's wrong with the tcp input plugin?

Thanks