
elasticsearch: Error indexing into elasticsearch from filebeat and logstash

I have an ELK stack set up that uses log files locally; now I am trying to add filebeat, which will output to logstash for filtering before indexing into elasticsearch. Here is my config. filebeat.yml:

prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
  paths:
    - /var/samplelogs/wwwlogs/framework*.log
  input_type: log
  document_type: framework
logstash:
   # The Logstash hosts
   hosts: ["localhost:5044"]
logging:
   to_syslog: true
Here is the logstash config:

input {
  beats {
    port => 5044
  }
}
filter {
  if [type] == "framework" {
    grok {
      patterns_dir => "/etc/logstash/conf.d/patterns"
      match => { "message" => "\[%{WR_DATE:logtime}\] \[error\] \[app %{WORD:application}\] \[client %{IP:client}\] \[host %{HOSTNAME:host}\] \[uri %{URIPATH:resource}\] %{GREEDYDATA:error_message}" }
    }
    date {
      locale => "en"
      match => [ "logtime", "EEE MMM dd HH:mm:ss yyyy" ]
    }
  }
}
output {
  elasticsearch {
    host => "localhost"
    port => "9200"
    protocol => "http"
    # manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
This logstash config passes the --configtest check. filebeat starts fine, but I see the following errors in logstash.log:

{:timestamp=>"2016-03-09T12:26:58.976000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:26:58-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:03.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:03-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:08.060000-0700", :message=>"Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];", :level=>:error}
{:timestamp=>"2016-03-09T12:27:08.060000-0700", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"Java::OrgElasticsearchClusterBlock::ClusterBlockException", :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:215)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:67)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:153)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:08.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:08-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
{:timestamp=>"2016-03-09T12:27:13.977000-0700", :message=>["INFLIGHT_EVENTS_REPORT", "2016-03-09T12:27:13-07:00", {"input_to_filter"=>20, "filter_to_output"=>20, "outputs"=>[]}], :level=>:warn}
These errors repeat continuously.
The elasticsearch log contains the error IllegalArgumentException: empty text. I tried changing the protocol in the logstash output config to "node".
It looks to me like elasticsearch is unreachable, but it is running:

$ curl localhost:9200
{
  "status" : 200,
  "name" : "Thena",
  "version" : {
    "number" : "1.1.2",
    "build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
    "build_timestamp" : "2014-05-22T12:27:39Z",
    "build_snapshot" : false,
    "lucene_version" : "4.7"
  },
  "tagline" : "You Know, for Search"
}
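A root check against localhost:9200 only shows the node is up; the "no master" block is a cluster-level condition. The cluster health endpoint, which exists in this Elasticsearch 1.x release, reports it directly. This diagnostic step is an addition, not from the original post:

```
# A healthy single-node setup reports "status": "yellow" or "green";
# a red status or a 503 response matches the SERVICE_UNAVAILABLE errors above.
curl 'localhost:9200/_cluster/health?pretty'
```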

This is my first attempt at logstash. Can anyone point me in the right direction?

I got my stack working. Everyone's comments were on point, but in this case it came down to a configuration tweak that I still don't fully understand.
In the logstash output config, inside the elasticsearch {} section, I commented out the port and protocol options (which were set to 9200 and http), and it worked. My first fix attempt was to remove only the protocol option, so that it would fall back to the default "node" protocol. That didn't work, apparently because I had forgotten to also remove the port option. So it seems I could not get it working over HTTP, and after removing both options it worked fine.
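Based on that description, the working output section would look something like the following. This is a sketch against the Logstash 1.x elasticsearch output used in the question; the exact final config is not shown in the original answer:

```
output {
  elasticsearch {
    host => "localhost"
    # port and protocol removed: protocol then defaults to "node",
    # joining the cluster as a client node instead of using HTTP on 9200
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```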

This may not help future users much, but if you are going to use the node protocol, make sure you do not forget to remove the port option from the config as well. At least, I believe that is what I ran into here.

"SERVICE_UNAVAILABLE/1/state not recovered" means the cluster is unhappy. Check it, then search for more information on whatever you find. Not sure the LS errors and Filebeat are related. The filebeat config's output section looks wrong: you specified the logstash output at the top level, but it should be nested under output:
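For reference, nesting the logstash section under output in filebeat.yml would look like this (a sketch assuming the filebeat 1.x config layout used in the question):

```
output:
  logstash:
    # The Logstash hosts
    hosts: ["localhost:5044"]
logging:
  to_syslog: true
```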