
Logstash with Elasticsearch


I am trying to connect Logstash to Elasticsearch, but I cannot get it to work.

Here is my Logstash configuration:

input {
  stdin {
    type => "stdin-type"
  }

  file {
    type => "syslog-ng"

    # Wildcards work, here :)
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
  }
}

output {
  stdout { }
  elasticsearch {
    type => "all"
    embedded => false
    host => "192.168.0.23"
    port => "9300"
    cluster => "logstash-cluster"
    node_name => "logstash"
  }
}
These are the only settings I changed in elasticsearch.yml:

cluster.name: logstash-cluster
node.name: "logstash"
node.master: false
network.bind_host: 192.168.0.23
network.publish_host: 192.168.0.23
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["localhost"]

With these configurations I cannot get Logstash to connect to ES. Can someone tell me where I am going wrong?

First, I would suggest matching your "type" attributes. You have two different types in your input, and a type in your output that does not exist in any of your inputs.

For testing, change your output to:

output {
  stdout { }
  elasticsearch {
    type => "stdin-type"
    embedded => false
    host => "192.168.0.23"
    port => "9300"
    cluster => "logstash-cluster"
    node_name => "logstash"
  }
}
Next, have you created an index on your ES instance?

From the guides I have used, and in my own experience (others may have another way that works), I have always used an index, so that when I push something into ES I can use the ES API to quickly check whether the data has gone in.
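As a quick sanity check, you can ask ES over its HTTP API whether any indices exist and how many documents they hold. This is a minimal sketch against a running cluster; the host below is taken from the question's config, and 9200 is the default HTTP port (the 9300 in the config is the transport port), so adjust both to your setup:

```shell
# List all indices with document counts and sizes
curl 'http://192.168.0.23:9200/_cat/indices?v'

# Count documents in a specific index ("my-index" is a placeholder name)
curl 'http://192.168.0.23:9200/my-index/_count'
```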

Another suggestion is to simply run the Logstash forwarder and indexer with the debug flags to see what is going on behind the scenes.

Can you connect to the ES instance on 127.0.0.1? Also, try experimenting with the port and host. As a fairly new user of the Logstash system myself, I found that my initial understanding did not match the reality of the setup. Sometimes the host IP and port are not what you think they are. If you are willing to check your network and identify the listening ports and IPs, you can sort this out; otherwise, do some intelligent trial and error.
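One way to do that check, assuming shell access to the ES host (commands and ports are the common defaults, not something from the question itself):

```shell
# Show which addresses and ports Elasticsearch is actually bound to
# (on newer systems use `ss -tlnp` instead of netstat)
netstat -tlnp | grep -E ':9200|:9300'

# Then, from the Logstash host, verify the HTTP endpoint responds
curl 'http://192.168.0.23:9200'
```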


I highly recommend this guide as a comprehensive starting point. Both of the points I mentioned are addressed directly in it. The guide's starting point is slightly more involved, but its ideas and concepts are very thorough.

I got the same error message, and it took me a while before I found out, in the trace log of the Elasticsearch discovery process, that the IP address Logstash was using was incorrect.

I have several IP addresses, and Logstash was using the wrong one. After that, everything went well.

I cannot get Logstash to connect to ES

This happened to me when my Logstash and Elasticsearch versions were out of sync.

From the documentation:

VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch 1.1.1. If you use any other version of Elasticsearch, you should set
protocol => http
in this plugin.


As mentioned above, explicitly setting
protocol => http
solved this for me.

As Adam said, the issue was the protocol setting, so for testing I just did this:

logstash -e 'input { stdin { } } output { elasticsearch { host => localhost protocol => "http" port => "9200" } }'

This seems to work on OSX.

I have a two-node Elasticsearch cluster, and only one of the nodes is used for Logstash.

This configuration works for me:

Node elk1:

#/etc/elasticsearch/elasticsearch.yml

script.disable_dynamic: true
cluster.name: elk-fibra 
node.name: "elk1"
node.master: true
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["elk1.lab.fibra"]
root@elk1:

#/etc/logstash/conf.d/30-lumberjack-output.conf
output {
  elasticsearch { host => localhost protocol => "http" port => "9200" }
  stdout { codec => rubydebug }
}
Node elk2:

#/etc/elasticsearch/elasticsearch.yml

script.disable_dynamic: true
cluster.name: elk-fibra
node.name: "elk2"
node.master: false
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["elk1.lab.fibra"]

First, you do not need to create an index in ES.

That is because you do not need to create the "index" in Elasticsearch yourself; it will be created automatically when Logstash assigns one.

By the way, if you do not set an index value, it defaults to "logstash-%{+YYYY.MM.dd}".

(You can check this.) ~
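To see that default daily index appearing, you can list any logstash-* indices over the HTTP API. The host is taken from the earlier examples and 9200 is the default HTTP port; adjust both to your cluster:

```shell
# Default Logstash indices are named logstash-YYYY.MM.dd;
# this lists any that already exist
curl 'http://192.168.0.23:9200/_cat/indices/logstash-*?v'
```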

Second, you can keep your "elasticsearch type" the same as your "input type"; alternatively, you can write the output like this:

output {
  stdout { }
  elasticsearch {
    embedded => false
    host => "192.168.0.23"
    port => "9300"
    index => "a_new_index"
    cluster => "logstash-cluster"
    node_name => "logstash"
    document_type => "my-own-type"
  }
}
With "document_type", you can save your logs under whatever type you want. ~

Finally, if you do not assign a "document_type", it will be set to the same value as your "input type".

And even if you forget to specify a type in all of the configuration files, the type will be set to the default value, which is "logs". ~

OK, enjoy~

The following works for me on:

elasticsearch: 5.4.0

logstash: 5.4.0

(I used Docker containers on OpenStack.)

For Elasticsearch:

/usr/share/elasticsearch/config/elasticsearch.yml should look like this:

cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
No other file under /usr/share/elasticsearch/config/ needs to be changed.

Run Elasticsearch with a simple command:

sudo docker run --name elasticsearch -p 9200:9200 docker.elastic.co/elasticsearch/elasticsearch:5.4.0
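Once the container is up, you can verify Elasticsearch answers on the mapped port. The `elastic`/`changeme` credentials are the 5.x image defaults also used in the configs above; this is just a sanity check, not part of the setup:

```shell
# The 5.x images ship with X-Pack security enabled, so authenticate
# with the default elastic user; a JSON banner with the cluster name
# indicates the node is up
curl -u elastic:changeme 'http://localhost:9200'
```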
For Logstash:

/usr/share/logstash/config/logstash.yml
should look like this:

http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline

# http://111.*.*.11:9200 is the IP & Port of Elasticsearch's server 
xpack.monitoring.elasticsearch.url: http://111.*.*.11:9200

# "elastic" is the user name of Elasticsearch's account
xpack.monitoring.elasticsearch.username: elastic 

# "changeme" is the password of Elasticsearch's "elastic" user 
xpack.monitoring.elasticsearch.password: changeme
No other file under /usr/share/logstash/config/ needs to be changed.

/usr/share/logstash/pipeline/logstash.conf
should look like this:

input {
        file {
                path => "/usr/share/logstash/test_i.log"
        }
}

output {
        elasticsearch {
                # http://111.*.*.11:9200 is the IP & Port of Elasticsearch's server
                hosts => ["http://111.*.*.11:9200"]

                # "elastic" is the user name of Elasticsearch's account
                user => "elastic"

                # "changeme" is the password of Elasticsearch's "elastic" user
                password => "changeme"
        }
}
Run Logstash with a simple command:

sudo docker run --name logstash --expose 25826 -p 25826:25826 docker.elastic.co/logstash/logstash:5.4.0 --debug
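To confirm that Logstash itself came up, Logstash 5.x exposes a monitoring API on port 9600. Note the run command above only publishes 25826, so you would need to also publish 9600 (e.g. add `-p 9600:9600`) for this check to work from the host; that extra mapping is my addition, not part of the original setup:

```shell
# Basic node info from the Logstash monitoring API
curl 'http://localhost:9600/?pretty'
```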

Note: no configuration is needed before running the Docker containers. First run a container with the simple command mentioned above. Then go to the respective directory, make the required changes, save them, exit the container, and restart it; the changes will be reflected.

Comments:

- The key point was the "type" attribute. It has to match what is given in the input, and therefore what is given in the output. That is why Logstash could not match any input to the output! Thanks for the insight =) By the way, your article is very useful, I love it.
- In setting up a Logstash/ES/Kibana system I ran into a lot of "silly" problems. That guide gave me the best starting point. I could not keep it to myself! Happy to help.
- As you know, I tried your link and just got shotgun's generic support page; do you think the guide is still up?
- Did you check whether the port numbers are open?
- This simple configuration solved my problem. I started with one node and it did not have the protocol setting, but when I changed to a two-node cluster I had to add it. In fact, I filed an issue on logstash about it. Good to know it solved your problem :)