Logstash not creating the correct indices in Elasticsearch for Filebeat and Packetbeat

I have set up my Elastic Stack as follows. I am trying to ship logs and top data through Filebeat and Topbeat using a custom index name. However, Logstash does not create any index for the data I send with the custom index name. Logstash configuration:
input {
  beats {
    port => 27080
    congestion_threshold => 1500
  }
  jmx {
    path => "file://Machine01/Users/username/projects/Logstash/logstash/bin/jmx"
    polling_frequency => 15
    type => "jmx"
    nb_thread => 4
  }
}
filter {
  if [type] == "Type1" {
    grok {
      break_on_match => false
      patterns_dir => ["C:\Users\users\projects\Logstash\logstash\bin\patterns"]
      match => { "message" => "%{YEAR:Year}%{MONTHNUM:Month}%{MONTHDAY:Day} %{HOUR:Hour}%{MINUTE:Minute}%{SECOND:Second} %{LogLevel:LogVerbosity} %{MODULE:MODULENAME}%{SPACE}%{MESSAGEID:MESSAGEID} %{SUBMODULE:SUBMODULE} %{MESSAGE:MESSAGE}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
      add_tag => ["Groked"]
    }
    if "_grokparsefailure" in [tags] {
      drop { }
    }
    if [type] == "jmx" {
      if ("OperatingSystem.ProcessCpuLoad" in [metric_path] or "OperatingSystem.SystemCpuLoad" in [metric_path]) {
        ruby {
          code => "event['cpuLoad'] = event['metric_value_number'] * 100"
          add_tag => [ "cpuLoad" ]
        }
      }
    }
  }
}
output {
  if [type] == "jmx" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "jmx-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      manage_template => true
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
    if [type] == "dbtable" {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "dbtable-%{+YYYY.MM.dd}"
      }
    }
  }
}
Filebeat configuration:
filebeat:
  prospectors:
    - paths:
        - test.log
      input_type: log
      tail_files: false
      scan_frequency: 3s
      backoff: 20s
      backoff_factor: 1
      document_type: custom
  registry:
  fields:
    type: custom
  spool_size: 10000
  idle_timeout: 2s
output:
  logstash:
    index: custom
    hosts: ["valid hostname"]
logging:
  to_files: true
  files:
    path: ./
    name: filebeat.log
    rotateeverybytes: 10485760
  level: debug
I expected that when I set index: custom, it would create an index "custom-YYYY.MM.dd" in Elasticsearch. Instead it just creates the index "%{[@metadata][beat]}-%{+YYYY.MM.dd}" in Elasticsearch.

If I comment out the line (#index: custom), it creates the index filebeat-YYYY.MM.dd in Elasticsearch.

What is going wrong, and why does it not work with a custom index pattern?

Setting the Filebeat output.logstash.index configuration parameter causes Filebeat to override the [@metadata][beat] value with the custom index name. Normally the [@metadata][beat] value is the name of the Beat (e.g. filebeat or packetbeat).

Testing your Filebeat configuration against Logstash shows that the [@metadata][beat] value is indeed set to custom, so the Filebeat configuration is working correctly.
There may be a problem with the conditional logic used in your output configuration. I have simplified the output configuration to make it more concise:
output {
  # Remove this after you finish debugging.
  stdout { codec => rubydebug { metadata => true } }

  if [@metadata][beat] {
    # Use this output only for Beats.
    elasticsearch {
      hosts => ["http://localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  } else if [type] == "jmx" or [type] == "dbtable" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "%{[type]}-%{+YYYY.MM.dd}"
    }
  }
}
When you use a custom index with any of the Beats, you must make sure to install and customize the index template yourself (do not have Logstash do it with manage_template => true). Filebeat provides its index template in the files distributed with the download. You need to change the template line so that it applies to "custom-*" indices instead of "filebeat-*". Then install the template into Elasticsearch with:

curl -XPUT http://localhost:9200/_template/custom -d @filebeat.template.json
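The template edit itself can also be scripted. A minimal sketch in Python (the JSON below is a small stand-in for the real filebeat.template.json shipped in the Filebeat download, which carries the full mappings and settings):

```python
import json

# Stand-in for the real filebeat.template.json from the Filebeat download
# (assumption: the real file has the same top-level "template" key plus
# full mappings/settings that we leave untouched here).
stand_in = {
    "template": "filebeat-*",
    "mappings": {"_default_": {"_all": {"enabled": True}}},
}
with open("filebeat.template.json", "w") as f:
    json.dump(stand_in, f)

# Point the template at "custom-*" indices instead of "filebeat-*",
# leaving everything else in the file as-is:
with open("filebeat.template.json") as f:
    template = json.load(f)
template["template"] = "custom-*"
with open("custom.template.json", "w") as f:
    json.dump(template, f, indent=2)

print(template["template"])  # custom-*
```

Install the resulting custom.template.json under the _template/custom name (as with the curl command above) before sending any events, so the first custom-* index is created with the correct mappings.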