
Filebeat/Logstash initial ingest + ongoing workload

Tags: logstash, kibana, elastic-stack, filebeat

I'm new to the ELK stack. To keep things relatively simple: we have several FuelPHP instances, each with its own application logs. I'm trying to aggregate these logs into ELK so that they can be searched and visualized.

I have set up a Filebeat process on the servers in question and a separate server for Logstash and Elasticsearch. Everything seems to run fine until Filebeat starts a harvester for every application log file. We currently have about 6 projects, each with roughly a year of logs in this format:

/YYYY/MM/D.log
I am trying to ingest all of these files from the beginning, even though I mostly only care about the most recent ones, but I keep hitting the "too many open files" problem. With about 6 projects and roughly a year of daily logs each, that is on the order of a couple of thousand files, and Filebeat opens a harvester for each of them. The registry makes it worse by holding on to state: once files are closed due to inactivity, they are all reopened again as soon as I adjust the config and restart (see the sketch of harvester-limiting options after the config below).

Here is a snippet of filebeat.yml:

# List of prospectors to fetch data.
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

# Type of the files. Based on this the way the file is read is decided.
# The different types cannot be mixed in one prospector
#
# Possible options are:
# * log: Reads every line of the log file (default)
# * stdin: Reads the standard in

#------------------------------ Log prospector --------------------------------
- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  # To fetch all ".log" files from a specific level of subdirectories
  # /var/log/*/*.log can be used.
  # For each file found under this path, a harvester is started.
  # Make sure no file is defined twice as this can lead to unexpected behaviour.
  paths:
    - /srv/**/production/fuel/app/logs/**/**/*.php

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list. The include_lines is called before
  # exclude_lines. By default, no lines are dropped.
  exclude_lines: ['^<\?php ']

  # Set to true to store the additional fields as top level fields instead
  # of under the "fields" sub-dictionary. In case of name conflicts with the
  # fields added by Filebeat itself, the custom fields overwrite the default
  # fields.
  fields_under_root: false

  # The regexp pattern that has to be matched. This pattern matches lines that start with an upper-case word followed by ' - '.
  multiline.pattern: '^[A-Z]+ - '

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched, based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
  multiline.match: after

  # Defines if prospectors is enabled
  enabled: true

output.logstash:
  # Boolean flag to enable or disable the output module.
  enabled: true

  # The Logstash hosts
  hosts: ["<IP>:5044"]

  # Number of workers per Logstash host.
  worker: 4

  # Set gzip compression level.
  compression_level: 3
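
For the "too many open files" side of this, the snippet below is a minimal sketch of prospector-level options that are commonly combined to limit how many harvesters (and therefore open file handles) exist at once. It assumes Filebeat 5.x, which the prospector-style config above suggests; ignore_older, close_inactive, clean_inactive, harvester_limit and scan_frequency are documented Filebeat settings, but the values here are illustrative only and not taken from the question.

- input_type: log
  paths:
    - /srv/**/production/fuel/app/logs/**/**/*.php

  # Skip files whose last modification is older than 72 hours; no
  # harvester is started for them at all.
  ignore_older: 72h

  # Release the file handle once a file has produced no new lines for
  # 5 minutes; the registry keeps the offset so reading can resume.
  close_inactive: 5m

  # Drop registry state for files not modified within a week. Must be
  # greater than ignore_older + scan_frequency.
  clean_inactive: 168h

  # Cap the number of harvesters this prospector runs in parallel;
  # 0 (the default) means unlimited.
  harvester_limit: 16

  # How often the prospector scans the paths for new files.
  scan_frequency: 30s

Note that harvester_limit only helps in combination with the close_* settings, since a slot frees up only when a harvester closes; and for a one-off ingest of the historical backlog it may still be necessary to raise the process's open-file limit (ulimit -n) temporarily.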