
elasticsearch Kibana only shows a limited amount of data - ELK stack


I parsed an Apache access log file with Logstash, and it parsed all of the logs successfully, as shown in the command prompt window. But when I open Kibana, it only shows 8 of them. Why doesn't it show all the parsed logs?

Update: I started over with fresh installs of elasticsearch-4.2.0, logstash-2.0.0 and Kibana 4. My log file, named http_access_2015-03-06_log, is being parsed and shows up in elasticsearch kopf, but none of the logs appear in Kibana.

kopf

Command prompt output: updated

.conf file: updated

P.S. The Kibana Discover tab shows all the data under _all.

A couple of things here:

- You only have one grok{} in your access-file code path.
- You are getting _grokparsefailures, so your grok { match => ["path", "G:/logstash-1.5.0/bin/tmp/(?<project>[^/_logs]+)/"] } block is not matching.
- You may be running into the file being recorded in Logstash's sincedb, so after the first run you only see records that are new since then. You need to find and delete the .sincedb, or point sincedb_path at something like /dev/null.
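If sincedb is the culprit, one way to disable it is directly in the file input; a sketch, mirroring the paths from the question (note that on Windows, NUL plays the role of /dev/null):

```
input {
  file {
    path => "G:/MIT/level_03/Project/logstash-2.0.0/bin/tmp/*_log"
    # Re-read the file from the top on every run instead of resuming
    # from the byte offset recorded in sincedb.
    sincedb_path => "NUL"        # use "/dev/null" on Linux/macOS
    start_position => "beginning"
  }
}
```

Note that start_position => "beginning" alone is not enough: it only applies to files Logstash has never seen, which is exactly why a stale sincedb entry makes old lines appear to vanish.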
Have you tried setting the time picker in the top right of the Kibana web interface to all time? In your screenshot it is set to the last 30 minutes.

Yes, I tried today, this week, and this month; that's all it shows.

OK, try deleting the index in the Settings tab, restarting Kibana and elasticsearch, and recreating the index.

I deleted the index from Kibana's Settings tab, restarted Kibana and elasticsearch, and recreated the index in Kibana. Now no data shows up in the Discover tab at all. Everything is gone.

OK, in your output you set index => "test". Did you specify that index name, test, in Kibana's Settings tab? Please double-check. Apart from that, your output settings are unnecessary: a bare elasticsearch output, elasticsearch {}, should work and give you indices like logstash-%{+YYYY.MM.dd}. Keep in mind that you need to configure Kibana accordingly.

I completely removed and reinstalled logstash, and deleted the grok { match => ["path", "G:/logstash-1.5.0/bin/tmp/(?<project>[^/_logs]+)/"] } filter. Then I parsed the file again; all the logs were parsed, and this time there were no _grokparsefailures. It still shows only 8 logs. Is there a limit set by default anywhere in the ELK stack?

Update the question with your current config and a sample of the log file, in particular the lines that are not loaded.

I've updated the question with my latest work; please refer to it.
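On the index-name point above: with a bare elasticsearch {} output, Logstash writes each event to a daily index derived from the event's @timestamp, which is why the index pattern configured in Kibana (e.g. logstash-*) has to match. A rough Python sketch of that naming scheme (the function name here is mine, not a Logstash API):

```python
from datetime import datetime, timezone

def default_index(ts: datetime) -> str:
    """Mimic Logstash's default index name, logstash-%{+YYYY.MM.dd} (UTC)."""
    return ts.astimezone(timezone.utc).strftime("logstash-%Y.%m.%d")

# An event stamped 06 Mar 2015 lands in index "logstash-2015.03.06",
# so Kibana must be pointed at an index pattern that covers it.
print(default_index(datetime(2015, 3, 6, 10, 30, tzinfo=timezone.utc)))
```

This also explains the symptom in the comments: after switching the output from index => "test" to the default, a Kibana index pattern still pointing at test will show nothing.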
input {
  file {
    path => "G:/MIT/level_03/Project/logstash-2.0.0/bin/tmp/*_log"
    #sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}

filter {
  #grok {
  #  match => ["path", "G:/logstash-1.5.0/bin/tmp/(?<project>[^/_logs]+)/"]
  #}
  if [path] =~ "access" {
    mutate { replace => { type => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "apache_error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}

output {
  elasticsearch {
    # action => "index"
    hosts => "localhost"
    # index => "test"
  }
  stdout { codec => rubydebug }
}
kibana.yml:

# Kibana is served by a back end server. This controls which port to use.
# server.port: 5601

# The host to bind the server to.
# server.host: "0.0.0.0"

# The Elasticsearch instance to use for all your queries.
# elasticsearch.url: "http://localhost:9200"

# preserve_elasticsearch_host true will send the hostname specified in `elasticsearch`. If you set it to false,
# then the host you use to connect to *this* Kibana instance will be sent.
# elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations
# and dashboards. It will create a new index if it doesn't already exist.
# kibana.index: ".kibana"

# The default application to load.
# kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic auth, this is the user credentials
# used by the Kibana server to perform maintenance on the kibana_index at startup. Your Kibana
# users will still need to authenticate with Elasticsearch (which is proxied through
# the Kibana server)
# elasticsearch.username: user
# elasticsearch.password: pass

# SSL for outgoing requests from the Kibana Server to the browser (PEM formatted)
# server.ssl.cert: /path/to/your/server.crt
# server.ssl.key: /path/to/your/server.key

# Optional setting to validate that your Elasticsearch backend uses the same key files (PEM formatted)
# elasticsearch.ssl.cert: /path/to/your/client.crt
# elasticsearch.ssl.key: /path/to/your/client.key

# If you need to provide a CA certificate for your Elasticsearch instance, put
# the path of the pem file here.
# elasticsearch.ssl.ca: /path/to/your/CA.pem

# Set to false to have a complete disregard for the validity of the SSL
# certificate.
# elasticsearch.ssl.verify: true

# Time in milliseconds to wait for elasticsearch to respond to pings, defaults to
# request_timeout setting
# elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or elasticsearch.
# This must be > 0
# elasticsearch.requestTimeout: 300000

# Time in milliseconds for Elasticsearch to wait for responses from shards.
# Set to 0 to disable.
# elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying
# elasticsearch.startupTimeout: 5000

# Set the path to where you would like the process id file to be created.
# pid.file: /var/run/kibana.pid

# If you would like to send the log output to a file you can set the path below.
# logging.dest: stdout

# Set this to true to suppress all logging output.
# logging.silent: false

# Set this to true to suppress all logging output except for error messages.
# logging.quiet: false

# Set this to true to log all events, including system usage information and all requests.
# logging.verbose: false