
Elasticsearch failed to flush the buffer (fluentd)


I am getting these errors. Data is loaded into Elasticsearch, but some records are missing in Kibana. I can see this in the fluentd logs in Kubernetes:

2021-04-26 15:58:10 +0000 [warn]: #0 failed to flush the buffer. retry_time=29 next_retry_seconds=2021-04-26 15:58:43 +0000 chunk="5c0e21cad29fc91298a9d881c6bd9873" error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="Elasticsearch returned errors, retrying. Add '@log_level debug' to your config to see the full response"
  2021-04-26 15:58:10 +0000 [warn]: #0 suppressed same stacktrace
This is my fluentd conf:

  fluent.conf: |
    <match fluent.**>
        # this tells fluentd to not output its log on stdout
        @type null
    </match>
    # here we read the logs from Docker's containers and parse them
    <source>
      @type tail
      path /var/log/containers/*nginx-ingress-controller*.log,/var/log/containers/*kong*.log
      pos_file /var/log/nginx-containers.log.pos
      @label @NGINX
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
    <source>
      @type tail
      path /var/log/containers/*.log
      exclude_path ["/var/log/containers/*nginx-ingress-controller*.log", "/var/log/containers/*kong*.log"]
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>
    # we use the kubernetes metadata plugin to add metadata to the logs
    <filter kubernetes.**>
        @type kubernetes_metadata
    </filter>
    <label @NGINX>
        <filter kubernetes.**>
            @type kubernetes_metadata
        </filter>
        <filter kubernetes.**>
          @type parser
          key_name log
          reserve_data true
          <parse>
            @type regexp
            expression /^(?<remote>[^ ]*(?: [^ ]* [^ ]*)?) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)"(?: (?<request_length>[^ ]*) (?<request_time>[^ ]*) (?<proxy_upstream_name>[^ ]*(?: \[[^ ]*\])*) (?<upstream_addr>[^ ]*(?:, [^ ]*)*) (?<upstream_response_length>[^ ]*(?:, [^ ]*)*) (?<upstream_response_time>[^ ]*(?:, [^ ]*)*) (?<upstream_status>[^ ]*(?:, [^ ]*)*) (?<req_id>[^ ]*))?)?$/
            time_format %d/%b/%Y:%H:%M:%S %z
          </parse>
        </filter>
        <match kubernetes.**>
            @type elasticsearch
            include_tag_key true
            host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
            port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
            scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'https'}"
            ssl_verify false
            reload_connections false
            logstash_prefix k8-nginx
            logstash_format true
            <buffer>
                 flush_mode interval
                 retry_type exponential_backoff
                 flush_thread_count 2
                 flush_interval 5s
                 retry_forever true
                 retry_max_interval 30
                 chunk_limit_size 2M
                 queue_limit_length 32
                 overflow_action block
            </buffer>
        </match>
    </label>
    # we send the logs to Elasticsearch
    <match kubernetes.**>
        @type elasticsearch
        include_tag_key true
        host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
        port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
        scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'https'}"
        ssl_verify false
        reload_connections false
        logstash_prefix k8-logstash
        logstash_format true
        <buffer>
               flush_mode interval
               retry_type exponential_backoff
               flush_thread_count 2
               flush_interval 5s
               retry_forever true
               retry_max_interval 30
               chunk_limit_size 2M
               queue_limit_length 32
               overflow_action block
        </buffer>
    </match>
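
The warning itself points at the next diagnostic step: adding @log_level debug to the Elasticsearch output makes fluentd log the full error response from the cluster instead of the short retry message. A minimal sketch of where that directive would go, reusing the second match block from the config above (abbreviated to the relevant lines):

    <match kubernetes.**>
      @type elasticsearch
      # print the full Elasticsearch error response instead of the short warning
      @log_level debug
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      logstash_format true
    </match>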

Increase request_timeout, and also check that your ES is up and running and that no pods restarted when you received the errors. I am checking my old config file and will share it.

Could you please share it? Did you also face buffer issues? My ES is a managed service in AWS.
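
For reference, a minimal sketch of the suggested change, assuming the fluent-plugin-elasticsearch output used in the question. request_timeout defaults to 5s in that plugin; the 30s value below is only an example, chosen to give slow bulk requests more time to complete before the flush is treated as failed and retried:

    <match kubernetes.**>
      @type elasticsearch
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      # give slow bulk requests more time before the flush counts as failed
      request_timeout 30s
      logstash_format true
      <buffer>
        flush_interval 5s
        retry_type exponential_backoff
        retry_max_interval 30
      </buffer>
    </match>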