Google Cloud Platform: using log severity with Google Compute Engine and the Cloud Logging agent


I want to use Cloud Logging with a Linux (Debian) VM running on Compute Engine.

The Compute Engine instance is a debian-9 n2-standard-4 machine.

I have installed the Cloud Logging agent following the documented installation steps.

According to the documentation, log severity can be used if the log line is a serialized JSON object and the option detect_json is set to true.

So I logged the following, but unfortunately I don't get any severity in GCP:

$ logger '{"severity":"ERROR","message":"This is an error"}'

But I would expect something like this:
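Roughly an entry like the following in the Logs Explorer (an illustrative sketch of the LogEntry, not the actual screenshot; IDs and timestamps elided):

{
  "jsonPayload": {
    "message": "This is an error"
  },
  "resource": {
    "type": "gce_instance"
  },
  "severity": "ERROR",
  "timestamp": "..."
}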

I don't mind whether the log entry ends up as textPayload or jsonPayload.

The file /etc/google-fluentd/google-fluentd.conf, with detect_json enabled:

$ cat /etc/google-fluentd/google-fluentd.conf 
# Master configuration file for google-fluentd

# Include any configuration files in the config.d directory.
#
# An example "catch-all" configuration can be found at
# https://github.com/GoogleCloudPlatform/fluentd-catch-all-config
@include config.d/*.conf

# Prometheus monitoring.
<source>
  @type prometheus
  port 24231
</source>
<source>
  @type prometheus_monitor
</source>

# Do not collect fluentd's own logs to avoid infinite loops.
<match fluent.**>
  @type null
</match>

# Add a unique insertId to each log entry that doesn't already have it.
# This helps guarantee the order and prevent log duplication.
<filter **>
  @type add_insert_ids
</filter>

# Configure all sources to output to Google Cloud Logging
<match **>
  @type google_cloud
  buffer_type file
  buffer_path /var/log/google-fluentd/buffers
  # Set the chunk limit conservatively to avoid exceeding the recommended
  # chunk size of 5MB per write request.
  buffer_chunk_limit 512KB
  # Flush logs every 5 seconds, even if the buffer is not full.
  flush_interval 5s
  # Enforce some limit on the number of retries.
  disable_retry_limit false
  # After 3 retries, a given chunk will be discarded.
  retry_limit 3
  # Wait 10 seconds before the first retry. The wait interval will be doubled on
  # each following retry (20s, 40s...) until it hits the retry limit.
  retry_wait 10
  # Never wait longer than 5 minutes between retries. If the wait interval
  # reaches this limit, the exponentiation stops.
  # Given the default config, this limit should never be reached, but if
  # retry_limit and retry_wait are customized, this limit might take effect.
  max_retry_wait 300
  # Use multiple threads for processing.
  num_threads 8
  # Use the gRPC transport.
  use_grpc true
  # If a request is a mix of valid log entries and invalid ones, ingest the
  # valid ones and drop the invalid ones instead of dropping everything.
  partial_success true
  # Enable monitoring via Prometheus integration.
  enable_monitoring true
  monitoring_type opencensus
  detect_json true
</match>
What am I missing?


Note: I'm aware of a workaround, but it's not ideal because it logs everything under the "global" resource type rather than under my VM.

logger writes through syslog, and the agent's syslog source "parses the timestamp, but still collects the entire line as 'message'", as described in /etc/google-fluentd/config.d/syslog.conf (shown at the end of this answer).
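For illustration, the logger command from the question ends up in /var/log/syslog as a line of roughly this shape (hostname and tag are examples):

Jun 15 10:00:00 my-instance myuser: {"severity":"ERROR","message":"This is an error"}

Because the entire line, timestamp and all, is collected as 'message', the payload is not a bare serialized JSON object, so detect_json has nothing to parse.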

In your case, log severity can be used with JSON-formatted logs as follows.

Here are our findings:

$ echo '{"severity":"ERROR","message":"This is an error"}' > /tmp/test-structured-log.log
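Note that the agent does not tail /tmp/test-structured-log.log out of the box; it needs its own tail source with a JSON-friendly parser (a sketch of such a source is shown after syslog.conf below).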

$ cat /etc/google-fluentd/google-fluentd.conf
# (identical to the google-fluentd.conf shown in the question, with detect_json true)
$ cat /etc/google-fluentd/config.d/syslog.conf
<source>
  @type tail

  # Parse the timestamp, but still collect the entire line as 'message'
  format /^(?<message>(?<time>[^ ]*\s*[^ ]* [^ ]*) .*)$/

  path /var/log/syslog
  pos_file /var/lib/google-fluentd/pos/syslog.pos
  read_from_head true
  tag syslog
</source>
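
To pick up the structured test file, a tail source for it can be added. The following is a minimal sketch, assuming a hypothetical file /etc/google-fluentd/config.d/test-structured-log.conf (the pos_file path and tag are placeholders as well). format json parses each line directly into structured fields; format none would instead leave the raw line in 'message' for the output plugin's detect_json to parse:

$ cat /etc/google-fluentd/config.d/test-structured-log.conf
<source>
  @type tail

  # Parse each line as JSON so that 'severity' and 'message' reach
  # Cloud Logging as structured fields.
  format json

  path /tmp/test-structured-log.log
  pos_file /var/lib/google-fluentd/pos/test-structured-log.pos
  read_from_head true
  tag structured-log
</source>

After restarting the agent and appending the JSON line again, the entry should show up with severity ERROR under the VM's gce_instance resource:

$ sudo service google-fluentd restart
$ echo '{"severity":"ERROR","message":"This is an error"}' >> /tmp/test-structured-log.log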