
Amazon Web Services: How to send Linux log files directly to S3 (bypassing CloudWatch)

Tags: amazon-web-services, amazon-s3, amazon-ec2

How can I send Linux log files directly to S3 (bypassing CloudWatch)?


One example is to use logrotate, as described in the linked article. Is there a better technique now? I have confirmed that the CloudWatch agent cannot be configured to send to S3 instead of CloudWatch.

Which kind of logs do you want to upload to S3?

I think you should skip Amazon CloudWatch and upload directly to an Amazon S3 bucket.

  • Then you only need to write a custom script (Bash, Python, or anything else) to retrieve your Linux logs. For example, I would retrieve memory usage, as in the sketch after this list.
  • After that, generate and use an IAM key pair that can talk to Amazon S3 with sufficient actions (describe/upload).
  • Finally, you can use crontab or any other Linux scheduler to automatically trigger the API command that uploads the objects to the Amazon S3 bucket, for example:
    */5 * * * * aws s3 cp /home/log/ s3://<YOUR-BUCKET>/<YOUR-BUCKET-PATH>/ --recursive --exclude "*" --include "*.log"
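
As a concrete version of the memory-usage example mentioned in the first bullet, here is a minimal sketch. The script name, bucket, and paths are placeholders, and it assumes the AWS CLI is installed and the instance has credentials or an IAM role allowing s3:PutObject on the bucket:

    #!/bin/bash
    # collect-and-upload.sh -- hypothetical helper: capture memory usage and push logs to S3.
    set -euo pipefail

    BUCKET="s3://<YOUR-BUCKET>/<YOUR-BUCKET-PATH>"    # placeholder bucket/prefix
    LOG_DIR="/home/log"

    # Retrieve current memory usage and append it to a local log file.
    free -m >> "${LOG_DIR}/memory-usage.log"

    # Upload every .log file, grouped under the host name so instances stay separate.
    aws s3 cp "${LOG_DIR}/" "${BUCKET}/$(hostname)/" \
        --recursive --exclude "*" --include "*.log"

Triggered from the */5 crontab entry above, this would push a fresh copy of the logs every five minutes.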
    
Thank you very much for your reply.

First, I want to export the following logs directly into an S3 bucket, bypassing CloudWatch (one way to push them is sketched after the list):

    /var/log/audit/audit.log
    /var/log/dmesg
    /var/log/maillog
    /var/log/messages
    /var/log/secure
    /var/log/lynx/lynx.log
    /var/log/lynx/cmx_auth.log
    /var/log/lynx/lynxaudit.log
    /var/log/lynx/lynxretention.log
    /var/log/tomcat/catalina.log
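
One way to push exactly these files is sketched below; it assumes s3cmd is already configured with working credentials in /root/logrotate-s3cmd.config (the config file from the steps further down) and reuses the bucket name from the logrotate example:

    #!/bin/bash
    # push-logs.sh -- hypothetical sketch: copy the log files listed above to S3 via s3cmd.
    # Assumes /root/logrotate-s3cmd.config holds working credentials and the script runs as root.
    set -u

    CONFIG=/root/logrotate-s3cmd.config
    DEST="s3://cmx-doc-eu-west-2-arv/$(hostname)"

    for f in /var/log/audit/audit.log /var/log/dmesg /var/log/maillog \
             /var/log/messages /var/log/secure /var/log/lynx/*.log \
             /var/log/tomcat/catalina.log; do
        [ -r "$f" ] || continue                                  # skip missing/unreadable files
        /usr/bin/s3cmd --config="$CONFIG" put "$f" "$DEST$f"     # keyed by host + original path
    done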
    
While searching for different approaches, I found something like using s3cmd to export the logs directly into the S3 bucket, like this:

    1. yum install s3cmd
    2. sudo s3cmd --configure --config=/root/logrotate-s3cmd.config   (it prompts for the information below)
       Access Key: your-access-key
       Secret Key: your-secret-key
       Default Region [US]: ENTER
       S3 Endpoint [s3.amazonaws.com]: nyc3.digitaloceanspaces.com
       DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.nyc3.digitaloceanspaces.com
       Encryption password: ENTER, or specify a password to encrypt
       Path to GPG program [/usr/bin/gpg]: ENTER
       Use HTTPS protocol [Yes]: ENTER
       HTTP Proxy server name: ENTER, or fill out your proxy information
       At this point, s3cmd will summarize your responses, then ask you to test the connection. Press y then ENTER to start the test:
    
    Output
    Test access with supplied credentials? [Y/n] y
    Please wait, attempting to list all buckets...
    Success. Your access key and secret key worked fine :-)
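
For reference, a successful --configure run is expected to write the credentials and endpoint answers into the file passed via --config; a rough sketch of the relevant entries, with placeholder values:

    [default]
    access_key = YOUR-ACCESS-KEY
    secret_key = YOUR-SECRET-KEY
    host_base = nyc3.digitaloceanspaces.com
    host_bucket = %(bucket)s.nyc3.digitaloceanspaces.com
    use_https = True

If the access_key / secret_key entries are missing from that file, s3cmd calls pointed at it have no credentials to use.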
    
    3. Edit /etc/logrotate.d/tomcat (for example with vi):
    
    /var/log/tomcat/catalina.out {
        copytruncate
        rotate 3
        daily
        missingok
        compress
        create 0644 tomcat tomcat
        dateext
        dateformat -%Y-%m-%d-%s
        lastaction
            HOSTNAME=`hostname`
            /usr/bin/s3cmd sync /var/log/tomcat/catalina.log*.gz "s3://cmx-doc-eu-west-2-arv/$HOSTNAME/"
            /usr/bin/s3cmd sync /var/log/tomcat/catalina.out*.gz "s3://cmx-doc-eu-west-2-arv/$HOSTNAME/tomcat/"
        endscript
    }
    
    4. After adding the above, I immediately tested whether it works:
    
    sudo logrotate /etc/logrotate.conf --verbose --force
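
If the forced rotation does not upload anything, a quick way to narrow things down is to exercise the same config file and sync command by hand (a sketch, reusing the bucket and config path from above):

    # Check that the custom config file authenticates against the bucket.
    s3cmd --config=/root/logrotate-s3cmd.config ls s3://cmx-doc-eu-west-2-arv/

    # Then run the sync from the lastaction block manually, passing the config explicitly.
    /usr/bin/s3cmd --config=/root/logrotate-s3cmd.config sync /var/log/tomcat/catalina.log*.gz "s3://cmx-doc-eu-west-2-arv/$(hostname)/"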
    
But the weird part is that step 2 did not work: the logrotate-s3cmd.config file does not contain any access key or secret key. That is where I am stuck.
