Python: getting boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request


I am trying to use boto to download the log files present in an S3 bucket. The reason for not using s3cmd or some other tool is that I don't want my code to depend on external software, so that other people can run my code directly without having to install extra dependencies.

I get the stack trace below. I have looked at various related posts, but none of them solved my problem.

Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/fabric/main.py", line 743, in main
    *args, **kwargs
  File "/Library/Python/2.7/site-packages/fabric/tasks.py", line 405, in execute
    results['<local-only>'] = task.run(*args, **new_kwargs)
  File "/Library/Python/2.7/site-packages/fabric/tasks.py", line 171, in run
    return self.wrapped(*args, **kwargs)
  File "/pgbadger/pgbadger_html.py", line 86, in dlogs
    s3 = S3()
  File "/pgbadger/pgbadger_html.py", line 46, in __init__
    self.bucket = self._get_bucket(self.log_bucket)
  File "/pgbadger/pgbadger_html.py", line 65, in _get_bucket
    return self.s3_conn.get_bucket(bucket)
  File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 471, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 518, in head_bucket
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
Problem solved: in the S3 log-bucket setting I had given the entire path inside my bucket. My bucket contains multiple folders, so I passed the whole path to them, but boto does not expect that. get_bucket() should be given only the bucket name, not the full path.
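The distinction can be sketched with a small helper (a hypothetical name, plain Python, not part of boto) that separates the bucket name from the folder part of such a string; get_bucket() wants only the first piece, and the rest can serve as a key prefix when listing:

```python
def split_bucket_path(s3_path):
    # 'Bucket/Inner Folder 1/Inner Folder 2' becomes
    # ('Bucket', 'Inner Folder 1/Inner Folder 2'). Only the first
    # component is a valid argument for get_bucket().
    bucket, _, prefix = s3_path.strip("/").partition("/")
    return bucket, prefix

print(split_bucket_path("Bucket/Inner Folder 1/Inner Folder 2/"))
# -> ('Bucket', 'Inner Folder 1/Inner Folder 2')
```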

Previously I was doing -->

log_bucket = Bucket/Inner Folder 1/Inner Folder 2/.../

which was wrong. The right way of doing it -->

log_bucket = Bucket
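A quick sanity check along these lines (a hypothetical helper, not boto API) can catch a path being passed where a bucket name is expected before get_bucket() is ever called:

```python
def looks_like_bucket_name(name):
    # Real S3 bucket names are 3-63 characters long and never contain
    # '/'. A value such as 'Bucket/Inner Folder 1/' is a path, not a
    # bucket name, and passing it to get_bucket() fails with a
    # 400-style error.
    return "/" not in name and 3 <= len(name) <= 63

print(looks_like_bucket_name("my-log-bucket"))           # -> True
print(looks_like_bucket_name("Bucket/Inner Folder 1/"))  # -> False
```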

import boto.s3

from fabric.api import task
from fabric.api import env

S3_LOG_BUCKET = 'BUCKET-NAME'          # placeholder: bucket name only, no folder path
AWS_ACCESS_KEY_ID = 'ACCESS-KEY'       # placeholder: your AWS access key
AWS_SECRET_ACCESS_KEY = 'SECRET-KEY'   # placeholder: your AWS secret key


class S3(object):
    s3_conn = None
    log_bucket = S3_LOG_BUCKET
    region = 'REGION-NAME'           # placeholder, e.g. 'us-west-2'
    bucket = None
    env.host_string = 'REGION-NAME'  # placeholder: same region name

    def __init__(self):
        self._s3_connect()
        self.bucket = self._get_bucket(self.log_bucket)

    def _s3_connect(self):
        if not self.s3_conn:
            self.s3_conn = boto.s3.connect_to_region(
                self.region,
                aws_access_key_id=AWS_ACCESS_KEY_ID,
                aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
            )
        if not self.s3_conn:
            raise ValueError('Invalid Region Name: {}'.format(self.region))

    def download_s3_logs(self):
        for l in self.bucket.list():
            key_string = str(l.key)
            l.get_contents_to_filename("/tempLogFiles/" + key_string)
            print(l.key)

    def _get_bucket(self, bucket):
        return self.s3_conn.get_bucket(bucket)


@task
def dlogs():
    s3 = S3()
    s3.download_s3_logs()
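One more pitfall in download_s3_logs() above: once the bucket has folders, keys contain slashes, so writing to "/tempLogFiles/" + key_string points at nested directories that may not exist yet. A sketch of a safer destination builder (a hypothetical helper, standard library only):

```python
import os

def local_path_for_key(base_dir, key):
    # Map an S3 key such as 'Inner Folder 1/file.log' onto a path under
    # base_dir, creating any intermediate directories so that
    # get_contents_to_filename() has somewhere to write.
    dest = os.path.join(base_dir, *key.split("/"))
    parent = os.path.dirname(dest)
    if parent and not os.path.isdir(parent):
        os.makedirs(parent)
    return dest
```

Inside download_s3_logs() the call would then read l.get_contents_to_filename(local_path_for_key("/tempLogFiles", key_string)).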