Amazon S3: how do I link an S3 bucket to a SageMaker notebook?


I am trying to link my S3 bucket to a notebook instance, but I am not able to.

Here is what I have so far:

from sagemaker import get_execution_role

role = get_execution_role
bucket = 'atwinebankloadrisk'
datalocation = 'atwinebankloadrisk'

data_location = 's3://{}/'.format(bucket)
output_location = 's3://{}/'.format(bucket)
To pull the data from the bucket:

df_test = pd.read_csv(data_location/'application_test.csv')
df_train = pd.read_csv('./application_train.csv')
df_bureau = pd.read_csv('./bureau_balance.csv')
However, I keep getting errors and can't move forward. I haven't found an answer that helps much.


PS: I am new to AWS.

You are trying to read files from S3 with pandas. Pandas can read files from the local disk, but it cannot read directly from S3. Download the files first, and then read them with pandas:

import boto3
import botocore

BUCKET_NAME = 'my-bucket' # replace with your bucket name
KEY = 'my_image_in_s3.jpg' # replace with your object key

s3 = boto3.resource('s3')

try:
    # download as local file
    s3.Bucket(BUCKET_NAME).download_file(KEY, 'my_local_image.jpg')

    # OR read directly to memory as bytes:
    # bytes = s3.Object(BUCKET_NAME, KEY).get()['Body'].read() 
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == "404":
        print("The object does not exist.")
    else:
        raise
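After the download succeeds, the read is plain local file I/O. A minimal sketch, reusing the bucket and csv names from the question (it assumes application_test.csv sits at the bucket root):

import boto3
import pandas as pd

s3 = boto3.resource('s3')

# download the csv from the question's bucket, then read the local copy
s3.Bucket('atwinebankloadrisk').download_file('application_test.csv', 'application_test.csv')
df_test = pd.read_csv('application_test.csv')
print(df_test.head())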
You can read S3 files directly with pandas. The code below is taken from:
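A minimal sketch of that direct read, assuming the s3fs package is installed (pandas delegates s3:// URLs to it) and borrowing the question's bucket and file names:

import pandas as pd

# credentials are picked up from the notebook's execution role
df_test = pd.read_csv('s3://atwinebankloadrisk/application_test.csv')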


You can use the sample code below to load S3 data into an AWS SageMaker notebook. Make sure the Amazon SageMaker role has a policy attached that grants it access to S3.

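Before loading anything, it can help to confirm the role can actually reach the bucket. A minimal check, assuming boto3 and substituting your own bucket name for the question's:

import boto3

s3_client = boto3.client('s3')

# an AccessDenied error here means the execution role lacks an S3 policy
response = s3_client.list_objects_v2(Bucket='atwinebankloadrisk', MaxKeys=5)
for obj in response.get('Contents', []):
    print(obj['Key'])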


With pandas 1.0.5, reading a csv from S3 is straightforward once you have granted the notebook instance access:

df = pd.read_csv('s3://<bucket>/<key>.csv')
During notebook setup I attached a SageMakerFullAccess policy to the notebook instance, which allows it to access the S3 bucket. You can also do this through the IAM management console.
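If you prefer to attach a policy programmatically rather than through the console, a hedged sketch using boto3's IAM client (it assumes get_execution_role returns the role's ARN, as it does on a notebook instance, and that the caller is allowed to modify IAM):

import boto3
from sagemaker import get_execution_role

role_name = get_execution_role().split('/')[-1]  # role name is the last ARN segment

# attach the AWS-managed read-only S3 policy to the execution role
boto3.client('iam').attach_role_policy(
    RoleName=role_name,
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess',
)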

If you need credentials, there are three ways to provide them:

  • the aws_access_key_id, aws_secret_access_key, and aws_session_token settings, via environment variables (a sketch follows this list)
  • a configuration file such as ~/.aws/credentials
  • for nodes on EC2, the IAM metadata provider
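A minimal sketch of the first option, with placeholder values (boto3 reads the uppercase environment-variable forms of these settings):

import os

# fake placeholders - substitute real credentials, or rely on the instance role instead
os.environ['AWS_ACCESS_KEY_ID'] = 'AKIA...'
os.environ['AWS_SECRET_ACCESS_KEY'] = '...'
os.environ['AWS_SESSION_TOKEN'] = '...'  # only needed for temporary credentials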

  • You can pass the S3 location to your training job. I've never seen it done from the notebook itself.
  • If you want the S3 data in your notebook, download it through the boto3 S3 client instead.
  • I want to read an S3 bucket from a SageMaker notebook instance without having to download it to disk. Can I get some help?
  • @atwinemugume: bytes = s3.Object(bucket, key).get()['Body'].read()
  • How does this not have more votes!!

pandas uses s3fs to handle S3 files. Source:
import os
import pandas as pd
from s3fs.core import S3FileSystem

os.environ['AWS_CONFIG_FILE'] = 'aws_config.ini'  # point botocore at a custom config file

s3 = S3FileSystem(anon=False)  # anon=False: authenticate instead of anonymous access
key = 'path/to/your-csv.csv'   # S3 keys use forward slashes
bucket = 'your-bucket-name'

df = pd.read_csv(s3.open('{}/{}'.format(bucket, key), mode='rb'))

import boto3
import botocore 
import pandas as pd 
from sagemaker import get_execution_role 

role = get_execution_role() 

bucket = 'Your_bucket_name' 
data_key = 'your_data_file.csv'
data_location = 's3://{}/{}'.format(bucket, data_key) 

pd.read_csv(data_location) 

import boto3

# files are referred to as objects in S3
# a file name is referred to as a key in S3

def write_to_s3(filename, bucket_name, key):
    with open(filename, 'rb') as f:  # read the local file in binary mode
        return boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_fileobj(f)

# simply call write_to_s3 with the required arguments
write_to_s3('file_name.csv',
            bucket_name,   # a variable holding your bucket name
            'file_name.csv')

To read a csv that is already in S3 directly with pandas (s3fs must be installed):

df = pd.read_csv('s3://<bucket-name>/<filepath>.csv')