Deploying an ML model on AWS with Python

Tags: python, amazon-web-services, amazon-s3

I have built an ML model locally that needs to be deployed on S3, and then I want to create a Lambda function to invoke it.

The problem is that I am running into a lot of errors. I have tried reading the documentation and following some notebooks, but I cannot figure out how to make my model work.

Here is the code:

from sagemaker import get_execution_role
import sagemaker
import argparse
import numpy as np
import os
import pandas as pd
from sklearn.externals import joblib
pd.options.mode.chained_assignment = None
import datetime as dt
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
import io
from sagemaker.sklearn.estimator import SKLearn
import s3fs


prefix = 'FP'
sagemaker_session = sagemaker.Session()
role = get_execution_role()

data = pd.read_csv("df.csv", header = 0, usecols = ["col1", "col2"])


os.makedirs('./data_DM', exist_ok=True)
data.to_csv('./data_DM/orders.csv')

WORK_DIRECTORY = 'data_DM'

train_input = sagemaker_session.upload_data(WORK_DIRECTORY, key_prefix="{}/{}".format(prefix, WORK_DIRECTORY) )


script_path = './data_DM/My_script.py'

sklearn = SKLearn(
    entry_point=script_path,
    train_instance_type="ml.m5.2xlarge",
    role=role,
    sagemaker_session=sagemaker_session)

sklearn.fit({'train': train_input})
And here is My_script.py:

import argparse
import numpy as np
import os
import pandas as pd
from sklearn.externals import joblib
pd.options.mode.chained_assignment = None
import datetime as dt
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
import io
from sklearn import tree
import boto3, re, sys, math, json, urllib.request

def cleaning(data):
    # ... lots of cleaning ...
    return cleaned_data


if __name__ =='__main__':
    
    bucket_name = 'ciao'
    file_name = 'df.csv'


    data_location = 's3://{}/{}'.format(bucket_name, file_name)

    parser = argparse.ArgumentParser()

    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])

    args = parser.parse_args()

    data = pd.read_csv(data_location, header = 0, usecols = ["col1", "col2"])

    data_ml = cleaning(data) 

    y = data_ml.loc[:,"event"]
    X = data_ml.loc[:, data_ml.columns != 'event']

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
    
    

    model = tree.DecisionTreeClassifier(class_weight="balanced", random_state=42)  # note: n_estimators is a RandomForestClassifier parameter, not a DecisionTreeClassifier one
    model.fit(X_train, y_train)

    #Save the model to the location specified by args.model_dir
    joblib.dump(model, os.path.join(args.model_dir, "model.joblib"))



def model_fn(model_dir):
    model = joblib.load(os.path.join(model_dir, "model.joblib"))
    return model


def input_fn(request_body, request_content_type):
    if request_content_type == 'text/csv':
        samples = []
        for r in request_body.split('|'):
            samples.append(list(map(float,r.split(','))))
        return np.array(samples)
    else:
        raise ValueError("This model only supports text/csv input")

def predict_fn(input_data, model):
    return model.predict_proba(cleaning(input_data))

def output_fn(prediction, content_type):
    return ' | '.join([INDEX_TO_LABEL[t] for t in prediction])

Now, the error looks like this:

/miniconda3/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Traceback (most recent call last):
File "/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/ml/code/Failure_Pred.py", line 206, in
"weight", "userPrice", "amount", "nParcel"])
File "/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 685, in parser_f
return _read(filepath_or_buffer, kwds)
File "/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 440, in _read
filepath_or_buffer, encoding, compression
File "/miniconda3/lib/python3.7/site-packages/pandas/io/common.py", line 206, in get_filepath_or_buffer
from pandas.io import s3
File "/miniconda3/lib/python3.7/site-packages/pandas/io/s3.py", line 10, in
"s3fs", extra="The s3fs package is required to handle s3 files."
File "/miniconda3/lib/python3.7/site-packages/pandas/compat/_optional.py", line 93, in import_optional_dependency
raise ImportError(message.format(name=name, extra=extra)) from None
ImportError: Missing optional dependency 's3fs'. The s3fs package is required to handle s3 files. Use pip or conda to install s3fs.
2020-07-09 12:13:27,645 sagemaker-containers ERROR ExecuteUserScriptError:
Command "/miniconda3/bin/python -m Failure_Pred"

2020-07-09 12:13:36 Uploading - Uploading generated training model
2020-07-09 12:13:36 Failed - Training job failed

Error for Training job sagemaker-scikit-learn-2020-07-09-12-10-17-446: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command "/miniconda3/bin/python -m Failure_Pred"
It looks like I don't have s3fs installed, but I have installed it with both pip install and conda install.

How can I solve this?

Thanks

EDIT 07/10: add the train channel key to the local read path, i.e. replace

opt/ml/input/data/orders.csv

with

opt/ml/input/data/train/orders.csv



You get an error because your

data = pd.read_csv(data_location, ...)

tries to read the data from S3. Try replacing it with

data = pd.read_csv('opt/ml/input/data/orders.csv', ...)

If you use SageMaker, you do not need to read from S3 in your training script: SageMaker copies the data from S3 to the EC2 instance for you.

Instead, you only need to read from the local path opt/ml/input/data/&lt;channel&gt;, where &lt;channel&gt; is the key used to name the input in the training call model.fit({'&lt;channel&gt;': 's3://my-data'}). Note that "local" here means local to the remote, ephemeral SageMaker Training EC2 instance, not local to the SageMaker Notebook EC2 instance you may be using for development and orchestration.
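To make the channel-to-path mapping concrete, here is a minimal sketch (my own illustration, not part of the SageMaker SDK) that resolves the local directory for a named channel, preferring the SM_CHANNEL_&lt;NAME&gt; environment variable when the container provides it:

```python
import os

# Hypothetical helper: map a channel name to its local directory inside the
# training container. SageMaker mounts each channel under
# /opt/ml/input/data/<channel> and (in framework containers such as the
# sklearn one) also exposes the same path in an SM_CHANNEL_<NAME> variable.
def channel_path(channel_name):
    env_var = "SM_CHANNEL_" + channel_name.upper()
    return os.environ.get(env_var, "/opt/ml/input/data/" + channel_name)
```

With sklearn.fit({'train': train_input}), the script would then read os.path.join(channel_path('train'), 'orders.csv') instead of an s3:// URL.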


The same goes for copying artifacts to S3: you do not need to do it yourself. Just write your artifacts to the local path opt/ml/model and the service will copy them back to S3. Some AWS-provided containers (such as the sklearn container) also expose the input data paths and the artifact path in environment variables (SM_CHANNEL_&lt;NAME&gt;, SM_MODEL_DIR), which you can optionally use to avoid hard-coding them in your code. You can take inspiration from that and adapt it to your own case. You do not need s3fs.

Comments:

"Have you tried importing s3fs in My_script.py?"

"Already tried that; it doesn't work: 'No module named s3fs'."

"Hey Olivier, thanks for your help. Unfortunately it raises a 'file not found' error. I also tried different paths, but none of them was correct."

"Sorry, my path was wrong; it needs to include the channel name: pd.read_csv('opt/ml/input/data/train/orders.csv', ...). Basically, if you launch the job with sklearn.fit({'mylittlechannel': train_input}), then in your script you have to read from opt/ml/input/data/mylittlechannel/…"

"By following the random forest demo I was able to solve the problem, even reading from S3. Thanks!"