
Python: How to use multiple inputs for a custom TensorFlow model hosted on AWS SageMaker


I have a trained TensorFlow model that uses two inputs to make a prediction. I have successfully set up and deployed the model on AWS SageMaker:

from sagemaker.tensorflow.model import TensorFlowModel

sagemaker_model = TensorFlowModel(model_data='s3://' + sagemaker_session.default_bucket()
                                             + '/R2-model/R2-model.tar.gz',
                                  role=role,
                                  framework_version='1.12',
                                  py_version='py2',
                                  entry_point='train.py')

predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')

predictor.predict([data_scaled_1.to_csv(),
                   data_scaled_2.to_csv()])

I always get an error. I could use an AWS Lambda function, but I don't see any documentation on specifying multiple inputs to a deployed model. Does anyone know how to do this?

You may need to customize the inference functions loaded in the endpoint. SageMaker TensorFlow deployment offers two options:

  • The Python-based endpoint, which is the default; check whether modifying input_fn can accommodate your inference scheme (a minimal sketch follows this list)
  • The TensorFlow Serving endpoint
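
If you stay with the default (Python-based) endpoint, the sketch below illustrates what such a customized input_fn could look like. This is an illustration under assumptions, not the container's actual code: it assumes the container calls an input_fn(serialized_data, content_type) hook (as SageMaker framework containers generally do) and that the serving signature names its two inputs input_1 and input_2, which are placeholder names.

import json

# Hypothetical input_fn for the default (Python-based) TensorFlow endpoint.
# Assumes the hook signature input_fn(serialized_data, content_type) and two
# placeholder input tensor names; adjust both to your actual signature.
def input_fn(serialized_data, content_type):
    if content_type == 'application/json':
        payload = json.loads(serialized_data)
        # Map the request fields onto the signature's named inputs
        return {
            'input_1': payload['input_1'],
            'input_2': payload['input_2'],
        }
    raise ValueError('Unsupported content type: {}'.format(content_type))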

You can diagnose the error in CloudWatch (accessible via the SageMaker endpoint UI), choose whichever of the two serving architectures above fits best, and customize the inference functions as needed. Only the TF Serving endpoint supports multiple inputs in a single inference request. You can follow the documentation here to deploy a TFS endpoint.

First, when deploying the model you need to actually build the correct signature. You also need to deploy with TensorFlow Serving.

Then, at inference time, you also need to provide the request in the appropriate input format: essentially, the SageMaker Docker server takes the request input and passes it on to TensorFlow Serving, so the input needs to match TF Serving's inputs.
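
One way to check exactly which input names and shapes TF Serving will expect is to inspect the exported SavedModel's serving signature before uploading it. The sketch below assumes TF 1.x and a local export directory; the path is a placeholder:

import tensorflow as tf
from tensorflow.python.saved_model import loader, tag_constants

# Print the 'serving_default' signature of an exported SavedModel; the listed
# input/output tensor names are what inference requests must match.
export_path = 'model_folder/1'  # placeholder: wherever the model was exported
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = loader.load(sess, [tag_constants.SERVING], export_path)
    print(meta_graph.signature_def['serving_default'])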

Here is a simple example of deploying a Keras multi-input, multi-output model with TensorFlow Serving using SageMaker, and of how to run inference against it afterwards:

import tarfile

from tensorflow.python.saved_model import builder
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
from tensorflow.python.saved_model import tag_constants
from keras import backend as K
import sagemaker
#nano ~/.aws/config
#get_ipython().system('nano ~/.aws/config')
from sagemaker import get_execution_role
from sagemaker.tensorflow.serving import Model


def serialize_to_tf_and_dump(model, export_path):
    """
    serialize a Keras model to TF model
    :param model: compiled Keras model
    :param export_path: str, The export path contains the name and the version of the model
    :return:
    """
    # Build the Protocol Buffer SavedModel at 'export_path'
    save_model_builder = builder.SavedModelBuilder(export_path)
    # Create prediction signature to be used by TensorFlow Serving Predict API
    signature = predict_signature_def(
        inputs={
            "input_type_1": model.input[0],
            "input_type_2": model.input[1],
        },
        outputs={
            "decision_output_1": model.output[0],
            "decision_output_2": model.output[1],
            "decision_output_3": model.output[2]
        }
    )
    with K.get_session() as sess:
        # Save the meta graph and variables
        save_model_builder.add_meta_graph_and_variables(
            sess=sess, tags=[tag_constants.SERVING], signature_def_map={"serving_default": signature})
        save_model_builder.save()

# instantiate model
model = .... 

# convert to tf model
serialize_to_tf_and_dump(model, 'model_folder/1')

# tar tf model
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('model_folder', recursive=True)

# upload it to s3
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz')

# convert to sagemaker model
role = get_execution_role()
sagemaker_model = Model(model_data = inputs,
    name='DummyModel',
    role = role,
    framework_version = '1.12')

predictor = sagemaker_model.deploy(initial_instance_count=1,
    instance_type='ml.t2.medium', endpoint_name='MultiInputMultiOutputModel')

At inference time, here is how to request predictions:

import json
import boto3

x_inputs = ... # list with 2 np arrays of size (batch_size, ...)
data={
    'inputs':{
        "input_type_1": x[0].tolist(),
        "input_type_2": x[1].tolist()
        }
}

endpoint_name = 'MultiInputMultiOutputModel'
client = boto3.client('runtime.sagemaker')
response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(data), ContentType='application/json')
predictions = json.loads(response['Body'].read())
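
As an alternative to calling boto3 directly, the predictor returned by sagemaker_model.deploy() above is a TensorFlow Serving predictor that serializes dict payloads to JSON by default, so (assuming that predictor object is still in scope) the same payload can be sent through the SageMaker SDK; a minimal sketch:

# Send the same 'inputs' payload through the SageMaker SDK predictor
# instead of invoking the endpoint with boto3.
predictions = predictor.predict(data)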

I even modified the input to split one serialized CSV into two inputs, but the documentation doesn't describe how to send multiple inputs to the model. That's because the documentation is agnostic to your model. How would you do this with SageMaker? What does your inference call look like?