Python: How do I save my fitted transformer to blob storage so that my prediction pipeline can use it in AML Service?

Tags: python, azure-machine-learning-service

I am building data transformation and training pipelines on Azure Machine Learning Service. I want to save my fitted transformer (e.g. tf-idf) to blob storage so that my prediction pipeline can access it later:

from azureml.pipeline.core import PipelineData
from azureml.pipeline.steps import PythonScriptStep

transformed_data = PipelineData("transformed_data", 
                               datastore = default_datastore,
                               output_path_on_compute="my_project/tfidf")

step_tfidf = PythonScriptStep(name = "tfidf_step",
                              script_name = "transform.py",
                              arguments = ['--input_data', blob_train_data, 
                                           '--output_folder', transformed_data],
                              inputs = [blob_train_data],
                              outputs = [transformed_data],
                              compute_target = aml_compute,
                              source_directory = project_folder,
                              runconfig = run_config,
                              allow_reuse = False)

The code above saves the transformer to the current run's folder, which is generated dynamically for every run.

I want to save the transformer to a fixed location on blob storage, so that it can be accessed later when the prediction pipeline is called.

I tried to use an instance of the DataReference class as the PythonScriptStep output, but it resulted in an error:

ValueError: Unexpected output type:

This is because PythonScriptStep only accepts PipelineData or OutputPortBinding objects as outputs.
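
For reference, a minimal sketch of the failing attempt (the DataReference below is an assumed example pointing at a fixed folder on the default blob datastore; it is not from the original question):

from azureml.data.data_reference import DataReference

# Hypothetical fixed location on the default blob datastore
transformer_ref = DataReference(datastore = default_datastore,
                                data_reference_name = "tfidf_transformer",
                                path_on_datastore = "my_project/tfidf")

step_tfidf = PythonScriptStep(name = "tfidf_step",
                              script_name = "transform.py",
                              arguments = ['--output_folder', transformer_ref],
                              inputs = [blob_train_data],
                              outputs = [transformer_ref],  # raises ValueError: Unexpected output type
                              compute_target = aml_compute,
                              source_directory = project_folder,
                              runconfig = run_config)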


How can I save my fitted transformer so that it can later be accessed by any arbitrary process (for example, my prediction pipeline)?

This may not be flexible enough for your needs (also, I have not tested it), but if you are using scikit-learn, one possibility is to include the tf-idf/transformation step in a scikit-learn Pipeline object and register it in your workspace.

Your training script would then contain:

import joblib

from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ('vectorizer', TfidfVectorizer(stop_words = list(text.ENGLISH_STOP_WORDS))),
    ('classifier', SGDClassifier())
])

pipeline.fit(train[label].values, train[pred_label].values)

# Serialize the fitted pipeline into the run's outputs folder
joblib.dump(value=pipeline, filename='outputs/model.pkl')

Your experiment submission script would then contain:

run = exp.submit(src)
run.wait_for_completion(show_output = True)
model = run.register_model(model_name='my_pipeline', model_path='outputs/model.pkl')
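
(For completeness, a rough sketch of how exp and src above might be set up; the experiment name, script name, and environment file are assumptions, not part of the original answer:)

from azureml.core import Environment, Experiment, ScriptRunConfig

exp = Experiment(workspace=ws, name='tfidf_training')   # hypothetical experiment name

# Hypothetical conda environment and training script
env = Environment.from_conda_specification(name='sklearn-env', file_path='environment.yml')
src = ScriptRunConfig(source_directory=project_folder,
                      script='train.py',
                      compute_target=aml_compute,
                      environment=env)
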
You can then consume the registered "model" and deploy it as a service, by loading it with:

import joblib
from azureml.core.model import Model

model_path = Model.get_model_path('my_pipeline')
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
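
If you then want to deploy it as a web service, a rough sketch (the entry script name, environment file, and ACI sizing below are assumptions, not from the original answer) could look like:

from azureml.core import Environment
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# Hypothetical score.py whose init() loads the model and whose run() scores requests
inference_config = InferenceConfig(
    entry_script='score.py',
    environment=Environment.from_conda_specification(name='sklearn-env',
                                                     file_path='environment.yml'))

deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(workspace=ws,
                       name='my-pipeline-service',
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=deployment_config)
service.wait_for_deployment(show_output=True)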

However, this bakes the transformation into the pipeline, so it is not as modular as you are asking for...

Another solution would be to pass a DataReference as an input to the PythonScriptStep. Then, inside transform.py, you can read this DataReference as a command-line argument, parse it, and use it like any regular path to save your vectorizer.

For example, you could have:

step_tfidf = PythonScriptStep(name = "tfidf_step",
                              script_name = "transform.py",
                              arguments = ['--input_data', blob_train_data, 
                                           '--output_folder', transformed_data,
                                           '--transformer_path', trained_transformer_path],
                              inputs = [blob_train_data, trained_transformer_path],
                              outputs = [transformed_data],
                              compute_target = aml_compute,
                              source_directory = project_folder,
                              runconfig = run_config,
                              allow_reuse = False)
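
Here, trained_transformer_path would be a DataReference pointing at a fixed folder on blob storage. A minimal sketch of how it could be defined (the datastore name and path are assumptions):

from azureml.core import Datastore
from azureml.data.data_reference import DataReference

# Hypothetical fixed folder on the workspace's default blob datastore
blob_datastore = Datastore.get(ws, "workspaceblobstore")
trained_transformer_path = DataReference(datastore = blob_datastore,
                                         data_reference_name = "trained_transformer_path",
                                         path_on_datastore = "my_project/tfidf")
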
Then, in your script (transform.py in the example above), you could do:

import argparse
import os

import joblib as jbl

from sklearn.feature_extraction.text import TfidfVectorizer

parser = argparse.ArgumentParser()
parser.add_argument('--transformer_path', dest="transformer_path", required=True)
args = parser.parse_args()

tfidf = ### HERE CREATE AND TRAIN YOUR VECTORIZER ###

vect_filename = os.path.join(args.transformer_path, 'my_vectorizer.jbl')

# Persist the fitted vectorizer to the path backed by the DataReference
jbl.dump(value=tfidf, filename=vect_filename)
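
Later, a prediction step can be given the same DataReference as an input and load the vectorizer back. A minimal sketch of such a script (hypothetical, assuming the same --transformer_path argument and file name as above):

import argparse
import os

import joblib as jbl

parser = argparse.ArgumentParser()
parser.add_argument('--transformer_path', dest="transformer_path", required=True)
args = parser.parse_args()

# Load the previously saved vectorizer and transform new data
tfidf = jbl.load(os.path.join(args.transformer_path, 'my_vectorizer.jbl'))
features = tfidf.transform(["some new text to score"])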



Bonus: a third approach would be to just register the vectorizer as another model in your workspace. You can then use it exactly like any other registered model. (Although this option does not involve an explicit write to blob storage, as asked about in the question above.)
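
A minimal sketch of that bonus approach (the model name and file name below are assumptions):

import joblib

from azureml.core.model import Model

# In the transform/training script: save the fitted vectorizer under ./outputs,
# which AML uploads automatically with the run.
joblib.dump(value=tfidf, filename='outputs/my_vectorizer.jbl')

# In the submission script, once the run has completed: register it as a model.
run.register_model(model_name='my_tfidf_vectorizer',
                   model_path='outputs/my_vectorizer.jbl')

# In any later process (e.g. the prediction pipeline): fetch and load it.
vect_path = Model.get_model_path('my_tfidf_vectorizer')
tfidf = joblib.load(vect_path)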

Another option is to use a DataTransferStep and use it to copy the output to a "known location". There are examples of using a DataTransferStep to copy data from and to the various supported datastores.

from azureml.core import Datastore
from azureml.data.data_reference import DataReference
from azureml.exceptions import ComputeTargetException
from azureml.core.compute import ComputeTarget, DataFactoryCompute
from azureml.pipeline.steps import DataTransferStep

blob_datastore = Datastore.get(ws, "workspaceblobstore")

blob_data_ref = DataReference(
    datastore=blob_datastore,
    data_reference_name="knownloaction",
    path_on_datastore="knownloaction")

data_factory_name = 'adftest'

def get_or_create_data_factory(workspace, factory_name):
    try:
        return DataFactoryCompute(workspace, factory_name)
    except ComputeTargetException as e:
        if 'ComputeTargetNotFound' in e.message:
            print('Data factory not found, creating...')
            provisioning_config = DataFactoryCompute.provisioning_configuration()
            data_factory = ComputeTarget.create(workspace, factory_name, provisioning_config)
            data_factory.wait_for_completion()
            return data_factory
        else:
            raise e

data_factory_compute = get_or_create_data_factory(ws, data_factory_name)

# Assuming output_data is the output of the step that you want to copy

transfer_to_known_location = DataTransferStep(
    name="transfer_to_known_location",
    source_data_reference=[output_data],
    destination_data_reference=blob_data_ref,
    compute_target=data_factory_compute
    )

from azureml.pipeline.core import Pipeline
from azureml.core import Workspace, Experiment

pipeline_01 = Pipeline(
    description="transfer_to_known_location",
    workspace=ws,
    steps=[transfer_to_known_location])

pipeline_run_01 = Experiment(ws, "transfer_to_known_location").submit(pipeline_01)
pipeline_run_01.wait_for_completion()

Thank you, David, I think this is a good idea! I will give it a try.

Hi @PythoLove, the approach mentioned above worked for me. What error are you getting?