How to solve an .mps file with the latest IBM Watson Studio API on IBM Cloud

I'm trying to migrate a utility that uses IBM's API to solve .mps problems, and that utility is currently broken by a breaking change.
The original code created a deployment with an empty model.tar.gz file and passed the .mps file to a new job.
The (Python) code looked like this:

import tarfile
tar = tarfile.open("model.tar.gz", "w:gz")
tar.close()

test_metadata = {
    client.repository.ModelMetaNames.NAME: "Test",
    client.repository.ModelMetaNames.DESCRIPTION: "Model for Test",
    client.repository.ModelMetaNames.TYPE: "do-cplex_12.9",
    client.repository.ModelMetaNames.RUNTIME_UID: "do_12.9"    
}

model_details = client.repository.store_model(model='model.tar.gz', meta_props=test_metadata)
model_uid = client.repository.get_model_uid(model_details)
n_nodes = 1
meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: "Test Deployment " + str(n_nodes),
    client.deployments.ConfigurationMetaNames.DESCRIPTION: "Test Deployment",
    client.deployments.ConfigurationMetaNames.BATCH: {},
    client.deployments.ConfigurationMetaNames.COMPUTE: {'name': 'S', 'nodes': n_nodes}
}

deployment_details = client.deployments.create(model_uid, meta_props=meta_props)
deployment_uid = client.deployments.get_uid(deployment_details)

solve_payload = {
    client.deployments.DecisionOptimizationMetaNames.SOLVE_PARAMETERS: {
                'oaas.logAttachmentName':'log.txt',
                'oaas.logTailEnabled':'true',
                'oaas.resultsFormat': 'JSON'
    },
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA_REFERENCES: [
        {
            'id':'test.mps',
            'type': 's3',
            'connection': {
                'endpoint_url': COS_ENDPOINT,
                'access_key_id': cos_credentials['cos_hmac_keys']["access_key_id"],
                'secret_access_key': cos_credentials['cos_hmac_keys']["secret_access_key"]
            },
            'location': {
                'bucket': COS_BUCKET,
                'path': 'test.mps'
            }
        }
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA_REFERENCES: [
        {
                    'id':'solution.json',
                    'type': 's3',
                    'connection': {
                        'endpoint_url': COS_ENDPOINT,
                        'access_key_id': cos_credentials['cos_hmac_keys']["access_key_id"],
                        'secret_access_key': cos_credentials['cos_hmac_keys']["secret_access_key"]
                    },
                    'location': {
                        'bucket': COS_BUCKET,
                        'path': 'solution.json'
                    }
                },
                {
                    'id':'log.txt',
                    'type': 's3',
                    'connection': {
                        'endpoint_url': COS_ENDPOINT,
                        'access_key_id': cos_credentials['cos_hmac_keys']["access_key_id"],
                        'secret_access_key': cos_credentials['cos_hmac_keys']["secret_access_key"]
                    },
                    'location': {
                        'bucket': COS_BUCKET,
                        'path': 'log.txt'
                    }
                }
    ]
}


job_details = client.deployments.create_job(deployment_uid, solve_payload)
The closest I have gotten (and it is almost exactly what I need) was by reusing most of the code from this example:

Here is a complete working sample:

from ibm_watson_machine_learning import APIClient
import os
import wget
import json
import pandas as pd
import time

COS_ENDPOINT = "https://s3.ams03.cloud-object-storage.appdomain.cloud" 
model_path = 'do-model.tar.gz'
api_key = 'XXXXX'
access_key_id = "XXXX",
secret_access_key= "XXXX"

location = 'eu-gb'
space_id = 'XXXX'
softwareSpecificationName = "do_12.9"
modelType = "do-docplex_12.9"

wml_credentials = {
    "apikey": api_key,
    "url": 'https://' + location + '.ml.cloud.ibm.com'
}

client = APIClient(wml_credentials)
client.set.default_space(space_id)

if not os.path.isfile(model_path):
    wget.download("https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/decision_optimization/do-model.tar.gz")

software_spec_uid = client.software_specifications.get_uid_by_name(softwareSpecificationName)

model_meta_props = {
                        client.repository.ModelMetaNames.NAME: "LOCALLY created DO model",
                        client.repository.ModelMetaNames.TYPE: modelType,
                        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
                    }
published_model = client.repository.store_model(model=model_path, meta_props=model_meta_props)
time.sleep(5) # So that the model is available via the API
published_model_uid = client.repository.get_model_uid(published_model)
client.repository.list_models()

meta_data = {
    client.deployments.ConfigurationMetaNames.NAME: "deployment_DO",
    client.deployments.ConfigurationMetaNames.BATCH: {},
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"name": "S", "num_nodes": 1}

}
deployment_details = client.deployments.create(published_model_uid, meta_props=meta_data)
time.sleep(5) # So that the deployment is available via the API
deployment_uid = client.deployments.get_uid(deployment_details)
client.deployments.list()


job_payload_ref = {
    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA_REFERENCES: [
        {
                    'id':'diet_food.csv',
                    'type': 's3',
                    'connection': {
                        'endpoint_url': COS_ENDPOINT,
                        'access_key_id': access_key_id,
                        'secret_access_key': secret_access_key
                    },
                    'location': {
                        'bucket': "gvbucketname0api",
                        'path': "diet_food.csv"
                    }
        },
        {
                    'id':'diet_food_nutrients.csv',
                    'type': 's3',
                    'connection': {
                        'endpoint_url': COS_ENDPOINT,
                        'access_key_id': access_key_id,
                        'secret_access_key': secret_access_key
                    },
                    'location': {
                        'bucket': "gvbucketname0api",
                        'path': "diet_food_nutrients.csv"
                    }
        },
        {
                    'id':'diet_nutrients.csv',
                    'type': 's3',
                    'connection': {
                        'endpoint_url': COS_ENDPOINT,
                        'access_key_id': access_key_id,
                        'secret_access_key': secret_access_key
                    },
                    'location': {
                        'bucket': "gvbucketname0api",
                        'path': "diet_nutrients.csv"
                    }
        }
    ],
    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA_REFERENCES: 
    [
        {
                    'id':'.*',
                    'type': 's3',
                    'connection': {
                        'endpoint_url': COS_ENDPOINT,
                        'access_key_id': access_key_id,
                        'secret_access_key':secret_access_key
                    },
                    'location': {
                        'bucket': "gvbucketname0api",
                        'path': "${job_id}/${attachment_name}"
                    }
        }
    ]
}

job = client.deployments.create_job(deployment_uid, meta_props=job_payload_ref)
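
To watch the job finish, it can be polled for status. A minimal sketch, assuming the status field layout used in IBM's Decision Optimization samples (the 5-second interval is illustrative):

job_uid = client.deployments.get_job_uid(job)

# Poll until the Decision Optimization job leaves the queued/running states.
while True:
    job_details = client.deployments.get_job_details(job_uid)
    state = job_details['entity']['decision_optimization']['status']['state']
    if state not in ('queued', 'running'):
        break
    time.sleep(5)

print(state)  # 'completed' on success; outputs land in the COS bucket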
The example above uses a model plus several csv files as input. When I change the input data references to use an .mps file (with the empty model archive), I get the following error:

"errors": [
{
   "code": "invalid_model_archive_in_deployment",    
   "message": "Invalid or unrecognized archive type in deployment `XXX-XXX-XXX`.
               Supported archive types are `zip` or `tar.gz`"
}

I'm no expert, but as far as I know, an .mps file contains both the model and the input data, so I shouldn't have to provide both.
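
For illustration (a hypothetical toy model, not from the question): an .mps file is a self-contained text description of the optimization problem, with the objective, constraints, right-hand sides and bounds all inline:

mps_model = """NAME          TOY
ROWS
 N  COST
 L  LIM1
COLUMNS
    X1        COST      1.0   LIM1      1.0
    X2        COST      2.0   LIM1      1.0
RHS
    RHS       LIM1      4.0
BOUNDS
 UP BND       X1        3.0
ENDATA
"""
with open("test.mps", "w") as f:  # written from Python only for convenience
    f.write(mps_model)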

Answer:

The complete sample can be found here:

The link above (similar to the code in my question) shows an example with an '.lp' file, but it works exactly the same with an '.mps' file. (Note that the model type is do-cplex_12.10, not do-docplex_12.10.)

My problem was that I was using an empty model.tar.gz file.
Once the archive contains the .lp/.mps file, everything works as expected.
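
Putting that together, a minimal sketch of the fix (the file name, metadata names and the do_12.10 software specification are assumptions mirroring the 12.9 sample; packaging the .mps into the archive is the essential change):

import tarfile

# Package the .mps file inside the archive instead of uploading an empty one.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("test.mps", arcname="test.mps")

software_spec_uid = client.software_specifications.get_uid_by_name("do_12.10")
model_meta_props = {
    client.repository.ModelMetaNames.NAME: "MPS model",
    client.repository.ModelMetaNames.TYPE: "do-cplex_12.10",  # do-cplex, not do-docplex
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
published_model = client.repository.store_model(model="model.tar.gz", meta_props=model_meta_props)

With the model inside the archive, the job no longer needs an INPUT_DATA_REFERENCES entry for the .mps file; only the output references remain.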