TensorFlow: gcloud issue with local prediction
I am using gcloud local prediction to test an exported model. The model is a TensorFlow object detection model trained on a custom dataset. I am using the following gcloud command:
gcloud ml-engine local predict --model-dir=/path/to/saved_model/ --json-instances=input.json --signature-name="serving_default" --verbosity debug
When I run the command without verbosity, it outputs nothing at all. With verbosity set to debug, I get the following traceback:
DEBUG: [Errno 32] Broken pipe
Traceback (most recent call last):
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 984, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 784, in Run
resources = command_instance.Run(args)
File "/google-cloud-sdk/lib/surface/ai_platform/local/predict.py", line 83, in Run
signature_name=args.signature_name)
File "/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine/local_utils.py", line 103, in RunPredict
proc.stdin.write((json.dumps(instance) + '\n').encode('utf-8'))
IOError: [Errno 32] Broken pipe
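For context, the broken pipe itself is usually a symptom rather than the root cause: the SDK pipes each JSON instance into a child process, and if that child dies early, the parent's write fails with EPIPE and the child's real error is hidden. A minimal sketch of that failure mode, using a stand-in child process rather than the real local_predict runner:

```python
import json
import subprocess
import sys

# Simulate the SDK's RunPredict loop against a child process that exits
# immediately (a stand-in for local_predict.pyc crashing on startup).
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.exit(1)"],
    stdin=subprocess.PIPE,
)
proc.wait()  # the child is gone, so its end of the pipe is closed

instance = {"inputs": {"b64": "..."}}
caught = None
try:
    proc.stdin.write((json.dumps(instance) + "\n").encode("utf-8"))
    proc.stdin.flush()
except OSError as e:  # BrokenPipeError is a subclass of OSError
    caught = e

print("write failed:", caught)
```

So the useful debugging question is why the child process died, not why the pipe broke.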
Details of my exported model:
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['inputs'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: encoded_image_string_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 4)
name: detection_boxes:0
outputs['detection_classes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300)
name: detection_classes:0
outputs['detection_features'] tensor_info:
dtype: DT_FLOAT
shape: (-1, -1, -1, -1, -1)
name: detection_features:0
outputs['detection_multiclass_scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 2)
name: detection_multiclass_scores:0
outputs['detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300)
name: detection_scores:0
outputs['num_detections'] tensor_info:
dtype: DT_FLOAT
shape: (-1)
name: num_detections:0
outputs['raw_detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 4)
name: raw_detection_boxes:0
outputs['raw_detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 300, 2)
name: raw_detection_scores:0
Method name is: tensorflow/serving/predict
I use the following code to generate input.json for the prediction:
import base64
import io
import json
from PIL import Image

# width and height are defined elsewhere in my script
with open('input.json', 'wb') as f:
    img = Image.open("image.jpg")
    img = img.resize((width, height), Image.ANTIALIAS)
    output_str = io.BytesIO()
    img.save(output_str, "JPEG")
    image_byte_array = output_str.getvalue()
    image_base64 = base64.b64encode(image_byte_array)
    json_entry = {"b64": image_base64.decode()}
    # instances.append(json_entry)
    request = json.dumps({'inputs': json_entry})
    f.write(request.encode('utf-8'))
{"inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/......}}
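Since the SignatureDef's input alias is inputs, the per-line JSON object must use "inputs" as its key. A quick stdlib-only sanity check (no PIL required; fake_jpeg stands in for the real resized image bytes) that the base64 payload survives the JSON round trip intact:

```python
import base64
import json

# fake_jpeg stands in for real JPEG bytes from the resized image.
fake_jpeg = b"\xff\xd8\xff\xe0 not a real image \xff\xd9"

# Build the request the same way the question's script does.
request = json.dumps({"inputs": {"b64": base64.b64encode(fake_jpeg).decode()}})

# What the prediction side effectively does with the payload:
decoded = base64.b64decode(json.loads(request)["inputs"]["b64"])
print("round-trip ok:", decoded == fake_jpeg)
```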
I am testing the prediction with a single image.
According to this, binary inputs must be suffixed with _bytes:
In your TensorFlow model code, you must name the aliases for your binary input and output tensors so that they end in '_bytes'.
Try using _bytes as the suffix for your input, or rebuild the model with a compatible serving input function. When the command runs, the local SDK file /usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine/local_utils.py appears to fail while reading the file contents:
for instance in instances:
    proc.stdin.write((json.dumps(instance) + '\n').encode('utf-8'))
    proc.stdin.flush()
In your case I would expect the JSON to be well-formed; otherwise we typically get:
ERROR: (gcloud.ai-platform.local.predict) Input instances are not in JSON format. See "gcloud ml-engine predict --help" for details.
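--json-instances expects newline-delimited JSON: one complete object per line. A hypothetical validator sketch (not part of the SDK) that surfaces a malformed line before gcloud does:

```python
import json

# Hypothetical helper: check that a --json-instances file is
# newline-delimited JSON, reporting the first bad line if any.
def validate_instances(path):
    instances = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            if not line.strip():
                continue  # ignore blank lines
            try:
                instances.append(json.loads(line))
            except json.JSONDecodeError as e:
                raise ValueError(f"line {lineno} is not valid JSON: {e}")
    return instances

# Demo with a minimal well-formed file.
with open("input.json", "w") as f:
    f.write('{"inputs": {"b64": "AAAA"}}\n')

print(validate_instances("input.json"))
```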
This is the snippet I normally use to generate a b64-encoded image with resize:
import base64
from PIL import Image

INPUT_FILE = 'image.jpg'
OUTPUT_FILE = 'image_b64.json'

def convert_to_base64_resize(image_file):
    """Open image, resize, base64 encode it and create a JSON request"""
    img = Image.open(image_file).resize((240, 240))
    img.save(image_file)
    with open(image_file, 'rb') as f:
        jpeg_bytes = base64.b64encode(f.read()).decode('utf-8')
        predict_request = '{"image_bytes": {"b64": "%s"}}' % jpeg_bytes
        # Write JSON to file
        with open(OUTPUT_FILE, 'w') as f:
            f.write(predict_request)
        return predict_request

convert_to_base64_resize(INPUT_FILE)
It would be great to see a copy of your JSON file or the image so we can compare contents.
For general troubleshooting I also use TensorFlow Serving, in particular to verify that my model works locally (a TensorFlow Serving container pointed at the GCS location of the model).
Keep in mind that local prediction with json-instances expects the following format:
{"image_bytes": {"b64": body}}
I assume that after the changes suggested above your model looks like this:
...
signature_def['serving_default']:
The given SavedModel SignatureDef contains the following input(s):
inputs['image_bytes'] tensor_info:
dtype: DT_STRING
shape: (-1)
name: input_tensor:0
...
I ran into the same problem and found that ml_engine/local_utils.py uses python to run ml_engine/local_predict.pyc, which is built for python2.7.
My python is python3, so when ml_engine/local_utils.py tries to run ml_engine/local_predict.pyc with python (which is actually python3), it fails with the error:
RuntimeError: Bad magic number in .pyc file
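The "Bad magic number" error means the .pyc was compiled by a different Python version than the interpreter trying to load it. The same check can be sketched with the stdlib, using a freshly compiled module in place of the SDK's local_predict.pyc:

```python
import importlib.util
import os
import py_compile
import tempfile

# Compile a tiny module, then compare the .pyc header against the magic
# number of the running interpreter. A mismatch is exactly what raises
# "RuntimeError: Bad magic number in .pyc file" at load time.
src = os.path.join(tempfile.mkdtemp(), "m.py")
with open(src, "w") as f:
    f.write("x = 1\n")
pyc_path = py_compile.compile(src)

with open(pyc_path, "rb") as f:
    header = f.read(4)

compatible = header == importlib.util.MAGIC_NUMBER
print("pyc matches this interpreter:", compatible)
```

Running the same check against the SDK's local_predict.pyc with each installed interpreter shows which python it was built for.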
Solution 1:
Make python2 the default python on your system.
Solution 2:
I changed ml_engine/local_utils.py with a patch like this:
83c83
<     python_executables = files.SearchForExecutableOnPath("python")
---
>     python_executables = files.SearchForExecutableOnPath("python2")
114a115
>   log.debug(args)
124,126c125,130
<     for instance in instances:
<       proc.stdin.write((json.dumps(instance) + "\n").encode("utf-8"))
<       proc.stdin.flush()
---
>     try:
>       for instance in instances:
>         proc.stdin.write((json.dumps(instance) + "\n").encode("utf-8"))
>         proc.stdin.flush()
>     except:
>       pass
The try/except is needed so the script can read and print errors that occur while running ml_engine/local_predict.pyc.
Unlike @Roman Kovtuh, I was able to run under python3. However, his technique of adding an exception handler allowed me to determine that tensorflow was not installed in the environment visible to the process. Once it was installed, the process worked.
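The missing-tensorflow situation can be checked directly: ask the same interpreter the SDK shells out to whether the import succeeds. A small sketch (probing the stdlib's json here instead of tensorflow, so it runs anywhere):

```python
import subprocess
import sys

# Ask a given interpreter whether it can import a module. Useful for
# verifying that the python the SDK invokes actually has tensorflow.
def can_import(python_exe, module):
    result = subprocess.run(
        [python_exe, "-c", "import " + module],
        capture_output=True,
    )
    return result.returncode == 0

print(can_import(sys.executable, "json"))            # stdlib: importable
print(can_import(sys.executable, "no_such_mod_x"))   # missing: not importable
```

To diagnose the real case, pass the interpreter path local_utils.py resolves and "tensorflow" as the module.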
My changes to googlecloudsdk/command_lib/ml_engine/local_utils.py:
106,109c106
<     try:
<       proc.stdin.write((json.dumps(instance) + '\n').encode('utf-8'))
<     except Exception as e:
<       print(f'Error displaying errors with instance {str(instance)[:100]}. Exception {e}')
---
>     proc.stdin.write((json.dumps(instance) + '\n').encode('utf-8'))
I upvoted @Roman Kovtuh since that really helped.
How big is your input.json file? Which python version, and which google cloud sdk version?
@TravisWebb python version 3.6.5, the input size is 137KB (I send one image in the prediction request), Google Cloud SDK 268.0.0, beta 2019.05.17, bq 2.0.49, core 2019.10.18, gsutil 4.45
Were you able to solve this?
@gogasca I was not able to solve it. It also fails on gcp ml-engine; here is the thread related to prediction on gcp ml-engine.
Does this mean that in my exported model it should be "inputs_bytes" instead of "inputs", and likewise in the json file?
Tried re-exporting the model with signature def "input_bytes", but I still get the same error. Also renamed the key in the input json, but "input_bytes" still did not work.