Serving large images with the gcloud ml service (NumPy)
I have a trained TensorFlow network that I want to use for prediction on gcloud ML Engine. The predict service should accept a 320x240x3 NumPy array of float32 values as the input image and return two small matrices as output. Does anyone know how I should create an input layer that accepts this kind of input?

I tried multiple approaches, for example a JSON file with base64-encoded data, but casting the string to float raises an unsupported-operation error:
"error": "Prediction failed: Exception during model execution: LocalError(code=StatusCode.UNIMPLEMENTED, details=\"Cast string to float is not supported\n\t [[Node: ToFloat = Cast[DstT=DT_FLOAT, SrcT=DT_STRING, _output_shapes=[[-1,320,240,3]], _device=\"/job:localhost/replica:0/task:0/cpu:0\"](ParseExample/ParseExample)]]\")"
Here is an example of creating the JSON file (after saving the NumPy array above as a JPEG):
python -c 'import base64, sys, json; img = base64.b64encode(open(sys.argv[1], "rb").read()); print json.dumps({"images": {"b64": img}})' example_img.jpg &> request.json
And the TensorFlow code that tries to process the input:
raw_str_input = tf.placeholder(tf.string, name='source')
feature_configs = {
    'image': tf.FixedLenFeature(shape=[], dtype=tf.string),
}
tf_example = tf.parse_example(raw_str_input, feature_configs)
input = tf.identity(tf.to_float(tf_example['image/encoded']), name='input')
The above is one of the attempts I tested; I also tried several other TensorFlow commands to process the input, but none of them worked.

If you use input/output aliases, the name must end in "_bytes". So I think you need to do:
python -c 'import base64, sys, json; img = base64.b64encode(open(sys.argv[1], "rb").read()); print json.dumps({"images_bytes": {"b64": img}})' example_img.jpg &> request.json
I would recommend not using parse_example in the first place. There are several options for sending image data, each trading off complexity and payload size:

Raw tensor data encoded as JSON
# Dimensions represent [batch size, height, width, channels]
input_images = tf.placeholder(dtype=tf.float32, shape=[None,320,240,3], name='source')
output_tensor = foo(input_images)
# Export the SavedModel
inputs = {'image': input_images}
outputs = {'output': output_tensor}
# ....
The JSON sent to the service would look like the request described under "Instances JSON string" in the docs; I recommend removing as much whitespace as possible, since pretty-printing is only for readability. Note that gcloud builds the request body from an input file format in which each input appears on a separate line, with most of each input packed onto a single line.
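As a concrete sketch (using a hypothetical 2x2x3 image instead of 320x240x3 to keep it readable), the direct-request body and the gcloud per-line file differ only in the "instances" wrapper:

```python
import json

# Tiny 2x2x3 "image" as nested float lists; a real one would be the
# 320x240x3 result of numpy_array.tolist().
image = [[[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
         [[0.7, 0.8, 0.9], [1.0, 1.1, 1.2]]]

# Body for a direct call to the prediction REST API: all instances
# live under a single "instances" key. separators strips whitespace.
request_body = json.dumps({"instances": [{"image": image}]},
                          separators=(",", ":"))

# Input file for gcloud: one instance per line, no "instances" wrapper.
file_data = "\n".join(json.dumps({"image": image}, separators=(",", ":"))
                      for _ in range(2))  # e.g. two identical instances
```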
Tensors packed as byte strings

One important thing to note here: as Jeremy Lewi pointed out, the name of this input alias must end in _bytes (image_bytes). This is because JSON has no other way to distinguish text from binary data.

Note that the same trick can be applied to float data, not just uint8 data.

Your client is responsible for creating the byte string of uint8s. Here is how you would do that in Python using numpy:
import base64
import json
import numpy as np
images = []
# In real life this is obtained via other means, e.g. scipy.misc.imread;
# for now, an array where every value is 2
images.append(np.array([[[2]*3]*240]*320], dtype=np.uint8))
# If we want, we can send more than one image:
images.append(np.array([[[2]*3]*240]*320], dtype=np.uint8))
# Convert each image to byte strings
bytes_strings = (i.tostring() for i in images)
# Base64 encode the data
encoded = (base64.b64encode(b) for b in bytes_strings)
# Create a list of images suitable to send to the service as JSON:
instances = [{'image_bytes': {'b64': e}} for e in encoded]
# Create a JSON request
request = json.dumps({'instances': instances})
# Or, if dumping a file for gcloud, one instance per line:
file_data = '\n'.join(json.dumps(instance) for instance in instances)
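To sanity-check the client code, you can simulate the server's view: base64-decoding the payload and reinterpreting the bytes as uint8 should reproduce the original array. A round-trip sketch (assuming numpy is available, as in the client code; tobytes() is the modern spelling of the tostring() call used above):

```python
import base64
import numpy as np

# One 320x240x3 uint8 image, every value 2, as in the client code above.
original = np.full((320, 240, 3), 2, dtype=np.uint8)

# Client side: raw bytes, then base64 -- the value placed under {"b64": ...}.
payload = base64.b64encode(original.tobytes())

# Server side, conceptually what decode_raw does: base64-decode,
# reinterpret every byte as a uint8, then restore the shape.
restored = np.frombuffer(base64.b64decode(payload), dtype=np.uint8)
restored = restored.reshape(320, 240, 3)

assert np.array_equal(restored, original)
```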
Compressed image data
It is often most convenient to send the original image and do the resizing and decoding in TensorFlow. This is exemplified in the linked sample, which I won't repeat here. The client simply needs to send the raw JPEG bytes. The same note about the _bytes suffix applies here as well.

Thanks a lot for the reply, Jeremy. I changed the input signature to 'image_bytes' so that it accepts the JSON, but it still gives me the same error. This is what I changed: tf.saved_model.signature_def_utils.build_signature_def(inputs={'image_bytes': tensor_inputs_info}
What exactly do input/output aliases mean? Do you have a short example?
{"image": [[[1,1,1], [1,1,1], <240 of these>] ... <320 of these>]}
{"image": [[[2,2,2], [2,2,2], <240 of these>] ... <320 of these>]}
raw_byte_strings = tf.placeholder(dtype=tf.string, shape=[None], name='source')

# Decode the images. The shape of raw_byte_strings is [batch size]
# (where batch size is determined by how many images are sent). It's
# important that all of the images sent have the same dimensions
# or errors will result.
#
# We have to use a map_fn because decode_raw only works on a single
# image, and we need to decode a batch of images.
decode = lambda raw_byte_str: tf.decode_raw(raw_byte_str, tf.uint8)
decoded = tf.map_fn(decode, raw_byte_strings, dtype=tf.uint8)

# decode_raw yields a flat vector per image, so restore the image shape:
# input_images ends up with shape [batch size, 320, 240, 3].
input_images = tf.reshape(decoded, [-1, 320, 240, 3])
output_tensor = foo(input_images)

# Export the SavedModel. The alias must map to the *string* placeholder,
# not the decoded tensor, and its name must end in "_bytes".
inputs = {'image_bytes': raw_byte_strings}
outputs = {'output': output_tensor}
# ....
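For intuition, here is a pure-Python sketch of what decode_raw does with one of these byte strings (no TensorFlow needed): every byte becomes one uint8 value, so a 320x240x3 image arrives as a flat vector of 230400 values that still needs reshaping before it can feed the rest of the graph:

```python
# Raw byte string for one 320x240x3 uint8 image, every pixel value 2
# (the same constant used in the client example).
raw = bytes([2]) * (320 * 240 * 3)

# decode_raw reinterprets the string as a flat vector of uint8 values.
flat = list(raw)

# The flat vector has height * width * channels entries...
assert len(flat) == 230400
# ...and must be reshaped to [320, 240, 3] before use in the graph.
```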