Python Inception v3 on Google ML Engine: taking base64 images as prediction input


I am trying to change my Inception network (coded in Keras) so that it takes a base64 image string as input for prediction. After that I want to save it as a TensorFlow (.pb file) network, because that is what Google ML Engine requires.

The usual way of predicting looks like this:

from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input
import numpy as np

img_path = "image.jpg"
img = image.load_img(img_path)

# model is the Keras Inception v3 network loaded elsewhere
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
score = model.predict(x)
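If the network still has the stock ImageNet classification head, the raw scores can be mapped to readable labels; a minimal sketch, assuming the standard 1000-class InceptionV3 output:

from keras.applications.inception_v3 import decode_predictions

# Only valid for the 1000-class ImageNet head: prints the top-3 (class, description, score) tuples
print(decode_predictions(score, top=3))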
So I tried to implement that and then save the model like this:

input_images = tf.placeholder(dtype=tf.string, shape=[])
decoded = tf.image.decode_image(input_images, channels=3)
image = tf.cast(decoded, dtype=tf.uint8)
afbeelding = Image.open(io.BytesIO(image))

x = image.img_to_array(afbeelding)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
scores = model.predict(decoded)


signature = predict_signature_def(inputs={'image_bytes': input_images},
                              outputs={'predictions': scores})
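# (`builder` below is assumed to be a tf.saved_model.builder.SavedModelBuilder
#  created earlier; its construction is not shown in this snippet)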

with K.get_session() as sess:
    builder.add_meta_graph_and_variables(sess=sess,
                                     tags=[tag_constants.SERVING],
                                     signature_def_map={
                                     signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
builder.save()
But `image` is a tensor, not an actual image.
To be honest, I don't know how to implement this properly. There is no way to get the actual value out of a tensor, right? I really hope someone can help me with this.

You should be able to convert your Keras model into a TensorFlow Estimator using the tensorflow.keras.estimator.model_to_estimator() function. You can then build and export the graph used for generating predictions. The code should look something like this:

import os
from tensorflow import keras

h5_model_path = os.path.join('path_to_model.h5')
estimator = keras.estimator.model_to_estimator(keras_model_path=h5_model_path)
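If the model object is already loaded in memory, model_to_estimator can alternatively be handed the model itself instead of a path to the .h5 file; a minimal sketch, assuming model is the compiled Keras Inception network:

from tensorflow import keras

# `model` is assumed to be the compiled (tf.)keras Inception model already in memory
estimator = keras.estimator.model_to_estimator(keras_model=model)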
I have only tested this with models built using tf.keras, but it should also work with a native Keras model.

Then, to build the graph with the components that process the base64 input, you can do something like this:

import tensorflow as tf

HEIGHT = 128
WIDTH = 128
CHANNELS = 3

def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        # Decode a single JPEG byte string and resize it to the model's input size
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        image = tf.expand_dims(image, 0)
        image = tf.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
        image = tf.squeeze(image, axis=[0])
        image = tf.cast(image, dtype=tf.uint8)
        return image

    # A batch of image byte strings arrives under the 'image_bytes' input
    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)

    return tf.estimator.export.ServingInputReceiver(
        {'input': images_tensor},
        {'image_bytes': input_ph})

export_path = 'exported_model_directory'
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
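Before deploying, the exported SavedModel can be sanity-checked locally; a minimal sketch using tf.contrib.predictor (TF 1.x), assuming 'image.jpg' is a test image on disk:

import os
import tensorflow as tf

# export_savedmodel writes the SavedModel into a timestamped subdirectory of export_path
latest_export = sorted(os.listdir(export_path))[-1]
saved_model_dir = os.path.join(export_path, latest_export)

predict_fn = tf.contrib.predictor.from_saved_model(saved_model_dir)
with open('image.jpg', 'rb') as f:
    jpeg_bytes = f.read()

# The input key matches the 'image_bytes' alias defined in the ServingInputReceiver
print(predict_fn({'image_bytes': [jpeg_bytes]}))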
The exported model can then be uploaded to Google Cloud ML and used for predictions. It took me a while to get all of this working, and I wrote a fully functional code example that might be of some further use. Here it is:
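For completeness, an online prediction request against the deployed model has to wrap each base64 payload as {"b64": ...} under the image_bytes input defined above; a minimal sketch using the Google API Python client, where 'PROJECT' and 'MODEL' are placeholders for your own project id and deployed model name:

import base64
from googleapiclient import discovery

with open('image.jpg', 'rb') as f:
    img_b64 = base64.b64encode(f.read()).decode('utf-8')

# ML Engine's JSON format for binary inputs: {"b64": "..."} under the *_bytes input key
instances = [{'image_bytes': {'b64': img_b64}}]

# 'PROJECT' and 'MODEL' are placeholders for the GCP project id and model name
service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format('PROJECT', 'MODEL')
response = service.projects().predict(name=name, body={'instances': instances}).execute()
print(response['predictions'])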