Does the Python tflite interpreter handle input dimensions differently than the h5 model? ValueError: Cannot set tensor: Dimension mismatch

I run into a problem when I try to run mymodel.tflite, the TFLite model converted from my h5 Keras model. I converted mymodel.h5 as follows:

import numpy as np
#from google.colab import files
import tensorflow as tf
from tensorflow import keras
from keras.preprocessing import image
import glob
import pickle

model_path = '/home/mymodel.h5'

# convert the model
model = keras.models.load_model(model_path)
print('model loaded!')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
print('model converted!')

print('saving model...')
# save the lite model
with open('mymodel.tflite','wb') as f:
    f.write(tflite_model)

# done
print('conversion finished!')
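
(As a sanity check, the expected input shape can be read straight back from the freshly written file; a minimal sketch using the same path as above:)

import tensorflow as tf

# read the freshly written .tflite file back and print what it expects as input
check = tf.lite.Interpreter(model_path='mymodel.tflite')
check.allocate_tensors()
detail = check.get_input_details()[0]
print('expected input shape:', detail['shape'])   # in my case: [  1 640 480   3]
print('expected input dtype:', detail['dtype'])   # in my case: <class 'numpy.float32'>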
The conversion goes smoothly. I then try to use the converted model:

from tflite_runtime.interpreter import Interpreter
import numpy as np
from PIL import Image
from numpy import asarray
import glob

interpreter = Interpreter(model_path='mymodel.tflite')
print('model loaded!')
print('allocating tensors...')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
datadir = 'datapath'
fnames = glob.glob(datadir+'*.jpg')
i=0

for fn in fnames:
    path = fn
    print(path)

    img = Image.open(path)
    print(img.size)
    x = asarray(img)
    print(x.shape)

    input_data = np.expand_dims(x, axis=0)
    print(input_data.shape)

    interpreter.set_tensor(input_details[0]['index'], input_data)
    print('tensor set!')

    interpreter.invoke()
    print('interpreter invoked!')

    output_data = interpreter.get_tensor(output_details[0]['index'])
    print('output data calculated!')

    result = np.squeeze(output_data)
    print(result)
    print(i)
    i = i + 1
This is the output:

INFO: Initialized TensorFlow Lite runtime.
model loaded!
allocating tensors...
[{'quantization': (0.0, 0), 'shape': array([ 1, 640, 480, 3]), 'index': 0, 'name': 'conv2d_input', 'dtype': <class 'numpy.float32'>}]
(640, 480)
(480, 640, 3)
image expanded shape:
(1, 480, 640, 3)
At first I assumed the converted model would simply accept the input I was feeding it. Looking at input_details[0], the model expects an array of shape [1, 640, 480, 3]. However, input_data has a different shape ([1, 480, 640, 3]). Because these shapes are incompatible, it throws the error at this line:

interpreter.set_tensor(input_details[0]['index'], input_data)
However, if I transpose the input, the converted model runs, but I get [0, 1, 0, 0, 0] as the output no matter what the input is. Yet when I run very similar code against the original model, I do not need to transpose the NumPy array to match (640, 480, 3).
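
To make the convention clash concrete, here is a minimal standalone sketch (using a synthetic 640x480 image instead of one of my frames) of how PIL and NumPy report the dimensions, and the kind of transpose I mean:

import numpy as np
from PIL import Image

# synthetic stand-in for one of the frames: PIL reports size as (width, height)
img = Image.new('RGB', (640, 480))
print(img.size)              # (640, 480)

# the NumPy array from the same image is (height, width, channels)
x = np.asarray(img)
print(x.shape)               # (480, 640, 3)

# swapping height and width gives the shape the tflite input details ask for
x_t = np.transpose(x, (1, 0, 2))
print(x_t.shape)             # (640, 480, 3)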

This is the code that runs the original model, mymodel.h5:


import numpy as np
from tensorflow import keras
from PIL import Image
from numpy import asarray
import glob

model_path = 'mymodel.h5'
datadir = '/datapath/'

# load model
model = keras.models.load_model(model_path)
print('model loaded!')

fnames = glob.glob(datadir+'*.jpg')

i=0
for fn in fnames:
    path = fn
    print(path)
    img = Image.open(path)
    print(img.size)

    x = asarray(img)
    print(x.shape)

    x = np.expand_dims(x, axis=0)
    print(x.shape)

    pred = model.predict(x)
    print(i,pred)
    i = i + 1
It runs fine; this is the output:

model loaded!
(640, 480)
(480, 640, 3)
image expanded shape:
(1, 480, 640, 3)
/frames738.jpg
(640, 480)
(480, 640, 3)
(1, 480, 640, 3)
0 [[0. 1. 0. 0. 0.]]
1
/frames794.jpg
(640, 480)
(480, 640, 3)
(1, 480, 640, 3)
1 [[1. 0. 0. 0. 0.]]
2
/frames650.jpg
(640, 480)
(480, 640, 3)
(1, 480, 640, 3)
2 [[0. 1. 0. 0. 0.]]

...

/frames791.jpg
(640, 480)
(480, 640, 3)
(1, 480, 640, 3)
21 [[0. 0. 0. 0. 1.]]
22
...
Maybe the predict() function accounts for the image-vs-array dimension convention in Keras, but the Interpreter from tflite_runtime that I am using does not? Does anyone know what is going on?
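
For reference, this is the kind of side-by-side check I have in mind (a minimal sketch; it assumes both model files are in the working directory):

import numpy as np
import tensorflow as tf
from tensorflow import keras

# what the original Keras model reports as its expected input
keras_model = keras.models.load_model('mymodel.h5')
print('keras  expects:', keras_model.input_shape)

# what the converted model reports (in my case [1, 640, 480, 3] and float32)
interpreter = tf.lite.Interpreter(model_path='mymodel.tflite')
interpreter.allocate_tensors()
detail = interpreter.get_input_details()[0]
print('tflite expects:', detail['shape'], detail['dtype'])

# note: set_tensor() also checks the dtype, so the uint8 array from asarray(img)
# may additionally need a cast, e.g. np.expand_dims(x, axis=0).astype(np.float32)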

Any pointers/advice would be greatly appreciated.