Python ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 4 for input 0


I am new to TF and Keras. I trained and saved a model with the following code:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop

train_data_gen = ImageDataGenerator(rescale=1 / 255)
validation_data_gen = ImageDataGenerator(rescale=1 / 255)

# Flow training images in batches of 120 using train_data_gen generator
train_generator = train_data_gen.flow_from_directory(
    'datasets/train/',
    classes=['bad', 'good'],
    target_size=(200, 200),
    batch_size=120,
    class_mode='binary')

validation_generator = validation_data_gen.flow_from_directory(
    'datasets/valid/',
    classes=['bad', 'good'],
    target_size=(200, 200),
    batch_size=19,
    class_mode='binary',
    shuffle=False)

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(200, 200, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),

    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),

    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),

    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),

    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    # Flatten the results to feed into a DNN
    tf.keras.layers.Flatten(),
    # 512 neuron hidden layer
    tf.keras.layers.Dense(512, activation='relu'),
    # Only 1 output neuron. It will contain a value from 0-1
    # where 0 for 1 class ('bad') and 1 for the other ('good')
    tf.keras.layers.Dense(1, activation='sigmoid')])

model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=0.001),
              metrics=['accuracy'])

model.fit(train_generator,
          steps_per_epoch=10,
          epochs=25,
          verbose=1,
          validation_data=validation_generator,
          validation_steps=8)

print("Evaluating the model :")
model.evaluate(validation_generator)

print("Predicting :")

validation_generator.reset()
predictions = model.predict(validation_generator, verbose=1)
print(predictions)

model.save("models/saved")
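
As a quick sanity check, the saved Keras model can be reloaded to confirm that its input already carries a dynamic batch dimension. A minimal sketch, assuming the model was saved to models/saved as above:

import tensorflow as tf

reloaded = tf.keras.models.load_model("models/saved")
# Prints (None, 200, 200, 3); the leading None is the batch dimension,
# which becomes the fixed 1 seen later in the TFLite input details.
print(reloaded.input_shape)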
I then converted the saved model to TFLite using:

import tensorflow as tf


def saved_model_to_tflite(model_path, quantize):
    converter = tf.lite.TFLiteConverter.from_saved_model(model_path)
    model_saving_path = "models/converted/model.tflite"
    if quantize:
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        model_saving_path = "models/converted/model-quantized.tflite"
    tflite_model = converter.convert()
    with open(model_saving_path, 'wb') as f:
        f.write(tflite_model)
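
For reference, a minimal call to this helper (assuming the SavedModel directory models/saved from the training script, and that models/converted/ already exists) could look like:

if __name__ == '__main__':
    # Convert the plain model; pass quantize=True to write model-quantized.tflite instead.
    saved_model_to_tflite("models/saved", quantize=False)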
Then I tested the converted model on a single image with:

import tensorflow as tf


def run_tflite_model(tflite_file, test_image):

    interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
    interpreter.allocate_tensors()
    print(interpreter.get_input_details())
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]

    interpreter.set_tensor(input_details["index"], test_image)
    interpreter.invoke()
    output = interpreter.get_tensor(output_details["index"])[0]

    prediction = output.argmax()

    return prediction
main.py

from skimage import io
from skimage.transform import resize

if __name__ == '__main__':


    converted_model = "models/converted/model.tflite"
    bad_image_path = "datasets/experiment/bad/b.png"
    good_image_path = "datasets/experiment/good/g.png"
    img = io.imread(bad_image_path)
    resized = resize(img, (200, 200)).astype('float32')
    prediction = run_tflite_model(converted_model, resized)
    print(prediction)
However, even though I resize the image to 200 by 200, I keep getting:

ValueError: Cannot set tensor: Dimension mismatch. Got 3 but expected 4 for input 0.
If I print interpreter.get_input_details(), I get:

[{'name': 'serving_default_conv2d_input:0', 'index': 0, 'shape': array([  1, 200, 200,   3], dtype=int32), 'shape_signature': array([ -1, 200, 200,   3], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
So the input shape seems to be 'shape': array([1, 200, 200, 3]). I do understand the 200, 200, 3 part, but what is the leading 1 (which seems to be the batch size) based on?


How can I remove the batch size from the input shape?

Instead of removing the batch size from the graph, you can expand the dimensions of the test image with expand_dims:

test_image = np.expand_dims(test_image, axis=0)
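
Applied to main.py from the question, a minimal sketch that reuses the paths and helpers defined above would be:

import numpy as np

img = io.imread(bad_image_path)
resized = resize(img, (200, 200)).astype('float32')
# Add the batch dimension the interpreter expects: (200, 200, 3) -> (1, 200, 200, 3)
batched = np.expand_dims(resized, axis=0)
prediction = run_tflite_model(converted_model, batched)
print(prediction)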

For Android, you can easily prepare a float[1][32][32][3] input array from a float[32][32][3] input array by copying the values over in a loop.

While your suggestion works, I always get 0 as the prediction, even when the image (bad, good) changes. Why is that? Another problem I may face is that there is no numpy available when feeding the image to the model on Android.

For Android you can feed the same float array flattened, because a flattened [1, 200, 200, 3] array is identical to a flattened [200, 200, 3] array. As for the image classification result, it would be better to post that separately to keep this post focused.

As you asked, I have posted a separate question.