Python: can't correctly feed a batch of images to model.fit

My model is designed to train on pairs of images. Since the dataset is very large, I fetch it in batches with the tf.data.Dataset API, as recommended. However, I'm having trouble feeding a batch of images in for training correctly. I found some possible solutions, but none of them worked. Still, after these modifications:

import numpy as np
import tensorflow as tf  # TF 1.x API (make_one_shot_iterator, tf.Session)

# Pair inputs with labels and drain the whole dataset into memory, batch by batch.
ds_train = tf.data.Dataset.zip((tr_inputs, tr_labels)).batch(64)
iterator = ds_train.make_one_shot_iterator()
next_batch = iterator.get_next()
result = list()
with tf.Session() as sess:
    try:
        while True:
            result.append(sess.run(next_batch))
    except tf.errors.OutOfRangeError:
        pass
train_examples = np.array(list(zip(*result))[0])        # tr_examples[0][0].shape (64, 224, 224, 3)
val_examples = np.array(list(zip(*val_result))[0])      # val_examples[0][0].shape (64, 224, 224, 3)
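As an aside, the zip(*result) idiom above transposes a list of (inputs, labels) batch tuples into separate per-field sequences. A minimal NumPy sketch, with small made-up shapes standing in for the real (64, 224, 224, 3) image batches:

```python
import numpy as np

# Pretend we drained two batches of (inputs, labels) from the iterator;
# the shapes are tiny stand-ins for the real image batches.
result = [(np.zeros((4, 2, 2, 3)), np.zeros((4,))),
          (np.ones((4, 2, 2, 3)), np.ones((4,)))]

# zip(*result) transposes the list of pairs: element 0 collects all the
# input batches, element 1 collects all the label batches.
inputs, labels = zip(*result)
train_examples = np.array(inputs)    # shape (2, 4, 2, 2, 3): 2 batches of 4
train_labels = np.array(labels)      # shape (2, 4)
```

So train_examples[0] is one full batch of inputs, which is why the snippet above indexes with [0] before feeding fit.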
The training snippet looks like this:

hist = base_model.fit((tr_examples[0][0], tr_examples[0][1]), epochs=epochs,  verbose=1,
                       validation_data=(val_examples[0][0], val_examples[0][1]), shuffle=True)
And the error traceback:

Traceback (most recent call last):
  File "/home/user/00_files/project/DOUBLE_INPUT/dual_input.py", line 177, in <module>
    validation_data=(val_examples[0][0], val_examples[0][1]), shuffle=True)
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training.py", line 955, in fit
    batch_size=batch_size)
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training.py", line 754, in _standardize_user_data
    exception_prefix='input')
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training_utils.py", line 90, in standardize_input_data
    data = [standardize_single_array(x) for x in data]
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training_utils.py", line 90, in <listcomp>
    data = [standardize_single_array(x) for x in data]
  File "/home/user/.local/lib/python3.5/site-packages/keras/engine/training_utils.py", line 25, in standardize_single_array
    elif x.ndim == 1:
AttributeError: 'tuple' object has no attribute 'ndim'
It should be:

hist = base_model.fit(tr_examples[0][0], tr_examples[0][1], epochs=epochs,  verbose=1,
                       validation_data=(val_examples[0][0], val_examples[0][1]), shuffle=True)

Note that the validation_data argument expects a tuple, but the training inputs/labels pair should not be passed as a tuple (i.e., remove the parentheses).
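The traceback makes sense in that light: standardize_single_array checks x.ndim, an attribute that NumPy arrays have but tuples do not. A minimal reproduction of the attribute difference, using plain NumPy with small stand-in shapes (no Keras needed):

```python
import numpy as np

# Stand-ins for one batch of inputs and labels.
x = np.zeros((4, 8, 8, 3))
y = np.zeros((4,))

# Passed separately, each argument is an ndarray with an .ndim attribute,
# which is what Keras's input standardization expects.
print(x.ndim)  # 4

# Wrapped in a tuple, the object has no .ndim at all -- exactly what the
# "'tuple' object has no attribute 'ndim'" error complains about.
pair = (x, y)
print(hasattr(pair, "ndim"))  # False
```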

@bit_… if that was you, you shouldn't vote on an answer that doesn't work… validation_data doesn't necessarily need a tuple.
@NicolasGervais it wasn't me :) Why convert a huge dataset into arrays just to pass it through the network? My advice in the previous post was the opposite of that. Above all, can you upgrade Python and TensorFlow? Fewer and fewer people are familiar with TF 1.x.
As I mentioned in that post, I have them on both my local machine and the remote server, so feel free to share your insights for TF 2.x.