Python: Accuracy drops when using ImageDataGenerator with TensorFlow Keras

I already posted about this here, but the answers didn't help much, probably because I didn't phrase the question properly. I understand the problem better now, but I still can't find a solution.

I am trying to build a convolutional neural network in TensorFlow Keras to make predictions on the CIFAR100 dataset. I managed to get satisfactory results, around 60% validation accuracy, and wanted to add image augmentation.

I noticed a significant drop in accuracy, so I decided not to augment anything and instead check whether the results would stay the same if I fed the model through an ImageDataGenerator without any augmentation.

The accuracy drop persisted. I tried to inspect the way ImageDataGenerator passes the data, but everything seems fine, and the labels of the images appear correct as well. I also noticed that the loss still decreases, much like it does when I don't use ImageDataGenerator.
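Roughly, that spot check looked like the following minimal sketch (a reconstruction, not the exact notebook code): pull one unshuffled batch out of an empty generator and compare it with the raw arrays.

import numpy as np
from tensorflow.keras.datasets import cifar100
from tensorflow.keras.preprocessing.image import ImageDataGenerator

(train_images, train_labels), _ = cifar100.load_data()

# Empty generator: no augmentation, so batches should match the raw data
datagen = ImageDataGenerator()

# shuffle=False so the batch order matches the order of train_images
batch_images, batch_labels = next(
    datagen.flow(train_images, train_labels, batch_size=128, shuffle=False)
)

print(batch_images.shape, batch_labels.shape)            # e.g. (128, 32, 32, 3) (128, 1)
print(np.allclose(batch_images, train_images[:128]))     # pixels unchanged?
print(np.array_equal(batch_labels, train_labels[:128]))  # labels unchanged?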

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Deliberately empty: no augmentation is configured
datagen = ImageDataGenerator()

datagen.fit(train_images)

# Note: this shuffle=False is passed to model.fit, which ignores its shuffle
# argument when the input is a generator; shuffling has to be requested from
# datagen.flow itself
history = model.fit(
    datagen.flow(train_images, train_labels, batch_size=128),
    shuffle=False,
    epochs=250,
    validation_data=(test_images, test_labels),
    callbacks=[callbacks.EarlyStopping(patience=10)],
)

# Without ImageDataGenerator
# history = model.fit(train_images, train_labels, batch_size=128, epochs=250, validation_data=(test_images, test_labels), callbacks=[callbacks.EarlyStopping(patience=10)])
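For reference, the augmentation I originally tried would go into this same constructor. The parameters below are only illustrative; the exact transforms are not shown here:

# Illustrative parameters only; the exact transforms tried are not shown above
datagen = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
)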
I don't think the architecture matters, since the problem lies with ImageDataGenerator, but in case anyone wants to check the code and output, here is the Google Colab notebook link:

I really don't know what to do anymore.

Edit: the accuracy drops from ~0.6 without ImageDataGenerator to ~0.01 with it.

I have been asked to provide everything in the question itself, so here are the results:

Results produced with the empty ImageDataGenerator (interrupted since there was little progress):

So, to clarify: there is no data augmentation going on, and the ImageDataGenerator is empty.

I tried setting the shuffle parameter of datagen.flow to False, but it made almost no difference to the results.

>Downloading data from https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
>169009152/169001437 [==============================] - 2s 0us/step
>Epoch 1/250
>  2/390 [..............................] - ETA: 11s - loss: 5.3008 - accuracy: 0.0000e+00WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0238s vs `on_train_batch_end` time: 0.0366s). Check your callbacks.
>390/390 [==============================] - 29s 74ms/step - loss: 4.5465 - accuracy: 0.0041 - val_loss: 4.6752 - val_accuracy: 0.0000e+00
>Epoch 2/250
>390/390 [==============================] - 28s 71ms/step - loss: 4.1575 - accuracy: 0.0067 - val_loss: 4.5212 - val_accuracy: 0.0019
>Epoch 3/250
>390/390 [==============================] - 27s 70ms/step - loss: 3.9204 - accuracy: 0.0115 - val_loss: 4.3019 - val_accuracy: 0.0034
>Epoch 4/250
>390/390 [==============================] - 27s 70ms/step - loss: 3.6618 - accuracy: 0.0180 - val_loss: 3.8335 - val_accuracy: 0.0383
>Epoch 5/250
>390/390 [==============================] - 27s 70ms/step - loss: 3.3415 - accuracy: 0.0174 - val_loss: 3.3168 - val_accuracy: 0.0369
>Epoch 6/250
>390/390 [==============================] - 28s 71ms/step - loss: 3.0612 - accuracy: 0.0132 - val_loss: 3.3109 - val_accuracy: 0.0076
>Epoch 7/250
>390/390 [==============================] - 27s 70ms/step - loss: 2.8365 - accuracy: 0.0121 - val_loss: 3.0244 - val_accuracy: 0.0249
>Epoch 8/250
>390/390 [==============================] - 27s 70ms/step - loss: 2.6400 - accuracy: 0.0120 - val_loss: 2.7754 - val_accuracy: 0.0232
>Epoch 9/250
>390/390 [==============================] - 28s 71ms/step - loss: 2.4838 - accuracy: 0.0110 - val_loss: 2.7786 - val_accuracy: 0.0085
>Epoch 10/250
>390/390 [==============================] - 28s 71ms/step - loss: 2.3305 - accuracy: 0.0102 - val_loss: 2.2827 - val_accuracy: 0.0191
>Epoch 11/250
>390/390 [==============================] - 28s 71ms/step - loss: 2.1901 - accuracy: 0.0107 - val_loss: 2.2275 - val_accuracy: 0.0089
>Epoch 12/250
>390/390 [==============================] - 28s 71ms/step - loss: 2.0822 - accuracy: 0.0104 - val_loss: 2.1312 - val_accuracy: 0.0197
>Epoch 13/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.9752 - accuracy: 0.0106 - val_loss: 2.2580 - val_accuracy: 0.0253
>Epoch 14/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.8751 - accuracy: 0.0105 - val_loss: 1.9996 - val_accuracy: 0.0122
>Epoch 15/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.7874 - accuracy: 0.0103 - val_loss: 2.0046 - val_accuracy: 0.0085
>Epoch 16/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.7062 - accuracy: 0.0099 - val_loss: 1.9315 - val_accuracy: 0.0140
>Epoch 17/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.6240 - accuracy: 0.0102 - val_loss: 1.8867 - val_accuracy: 0.0079
>Epoch 18/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.5656 - accuracy: 0.0099 - val_loss: 1.8539 - val_accuracy: 0.0117
>Epoch 19/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.4992 - accuracy: 0.0101 - val_loss: 1.8715 - val_accuracy: 0.0124
>Epoch 20/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.4285 - accuracy: 0.0102 - val_loss: 1.7864 - val_accuracy: 0.0092
>Epoch 21/250
>390/390 [==============================] - 27s 70ms/step - loss: 1.3764 - accuracy: 0.0100 - val_loss: 1.8202 - val_accuracy: 0.0119
>Epoch 22/250
>159/390 [===========>..................] - ETA: 14s - loss: 1.2974 - accuracy: 0.0106
>---------------------------------------------------------------------------
>KeyboardInterrupt
Results without ImageDataGenerator:

>Epoch 1/250
>  2/391 [..............................] - ETA: 17s - loss: 5.4772 - accuracy: 0.0078WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0320s vs `on_train_batch_end` time: 0.0579s). Check your callbacks.
>391/391 [==============================] - 26s 67ms/step - loss: 4.5878 - accuracy: 0.0207 - val_loss: 4.7042 - val_accuracy: 0.0134
>Epoch 2/250
>391/391 [==============================] - 26s 67ms/step - loss: 4.2055 - accuracy: 0.0522 - val_loss: 4.2270 - val_accuracy: 0.0538
>Epoch 3/250
>391/391 [==============================] - 26s 66ms/step - loss: 3.8648 - accuracy: 0.0883 - val_loss: 4.1179 - val_accuracy: 0.0814
>Epoch 4/250
>391/391 [==============================] - 26s 66ms/step - loss: 3.5519 - accuracy: 0.1421 - val_loss: 3.8452 - val_accuracy: 0.1325
>Epoch 5/250
>391/391 [==============================] - 26s 66ms/step - loss: 3.2509 - accuracy: 0.1952 - val_loss: 3.3625 - val_accuracy: 0.1882
>Epoch 6/250
>391/391 [==============================] - 26s 66ms/step - loss: 2.9928 - accuracy: 0.2408 - val_loss: 3.2708 - val_accuracy: 0.2161
>Epoch 7/250
>391/391 [==============================] - 26s 66ms/step - loss: 2.7977 - accuracy: 0.2809 - val_loss: 2.7619 - val_accuracy: 0.3035
>Epoch 8/250
>391/391 [==============================] - 26s 66ms/step - loss: 2.6131 - accuracy: 0.3187 - val_loss: 2.5414 - val_accuracy: 0.3501
>Epoch 9/250
>391/391 [==============================] - 26s 66ms/step - loss: 2.4598 - accuracy: 0.3517 - val_loss: 2.7046 - val_accuracy: 0.3255
>Epoch 10/250
>391/391 [==============================] - 26s 66ms/step - loss: 2.3132 - accuracy: 0.3882 - val_loss: 2.2640 - val_accuracy: 0.4070
>Epoch 11/250
>391/391 [==============================] - 26s 66ms/step - loss: 2.1848 - accuracy: 0.4189 - val_loss: 2.1943 - val_accuracy: 0.4327
>Epoch 12/250
>391/391 [==============================] - 26s 66ms/step - loss: 2.0751 - accuracy: 0.4445 - val_loss: 2.2010 - val_accuracy: 0.4361
>Epoch 13/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.9770 - accuracy: 0.4687 - val_loss: 2.1503 - val_accuracy: 0.4551
>Epoch 14/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.8800 - accuracy: 0.4931 - val_loss: 2.1343 - val_accuracy: 0.4603
>Epoch 15/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.7966 - accuracy: 0.5125 - val_loss: 2.0326 - val_accuracy: 0.4885
>Epoch 16/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.7115 - accuracy: 0.5345 - val_loss: 2.0095 - val_accuracy: 0.4921
>Epoch 17/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.6370 - accuracy: 0.5557 - val_loss: 1.9143 - val_accuracy: 0.5168
>Epoch 18/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.5570 - accuracy: 0.5735 - val_loss: 1.8116 - val_accuracy: 0.5317
>Epoch 19/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.5038 - accuracy: 0.5871 - val_loss: 1.7452 - val_accuracy: 0.5520
>Epoch 20/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.4433 - accuracy: 0.6041 - val_loss: 1.8036 - val_accuracy: 0.5433
>Epoch 21/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.3753 - accuracy: 0.6204 - val_loss: 1.8993 - val_accuracy: 0.5321
>Epoch 22/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.3242 - accuracy: 0.6343 - val_loss: 1.9099 - val_accuracy: 0.5382
>Epoch 23/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.2704 - accuracy: 0.6474 - val_loss: 1.7647 - val_accuracy: 0.5667
>Epoch 24/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.2367 - accuracy: 0.6576 - val_loss: 1.7773 - val_accuracy: 0.5657
>Epoch 25/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.1795 - accuracy: 0.6715 - val_loss: 1.7160 - val_accuracy: 0.5766
>Epoch 26/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.1373 - accuracy: 0.6827 - val_loss: 1.7304 - val_accuracy: 0.5774
>Epoch 27/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.1082 - accuracy: 0.6923 - val_loss: 1.9430 - val_accuracy: 0.5465
>Epoch 28/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.0601 - accuracy: 0.7011 - val_loss: 1.8539 - val_accuracy: 0.5669
>Epoch 29/250
>391/391 [==============================] - 26s 66ms/step - loss: 1.0185 - accuracy: 0.7152 - val_loss: 1.7887 - val_accuracy: 0.5778
>Epoch 30/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.9888 - accuracy: 0.7230 - val_loss: 1.7522 - val_accuracy: 0.5884
>Epoch 31/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.9584 - accuracy: 0.7310 - val_loss: 1.7597 - val_accuracy: 0.5903
>Epoch 32/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.9328 - accuracy: 0.7392 - val_loss: 1.7132 - val_accuracy: 0.5991
>Epoch 33/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.8958 - accuracy: 0.7499 - val_loss: 1.7338 - val_accuracy: 0.6036
>Epoch 34/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.8724 - accuracy: 0.7571 - val_loss: 1.7104 - val_accuracy: 0.6079
>Epoch 35/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.8450 - accuracy: 0.7624 - val_loss: 1.7668 - val_accuracy: 0.6038
>Epoch 36/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.8050 - accuracy: 0.7744 - val_loss: 1.9853 - val_accuracy: 0.5697
>Epoch 37/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.8056 - accuracy: 0.7736 - val_loss: 1.8849 - val_accuracy: 0.5859
>Epoch 38/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.7700 - accuracy: 0.7839 - val_loss: 1.8189 - val_accuracy: 0.6049
>Epoch 39/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.7545 - accuracy: 0.7874 - val_loss: 1.8237 - val_accuracy: 0.5989
>Epoch 40/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.7337 - accuracy: 0.7918 - val_loss: 1.8901 - val_accuracy: 0.5918
>Epoch 41/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.7108 - accuracy: 0.8002 - val_loss: 1.8254 - val_accuracy: 0.6090
>Epoch 42/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.6897 - accuracy: 0.8039 - val_loss: 1.8526 - val_accuracy: 0.6094
>Epoch 43/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.6723 - accuracy: 0.8099 - val_loss: 1.9535 - val_accuracy: 0.5924
>Epoch 44/250
>391/391 [==============================] - 26s 66ms/step - loss: 0.6665 - accuracy: 0.8138 - val_loss: 1.8447 - val_accuracy: 0.6037
>313/313 - 3s - loss: 1.8447 - accuracy: 0.6037
>0.6036999821662903
Edit 2:
I know my architecture is flawed, and any help with it is appreciated, but I would ask you to help with the ImageDataGenerator problem; I think all the relevant information has now been provided.

The only problem I see is that you have shuffling turned off on the generator. Everything else is fine; nothing is wrong with the images or the labels. Note that by default model.fit will shuffle the input data, but when using a generator you have to configure the generator to do the shuffling. So when you were not providing a generator, the input data was being shuffled, and when you used the generator, it was not. This issue seems similar to the one encountered in the discussion that addresses the differences between fit and the now-deprecated fit_generator method.
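As a minimal sketch of that fix (reusing the model, data arrays, and callbacks from the question), the shuffling is requested from flow itself rather than from model.fit:

from tensorflow.keras import callbacks
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator()  # still no augmentation
datagen.fit(train_images)       # only needed for featurewise statistics; harmless here

history = model.fit(
    # flow's shuffle argument actually defaults to True; it is written out
    # here only for clarity
    datagen.flow(train_images, train_labels, batch_size=128, shuffle=True),
    epochs=250,
    validation_data=(test_images, test_labels),
    callbacks=[callbacks.EarlyStopping(patience=10)],
)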

Here is a notebook where I reworked your code:

When the generator is configured to shuffle, I get the same reasonable convergence as when the images are supplied directly.

(Plots: with generator / without generator / comparison)

How big is the accuracy drop you are talking about? There is no information about it in your question. Also, the Colab link is not helpful; all the information should be in the text of your question. I'll give you a spoiler: the architecture does matter, because there is a big problem in your network that keeps it from working properly.

I have now provided the results in the post. As for the architecture, as I said, it has some issues.

Of course the architecture matters for the overall performance of the network, but in this case the only difference is the use of ImageDataGenerator. Without the generator, the network performs as expected, so I believe the generator is the problem. Apologies for any confusion.

Wow, thank you for the comprehensive answer. Unfortunately, the only reason I turned shuffle off is that it didn't work with shuffle set to True either, so I tried turning it off to see whether it would make any difference.

So I took another look at your original code, and, as you said, turning shuffle back on did not do the trick. I also did one thing I initially did not expect to matter, and it seems to be the minimal edit to your own code: reshaping the labels array so that it is 1D instead of 2D, i.e. tf.reshape(train_labels, (-1)) (sketched below). Could you try it and see whether the result is reproducible? I wonder what that line was for.

No, it looks like it still doesn't work. Still, you are a god; no one has ever put this much effort into helping me before. Thank you so much.

Actually, I made a mistake, and your fix does work. I cannot thank you enough, you are amazing. But how did you come up with it, and why is it needed when using the generator? It looks like the network learns more slowly and converges to a lower accuracy, but it is definitely better than before.
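For reference, the label-flattening edit from that comment amounts to the following sketch; the NumPy equivalent works just as well:

import tensorflow as tf

# CIFAR-100 labels load with shape (50000, 1); flatten them to (50000,)
# before handing them to the generator
train_labels = tf.reshape(train_labels, (-1,))
# NumPy equivalent: train_labels = train_labels.reshape(-1)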
Epoch 1/10
391/391 [==============================] - 27s 69ms/step - loss: 4.6436 - accuracy: 0.0140 - val_loss: 4.6110 - val_accuracy: 0.0084
Epoch 2/10
391/391 [==============================] - 26s 67ms/step - loss: 4.4306 - accuracy: 0.0373 - val_loss: 4.5542 - val_accuracy: 0.0237
Epoch 3/10
391/391 [==============================] - 26s 67ms/step - loss: 4.2667 - accuracy: 0.0587 - val_loss: 4.2758 - val_accuracy: 0.0590
Epoch 4/10
391/391 [==============================] - 26s 67ms/step - loss: 4.1242 - accuracy: 0.0798 - val_loss: 4.1708 - val_accuracy: 0.0725
Epoch 5/10
391/391 [==============================] - 26s 68ms/step - loss: 4.0096 - accuracy: 0.0980 - val_loss: 3.9277 - val_accuracy: 0.1188
Epoch 6/10
391/391 [==============================] - 26s 68ms/step - loss: 3.8621 - accuracy: 0.1288 - val_loss: 4.5334 - val_accuracy: 0.0659
Epoch 7/10
391/391 [==============================] - 26s 68ms/step - loss: 3.7434 - accuracy: 0.1460 - val_loss: 4.5092 - val_accuracy: 0.0835
Epoch 8/10
391/391 [==============================] - 27s 68ms/step - loss: 3.5886 - accuracy: 0.1769 - val_loss: 3.8606 - val_accuracy: 0.1486
Epoch 9/10
391/391 [==============================] - 27s 69ms/step - loss: 3.4937 - accuracy: 0.1975 - val_loss: 3.9907 - val_accuracy: 0.1236
Epoch 10/10
391/391 [==============================] - 27s 70ms/step - loss: 3.3655 - accuracy: 0.2190 - val_loss: 3.4287 - val_accuracy: 0.2202

Epoch 1/10
391/391 [==============================] - 27s 69ms/step - loss: 4.6348 - accuracy: 0.0154 - val_loss: 4.5947 - val_accuracy: 0.0152
Epoch 2/10
391/391 [==============================] - 27s 68ms/step - loss: 4.4097 - accuracy: 0.0398 - val_loss: 4.4070 - val_accuracy: 0.0419
Epoch 3/10
391/391 [==============================] - 26s 68ms/step - loss: 4.2278 - accuracy: 0.0643 - val_loss: 4.4637 - val_accuracy: 0.0481
Epoch 4/10
391/391 [==============================] - 26s 68ms/step - loss: 4.0906 - accuracy: 0.0842 - val_loss: 4.3163 - val_accuracy: 0.0625
Epoch 5/10
391/391 [==============================] - 27s 68ms/step - loss: 3.9589 - accuracy: 0.1033 - val_loss: 4.3802 - val_accuracy: 0.0693
Epoch 6/10
391/391 [==============================] - 27s 68ms/step - loss: 3.8119 - accuracy: 0.1318 - val_loss: 3.8241 - val_accuracy: 0.1345
Epoch 7/10
391/391 [==============================] - 26s 68ms/step - loss: 3.7324 - accuracy: 0.1447 - val_loss: 3.6602 - val_accuracy: 0.1598
Epoch 8/10
391/391 [==============================] - 27s 68ms/step - loss: 3.6160 - accuracy: 0.1669 - val_loss: 3.6975 - val_accuracy: 0.1573
Epoch 9/10
391/391 [==============================] - 26s 68ms/step - loss: 3.4929 - accuracy: 0.1893 - val_loss: 3.5784 - val_accuracy: 0.1956
Epoch 10/10
391/391 [==============================] - 26s 68ms/step - loss: 3.4052 - accuracy: 0.2061 - val_loss: 3.3669 - val_accuracy: 0.2298