Python: Validation accuracy starts dropping after a certain point, even with regularization in Keras

I built a model to classify images into 10 classes. I am using a small dataset (around 600 images per class). Validation accuracy won't go above 60%. I have already tried regularization and dropout layers, but still can't improve it. I tried transfer learning with TensorFlow for Poets 2 and got a final accuracy of 75%, so I don't think the problem is the dataset. I have also tried the solutions from similar questions (adding regularization, dropout, changing softmax to sigmoid), but none of it seems to work.

P.S.: I am a beginner at deep learning.

My model:

    import tensorflow as tf
    from tensorflow.keras import regularizers
    from tensorflow.keras.layers import (Activation, Conv2D, Dense, Flatten,
                                         MaxPooling2D)

    model = tf.keras.models.Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=X.shape[1:]))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(32, (3, 3)))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(64, (3, 3)))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(64, kernel_regularizer=regularizers.l2(0.01)))
    model.add(Activation("relu"))

    model.add(Dense(10, kernel_regularizer=regularizers.l2(0.01)))
    model.add(Activation("softmax"))

    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    model.fit(X, Y, batch_size=16, validation_split=0.1, epochs=50, verbose=2)
    model.save('tm1.h5')
Output:

Train on 5877 samples, validate on 654 samples
Epoch 1/50
5877/5877 - 38s - loss: 2.2745 - accuracy: 0.2277 - val_loss: 2.0920 - val_accuracy: 0.2477
Epoch 2/50
5877/5877 - 17s - loss: 1.9706 - accuracy: 0.3362 - val_loss: 1.9955 - val_accuracy: 0.3318
Epoch 3/50
5877/5877 - 18s - loss: 1.8413 - accuracy: 0.4056 - val_loss: 1.9985 - val_accuracy: 0.3180
Epoch 4/50
5877/5877 - 17s - loss: 1.7733 - accuracy: 0.4514 - val_loss: 1.7391 - val_accuracy: 0.4526
Epoch 5/50
5877/5877 - 14s - loss: 1.6771 - accuracy: 0.4943 - val_loss: 1.7292 - val_accuracy: 0.4297
Epoch 6/50
5877/5877 - 14s - loss: 1.6172 - accuracy: 0.5159 - val_loss: 1.6708 - val_accuracy: 0.5076
Epoch 7/50
5877/5877 - 14s - loss: 1.5484 - accuracy: 0.5455 - val_loss: 1.6793 - val_accuracy: 0.4878
Epoch 8/50
5877/5877 - 13s - loss: 1.4945 - accuracy: 0.5642 - val_loss: 1.5690 - val_accuracy: 0.5535
Epoch 9/50
5877/5877 - 13s - loss: 1.4465 - accuracy: 0.5955 - val_loss: 1.5932 - val_accuracy: 0.5520
Epoch 10/50
5877/5877 - 12s - loss: 1.4056 - accuracy: 0.6149 - val_loss: 1.5437 - val_accuracy: 0.5673
Epoch 11/50
5877/5877 - 12s - loss: 1.3573 - accuracy: 0.6362 - val_loss: 1.5647 - val_accuracy: 0.5810
Epoch 12/50
5877/5877 - 12s - loss: 1.3086 - accuracy: 0.6701 - val_loss: 1.5582 - val_accuracy: 0.5933
Epoch 13/50
5877/5877 - 13s - loss: 1.2784 - accuracy: 0.6828 - val_loss: 1.5995 - val_accuracy: 0.5749
Epoch 14/50
5877/5877 - 13s - loss: 1.2406 - accuracy: 0.7019 - val_loss: 1.6150 - val_accuracy: 0.6131
Epoch 15/50
5877/5877 - 15s - loss: 1.1769 - accuracy: 0.7351 - val_loss: 1.7797 - val_accuracy: 0.5382
Epoch 16/50
5877/5877 - 14s - loss: 1.1676 - accuracy: 0.7422 - val_loss: 1.8158 - val_accuracy: 0.5642
Epoch 17/50
5877/5877 - 13s - loss: 1.1088 - accuracy: 0.7708 - val_loss: 1.7937 - val_accuracy: 0.5765
Epoch 18/50
5877/5877 - 15s - loss: 1.0763 - accuracy: 0.7885 - val_loss: 1.9044 - val_accuracy: 0.5612
Epoch 19/50
5877/5877 - 19s - loss: 1.0481 - accuracy: 0.8007 - val_loss: 1.8861 - val_accuracy: 0.5795
Epoch 20/50
5877/5877 - 14s - loss: 0.9871 - accuracy: 0.8222 - val_loss: 2.0031 - val_accuracy: 0.5765
Epoch 21/50
5877/5877 - 13s - loss: 0.9629 - accuracy: 0.8356 - val_loss: 2.0946 - val_accuracy: 0.5688
Epoch 22/50
5877/5877 - 15s - loss: 0.9392 - accuracy: 0.8455 - val_loss: 2.0742 - val_accuracy: 0.5795
Epoch 23/50
5877/5877 - 15s - loss: 0.9087 - accuracy: 0.8603 - val_loss: 2.1889 - val_accuracy: 0.5642
Epoch 24/50
5877/5877 - 16s - loss: 0.9055 - accuracy: 0.8583 - val_loss: 2.4053 - val_accuracy: 0.5489
Epoch 25/50
5877/5877 - 14s - loss: 0.8826 - accuracy: 0.8663 - val_loss: 2.3087 - val_accuracy: 0.5398
Epoch 26/50
5877/5877 - 14s - loss: 0.8849 - accuracy: 0.8724 - val_loss: 2.4014 - val_accuracy: 0.5428
Epoch 27/50
5877/5877 - 17s - loss: 0.8603 - accuracy: 0.8758 - val_loss: 2.3956 - val_accuracy: 0.5566
Epoch 28/50
5877/5877 - 15s - loss: 0.8523 - accuracy: 0.8770 - val_loss: 2.3809 - val_accuracy: 0.5520
Epoch 29/50
5877/5877 - 14s - loss: 0.8500 - accuracy: 0.8846 - val_loss: 2.5112 - val_accuracy: 0.5505
Epoch 30/50
5877/5877 - 12s - loss: 0.8411 - accuracy: 0.8863 - val_loss: 2.2699 - val_accuracy: 0.5459
Epoch 31/50
5877/5877 - 13s - loss: 0.8405 - accuracy: 0.8903 - val_loss: 2.4893 - val_accuracy: 0.5550
Epoch 32/50
5877/5877 - 13s - loss: 0.8420 - accuracy: 0.8926 - val_loss: 2.4964 - val_accuracy: 0.5489
Epoch 33/50
5877/5877 - 12s - loss: 0.8047 - accuracy: 0.8998 - val_loss: 2.6824 - val_accuracy: 0.5505
Epoch 34/50
5877/5877 - 15s - loss: 0.8118 - accuracy: 0.9028 - val_loss: 2.4617 - val_accuracy: 0.5535
Epoch 35/50
5877/5877 - 13s - loss: 0.8001 - accuracy: 0.9098 - val_loss: 2.2837 - val_accuracy: 0.5489
Epoch 36/50
5877/5877 - 15s - loss: 0.7888 - accuracy: 0.9030 - val_loss: 2.4703 - val_accuracy: 0.5703
Epoch 37/50
5877/5877 - 16s - loss: 0.7769 - accuracy: 0.9095 - val_loss: 2.4717 - val_accuracy: 0.5719
Epoch 38/50
5877/5877 - 15s - loss: 0.7812 - accuracy: 0.9057 - val_loss: 2.6211 - val_accuracy: 0.5443
Epoch 39/50
5877/5877 - 14s - loss: 0.7878 - accuracy: 0.9083 - val_loss: 2.5498 - val_accuracy: 0.5749
Epoch 40/50
5877/5877 - 14s - loss: 0.8238 - accuracy: 0.8977 - val_loss: 2.7981 - val_accuracy: 0.5398
Epoch 41/50
5877/5877 - 15s - loss: 0.7833 - accuracy: 0.9091 - val_loss: 2.6674 - val_accuracy: 0.5443
Epoch 42/50
5877/5877 - 13s - loss: 0.7170 - accuracy: 0.9309 - val_loss: 2.6951 - val_accuracy: 0.5703
Epoch 43/50
5877/5877 - 12s - loss: 0.7493 - accuracy: 0.9163 - val_loss: 2.4696 - val_accuracy: 0.5703
Epoch 44/50
5877/5877 - 13s - loss: 0.7903 - accuracy: 0.9056 - val_loss: 2.8673 - val_accuracy: 0.5336
Epoch 45/50
5877/5877 - 14s - loss: 0.7861 - accuracy: 0.9144 - val_loss: 2.6287 - val_accuracy: 0.5382
Epoch 46/50
5877/5877 - 16s - loss: 0.7284 - accuracy: 0.9248 - val_loss: 2.6651 - val_accuracy: 0.5367
Epoch 47/50
5877/5877 - 13s - loss: 0.7216 - accuracy: 0.9246 - val_loss: 2.5384 - val_accuracy: 0.5520
Epoch 48/50
5877/5877 - 13s - loss: 0.7890 - accuracy: 0.9044 - val_loss: 2.7023 - val_accuracy: 0.5398
Epoch 49/50
5877/5877 - 13s - loss: 0.7362 - accuracy: 0.9270 - val_loss: 2.9077 - val_accuracy: 0.5122
Epoch 50/50
5877/5877 - 13s - loss: 0.7080 - accuracy: 0.9309 - val_loss: 2.7464 - val_accuracy: 0.5627

This is a very clear sign of overfitting. That's the keyword to search for.

Whenever your training metrics keep improving while your validation metrics stall or get worse, you are dealing with overfitting.

Solutions for overfitting include:

  • More data
  • A simpler model (fewer parameters)
  • More representative data (yours looks reasonably representative)
  • Regularization and dropout (great job!)
I'd suggest starting with a simpler model: halve every layer size and see whether the problem gets better or worse.
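A "half every size" variant of the question's model could look like the sketch below. This is only an illustration of the suggestion: the `(64, 64, 3)` input shape is an assumption (the original code uses `X.shape[1:]`), and everything else mirrors the posted model with each width halved.

```python
# Hypothetical half-size variant of the question's model:
# conv filters 32/32/64 -> 16/16/32, dense units 64 -> 32.
import tensorflow as tf
from tensorflow.keras import regularizers
from tensorflow.keras.layers import (Activation, Conv2D, Dense, Flatten,
                                     MaxPooling2D)

def build_half_size_model(input_shape=(64, 64, 3)):  # input shape is assumed
    model = tf.keras.models.Sequential()
    model.add(tf.keras.Input(shape=input_shape))

    model.add(Conv2D(16, (3, 3)))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(16, (3, 3)))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Conv2D(32, (3, 3)))
    model.add(Activation("relu"))
    model.add(MaxPooling2D(pool_size=(2, 2)))

    model.add(Flatten())
    model.add(Dense(32, kernel_regularizer=regularizers.l2(0.01)))
    model.add(Activation("relu"))

    # Output layer stays at 10 units, one per class.
    model.add(Dense(10, kernel_regularizer=regularizers.l2(0.01)))
    model.add(Activation("softmax"))
    return model
```

The point of the exercise: if validation accuracy improves with fewer parameters, the original model was memorizing the training set.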


Also, for troubleshooting I find SGD much easier to work with than Adam. Adam converges faster, but when you don't yet understand what is going on, it's better to start with SGD.
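Swapping Adam for plain SGD is a one-line change at compile time. A minimal sketch, assuming a stand-in model; the learning rate and momentum values below are illustrative, not tuned:

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense

# Tiny stand-in model; in the question this would be the CNN defined above.
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(64,)),
    Dense(10, activation="softmax"),
])

# Plain SGD is easier to reason about while debugging than Adam;
# the hyperparameter values here are assumptions.
opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
model.compile(loss="sparse_categorical_crossentropy", optimizer=opt,
              metrics=["accuracy"])
```

Once training behaves sensibly under SGD, you can switch back to Adam for speed.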

Which solutions did you try, and how exactly did they not work? Please re-walk the material from an introductory tutorial; your model and the other attempts have no reliable source behind them. Why are you so sure this model can reach SotA (state-of-the-art) accuracy? The "obvious" empirical answer is that your model isn't a good match for the problem: it reaches sub-par accuracy after about 9 epochs and then spirals into overfitting.

I've added the solutions I tried to the question body. By a simpler model, do you mean fewer layers?

I would use fewer units: 64 -> 32 and 32 -> 16. The basic idea is that if your model can't represent things very precisely (overfit), it is more likely to form a more general solution.

I got my model working better: I added more convolutional layers (five CNN layers in total now) and removed the dense layer.
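Based on that last comment (five conv layers, hidden dense layer removed), the final architecture might have looked roughly like this. This is a hypothetical reconstruction: the filter counts, input shape, and `padding="same"` choice are all assumptions, since only the layer counts were mentioned.

```python
import tensorflow as tf
from tensorflow.keras.layers import (Activation, Conv2D, Dense, Flatten,
                                     MaxPooling2D)

def build_deeper_model(input_shape=(128, 128, 3), num_classes=10):
    # Only "five conv layers, no hidden dense layer" comes from the comment;
    # the filter counts and input shape below are assumptions.
    model = tf.keras.models.Sequential()
    model.add(tf.keras.Input(shape=input_shape))
    for filters in (32, 32, 64, 64, 128):
        model.add(Conv2D(filters, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    # Classification head only; no hidden Dense layer.
    model.add(Dense(num_classes))
    model.add(Activation("softmax"))
    return model
```

Dropping the hidden dense layer removes a large fraction of the parameters, which fits the answer's "simpler model" advice even though the network got deeper.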