Python Keras CNN overfits instantly, not a dataset problem
We have been trying to build a CNN to classify MFCC data, but the model overfits right away.

Data:
- 18,000 files (80% training, 20% testing)
- 5 labels

All 5 classes appear in equal numbers in the data. The model was created to eventually handle far more than 18k files, so I was told that shrinking the network as much as possible might help. I reduced the filters from (3,3) to (1,1), tried reducing the number of hidden neurons, and even reduced the number of layers. I am stuck now; does anyone have any ideas? No matter what I do, I never get above 60-65% accuracy when measuring on the test data.
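For context, a minimal sketch of the 80/20 split described above, using hypothetical names: X holds the MFCC features, labels holds integer class ids 0-4; stratify keeps the five classes equally represented in both splits:

from sklearn.model_selection import train_test_split
from keras.utils import to_categorical

# X: MFCC features, e.g. shaped (18000, 192, 192, 1); labels: ints 0-4
# (hypothetical names; stratification preserves the equal class balance)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, stratify=labels, random_state=42)
data_train = (X_train, to_categorical(y_train, num_classes=5))
data_test = (X_test, to_categorical(y_test, num_classes=5))

Model code: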
# Imports and constants inferred from the snippet and the model summary below
# (standalone Keras 2.x style, matching the lr= argument)
import time

from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Model
from keras.optimizers import Nadam

feature_count = 192  # inputs are (192, 192, 1) MFCC "images"
d = (1, 1)           # kernel size; the summary's param counts match 1x1 kernels
out_dim = 5          # 5 labels

time_start_train = time.time()
i = Input(shape=(feature_count, feature_count, 1))
m = Conv2D(16, d, activation='elu', padding='same')(i)
m = MaxPooling2D()(m)
m = Conv2D(32, d, activation='elu', padding='same')(m)
m = MaxPooling2D()(m)
m = Conv2D(64, d, activation='elu', padding='same')(m)
m = MaxPooling2D()(m)
m = Conv2D(128, d, activation='elu', padding='same')(m)
m = MaxPooling2D()(m)
m = Conv2D(256, d, activation='elu', padding='same')(m)
m = MaxPooling2D()(m)
m = Flatten()(m)
m = Dense(512, activation='elu')(m)
m = Dropout(0.2)(m)
o = Dense(out_dim, activation='softmax')(m)

model = Model(inputs=i, outputs=o)
model.compile(loss='categorical_crossentropy', optimizer=Nadam(lr=1e-3),
              metrics=['accuracy'])

# data_train = (features, one-hot labels); 10% is held out for validation
history = model.fit(data_train[0], data_train[1], epochs=10, verbose=1,
                    validation_split=0.1, shuffle=True)
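To check whether this is true overfitting or just a plateau (a point raised in the comments at the end), one can plot the per-epoch losses recorded in the history object returned by model.fit; a minimal sketch:

import matplotlib.pyplot as plt

# history.history holds per-epoch metrics; overfitting shows as val_loss
# rising while loss keeps falling, a plateau as both curves flattening
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('categorical cross-entropy')
plt.legend()
plt.show()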
Model summary:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 192, 192, 1)       0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 192, 192, 16)      32
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 96, 96, 16)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 96, 96, 32)        544
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 48, 48, 32)        0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 48, 48, 64)        2112
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 24, 24, 64)        0
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 24, 24, 128)       8320
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 12, 12, 128)       0
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 12, 12, 256)       33024
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 6, 6, 256)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0
_________________________________________________________________
dense_1 (Dense)              (None, 512)               4719104
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0
_________________________________________________________________
dense_2 (Dense)              (None, 5)                 2565
=================================================================
Total params: 4,765,701
Trainable params: 4,765,701
Non-trainable params: 0
Comments:
- Try applying L1/L2 regularization (see the sketch after this thread). If you don't have deep knowledge of ML/DL models, use AutoML instead of Keras; with AutoML you don't need to reason about so many different parameters.
- (OP) Any specific suggestions for this code?
- Check here:
- (OP) Just implemented it; the results are roughly the same.
- Have you tried L1 as well? Tune the parameter with different values; you can also try l1 and l2 together. The results should come.
- Why do you think it is overfitting? Please show the training and validation loss for each epoch.
- The model still has 512 hidden units in one of its layers. Try leaving only 16 hidden units?
- (OP) Just did; same result.
- Hmm... overfitting is when the validation loss starts to increase, and here there is only a plateau for a while. Maybe just train it a bit longer?
- (OP) Tried 100 epochs; same final accuracy (around 50-60%).
- OK, sorry, my initial suggestion wasn't strong enough. Leave 1 filter and 1 hidden unit, then use kernel_regularizer=regularizers.l2(10).
- (OP) Unfortunately, I need to finish this system for uni, for research purposes.
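For reference, a minimal sketch of the kernel_regularizer suggestion from the thread, applied to the wide dense layer; the l2 factor here is illustrative (the last comment proposes an extreme value of 10 only to force the point):

from keras import regularizers

# L2 weight decay on the 512-unit dense layer; combines with the existing
# Dropout(0.2). regularizers.l1(...) or regularizers.l1_l2(...) are
# drop-in alternatives for the L1 / L1+L2 variants mentioned above.
m = Dense(512, activation='elu',
          kernel_regularizer=regularizers.l2(1e-3))(m)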