Python: How do I correct/improve my CNN model? How do I fix frozen validation accuracy?

Tags: python, conv-neural-network, tensorflow2.0, tensorflow-lite

The validation accuracy is frozen at 0.0909. Is this underfitting? How can I fix this and get better model accuracy? The model will later be converted to TFLite and deployed on Android.

My model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense

model = Sequential([
    Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Conv2D(filters=128, kernel_size=(3, 3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Flatten(),
    Dense(units=train_batches.num_classes, activation='softmax')
])


Layer (type)                   Output Shape              Param #
================================================================
conv2d (Conv2D)                (None, 224, 224, 32)      896
max_pooling2d (MaxPooling2D)   (None, 112, 112, 32)      0
conv2d_1 (Conv2D)              (None, 112, 112, 64)      18496
max_pooling2d_1 (MaxPooling2D) (None, 56, 56, 64)        0
conv2d_2 (Conv2D)              (None, 56, 56, 128)       73856
max_pooling2d_2 (MaxPooling2D) (None, 28, 28, 128)       0
flatten (Flatten)              (None, 100352)            0
dense (Dense)                  (None, 11)                1103883
================================================================
Total params: 1,197,131
Trainable params: 1,197,131
Non-trainable params: 0
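The final Dense layer dominates the parameter count: flattening the 28x28x128 feature map yields 100352 inputs, so the layer holds 100352 * 11 weights plus 11 biases, matching the 1,103,883 in the summary:

```python
# Parameter count of the final Dense layer: inputs * units + biases
inputs = 28 * 28 * 128   # flattened feature-map size
units = 11               # number of classes
print(inputs)                  # 100352
print(inputs * units + units)  # 1103883
```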



Try a lower learning rate. Also check your dataset: if it is small, use image augmentation to enlarge it so the model can learn better. Use batch normalization, regularization techniques, and an LR scheduler, since your gradient descent is getting stuck in a local minimum.
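A sketch of those suggestions, mirroring the layer sizes from the question; the starting learning rate, dropout rate, and scheduler settings are illustrative values, not tuned ones, and `train_batches`/`valid_batches` are the generators from the question (here replaced by a hard-coded `num_classes = 11`):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPool2D, BatchNormalization,
                                     Dropout, Flatten, Dense)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau

num_classes = 11  # train_batches.num_classes in the original code

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same',
           input_shape=(224, 224, 3)),
    BatchNormalization(),          # stabilizes activations between layers
    MaxPool2D((2, 2), strides=2),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPool2D((2, 2), strides=2),
    Conv2D(128, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPool2D((2, 2), strides=2),
    Flatten(),
    Dropout(0.5),                  # regularization before the classifier
    Dense(num_classes, activation='softmax')
])

# A much lower starting learning rate than 0.01, plus a scheduler that
# halves the rate whenever validation loss stops improving.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])
lr_schedule = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2)

# With the question's data pipeline, training would then be:
# model.fit(x=train_batches, validation_data=valid_batches,
#           epochs=10, callbacks=[lr_schedule], verbose=2)
```

For a small dataset, augmenting the images (random flips, rotations, shifts) at loading time also enlarges the effective training set, as suggested above.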

Thanks, this is helpful.
from tensorflow.keras.optimizers import Adam

model.summary()
model.compile(optimizer=Adam(learning_rate=0.01), loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(x=train_batches, validation_data=valid_batches, epochs=10, verbose=2)

Epoch 1/10
53/53 - 31s - loss: 273.5211 - accuracy: 0.0777 - val_loss: 2.3989 - val_accuracy: 0.0909
Epoch 2/10
53/53 - 27s - loss: 2.4001 - accuracy: 0.0928 - val_loss: 2.3986 - val_accuracy: 0.0909
Epoch 3/10
53/53 - 28s - loss: 2.4004 - accuracy: 0.0795 - val_loss: 2.3986 - val_accuracy: 0.0909
Epoch 4/10
53/53 - 29s - loss: 2.4006 - accuracy: 0.0739 - val_loss: 2.3989 - val_accuracy: 0.0909
Epoch 5/10
53/53 - 29s - loss: 2.3999 - accuracy: 0.0720 - val_loss: 2.3986 - val_accuracy: 0.0909
Epoch 6/10
53/53 - 28s - loss: 2.4004 - accuracy: 0.0720 - val_loss: 2.3986 - val_accuracy: 0.0909
Epoch 7/10
53/53 - 28s - loss: 2.4004 - accuracy: 0.0682 - val_loss: 2.3993 - val_accuracy: 0.0909
Epoch 8/10
53/53 - 29s - loss: 2.3995 - accuracy: 0.0871 - val_loss: 2.3986 - val_accuracy: 0.0909  
Epoch 9/10
53/53 - 29s - loss: 2.4008 - accuracy: 0.0852 - val_loss: 2.3988 - val_accuracy: 0.0909
Epoch 10/10
53/53 - 28s - loss: 2.4004 - accuracy: 0.0833 - val_loss: 2.3991 - val_accuracy: 0.0909
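The plateau values in this log are themselves diagnostic: with 11 classes, a network that outputs a near-uniform prediction gets chance accuracy 1/11 ≈ 0.0909 and cross-entropy loss ln(11) ≈ 2.398, which is exactly where the log is stuck. The model has collapsed to guessing, consistent with the learning rate of 0.01 being too high (note the exploding loss of 273.5 in epoch 1). A quick check of those constants:

```python
import math

num_classes = 11  # from the Dense(units=..., 11) output layer above
print(1 / num_classes)        # chance accuracy, approx. 0.0909
print(math.log(num_classes))  # cross-entropy of a uniform guess, approx. 2.398
```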