
Validation accuracy not improving in a TensorFlow CNN

Tags: tensorflow, keras, deep-learning, computer-vision, conv-neural-network


I have an AlexNet-like CNN that tries to predict the kind of ornament. The training accuracy and loss increase and decrease monotonically, respectively, but the test accuracy fluctuates around 0.50.

I have tried varying hyperparameters: changing the batch size, using data augmentation, converting the images to grayscale (they are just pictures of stones), adding dropout, regularization, and Gaussian noise, and changing the number of units in the dense layers, but the validation accuracy still does not change.

I don't know what else to do or how to improve my model. Please help me.

from keras.preprocessing.image import ImageDataGenerator

# Augmented training pipeline: rescale to [0, 1] plus geometric augmentation.
# Note: featurewise_center=True requires calling train_datagen.fit() on a
# sample of the training data first (see the warning in the training log).
train_datagen = ImageDataGenerator(rescale=1/255,
                                   featurewise_center=True,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   rotation_range=90,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1,
                                   fill_mode='nearest',
                                   vertical_flip=True,
                                   horizontal_flip=True)

training_set = train_datagen.flow_from_directory('/content/drive/My Drive/DATASET1/train',
                                                 target_size=(224, 224),
                                                 batch_size=128,
                                                 color_mode='grayscale',
                                                 class_mode='categorical')

# Validation pipeline: rescaling only (augmentation commented out).
test_datagen = ImageDataGenerator(rescale=1/255,
                                  featurewise_center=True,
                                  #shear_range=0.2,
                                  #zoom_range=0.2,
                                  #horizontal_flip=True
                                  )

test_set = test_datagen.flow_from_directory('/content/drive/My Drive/DATASET1/val',
                                            target_size=(224, 224),
                                            batch_size=48,
                                            color_mode='grayscale',
                                            class_mode='categorical')
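Note that the training log further below contains a UserWarning because featurewise_center=True was set but the generators were never fit. A minimal sketch of one way to fit them on a sample of training images; the helper generator here is an illustration, not code from the original post:

sample_gen = ImageDataGenerator(rescale=1/255).flow_from_directory(
    '/content/drive/My Drive/DATASET1/train',
    target_size=(224, 224),
    batch_size=128,
    color_mode='grayscale',
    class_mode='categorical')
# Pull one batch of images into memory (labels are discarded).
sample_batch, _ = next(sample_gen)

# fit() computes the dataset-wide mean that featurewise_center subtracts.
# Reuse the training statistics for the validation generator as well.
train_datagen.fit(sample_batch)
test_datagen.fit(sample_batch)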

Not sure, since you haven't shown any samples of your data, but your model may be too complex for the task at hand. Dense layers are often unnecessary in problems like this. Have you tried a (much smaller) fully convolutional network?

Thank you for your answer. I tried shrinking the conv and fully connected layers and it works better. But even though the validation accuracy improved to 0.91, the validation accuracy and validation loss still fluctuate a lot, and the plots look strange compared to the training plots.

@iscrime, could you update your question with the modified code? Also, please share the plots of training and validation accuracy and loss so that we can understand the problem precisely. Thanks!

@…, has your problem been solved? If not, you could try shuffling the data with shuffle=True in the flow_from_directory call (a minimal sketch follows below). Also, please add the code corresponding to model.compile and model.fit so that we can help you. Thanks!
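For reference, here is what the suggested shuffle change would look like. Note that shuffle=True is already the default for flow_from_directory, so this sketch only makes the intent explicit rather than changing behavior:

# Same call as in the question, with shuffling made explicit
# (shuffle=True is the flow_from_directory default).
training_set = train_datagen.flow_from_directory(
    '/content/drive/My Drive/DATASET1/train',
    target_size=(224, 224),
    batch_size=128,
    color_mode='grayscale',
    class_mode='categorical',
    shuffle=True)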
from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                          Flatten, Dense, Dropout, GaussianNoise)
import keras  # for keras.regularizers.l2

model = Sequential()

# 1st Convolutional Layer
model.add(Conv2D(filters=96, input_shape=(224,224,1), kernel_size=(11,11), strides=(4,4), padding="same", activation="relu"))

# Max Pooling
model.add(MaxPooling2D(pool_size=(3,3), strides=(2,2), padding="valid"))

# Batch Normalisation before passing it to the next layer
model.add(BatchNormalization())

# 2nd Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(11,11), strides=(1,1), padding="same", activation="relu"))

# Max Pooling
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding="valid"))

# Batch Normalisation
model.add(BatchNormalization())

# 3rd Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding="same", activation="relu"))

# Batch Normalisation
model.add(BatchNormalization())

# 4th Convolutional Layer
model.add(Conv2D(filters=384, kernel_size=(3,3), strides=(1,1), padding="same", activation="relu"))

# Batch Normalisation
model.add(BatchNormalization())

# 5th Convolutional Layer
model.add(Conv2D(filters=256, kernel_size=(3,3), strides=(1,1), padding="same", activation="relu"))

# Max Pooling
model.add(MaxPooling2D(pool_size=(2,2), strides=(2,2), padding="valid"))

# Batch Normalisation
model.add(BatchNormalization())

# Passing it to a Fully Connected layer
model.add(Flatten())

# 1st Fully Connected Layer
regularizer = keras.regularizers.l2(l=0.0005)
model.add(GaussianNoise(0.1))
model.add(Dense(units=4096, activation="relu", kernel_regularizer=regularizer))

# Add Dropout to prevent overfitting
model.add(Dropout(0.4))

# Batch Normalisation (every argument in the original call was left at its default)
model.add(BatchNormalization())

# 2nd Fully Connected Layer
regularizer = keras.regularizers.l2(l=0.0005)
model.add(GaussianNoise(0.1))
model.add(Dense(units=2048, activation="relu", kernel_regularizer=regularizer))

# Add Dropout
model.add(Dropout(0.4))

# Batch Normalisation
model.add(BatchNormalization())

# 3rd Fully Connected Layer
regularizer = keras.regularizers.l2(l=0.0005)
model.add(GaussianNoise(0.1))
model.add(Dense(2048, activation="relu", kernel_regularizer=regularizer))

# Add Dropout
model.add(Dropout(0.4))

# Batch Normalisation
model.add(BatchNormalization())

# Output Layer
model.add(Dense(2, activation="softmax"))  # as we have two classes
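The model.compile / model.fit code was never posted. Below is a hedged reconstruction consistent with the training log that follows (the alexnet_1.h5 checkpoint monitoring val_accuracy, 20 epochs, and the "5/5" progress bars); the optimizer and learning rate are pure assumptions:

from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint

# Optimizer choice and learning rate are assumptions; they do not appear
# anywhere in the original question.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Filename, monitored metric, and epoch count are taken from the training log.
checkpoint = ModelCheckpoint('alexnet_1.h5', monitor='val_accuracy',
                             save_best_only=True, verbose=1)

# Older Keras versions use model.fit_generator instead of model.fit here.
history = model.fit(training_set,
                    steps_per_epoch=5,
                    epochs=20,
                    validation_data=test_set,
                    callbacks=[checkpoint])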
Epoch 1/20
/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/image_data_generator.py:716: UserWarning: This ImageDataGenerator specifies `featurewise_center`, but it hasn't been fit on any training data. Fit it first by calling `.fit(numpy_data)`.
  warnings.warn('This ImageDataGenerator specifies ')
5/5 [==============================] - 9s 2s/step - loss: 6.2275 - accuracy: 0.5244 - val_loss: 5.9162 - val_accuracy: 0.4985

Epoch 00001: val_accuracy improved from -inf to 0.49853, saving model to alexnet_1.h5
Epoch 2/20
5/5 [==============================] - 7s 1s/step - loss: 6.1302 - accuracy: 0.6031 - val_loss: 5.9220 - val_accuracy: 0.5103

Epoch 00002: val_accuracy improved from 0.49853 to 0.51032, saving model to alexnet_1.h5
Epoch 3/20
5/5 [==============================] - 5s 1s/step - loss: 6.1390 - accuracy: 0.6250 - val_loss: 6.0433 - val_accuracy: 0.4932

Epoch 00003: val_accuracy did not improve from 0.51032
Epoch 4/20
5/5 [==============================] - 6s 1s/step - loss: 6.0528 - accuracy: 0.6429 - val_loss: 5.9255 - val_accuracy: 0.4985

Epoch 00004: val_accuracy did not improve from 0.51032
Epoch 5/20
5/5 [==============================] - 7s 1s/step - loss: 6.0935 - accuracy: 0.6094 - val_loss: 5.9714 - val_accuracy: 0.4926

Epoch 00005: val_accuracy did not improve from 0.51032
Epoch 6/20
5/5 [==============================] - 5s 1s/step - loss: 6.0139 - accuracy: 0.6447 - val_loss: 5.5711 - val_accuracy: 0.4932

Epoch 00006: val_accuracy did not improve from 0.51032
Epoch 7/20
5/5 [==============================] - 5s 1s/step - loss: 6.0250 - accuracy: 0.6353 - val_loss: 5.9171 - val_accuracy: 0.5133

Epoch 00007: val_accuracy improved from 0.51032 to 0.51327, saving model to alexnet_1.h5
Epoch 8/20
5/5 [==============================] - 7s 1s/step - loss: 6.0012 - accuracy: 0.6422 - val_loss: 6.0526 - val_accuracy: 0.4749

Epoch 00008: val_accuracy did not improve from 0.51327
Epoch 9/20
5/5 [==============================] - 6s 1s/step - loss: 5.9814 - accuracy: 0.6635 - val_loss: 5.4898 - val_accuracy: 0.4966

Epoch 00009: val_accuracy did not improve from 0.51327
Epoch 10/20
5/5 [==============================] - 5s 906ms/step - loss: 5.9613 - accuracy: 0.6769 - val_loss: 6.1255 - val_accuracy: 0.4956

Epoch 00010: val_accuracy did not improve from 0.51327
Epoch 11/20
5/5 [==============================] - 6s 1s/step - loss: 5.9888 - accuracy: 0.6484 - val_loss: 6.2377 - val_accuracy: 0.4956

Epoch 00011: val_accuracy did not improve from 0.51327
Epoch 12/20
5/5 [==============================] - 5s 1s/step - loss: 6.0045 - accuracy: 0.6767 - val_loss: 5.4328 - val_accuracy: 0.4932

Epoch 00012: val_accuracy did not improve from 0.51327
Epoch 13/20
5/5 [==============================] - 5s 1s/step - loss: 5.9569 - accuracy: 0.6654 - val_loss: 5.9874 - val_accuracy: 0.4985

Epoch 00013: val_accuracy did not improve from 0.51327
Epoch 14/20
5/5 [==============================] - 7s 1s/step - loss: 5.8978 - accuracy: 0.6859 - val_loss: 6.2074 - val_accuracy: 0.4897

Epoch 00014: val_accuracy did not improve from 0.51327
Epoch 15/20
5/5 [==============================] - 5s 1s/step - loss: 6.0063 - accuracy: 0.6792 - val_loss: 5.3235 - val_accuracy: 0.4966

Epoch 00015: val_accuracy did not improve from 0.51327
Epoch 16/20
5/5 [==============================] - 6s 1s/step - loss: 5.8966 - accuracy: 0.7068 - val_loss: 6.1324 - val_accuracy: 0.5015

Epoch 00016: val_accuracy did not improve from 0.51327
Epoch 17/20
5/5 [==============================] - 7s 1s/step - loss: 5.9352 - accuracy: 0.6562 - val_loss: 6.2356 - val_accuracy: 0.4867

Epoch 00017: val_accuracy did not improve from 0.51327
Epoch 18/20
5/5 [==============================] - 6s 1s/step - loss: 5.9475 - accuracy: 0.6391 - val_loss: 7.9573 - val_accuracy: 0.4966

Epoch 00018: val_accuracy did not improve from 0.51327
Epoch 19/20
5/5 [==============================] - 5s 1s/step - loss: 5.9627 - accuracy: 0.6898 - val_loss: 6.0916 - val_accuracy: 0.4985

Epoch 00019: val_accuracy did not improve from 0.51327
Epoch 20/20
5/5 [==============================] - 6s 1s/step - loss: 5.8621 - accuracy: 0.6974 - val_loss: 6.3277 - val_accuracy: 0.4926

Epoch 00020: val_accuracy did not improve from 0.51327
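Following the first comment's suggestion of a (much smaller) fully convolutional network, here is an illustrative sketch; the filter counts and depth are assumptions, not values from the thread. Global average pooling replaces the Flatten plus large Dense head, removing most of the parameters that let the network memorize a small training set:

from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                          GlobalAveragePooling2D, Dense)

# Illustrative small network; filter counts are assumptions.
small_model = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same',
           input_shape=(224, 224, 1)),
    MaxPooling2D((2, 2)),
    BatchNormalization(),

    Conv2D(64, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    BatchNormalization(),

    Conv2D(128, (3, 3), activation='relu', padding='same'),
    MaxPooling2D((2, 2)),
    BatchNormalization(),

    # Global average pooling instead of Flatten + large Dense layers:
    # far fewer parameters, which helps against overfitting.
    GlobalAveragePooling2D(),
    Dense(2, activation='softmax'),  # two classes, as in the original model
])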