Python TensorFlow model validation accuracy not increasing
I have built a TensorFlow model, and across epochs my validation accuracy does not change, which makes me believe there is something wrong with my setup. Below is my code:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import regularizers
import tensorflow as tf
model = Sequential()
model.add(Conv2D(16, (3, 3), input_shape=(299, 299, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(Conv2D(32, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
# this converts our 3D feature maps to 1D feature vectors
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
batch_size=32
# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1./255,
    # shear_range=0.2,
    # zoom_range=0.2,
    horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1./255)
# this is a generator that will read pictures found in
# subfolders of 'Documents/Training', and indefinitely generate
# batches of augmented image data
train_generator = train_datagen.flow_from_directory(
    'Documents/Training',  # this is the target directory
    target_size=(299, 299),  # all images will be resized to 299x299
    batch_size=batch_size,
    class_mode='binary')  # since we use binary_crossentropy loss, we need binary labels
# this is a similar generator, for validation data
validation_generator = test_datagen.flow_from_directory(
    'Documents/Dev',
    target_size=(299, 299),
    batch_size=batch_size,
    class_mode='binary')
model.fit_generator(
    train_generator,
    steps_per_epoch=50 // batch_size,
    verbose=1,
    epochs=10,
    validation_data=validation_generator,
    validation_steps=8 // batch_size)
When I run this, it produces the output below. What am I missing here, in terms of my architecture or my data-generation steps? I have already looked into this, but to no avail.
Epoch 1/10
3/3 [==============================] - 2s 593ms/step - loss: 0.6719 - accuracy: 0.6250 - val_loss: 0.8198 - val_accuracy: 0.5000
Epoch 2/10
3/3 [==============================] - 2s 607ms/step - loss: 0.6521 - accuracy: 0.6667 - val_loss: 0.8518 - val_accuracy: 0.5000
Epoch 3/10
3/3 [==============================] - 2s 609ms/step - loss: 0.6752 - accuracy: 0.6250 - val_loss: 0.7129 - val_accuracy: 0.5000
Epoch 4/10
3/3 [==============================] - 2s 611ms/step - loss: 0.6841 - accuracy: 0.6250 - val_loss: 0.7010 - val_accuracy: 0.5000
Epoch 5/10
3/3 [==============================] - 2s 608ms/step - loss: 0.6977 - accuracy: 0.5417 - val_loss: 0.6551 - val_accuracy: 0.5000
Epoch 6/10
3/3 [==============================] - 2s 607ms/step - loss: 0.6508 - accuracy: 0.7083 - val_loss: 0.5752 - val_accuracy: 0.5000
Epoch 7/10
3/3 [==============================] - 2s 615ms/step - loss: 0.6596 - accuracy: 0.6875 - val_loss: 0.9326 - val_accuracy: 0.5000
Epoch 8/10
3/3 [==============================] - 2s 604ms/step - loss: 0.7022 - accuracy: 0.6458 - val_loss: 0.6976 - val_accuracy: 0.5000
Epoch 9/10
3/3 [==============================] - 2s 591ms/step - loss: 0.6331 - accuracy: 0.7292 - val_loss: 0.9571 - val_accuracy: 0.5000
Epoch 10/10
3/3 [==============================] - 2s 595ms/step - loss: 0.6085 - accuracy: 0.7292 - val_loss: 0.6029 - val_accuracy: 0.5000
Out[24]: <keras.callbacks.callbacks.History at 0x1ee4e3a8f08>
You are setting steps_per_epoch = 50 // 32 = 1. Do you really have only 50 training images? Likewise, validation_steps = 8 // 32 = 0. Do you have only 8 validation images? When you run the program, how many images do the training and validation generators report finding? You need more images per epoch; try setting the batch size to 1.

I have 8,500 training images and 500 validation images. What should the step values be as a starting point? I'm not sure where I got that logic from; I have been playing around with a lot of things trying to figure out what is going on.

Try batch_size=50 and steps_per_epoch=170, so that 170 x 50 = 8500 and you go through your entire training set once per epoch. Set the validation batch size to 50 and validation_steps to 10, so that the validation set is also checked once per epoch.
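Following the advice above, the step counts can be derived from the dataset sizes instead of being hard-coded. A minimal sketch (the sample counts 8500 and 500 are taken from the comments; in the real script they could come from `train_generator.samples` and `validation_generator.samples`, which `flow_from_directory` populates with the number of images found):

```python
# Assumed counts from the discussion above; with a real generator,
# use train_generator.samples / validation_generator.samples instead.
train_samples = 8500
val_samples = 500
batch_size = 50

# One full pass over each dataset per epoch.
steps_per_epoch = train_samples // batch_size    # 8500 // 50 = 170
validation_steps = val_samples // batch_size     # 500 // 50 = 10

print(steps_per_epoch, validation_steps)
```

With steps_per_epoch=1 and validation_steps=0 (as in the original call, since 50 // 32 = 1 and 8 // 32 = 0), the model only ever sees 32 training images per epoch, which explains the flat validation accuracy.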