Python Keras CNN model predicts only one class for all test images

I am trying to build an image classification model with two classes, with (1) and without (0). I can train the model and it reaches an accuracy of 1. That is too good to be true (which is a problem in itself), but when I use predict_generator, since my images are in folders, it returns only one class, 0 (the 'without' class). There seems to be a problem, but I can't work it out; I have read many articles and still can't fix the issue.
image_shape = (220, 525, 3) #height, width, channels
img_width = 96
img_height = 96
channels = 3
epochs = 10
no_train_images = 11957 #!ls ../data/train/* | wc -l
no_test_images = 652 #!ls ../data/test/* | wc -l
no_valid_images = 6156 #!ls ../data/valid/* | wc -l
train_dir = '../data/train/'
test_dir = '../data/test/'
valid_dir = '../data/valid/'
The test folder structure is the following:
test/test_folder/images_from_both_classes.jpg
#!ls ../data/train/without/ | wc -l 5606 # there's no class imbalance
#!ls ../data/train/with/ | wc -l 6351
#!ls ../data/valid/without/ | wc -l 2899
#!ls ../data/valid/with/ | wc -l 3257
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator

input_shape = (img_width, img_height, channels)

classification_model = Sequential()
# First layer: 2D convolution (32 filters, 3x3 kernel), input_shape=(img_width, img_height, channels)
classification_model.add(Conv2D(32, (3, 3), input_shape=input_shape))
# Activation function: ReLU adds non-linearity
classification_model.add(Activation('relu'))
# Max-pooling layer with a 2x2 grid
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
# Randomly disconnects some nodes between this layer and the next
classification_model.add(Dropout(0.2))
classification_model.add(Conv2D(32, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.2))
classification_model.add(Conv2D(64, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.25))
classification_model.add(Conv2D(64, (3, 3)))
classification_model.add(Activation('relu'))
classification_model.add(MaxPooling2D(pool_size=(2, 2)))
classification_model.add(Dropout(0.3))
classification_model.add(Flatten())
classification_model.add(Dense(64))
classification_model.add(Activation('relu'))
classification_model.add(Dropout(0.5))
classification_model.add(Dense(1))
classification_model.add(Activation('sigmoid'))
# Using binary_crossentropy as we only have 2 classes
classification_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
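For reference, with a single sigmoid output the binary_crossentropy loss reduces to the standard two-class formula; a minimal NumPy sketch (the function below is just for illustration, not the Keras internals):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred):
    # clip to avoid log(0), as Keras also does internally
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# two confident, correct predictions give a small loss
print(round(binary_crossentropy(np.array([1.0, 0.0]), np.array([0.9, 0.1])), 4))  # 0.1054
```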
batch_size = 32
# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
rescale=1. / 255,
zoom_range=0.2)
# this is the augmentation configuration we will use for testing:
# only rescaling
valid_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size = (img_width, img_height),
batch_size = batch_size,
class_mode = 'binary',
shuffle = True)
valid_generator = valid_datagen.flow_from_directory(
valid_dir,
target_size = (img_width, img_height),
batch_size = batch_size,
class_mode = 'binary',
shuffle = False)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size = (img_width, img_height),
batch_size = 1,
class_mode = None,
shuffle = False)
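Note that flow_from_directory assigns class indices by sorting the class subfolder names alphabetically, which is worth double-checking via train_generator.class_indices; a sketch of the mapping it would produce for these folders:

```python
# flow_from_directory sorts class subfolder names alphabetically
folders = ['without', 'with']
class_indices = {name: i for i, name in enumerate(sorted(folders))}
print(class_indices)  # {'with': 0, 'without': 1}
```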
mpd = classification_model.fit_generator(
train_generator,
steps_per_epoch = no_train_images // batch_size, # number of images per epoch
epochs = epochs, # number of iterations over the entire data
validation_data = valid_generator,
validation_steps = no_valid_images // batch_size)
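The 373 steps per epoch shown in the training log follow from integer division of the training-set size by the batch size:

```python
no_train_images = 11957
batch_size = 32
print(no_train_images // batch_size)  # 373, matching the "373/373" in the log
```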
Epoch 1/10
373/373 [==============================] - 119s 320ms/step - loss: 0.5214 - acc: 0.7357 - val_loss: 0.2720 - val_acc: 0.8758
Epoch 2/10
373/373 [==============================] - 120s 322ms/step - loss: 0.2485 - acc: 0.8935 - val_loss: 0.0568 - val_acc: 0.9829
Epoch 3/10
373/373 [==============================] - 130s 350ms/step - loss: 0.1427 - acc: 0.9435 - val_loss: 0.0410 - val_acc: 0.9796
Epoch 4/10
373/373 [==============================] - 127s 341ms/step - loss: 0.1053 - acc: 0.9623 - val_loss: 0.0197 - val_acc: 0.9971
Epoch 5/10
373/373 [==============================] - 126s 337ms/step - loss: 0.0817 - acc: 0.9682 - val_loss: 0.0136 - val_acc: 0.9948
Epoch 6/10
373/373 [==============================] - 123s 329ms/step - loss: 0.0665 - acc: 0.9754 - val_loss: 0.0116 - val_acc: 0.9985
Epoch 7/10
373/373 [==============================] - 140s 376ms/step - loss: 0.0518 - acc: 0.9817 - val_loss: 0.0035 - val_acc: 0.9997
Epoch 8/10
373/373 [==============================] - 144s 386ms/step - loss: 0.0539 - acc: 0.9832 - val_loss: 8.9459e-04 - val_acc: 1.0000
Epoch 9/10
373/373 [==============================] - 122s 327ms/step - loss: 0.0434 - acc: 0.9850 - val_loss: 0.0023 - val_acc: 0.9997
Epoch 10/10
373/373 [==============================] - 125s 336ms/step - loss: 0.0513 - acc: 0.9844 - val_loss: 0.0014 - val_acc: 1.0000
valid_generator.batch_size = 1
score = classification_model.evaluate_generator(valid_generator,
    no_test_images / batch_size, pickle_safe=False)
test_generator.reset()
scores = classification_model.predict_generator(test_generator, len(test_generator))
print("Loss: ", score[0], "Accuracy: ", score[1])

predicted_class_indices = np.argmax(scores, axis=1)
print(predicted_class_indices)

labels = (train_generator.class_indices)
labelss = dict((v, k) for k, v in labels.items())
predictions = [labelss[k] for k in predicted_class_indices]
filenames = test_generator.filenames
results = pd.DataFrame({"Filename": filenames,
                        "Predictions": predictions})
print(results)
Loss: 5.404246180551993e-06 Accuracy: 1.0
print(predicted_class_indices) - all 0s
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 ... (every entry is 0) ...
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
Filename Predictions
0 test_folder/video_3_frame10.jpg without
1 test_folder/video_3_frame1001.jpg without
2 test_folder/video_3_frame1006.jpg without
3 test_folder/video_3_frame1008.jpg without
4 test_folder/video_3_frame1009.jpg without
5 test_folder/video_3_frame1010.jpg without
6 test_folder/video_3_frame1013.jpg without
7 test_folder/video_3_frame1014.jpg without
8 test_folder/video_3_frame1022.jpg without
9 test_folder/video_3_frame1023.jpg without
10 test_folder/video_3_frame103.jpg without
11 test_folder/video_3_frame1036.jpg without
12 test_folder/video_3_frame1039.jpg without
13 test_folder/video_3_frame104.jpg without
14 test_folder/video_3_frame1042.jpg without
15 test_folder/video_3_frame1043.jpg without
16 test_folder/video_3_frame1048.jpg without
17 test_folder/video_3_frame105.jpg without
18 test_folder/video_3_frame1051.jpg without
19 test_folder/video_3_frame1052.jpg without
20 test_folder/video_3_frame1054.jpg without
21 test_folder/video_3_frame1055.jpg without
22 test_folder/video_3_frame1057.jpg without
23 test_folder/video_3_frame1059.jpg without
24 test_folder/video_3_frame1060.jpg without
...just part of the output, but all 650+ rows are predicted as the 'without' class.
This is the output; as you can see, every prediction is 0, the 'without' class.
This is my first attempt at Keras and CNNs, so I would greatly appreciate any help.

UPDATE

I have solved this issue. I am currently working on the accuracy, but the main problem is now solved.

This is the line that was causing the problem:
predicted_class_indices=np.argmax(scores,axis=1)
argmax returns the index position of the maximum, but because I am using binary classes with a single unit in my final layer, scores has shape (N, 1). With only one value per prediction, argmax can only ever return the first (and only) index position, 0, so the way the network is set up it appears to return just one class.
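A minimal sketch of the bug with made-up sigmoid outputs: argmax along axis 1 of an (N, 1) array is always 0, whereas thresholding at 0.5 recovers both classes:

```python
import numpy as np

# hypothetical sigmoid outputs from a single output unit, shape (3, 1)
scores = np.array([[0.02], [0.97], [0.51]])

print(np.argmax(scores, axis=1))           # [0 0 0] -- always class 0
print((scores.ravel() > 0.5).astype(int))  # [0 1 1] -- correct thresholding
```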
Changing the following fixed my problem.
You should change this line:

test_datagen = ImageDataGenerator()

to:

test_datagen = ImageDataGenerator(rescale=1. / 255)

If you don't preprocess the test set the same way as the train/valid sets, you won't get the expected results. Also try more epochs (e.g. 50) to give it more time.
Also try changing the learning rate (divide it by 10 on each attempt) and the other regularization parameters.

Are you saying 'with' and 'without' have been switched? As for your test generator, it seems you forgot the normalization/preprocessing (1/255).

No, it always predicts class 0, 'without'. I tried your suggestions above but I still have the same problem.

There is a problem with the test generator. It could be that the data is totally different from your train/valid data, but I doubt it. You said you added the preprocessing (1/255) to the generator, which is the other big thing. I'm curious what len(test_generator) yields if you run it on its own? I usually use test_generator.filenames. I doubt it makes a difference, but at this point I'm not sure. Another thing you can do is check valid_generator with predict_generator instead of evaluate_generator, to make sure predict_generator itself isn't the problem.

I added this, but it is still the same.

Why is class_mode in test_generator not set to 'binary' like the others?

It is for prediction, so the images don't need labels. Either way, it makes no difference.

Can you check the output of the variable 'scores'?