Python: training and validation accuracy not improving using datagen.flow_from_directory in Google Colab

The CNN model classifies between two classes; training samples = 5974, validation samples = 1987.

I am using datagen.flow_from_directory, and the model will then predict on a separate test set. I am running the code in Google Colab for 200 epochs, but after about 5 epochs the training and validation accuracy stop improving.

Accuracy:

Epoch 45/200
186/186 [==============================] - 138s 744ms/step - loss: 0.6931 - acc: 0.4983 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 46/200
186/186 [==============================] - 137s 737ms/step - loss: 0.6931 - acc: 0.4990 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 47/200
186/186 [==============================] - 142s 761ms/step - loss: 0.6931 - acc: 0.4987 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 48/200
186/186 [==============================] - 140s 752ms/step - loss: 0.6931 - acc: 0.4993 - val_loss: 0.6931 - val_acc: 0.5005

Epoch 49/200
186/186 [==============================] - 139s 745ms/step - loss: 0.6931 - acc: 0.4976 - val_loss: 0.6931 - val_acc: 0.5010
Epoch 50/200
186/186 [==============================] - 143s 768ms/step - loss: 0.6931 - acc: 0.4992 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 51/200
186/186 [==============================] - 140s 755ms/step - loss: 0.6931 - acc: 0.4980 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 52/200
186/186 [==============================] - 141s 758ms/step - loss: 0.6931 - acc: 0.4990 - val_loss: 0.6931 - val_acc: 0.4995
Epoch 53/200
186/186 [==============================] - 141s 759ms/step - loss: 0.6931 - acc: 0.4985 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 54/200
186/186 [==============================] - 143s 771ms/step - loss: 0.6931 - acc: 0.4987 - val_loss: 0.6931 - val_acc: 0.4995
Epoch 55/200
186/186 [==============================] - 143s 771ms/step - loss: 0.6931 - acc: 0.4992 - val_loss: 0.6931 - val_acc: 0.5005

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, Activation, BatchNormalization, MaxPooling2D, Dropout, Flatten, Dense

train_data_path = "/content/drive/My Drive/snk_tod/train"
valid_data_path = "/content/drive/My Drive/snk_tod/valid"
test_data_path = "/content/drive/My Drive/snk_tod/test"


img_rows = 100
img_cols = 100
epochs = 200
print(epochs)
batch_size = 32

num_of_train_samples = 5974
num_of_valid_samples = 1987


#Image Generator
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')


valid_datagen = ImageDataGenerator(rescale=1. / 255)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(train_data_path,
                                                    target_size=(img_rows, img_cols),
                                                    batch_size=batch_size,
                                                    shuffle=True,
                                                    class_mode='categorical')

validation_generator = valid_datagen.flow_from_directory(valid_data_path,
                                                         target_size=(img_rows, img_cols),
                                                         batch_size=batch_size,
                                                         shuffle=True,
                                                         class_mode='categorical')

test_generator = test_datagen.flow_from_directory(test_data_path,
                                                  target_size=(img_rows, img_cols),
                                                  batch_size=batch_size,
                                                  shuffle=False,  # keep file order so predictions align with filenames
                                                  class_mode='categorical')

model = Sequential()

model.add(Conv2D(32, (3, 3), input_shape=(img_rows, img_cols, 3), kernel_initializer="glorot_uniform", bias_initializer="zeros"))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

model.add(Conv2D(32, (3, 3), kernel_initializer="glorot_uniform", bias_initializer="zeros"))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

model.add(Conv2D(64, (3, 3), kernel_initializer="glorot_uniform", bias_initializer="zeros"))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

model.add(Conv2D(64, (3, 3), kernel_initializer="glorot_uniform", bias_initializer="zeros"))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors

model.add(Dropout(0.5))

model.add(Dense(512))

model.add(Dense(2))
model.add(Activation('sigmoid'))

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])

#Train
history = model.fit_generator(train_generator,
                              steps_per_epoch=num_of_train_samples // batch_size,  # 5974 // 32 = 186, matching the 186/186 in the log
                              epochs=epochs,
                              validation_data=validation_generator,
                              validation_steps=num_of_valid_samples // batch_size)
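
The question says the model will predict on a separate test set, but the posted code stops at training. A minimal sketch of what that prediction step might look like, assuming the generators and model above (predict_generator is the era-appropriate counterpart of fit_generator):

import numpy as np

# Sanity check: mapping from folder names to class indices
print(train_generator.class_indices)  # e.g. {'snake': 0, 'toad': 1}

# shuffle=False on test_generator keeps predictions aligned with test_generator.filenames
test_steps = int(np.ceil(test_generator.samples / batch_size))
predictions = model.predict_generator(test_generator, steps=test_steps)
predicted_classes = np.argmax(predictions, axis=1)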

What is the file structure of your training and testing sets? — I have two separate folders, training and testing. Inside training I have two folders, "snake" and "toad". Likewise, the testing folder contains two separate folders, "snake" and "toad". Is that what you wanted?

There are too many Dropout layers. If you want to use categorical_crossentropy, change the sigmoid to softmax and remove the one after the Flatten. If the outputs don't sum to 1, it shouldn't have been allowed to run in the first place; I'm really curious why it didn't throw an error. You should also put the Activation after the BatchNormalization.
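
For what it's worth, the constant loss of 0.6931 is ln 2, exactly the cross-entropy of a two-class model that always predicts 50/50, so the network is outputting uniform probabilities and learning nothing. A minimal sketch of the changes suggested above (softmax output, BatchNormalization before the Activation, fewer and milder Dropout layers; the 0.25 rate is an assumption, not from the original post):

model = Sequential()

# Conv -> BatchNormalization -> Activation ordering, as suggested
model.add(Conv2D(32, (3, 3), input_shape=(img_rows, img_cols, 3),
                 kernel_initializer="glorot_uniform", bias_initializer="zeros"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))  # milder than 0.5 in every block

# ... repeat for the remaining conv blocks ...

model.add(Flatten())      # no Dropout immediately after Flatten
model.add(Dense(512))
model.add(Activation('relu'))

model.add(Dense(2))
model.add(Activation('softmax'))  # softmax pairs with categorical_crossentropy

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])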