Keras — UNET for image segmentation: validation loss stops decreasing after some time and stays constant
I am training a UNET image segmentation network on the figshare brain-tumour dataset. It trains well: the training loss and training Dice score change as expected, and the validation loss and validation Dice score move in step with them, which suggests there is no overfitting problem. But after about 40 epochs the performance metrics stop improving; the model oscillates around a loss of 0.58 and a Dice score of 0.47. How can I fix this? Please advise. My UNET network is below.
from keras.models import Model
from keras.layers import (Input, Convolution2D, MaxPooling2D, UpSampling2D,
                          BatchNormalization, Dropout, concatenate)

def unet(pretrained_weights=None, input_size=(512, 512, 3)):
    inputs = Input(input_size)
    # Encoder
    conv1 = Convolution2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
    conv1 = BatchNormalization()(conv1)
    #conv1 = Dropout(0.2)(conv1)
    conv1 = Convolution2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv1)
    conv1 = BatchNormalization()(conv1)
    #conv1 = Dropout(0.2)(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Convolution2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool1)
    conv2 = BatchNormalization()(conv2)
    #conv2 = Dropout(0.1)(conv2)
    conv2 = Convolution2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
    conv2 = BatchNormalization()(conv2)
    #conv2 = Dropout(0.1)(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Convolution2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool2)
    conv3 = BatchNormalization()(conv3)
    #conv3 = Dropout(0.1)(conv3)
    conv3 = Convolution2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
    conv3 = BatchNormalization()(conv3)
    #conv3 = Dropout(0.1)(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Convolution2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool3)
    conv4 = BatchNormalization()(conv4)
    #conv4 = Dropout(0.1)(conv4)
    conv4 = Convolution2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
    conv4 = BatchNormalization()(conv4)
    #conv4 = Dropout(0.5)(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
    # Bottleneck
    conv5 = Convolution2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(pool4)
    conv5 = BatchNormalization()(conv5)
    #conv5 = Dropout(0.1)(conv5)
    conv5 = Convolution2D(1024, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
    conv5 = BatchNormalization()(conv5)
    #conv5 = Dropout(0.5)(conv5)
    # Decoder with skip connections
    up6 = Convolution2D(512, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv5))
    merge6 = concatenate([conv4, up6], axis=3)
    conv6 = Convolution2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge6)
    conv6 = BatchNormalization()(conv6)
    #conv6 = Dropout(0.1)(conv6)
    conv6 = Convolution2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
    conv6 = BatchNormalization()(conv6)
    #conv6 = Dropout(0.1)(conv6)
    up7 = Convolution2D(256, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv6))
    merge7 = concatenate([conv3, up7], axis=3)
    conv7 = Convolution2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
    conv7 = BatchNormalization()(conv7)
    #conv7 = Dropout(0.1)(conv7)
    conv7 = Convolution2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv7)
    conv7 = BatchNormalization()(conv7)
    #conv7 = Dropout(0.1)(conv7)
    up8 = Convolution2D(128, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv7))
    merge8 = concatenate([conv2, up8], axis=3)
    conv8 = Convolution2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
    conv8 = BatchNormalization()(conv8)
    #conv8 = Dropout(0.1)(conv8)
    conv8 = Convolution2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv8)
    conv8 = BatchNormalization()(conv8)
    #conv8 = Dropout(0.1)(conv8)
    up9 = Convolution2D(64, 2, activation='relu', padding='same', kernel_initializer='he_normal')(UpSampling2D(size=(2, 2))(conv8))
    merge9 = concatenate([conv1, up9], axis=3)
    conv9 = Convolution2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
    conv9 = BatchNormalization()(conv9)
    #conv9 = Dropout(0.2)(conv9)
    conv9 = Convolution2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = BatchNormalization()(conv9)
    #conv9 = Dropout(0.2)(conv9)
    conv9 = Convolution2D(2, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv9)
    conv9 = BatchNormalization()(conv9)
    #conv9 = Dropout(0.2)(conv9)
    conv10 = Convolution2D(1, 1, activation='sigmoid')(conv9)
    # Keras 2 keyword names: inputs/outputs (the old input=/output= are deprecated)
    model = Model(inputs=inputs, outputs=conv10)
    #model.summary()
    if pretrained_weights:
        model.load_weights(pretrained_weights)
    return model
Callbacks are initialized as below. Starting LR = 1e-4.
callbacks = [EarlyStopping(monitor='val_loss', mode='min', patience=30, verbose=1, min_delta=1e-4),
             ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=8, verbose=1),
             ModelCheckpoint(monitor='val_loss', mode='min',
                             filepath='weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-{epoch:03d}-{dice_coef:.6f}--{val_loss:.6f}.hdf5',
                             save_weights_only=True, verbose=1),
             CSVLogger('weights/anmol/1/UNET_mixed_loss_monitor_DC_new.csv')]
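With factor=0.1 and a starting LR of 1e-4, every plateau trigger of ReduceLROnPlateau multiplies the learning rate by 0.1. A minimal sketch of the resulting schedule (plain Python, no Keras; the 9.999999747378752e-07 that later appears in the training log is just the float32 rendering of 1e-6, i.e. two reductions):

```python
initial_lr = 1e-4   # starting LR from the question
factor = 0.1        # ReduceLROnPlateau factor

# Learning rate after k plateau-triggered reductions
schedule = [initial_lr * factor ** k for k in range(3)]
print(schedule)  # roughly [1e-4, 1e-5, 1e-6]
```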
My user-defined Dice score and loss functions. I used the combined Dice + binary cross-entropy loss here.
from keras import backend as K
from keras.losses import binary_crossentropy

def dice_coef(y_true, y_pred, smooth=1):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_loss(y_true, y_pred):
    loss = 1 - dice_coef(y_true, y_pred)
    return loss

def dice_coef_loss(y_true, y_pred):
    return binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
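As a sanity check on the formula, here is the same Dice coefficient in plain NumPy (a minimal sketch, independent of the Keras backend; the toy arrays are made up):

```python
import numpy as np

def dice_coef_np(y_true, y_pred, smooth=1):
    # Same formula as the Keras version, on flattened arrays
    y_true_f = np.asarray(y_true, dtype=float).ravel()
    y_pred_f = np.asarray(y_pred, dtype=float).ravel()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

# Perfect overlap: intersection 2, so (2*2 + 1) / (2 + 2 + 1) = 1.0
print(dice_coef_np([1, 1, 0, 0], [1, 1, 0, 0]))   # 1.0
# Partial overlap: intersection 1, so (2*1 + 1) / (2 + 1 + 1) = 0.75
print(dice_coef_np([1, 1, 0, 0], [1, 0, 0, 0]))   # 0.75
```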
2605 images are used for training and 306 for validation.
Some of the training logs are shown below:
Epoch 00041: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-041-0.466533--0.511900.hdf5
Epoch 42/100
1302/1302 [==============================] - 1063s 817ms/step - loss: 0.5939 - dice_coef: 0.4658 - val_loss: 0.5076 - val_dice_coef: 0.5430
Epoch 00042: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-042-0.465990--0.507603.hdf5
Epoch 43/100
1302/1302 [==============================] - 1062s 816ms/step - loss: 0.5928 - dice_coef: 0.4678 - val_loss: 0.5191 - val_dice_coef: 0.5270
Epoch 00043: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-043-0.467685--0.519115.hdf5
Epoch 44/100
1302/1302 [==============================] - 1063s 817ms/step - loss: 0.5966 - dice_coef: 0.4632 - val_loss: 0.5158 - val_dice_coef: 0.5364
Epoch 00044: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-044-0.463308--0.515760.hdf5
Epoch 45/100
1302/1302 [==============================] - 1064s 817ms/step - loss: 0.5892 - dice_coef: 0.4702 - val_loss: 0.4993 - val_dice_coef: 0.5507
Epoch 00045: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-045-0.470134--0.499294.hdf5
Epoch 46/100
1302/1302 [==============================] - 1063s 816ms/step - loss: 0.5960 - dice_coef: 0.4636 - val_loss: 0.5166 - val_dice_coef: 0.5329
Epoch 00046: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-046-0.463810--0.516552.hdf5
Epoch 47/100
1302/1302 [==============================] - 1065s 818ms/step - loss: 0.5920 - dice_coef: 0.4672 - val_loss: 0.5062 - val_dice_coef: 0.5427
Epoch 00047: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-047-0.467146--0.506242.hdf5
Epoch 48/100
1302/1302 [==============================] - 1063s 816ms/step - loss: 0.5938 - dice_coef: 0.4657 - val_loss: 0.5239 - val_dice_coef: 0.5277
Epoch 00048: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-048-0.465866--0.523923.hdf5
Epoch 49/100
1302/1302 [==============================] - 1064s 817ms/step - loss: 0.5962 - dice_coef: 0.4639 - val_loss: 0.5035 - val_dice_coef: 0.5434
Epoch 00049: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-049-0.463924--0.503518.hdf5
Epoch 50/100
1302/1302 [==============================] - 1063s 816ms/step - loss: 0.5854 - dice_coef: 0.4743 - val_loss: 0.5463 - val_dice_coef: 0.5066
Epoch 00050: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-050-0.474530--0.546343.hdf5
Epoch 51/100
1302/1302 [==============================] - 1063s 816ms/step - loss: 0.5840 - dice_coef: 0.4749 - val_loss: 0.5146 - val_dice_coef: 0.5360
Epoch 00051: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-051-0.475072--0.514581.hdf5
Epoch 52/100
1302/1302 [==============================] - 1064s 817ms/step - loss: 0.5852 - dice_coef: 0.4742 - val_loss: 0.5257 - val_dice_coef: 0.5256
Epoch 00052: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-052-0.474234--0.525729.hdf5
Epoch 53/100
1302/1302 [==============================] - 1065s 818ms/step - loss: 0.5857 - dice_coef: 0.4736 - val_loss: 0.5157 - val_dice_coef: 0.5315
Epoch 00053: ReduceLROnPlateau reducing learning rate to 9.999999747378752e-07.
Epoch 00053: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-053-0.473557--0.515651.hdf5
Epoch 54/100
1302/1302 [==============================] - 1065s 818ms/step - loss: 0.5852 - dice_coef: 0.4737 - val_loss: 0.5067 - val_dice_coef: 0.5421
Epoch 00054: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-054-0.473682--0.506671.hdf5
Epoch 55/100
1302/1302 [==============================] - 1065s 818ms/step - loss: 0.5903 - dice_coef: 0.4696 - val_loss: 0.4910 - val_dice_coef: 0.5571
Epoch 00055: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-055-0.469478--0.491024.hdf5
Epoch 56/100
1302/1302 [==============================] - 1065s 818ms/step - loss: 0.5876 - dice_coef: 0.4711 - val_loss: 0.5154 - val_dice_coef: 0.5340
Epoch 00056: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-056-0.471110--0.515441.hdf5
Epoch 57/100
1302/1302 [==============================] - 1064s 817ms/step - loss: 0.5897 - dice_coef: 0.4703 - val_loss: 0.5263 - val_dice_coef: 0.5258
Epoch 00057: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-057-0.470255--0.526310.hdf5
Epoch 58/100
1302/1302 [==============================] - 1064s 817ms/step - loss: 0.5849 - dice_coef: 0.4741 - val_loss: 0.5067 - val_dice_coef: 0.5451
Epoch 00058: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-058-0.474262--0.506664.hdf5
Epoch 59/100
1302/1302 [==============================] - 1062s 816ms/step - loss: 0.5816 - dice_coef: 0.4769 - val_loss: 0.5160 - val_dice_coef: 0.5348
Epoch 00059: saving model to weights/anmol/1/UNET_sigmoid_focus_DC_2605_R_B_t-059-0.476830--0.516005.hdf5
Epoch 60/100
img_size = 512
image_args = dict(seed=seed,
                  batch_size=2,
                  shuffle=True,
                  class_mode=None,
                  target_size=(img_size, img_size),
                  color_mode='rgb')
mask_args = dict(seed=seed,
                 batch_size=2,
                 class_mode=None,
                 shuffle=True,
                 target_size=(img_size, img_size),
                 color_mode='grayscale')
DIR = 'raw/brain/'
image = 'images'
masks = 'masks'
# combine generators into one which yields image and masks
train_generator = zip(image_datagen.flow_from_directory(**image_args, directory=DIR + 'train_' + image),
                      mask_datagen.flow_from_directory(**mask_args, directory=DIR + 'train_' + masks))
validation_generator = zip(image_datagen.flow_from_directory(**image_args, directory=DIR + 'validation_' + image),
                           mask_datagen.flow_from_directory(**mask_args, directory=DIR + 'validation_' + masks))
model.fit_generator(train_generator, steps_per_epoch=1302, epochs=100, validation_data=validation_generator, validation_steps=153, callbacks=callbacks)
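One detail worth double-checking in the fit_generator call above: with batch_size=2, steps_per_epoch and validation_steps should cover the 2605 training and 306 validation images once per epoch. A minimal sketch of that arithmetic (pure Python, no Keras needed):

```python
import math

batch_size = 2
n_train, n_val = 2605, 306   # image counts from the question

# One epoch should consume ceil(n / batch_size) batches
steps_per_epoch = math.ceil(n_train / batch_size)   # 1303; the call above uses 1302, skipping the odd image
validation_steps = math.ceil(n_val / batch_size)    # 153, matching the call above
print(steps_per_epoch, validation_steps)
```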