
Python: loss function is decreasing, but the metric stays constant?

Tags: python, tensorflow, keras, image-segmentation, semantic-segmentation

I am working on medical image segmentation with two classes: class 0 is the background and class 1 is the lesion. Since the dataset is highly imbalanced, I use (1 - weighted dice coefficient) as the loss function and the dice coefficient as the metric. I have normalized the dataset from 0-255 to 0-1, and I am training with Keras on the TensorFlow backend. While training a UNet++ model, my loss decreases with every epoch, but my metric stays constant. I cannot understand why the metric is constant while the loss decreases as expected. Also, I don't understand why the loss is greater than 1 when the dice coefficient returns a value between 0 and 1.

Here is my loss function:

def dice_loss(y_true, y_pred):
    smooth = 1.
    w1 = 0.3
    w2 = 0.7

    y_true_f = K.flatten(y_true[...,0])
    y_pred_f = K.flatten(y_pred[...,0])
    intersect = K.abs(K.sum(y_true_f * y_pred_f, axis = -1))
    denom = K.abs(K.sum(y_true_f, axis = -1)) + K.abs(K.sum(y_pred_f, axis = -1))
    coef1 = (2 * intersect + smooth) / (denom + smooth)

    y_true_f1 = K.flatten(y_true[...,1])
    y_pred_f1 = K.flatten(y_pred[...,1])
    intersect1 = K.abs(K.sum(y_true_f1 * y_pred_f1, axis = -1))
    denom1 = K.abs(K.sum(y_true_f1, axis = -1)) + K.abs(K.sum(y_pred_f1, axis = -1))
    coef2 = (2 * intersect1 + smooth) / (denom1 + smooth)

    weighted_dice_coef = w1 * coef1 + w2 * coef2
    return (1 - weighted_dice_coef)
And here is the metric function:

def dsc(y_true, y_pred):
    """
    DSC = 2|X and Y| / (|X| + |Y|)
    """
    smooth = 1.
    y_true_f = K.flatten(y_true[...,1])
    y_pred_f = K.flatten(y_pred[...,1])
    intersect = K.abs(K.sum(y_true_f * y_pred_f, axis = -1))
    denom = K.abs(K.sum(y_true_f, axis = -1)) + K.abs(K.sum(y_pred_f, axis = -1))
    coef = (2 * intersect + smooth) / (denom + smooth)

    return coef
[Plot: training loss vs. epochs]

Here is the sample code:

#Imports assumed by this snippet (Keras 2.x with the TensorFlow backend);
#they were not shown in the original post.
import tensorflow as tf
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Conv2D, Conv2DTranspose, MaxPooling2D, Dropout, concatenate
from keras.regularizers import l2
from keras.optimizers import Adam
from keras.utils import multi_gpu_model
from keras.callbacks import LearningRateScheduler
from keras.preprocessing.image import ImageDataGenerator

def standard_unit(input_tensor, stage, nb_filter, kernel_size = 3):

    x = Conv2D(nb_filter, kernel_size, padding = 'same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name = 'conv' + stage + '_1')(input_tensor)
    x = Dropout(dropout_rate, name = 'dp' + stage + '_1')(x)
    x = Conv2D(nb_filter, kernel_size, padding = 'same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name = 'conv' + stage + '_2')(x)
    x = Dropout(dropout_rate, name = 'dp' + stage + '_2')(x)

    return x

dropout_rate = 0.5
act = "relu"

def Nest_UNet(input_size = (None, None, 1), num_class = 2, deep_supervision = False):

    #class 0: Background
    #class 1: Lesions
    nb_filter = [32,64,128,256,512]

    #Handle dimension ordering for different backends
    #(K.image_dim_ordering() is the legacy API; newer Keras uses K.image_data_format())
    global bn_axis
    if K.image_dim_ordering() == 'tf':
        bn_axis = 3
    else:
        bn_axis = 1
    img_input = Input(input_size, name = 'main_input')

    conv1_1 = standard_unit(img_input, stage = '11', nb_filter = nb_filter[0])
    pool1 = MaxPooling2D(2, strides=2, name='pool1')(conv1_1)
    #pool1 = dilatedConv(conv1_1, stage = '11', nb_filter = nb_filter[0])

    conv2_1 = standard_unit(pool1, stage='21', nb_filter=nb_filter[1])
    pool2 = MaxPooling2D(2, strides=2, name='pool2')(conv2_1)
    #pool2 = dilatedConv(conv2_1, stage = '21', nb_filter = nb_filter[1])

    up1_2 = Conv2DTranspose(nb_filter[0], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up12')(conv2_1)
    conv1_2 = concatenate([up1_2, conv1_1], name='merge12', axis=bn_axis)
    conv1_2 = standard_unit(conv1_2, stage='12', nb_filter=nb_filter[0])

    conv3_1 = standard_unit(pool2, stage='31', nb_filter=nb_filter[2])
    pool3 = MaxPooling2D(2, strides=2, name='pool3')(conv3_1)
    #pool3 = dilatedConv(conv3_1, stage = '31', nb_filter = nb_filter[2])

    up2_2 = Conv2DTranspose(nb_filter[1], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up22')(conv3_1)
    conv2_2 = concatenate([up2_2, conv2_1], name='merge22', axis=bn_axis)
    conv2_2 = standard_unit(conv2_2, stage='22', nb_filter=nb_filter[1])

    up1_3 = Conv2DTranspose(nb_filter[0], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up13')(conv2_2)
    conv1_3 = concatenate([up1_3, conv1_1, conv1_2], name='merge13', axis=bn_axis)
    conv1_3 = standard_unit(conv1_3, stage='13', nb_filter=nb_filter[0])

    conv4_1 = standard_unit(pool3, stage='41', nb_filter=nb_filter[3])
    pool4 = MaxPooling2D(2, strides=2, name='pool4')(conv4_1)
    #pool4 = dilatedConv(conv4_1, stage = '41', nb_filter = nb_filter[3])

    up3_2 = Conv2DTranspose(nb_filter[2], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up32')(conv4_1)
    conv3_2 = concatenate([up3_2, conv3_1], name='merge32', axis=bn_axis)
    conv3_2 = standard_unit(conv3_2, stage='32', nb_filter=nb_filter[2])

    up2_3 = Conv2DTranspose(nb_filter[1], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up23')(conv3_2)
    conv2_3 = concatenate([up2_3, conv2_1, conv2_2], name='merge23', axis=bn_axis)
    conv2_3 = standard_unit(conv2_3, stage='23', nb_filter=nb_filter[1])

    up1_4 = Conv2DTranspose(nb_filter[0], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up14')(conv2_3)
    conv1_4 = concatenate([up1_4, conv1_1, conv1_2, conv1_3], name='merge14', axis=bn_axis)
    conv1_4 = standard_unit(conv1_4, stage='14', nb_filter=nb_filter[0])

    conv5_1 = standard_unit(pool4, stage='51', nb_filter=nb_filter[4])

    up4_2 = Conv2DTranspose(nb_filter[3], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up42')(conv5_1)
    conv4_2 = concatenate([up4_2, conv4_1], name='merge42', axis=bn_axis)
    conv4_2 = standard_unit(conv4_2, stage='42', nb_filter=nb_filter[3])

    up3_3 = Conv2DTranspose(nb_filter[2], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up33')(conv4_2)
    conv3_3 = concatenate([up3_3, conv3_1, conv3_2], name='merge33', axis=bn_axis)
    conv3_3 = standard_unit(conv3_3, stage='33', nb_filter=nb_filter[2])

    up2_4 = Conv2DTranspose(nb_filter[1], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up24')(conv3_3)
    conv2_4 = concatenate([up2_4, conv2_1, conv2_2, conv2_3], name='merge24', axis=bn_axis)
    conv2_4 = standard_unit(conv2_4, stage='24', nb_filter=nb_filter[1])

    up1_5 = Conv2DTranspose(nb_filter[0], 2, strides=2, padding='same', activation = act, kernel_initializer = 'he_normal', kernel_regularizer=l2(1e-4), name='up15')(conv2_4)
    conv1_5 = concatenate([up1_5, conv1_1, conv1_2, conv1_3, conv1_4], name='merge15', axis=bn_axis)
    conv1_5 = standard_unit(conv1_5, stage='15', nb_filter=nb_filter[0])

    nestnet_output_1 = Conv2D(num_class, 1, activation='softmax', name='output_1', kernel_initializer = 'he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_2)
    nestnet_output_2 = Conv2D(num_class, 1, activation='softmax', name='output_2', kernel_initializer = 'he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_3)
    nestnet_output_3 = Conv2D(num_class, 1, activation='softmax', name='output_3', kernel_initializer = 'he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_4)
    nestnet_output_4 = Conv2D(num_class, 1, activation='softmax', name='output_4', kernel_initializer = 'he_normal', padding='same', kernel_regularizer=l2(1e-4))(conv1_5)
    nestnet_output_5 = concatenate([nestnet_output_4, nestnet_output_3, nestnet_output_2, nestnet_output_1], name = "mergeAll", axis = bn_axis)
    nestnet_output_5 = Conv2D(num_class, 1, activation='softmax', name='output_5', kernel_initializer = 'he_normal', padding='same', kernel_regularizer=l2(1e-4))(nestnet_output_5)

    if deep_supervision:
        model = Model(input=img_input, output = nestnet_output_5)
    else:
        model = Model(input=img_input, output = nestnet_output_4)

    return model

#Gpu, init_lr, beta1, beta2, batch_size, n_epoch and the data arrays
#(trainX, trainY, validX, validY) are defined elsewhere in the author's script.
with tf.device("/cpu:0"):
    #initialize the model
    model = Nest_UNet(deep_supervision = False)
#make the model parallel
model = multi_gpu_model(model, gpus = Gpu)
#initialize the optimizer and model
optimizer = Adam(lr = init_lr, beta_1 = beta1, beta_2 = beta2)
model.compile(loss = dice_loss, optimizer = optimizer, metrics = [dsc])
callbacks = [LearningRateScheduler(poly_decay)]
#train the network
aug = ImageDataGenerator(rotation_range = 10, width_shift_range = 0.1, height_shift_range = 0.1, horizontal_flip = True, fill_mode = "nearest")
aug.fit(trainX)
train = model.fit_generator(aug.flow(x = trainX, y = trainY, batch_size = batch_size * Gpu), steps_per_epoch = len(trainX) // (batch_size * Gpu),
                            epochs = n_epoch, verbose = 2, callbacks = callbacks, validation_data = (validX, validY), shuffle = True)
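
The snippet above references poly_decay without defining it. For completeness, here is a minimal sketch of the kind of polynomial learning-rate schedule usually passed to LearningRateScheduler; the linear power and the reuse of the init_lr and n_epoch globals are assumptions, not the author's code:

#Hypothetical reconstruction of the undefined poly_decay schedule.
#Assumes the same init_lr and n_epoch globals used in the training code.
def poly_decay(epoch):
    power = 1.0  #power = 1.0 gives a linear decay to zero over n_epoch epochs
    return init_lr * (1 - (epoch / float(n_epoch))) ** power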

It looks like you took the code and kept it basically intact. Your switch from sigmoid to softmax is a bit suspicious. Are you comparing a one-hot-encoded y_pred against a y_true that is not one-hot encoded? Maybe you could print the shape of your output layer and compare it to the shape of y_true.
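
A minimal sketch of that shape check, assuming trainY is available in memory as in the training code (the to_categorical conversion is only needed if trainY actually stores integer labels, which is an assumption here):

from keras.utils import to_categorical

#Compare the model's softmax output shape with the ground-truth shape.
print(model.output_shape)   #e.g. (None, None, None, 2) for num_class = 2
print(trainY.shape)         #should match, e.g. (N, H, W, 2) if one-hot encoded

#If trainY holds integer labels of shape (N, H, W) instead, one-hot encode it:
#trainY = to_categorical(trainY, num_classes = 2)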

I use the Tversky index in my semantic segmentation solutions because it is a generalization of the intersection-over-union and Sørensen–Dice coefficient calculations, it lets us emphasize false positives or false negatives more elegantly than the weighted dice coefficient approach, and it does not require using axis=-1, which I think is the root of your problem. For the loss, I simply negated the Tversky index metric:

def tversky_index(y_true, y_pred):
    #generalization of the dice coefficient algorithm
    #alpha corresponds to emphasizing false positives
    #beta corresponds to emphasizing false negatives (our focus)
    #if alpha = beta = 0.5, same as dice
    #if alpha = beta = 1.0, same as IoU / Jaccard
    alpha = 0.5
    beta = 0.5
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return intersection / (intersection + alpha * K.sum(y_pred_f * (1. - y_true_f)) + beta * K.sum((1. - y_pred_f) * y_true_f))

def tversky_index_loss(y_true, y_pred):
    return -tversky_index(y_true, y_pred)

learning_rate = 5e-5  #also try 5e-4 or 5e-3, depending on your network
optimizer = Adam(lr = learning_rate)
unet_model.compile(optimizer = optimizer, loss = tversky_index_loss, metrics = ['accuracy', 'sparse_categorical_accuracy', tversky_index])
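
One design note on this code: K.flatten collapses the batch and spatial dimensions into a single vector and K.sum then runs over everything, so the Tversky index here is computed across the whole batch at once rather than per image, which is what removes the need for an axis argument.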

1. The metric only starts to change noticeably once the loss has dropped low enough; in image segmentation problems there is no strict positive correlation between the two.

2. The dice loss can be greater than 1 because the total loss is the sum of the losses across the batch.
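
As a concrete illustration of point 2: if each of the 4 images in a batch had a dice loss of 0.6, a summed (rather than averaged) reduction would report 4 x 0.6 = 2.4, even though every per-image dice loss stays within [0, 1].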

Can you provide a reproducible example that can just be copy-pasted and run? Have you tried printing intermediate results such as intersect and denom? By the way, you could call the dsc() function inside dice_loss() instead of computing coef2 separately, which would reduce the number of places where a bug can hide.

I suspect something is wrong with your channel 1. Either the model is frozen in channel 1, or your data is out of range compared with the activation. -- Also, please verify that the plot has the correct labels. I find it hard to believe that the validation values are more stable than the training values.

@DanielMöller, I verified the plot; it has the correct labels. Could you explain what you mean by "the model is frozen in channel 1", and how do I correct it? @wind, I will print the intermediate results and get back to you. Also, I tried using the dsc() function inside dice_loss(), but nothing changed.

@MdSharique: Is the problem solved? If not, you could try changing the metric function to the weighted dice coefficient instead of the plain dice coefficient. Thanks.
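
For reference, a minimal sketch of the refactor suggested in the comments: one per-channel dice helper shared by the loss and the metric, plus the weighted-dice metric variant from the last comment. The helper name channel_dice is mine, and the redundant K.abs and axis = -1 calls (no-ops on an already-flattened, non-negative tensor) are dropped:

def channel_dice(y_true, y_pred, channel, smooth = 1.):
    #Dice coefficient for a single channel of a one-hot-encoded mask.
    y_true_f = K.flatten(y_true[..., channel])
    y_pred_f = K.flatten(y_pred[..., channel])
    intersect = K.sum(y_true_f * y_pred_f)
    return (2. * intersect + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dsc(y_true, y_pred):
    #Metric: dice on the lesion channel only, as in the original post.
    return channel_dice(y_true, y_pred, 1)

def weighted_dsc(y_true, y_pred):
    #Metric variant suggested in the last comment: weighted dice instead of plain dice.
    return 0.3 * channel_dice(y_true, y_pred, 0) + 0.7 * channel_dice(y_true, y_pred, 1)

def dice_loss(y_true, y_pred):
    #Loss: 1 - weighted dice over background (w1 = 0.3) and lesion (w2 = 0.7) channels.
    return 1. - weighted_dsc(y_true, y_pred)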