TensorFlow: Semantic Segmentation with Weighted Pixel-wise Categorical Cross-Entropy

tensorflow, keras, deep-learning, image-segmentation, semantic-segmentation

I recently started learning about semantic segmentation, and I am trying to train a UNet for it. My input is a 128x128x3 RGB image. My masks consist of 4 classes (0, 1, 2, 3) and are one-hot encoded with dimensions 128x128x4.

import tensorflow as tf
from tensorflow.keras import backend as K

def weighted_cce(y_true, y_pred):
    # per-batch pixel count for each of the 4 classes
    weights = []
    for i in range(4):
        l = tf.argmax(y_true, axis=-1) == i
        n = tf.cast(tf.math.count_nonzero(l), 'float32') + K.epsilon()
        weights.append(n)

    weights = [batch_size / j for j in weights]  # batch_size is defined globally

    y_pred /= K.sum(y_pred, axis=-1, keepdims=True)
    # clip to prevent NaN's and Inf's
    y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
    # weighted categorical cross-entropy
    loss = y_true * K.log(y_pred) * weights
    loss = -K.sum(loss, -1)
    return loss

This is the loss function I am using, but it classifies every pixel as class 2. What am I doing wrong?

You should compute the weights based on your entire dataset (unless your batch size is reasonably large, so that the weights are fairly stable).

If some class is underrepresented and the batch size is small, its weight will approach infinity.
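A quick numeric illustration of that blow-up, using hypothetical per-batch counts (with eps standing in for K.epsilon()):

```python
import numpy as np

eps = 1e-7      # stands in for K.epsilon()
batch_size = 8

# hypothetical per-class pixel counts for one small batch; class 3 is absent
pixel_counts = np.array([4000.0, 200.0, 10.0, 0.0]) + eps
weights = batch_size / pixel_counts

# the weight for the absent class 3 explodes to ~8e7, dwarfing the others
```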

If your target data is a numpy array:

shp = y_train.shape
totalPixels = shp[0] * shp[1] * shp[2]

weights = np.sum(y_train, axis=(0, 1, 2))  # per-class pixel counts, final shape (4,)
weights = totalPixels / weights
If your data is in a Sequence generator:

totalPixels = 0
counts = np.zeros((4,))

for i in range(len(generator)):
    x, y = generator[i]

    shp = y.shape
    totalPixels += shp[0] * shp[1] * shp[2]

    counts = counts + np.sum(y, axis=(0,1,2))

weights = totalPixels / counts
If your data comes from a yield generator (you must know how many batches there are in one epoch):
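The answer stops short of the code for this case; a minimal sketch of what that counting loop could look like (the steps_per_epoch argument and the 4-class shape are assumptions carried over from above):

```python
import numpy as np

def weights_from_yield_generator(gen, steps_per_epoch, n_classes=4):
    """Accumulate per-class pixel counts over one epoch of a plain
    yield-style generator. Unlike a Sequence it has no len(), so the
    number of batches per epoch must be supplied by the caller."""
    totalPixels = 0
    counts = np.zeros((n_classes,))
    for _ in range(steps_per_epoch):
        x, y = next(gen)                    # y: (batch, H, W, n_classes)
        shp = y.shape
        totalPixels += shp[0] * shp[1] * shp[2]
        counts += np.sum(y, axis=(0, 1, 2))
    return totalPixels / counts

# toy demonstration with a fake generator of random one-hot masks
def fake_gen(seed=0):
    rng = np.random.default_rng(seed)
    while True:
        labels = rng.integers(0, 4, size=(2, 8, 8))
        yield None, np.eye(4)[labels]       # one-hot, shape (2, 8, 8, 4)

weights = weights_from_yield_generator(fake_gen(), steps_per_epoch=5)
```

Since the counts sum to totalPixels, the reciprocals of the resulting weights always sum to 1, which is a cheap sanity check.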


Attempt 1: I don't know whether newer versions of Keras can handle this, but you can try the simplest approach first: simply call fit or fit_generator with the class_weight argument:

model.fit(...., class_weight = {0: weights[0], 1: weights[1], 2: weights[2], 3: weights[3]})
Attempt 2: Build a healthier loss function:

weights = weights.reshape((1,1,1,4))
kWeights = K.constant(weights)

def weighted_cce(y_true, y_pred):
    yWeights = kWeights * y_pred         #shape (batch, 128, 128, 4)
    yWeights = K.sum(yWeights, axis=-1)  #shape (batch, 128, 128)  

    loss = K.categorical_crossentropy(y_true, y_pred) #shape (batch, 128, 128)
    wLoss = yWeights * loss

    return K.sum(wLoss, axis=(1,2))
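To see what this loss actually computes, here is the same arithmetic in plain NumPy on a hypothetical 2x2 "image" (note that the per-pixel weight is the prediction-weighted average of the class weights, not simply the weight of the true class):

```python
import numpy as np

# hypothetical class weights, broadcastable against (batch, H, W, 4)
weights = np.array([1.0, 2.0, 4.0, 8.0]).reshape((1, 1, 1, 4))

y_true = np.zeros((1, 2, 2, 4))
y_true[..., 0] = 1.0                       # every pixel is class 0
y_pred = np.full((1, 2, 2, 4), 0.25)       # uniform prediction

yWeights = (weights * y_pred).sum(axis=-1)    # (1, 2, 2): 3.75 at every pixel
ce = -(y_true * np.log(y_pred)).sum(axis=-1)  # (1, 2, 2): log(4) per pixel
wLoss = (yWeights * ce).sum(axis=(1, 2))      # (1,): summed over all pixels
```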
