Python: How is the total loss over multiple classes calculated in Keras?


Suppose my network has the following parameters:

  • a fully convolutional network for semantic segmentation
  • loss = weighted binary cross-entropy (but it could be any loss function, it does not matter)
  • 5 classes - the input is an image and the ground truths are binary masks
  • batch size = 16

    Now, I know that the loss is calculated in the following way: binary cross-entropy is applied to each pixel of the image with respect to each class. So essentially, each pixel will have 5 loss values.

    What happens after this step?

    When I train my network, it prints only a single loss value for an epoch. There are many levels of loss accumulation that must happen to produce a single value, and how that happens is not clear at all from the documentation/code.

  • What gets combined first - (1) the loss values of the classes (e.g. the 5 values per pixel, one per class, are combined first and then all the pixels in the image), or (2) all the pixels in the image for each individual class, and then all the class losses are combined?
  • How exactly do these different pixel combinations happen - where is it summed / where is it averaged?
  • The averaging over axis=-1 in binary_crossentropy: is that an average over all the pixels of each class, over all the classes, or both? To state it differently: how are the losses for the different classes combined to produce a single loss value for an image?

    This is not explained in the documentation at all, and it would be very helpful for people doing any kind of multi-class prediction with Keras, regardless of the network type. Here is the link to where the first pass through the loss function starts.

    The closest thing I could find to an explanation is

    loss: String (name of objective function) or objective function. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.

    from the documentation. So does that mean the losses for each class in the image are simply summed?

    Example code for anyone who wants to try it: below is a basic implementation, borrowed and modified for multi-label prediction.

    # Build U-Net model
    import tensorflow as tf
    from keras import backend as K
    from keras.models import Model
    from keras.layers import Input, Lambda, Conv2D, Conv2DTranspose, MaxPooling2D, concatenate
    
    num_classes = 5
    IMG_DIM = 256
    IMG_CHAN = 3
    weights = {0: 1, 1: 1, 2: 1, 3: 1, 4: 1000} # chose an extreme value just to check for any reaction
    inputs = Input((IMG_DIM, IMG_DIM, IMG_CHAN))
    s = Lambda(lambda x: x / 255) (inputs)
    
    c1 = Conv2D(8, (3, 3), activation='relu', padding='same') (s)
    c1 = Conv2D(8, (3, 3), activation='relu', padding='same') (c1)
    p1 = MaxPooling2D((2, 2)) (c1)
    
    c2 = Conv2D(16, (3, 3), activation='relu', padding='same') (p1)
    c2 = Conv2D(16, (3, 3), activation='relu', padding='same') (c2)
    p2 = MaxPooling2D((2, 2)) (c2)
    
    c3 = Conv2D(32, (3, 3), activation='relu', padding='same') (p2)
    c3 = Conv2D(32, (3, 3), activation='relu', padding='same') (c3)
    p3 = MaxPooling2D((2, 2)) (c3)
    
    c4 = Conv2D(64, (3, 3), activation='relu', padding='same') (p3)
    c4 = Conv2D(64, (3, 3), activation='relu', padding='same') (c4)
    p4 = MaxPooling2D(pool_size=(2, 2)) (c4)
    
    c5 = Conv2D(128, (3, 3), activation='relu', padding='same') (p4)
    c5 = Conv2D(128, (3, 3), activation='relu', padding='same') (c5)
    
    u6 = Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same') (c5)
    u6 = concatenate([u6, c4])
    c6 = Conv2D(64, (3, 3), activation='relu', padding='same') (u6)
    c6 = Conv2D(64, (3, 3), activation='relu', padding='same') (c6)
    
    u7 = Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same') (c6)
    u7 = concatenate([u7, c3])
    c7 = Conv2D(32, (3, 3), activation='relu', padding='same') (u7)
    c7 = Conv2D(32, (3, 3), activation='relu', padding='same') (c7)
    
    u8 = Conv2DTranspose(16, (2, 2), strides=(2, 2), padding='same') (c7)
    u8 = concatenate([u8, c2])
    c8 = Conv2D(16, (3, 3), activation='relu', padding='same') (u8)
    c8 = Conv2D(16, (3, 3), activation='relu', padding='same') (c8)
    
    u9 = Conv2DTranspose(8, (2, 2), strides=(2, 2), padding='same') (c8)
    u9 = concatenate([u9, c1], axis=3)
    c9 = Conv2D(8, (3, 3), activation='relu', padding='same') (u9)
    c9 = Conv2D(8, (3, 3), activation='relu', padding='same') (c9)
    
    outputs = Conv2D(num_classes, (1, 1), activation='sigmoid') (c9)
    
    model = Model(inputs=[inputs], outputs=[outputs])
    
    def weighted_loss(weightsList):
        def lossFunc(true, pred):
    
            axis = -1 #if channels last 
            #axis=  1 #if channels first        
            # build a per-pixel weight map: every pixel gets the weight of its ground-truth class
            classSelectors = K.argmax(true, axis=axis) 
            classSelectors = [K.equal(tf.cast(i, tf.int64), tf.cast(classSelectors, tf.int64)) for i in range(len(weightsList))]
            classSelectors = [K.cast(x, K.floatx()) for x in classSelectors]
            weights = [sel * w for sel,w in zip(classSelectors, weightsList)] 
    
            weightMultiplier = weights[0]
            for i in range(1, len(weights)):
                weightMultiplier = weightMultiplier + weights[i]
    
            # BCE_loss and dice_coef come from the borrowed BCE-DICE implementation referenced below
            loss = BCE_loss(true, pred) - (1+dice_coef(true, pred))
            loss = loss * weightMultiplier
            return loss
        return lossFunc
    
    # weighted_loss must be defined before it is used here;
    # mean_iou is also defined in the borrowed implementation
    model.compile(optimizer='adam', loss=weighted_loss(weights), metrics=[mean_iou])
    model.summary()
    
    The actual BCE-DICE loss function being used can be found here.
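    Purely as a rough sketch (not the borrowed implementation itself), helpers with the names used above, BCE_loss and dice_coef, typically look something like the following; the mean_iou metric is omitted and the actual borrowed code may differ:

    from keras import backend as K
    
    def dice_coef(y_true, y_pred, smooth=1.0):
        # Dice coefficient computed over all pixels and classes, flattened
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    
    def BCE_loss(y_true, y_pred):
        # per-pixel binary cross-entropy, averaged over the class axis so that it
        # broadcasts against the (batch, H, W) weight map built in lossFunc above
        return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)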

    Motivation for the question: with the code above, the total validation loss of the network after 20 epochs is about 1%; however, the mean intersection-over-union scores for the first 4 classes are each above 95%, while for the last class it is 23%. Clearly the 5th class is not doing well at all, yet this loss in accuracy is not reflected in the loss value at all. That means the individual losses of the samples are being combined in a way that completely negates the huge loss we see for the 5th class, so that when the per-sample losses are combined per batch the result is still very low. I am not sure how to reconcile this information.

    1) What gets combined first - (1) the loss values of the classes (e.g. the 5 values per pixel, one per class) and then all the pixels in the image, or (2) all the pixels in the image for each individual class, and then all the class losses are added up? 2) How exactly do these different pixel combinations happen - where is it summed / where is it averaged?

    My response to item (1) is as follows: when training on a batch of images, an array consisting of pixel values is trained by computing the non-linear function, the loss, and optimizing (updating the weights). The loss is not computed for each individual pixel value; rather, it is done for each image.

    The pixel values (X_train), weights and bias (b) are used in a sigmoid (the simplest example of a non-linearity) to compute the predicted y values. This, together with y_train (one batch at a time), is used to compute the loss, which is then optimized with one of the optimization methods such as SGD, Momentum, Adam, etc. to update the weights and biases.

    My response to item (2) is as follows: during the non-linearity operation, the pixel values (X_train) are combined with the weights (through a dot product) and added to the bias to form the predicted target values.

    In a batch there may be training examples belonging to different classes. Their corresponding target values (for each class) are compared with the corresponding predicted values to compute the loss. It is therefore perfectly fine to add up all the losses.


    It does not matter whether they belong to one class or multiple classes, as long as you compare them with the corresponding target of the correct class. Does that make sense?
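    As a concrete illustration of the forward pass and batch loss described above, here is a minimal NumPy sketch; the shapes and the names X_train, y_train, W and b are toy placeholders, not the OP's model:

    import numpy as np
    
    rng = np.random.default_rng(0)
    X_train = rng.random((16, 10))            # a batch of 16 examples with 10 features each
    y_train = rng.integers(0, 2, (16, 1))     # binary targets
    W = rng.standard_normal((10, 1))
    b = np.zeros((1,))
    
    z = X_train @ W + b                       # linear combination (dot product + bias)
    y_pred = 1.0 / (1.0 + np.exp(-z))         # sigmoid non-linearity
    
    # binary cross-entropy per example, then averaged over the batch to a single scalar
    eps = 1e-7
    bce = -(y_train * np.log(y_pred + eps) + (1 - y_train) * np.log(1 - y_pred + eps))
    batch_loss = bce.mean()
    print(batch_loss)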

    Although I have mentioned part of this answer in a linked answer, let's inspect the source code step by step to find the answer more concretely.

    First, let's feedforward(!): there is a call to the weighted_loss function, which takes y_true, y_pred, sample_weight and mask as inputs:

    weighted_loss = weighted_losses[i]
    # ...
    output_loss = weighted_loss(y_true, y_pred, sample_weight, mask)
    
    weighted_loss is an element of weighted_losses, a list that actually contains all of the (augmented) loss functions passed when compiling the model:

    weighted_losses = [
        weighted_masked_objective(fn) for fn in loss_functions]
    
    The word "augmented" I used is important here. That's because, as you can see above, the actual loss function is wrapped by another function called weighted_masked_objective, which is defined as follows:

    def weighted_masked_objective(fn):
        """Adds support for masking and sample-weighting to an objective function.
        It transforms an objective function `fn(y_true, y_pred)`
        into a sample-weighted, cost-masked objective function
        `fn(y_true, y_pred, weights, mask)`.
        # Arguments
            fn: The objective function to wrap,
                with signature `fn(y_true, y_pred)`.
        # Returns
            A function with signature `fn(y_true, y_pred, weights, mask)`.
        """
        if fn is None:
            return None
    
        def weighted(y_true, y_pred, weights, mask=None):
            """Wrapper function.
            # Arguments
                y_true: `y_true` argument of `fn`.
                y_pred: `y_pred` argument of `fn`.
                weights: Weights tensor.
                mask: Mask tensor.
            # Returns
                Scalar tensor.
            """
            # score_array has ndim >= 2
            score_array = fn(y_true, y_pred)
            if mask is not None:
                # Cast the mask to floatX to avoid float64 upcasting in Theano
                mask = K.cast(mask, K.floatx())
                # mask should have the same shape as score_array
                score_array *= mask
                #  the loss per batch should be proportional
                #  to the number of unmasked samples.
                score_array /= K.mean(mask)
    
            # apply sample weighting
            if weights is not None:
                # reduce score_array to same ndim as weight array
                ndim = K.ndim(score_array)
                weight_ndim = K.ndim(weights)
                score_array = K.mean(score_array,
                                     axis=list(range(weight_ndim, ndim)))
                score_array *= weights
                score_array /= K.mean(K.cast(K.not_equal(weights, 0), K.floatx()))
            return K.mean(score_array)
        return weighted
    
    So, there is a nested function, weighted, which actually calls the real loss function fn in the line score_array = fn(y_true, y_pred). Now, to be concrete, in the example the OP provided, fn (i.e. the loss function) is binary_crossentropy. Therefore we need to take a look at the definition of binary_crossentropy in Keras:

    def binary_crossentropy(y_true, y_pred):
        return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)
    
    which in turn calls the backend function K.binary_crossentropy(). If TensorFlow is used as the backend, K.binary_crossentropy is defined as follows:

    def binary_crossentropy(target, output, from_logits=False):
        """Binary crossentropy between an output tensor and a target tensor.
        # Arguments
            target: A tensor with the same shape as `output`.
            output: A tensor.
            from_logits: Whether `output` is expected to be a logits tensor.
                By default, we consider that `output`
                encodes a probability distribution.
        # Returns
            A tensor.
        """
        # Note: tf.nn.sigmoid_cross_entropy_with_logits
        # expects logits, Keras expects probabilities.
        if not from_logits:
            # transform back to logits
            _epsilon = _to_tensor(epsilon(), output.dtype.base_dtype)
            output = tf.clip_by_value(output, _epsilon, 1 - _epsilon)
            output = tf.log(output / (1 - output))
    
        return tf.nn.sigmoid_cross_entropy_with_logits(labels=target,
                                                       logits=output)
    
    The tf.nn.sigmoid_cross_entropy_with_logits call returns a tensor of the same shape as its labels/logits arguments, i.e. one loss value per pixel and per class; with the OP's setup that shape is (batch_size, img_dim, img_dim, num_classes). Back in Keras's binary_crossentropy, K.mean(..., axis=-1) then averages over the last (class) axis, giving a tensor of shape (batch_size, img_dim, img_dim): the loss values of all the classes are averaged for each pixel. Finally, the return K.mean(score_array) in the weighted wrapper (K.mean with no axis argument averages over all axes) reduces this to a single scalar for the whole batch.
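    To make those reductions concrete, here is a small sketch using the Keras backend on dummy constant tensors (the shapes are toy values, not the OP's 256x256 images):

    import numpy as np
    from keras import backend as K
    
    # dummy "ground truth" and "predictions": batch of 2, 4x4 images, 5 classes
    y_true = K.constant(np.random.randint(0, 2, (2, 4, 4, 5)).astype('float32'))
    y_pred = K.constant(np.random.uniform(0.01, 0.99, (2, 4, 4, 5)).astype('float32'))
    
    per_pixel_per_class = K.binary_crossentropy(y_true, y_pred)  # shape (2, 4, 4, 5)
    per_pixel = K.mean(per_pixel_per_class, axis=-1)             # shape (2, 4, 4): classes averaged per pixel
    scalar = K.mean(per_pixel)                                   # shape (): everything averaged to one number
    
    print(K.int_shape(per_pixel_per_class), K.int_shape(per_pixel))
    print(K.eval(scalar))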
    Finally, going back up one level, the scalar output_loss computed this way for each output of the model is combined into the model's total loss as follows:
    
    # Compute total loss.
    total_loss = None
    with K.name_scope('loss'):
        for i in range(len(self.outputs)):
            if i in skip_target_indices:
                continue
            y_true = self.targets[i]
            y_pred = self.outputs[i]
            weighted_loss = weighted_losses[i]
            sample_weight = sample_weights[i]
            mask = masks[i]
            loss_weight = loss_weights_list[i]
            with K.name_scope(self.output_names[i] + '_loss'):
                output_loss = weighted_loss(y_true, y_pred,
                                            sample_weight, mask)
            if len(self.outputs) > 1:
                self.metrics_tensors.append(output_loss)
                self.metrics_names.append(self.output_names[i] + '_loss')
            if total_loss is None:
                total_loss = loss_weight * output_loss
            else:
                total_loss += loss_weight * output_loss
        if total_loss is None:
            if not self.losses:
                raise ValueError('The model cannot be compiled '
                                    'because it has no loss to optimize.')
            else:
                total_loss = 0.
    
        # Add regularization penalties
        # and other layer-specific losses.
        for loss_tensor in self.losses:
            total_loss += loss_tensor
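    For completeness, the "sum of all individual losses" mentioned in the documentation quote in the question refers to this loop over multiple model outputs (each scaled by its loss_weight), not to the classes within a single output. A minimal sketch with a hypothetical two-output model:

    from keras.models import Model
    from keras.layers import Input, Dense
    
    inp = Input((32,))
    out_a = Dense(1, activation='sigmoid', name='out_a')(inp)
    out_b = Dense(10, activation='softmax', name='out_b')(inp)
    
    m = Model(inputs=inp, outputs=[out_a, out_b])
    m.compile(optimizer='adam',
              loss={'out_a': 'binary_crossentropy', 'out_b': 'categorical_crossentropy'},
              loss_weights={'out_a': 1.0, 'out_b': 0.5})
    # Keras reports out_a_loss, out_b_loss and their weighted sum (1.0*out_a_loss + 0.5*out_b_loss) as 'loss'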