Python: How do I use categorical focal loss with one-hot encoded labels in Keras?


I am working on epileptic seizure prediction. I have an imbalanced dataset and I want to balance it with focal loss. My labels are one-hot encoded vectors for two classes. I found the focal loss code below, but I do not know how to use y_pred in the focal loss code before model.fit_generator. y_pred is the output of the model, so how can I use it in the focal loss code before fitting my model?

Focal loss code:

from tensorflow.keras import backend as K  # backend import needed by this snippet

def categorical_focal_loss(gamma=2.0, alpha=0.25):
    """
    Implementation of focal loss from the paper, for multi-class classification.
    Formula:
        loss = -alpha * ((1 - p)^gamma) * log(p)
    Parameters:
        alpha -- the same as the weighting factor in balanced cross-entropy
        gamma -- focusing parameter for the modulating factor (1 - p)
    Default values:
        gamma -- 2.0 as mentioned in the paper
        alpha -- 0.25 as mentioned in the paper
    """
    def focal_loss(y_true, y_pred):
        # Define epsilon so that backpropagation does not produce NaN
        # when a predicted probability is exactly 0 or 1
        epsilon = K.epsilon()
        # Clip the prediction values away from 0 and 1
        y_pred = K.clip(y_pred, epsilon, 1.0 - epsilon)
        # Calculate cross entropy
        cross_entropy = -y_true * K.log(y_pred)
        # Calculate the weight, combining the modulating factor and the weighting factor
        weight = alpha * y_true * K.pow((1 - y_pred), gamma)
        # Calculate focal loss
        loss = weight * cross_entropy
        # Sum the losses over the classes in the mini-batch
        loss = K.sum(loss, axis=1)
        return loss

    return focal_loss

My code:

import numpy as np  # needed below for np.argmax

history = model.fit_generator(generate_arrays_for_training(indexPat, train_data, start=0, end=100),
                              validation_data=generate_arrays_for_training(indexPat, test_data, start=0, end=100),
                              steps_per_epoch=int(len(train_data) / 2),
                              validation_steps=int(len(test_data) / 2),
                              verbose=2, epochs=65, max_queue_size=2, shuffle=True)
preictPrediction = model.predict_generator(generate_arrays_for_predict(indexPat, filesPath_data),
                                           max_queue_size=4, steps=len(filesPath_data))
y_pred1 = np.argmax(preictPrediction, axis=1)
y_pred = list(y_pred1)


For the benefit of the community, the answer from the comments section is reproduced below.

This is not specific to focal loss: all Keras loss functions take y_true and y_pred. You do not have to worry about where these arguments come from; they are supplied automatically by Keras.
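
As a minimal sketch of what this means in practice (assuming an already-built Keras model stored in a variable named model; the optimizer and metric below are placeholders, not part of the question), the categorical_focal_loss factory from the question is simply passed to model.compile before fitting, and Keras then calls the inner focal_loss(y_true, y_pred) closure on every batch:

# Minimal sketch, assuming 'model' has already been built as in the question.
model.compile(optimizer='adam',  # placeholder optimizer (assumption)
              loss=categorical_focal_loss(gamma=2.0, alpha=0.25),
              metrics=['accuracy'])  # placeholder metric (assumption)
# After compiling, fit_generator can be called exactly as in the question;
# Keras feeds the one-hot labels as y_true and the model output as y_pred
# to focal_loss automatically, so y_pred is never supplied manually.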

Do you mean how to compile the model with focal loss?
@Frightera The focal loss code should come before model.fit_generator, and y_pred is the output of the model. So how would I use y_pred in the focal loss code?
I think you may be confused: you do not need y_pred before calling fit_generator, or I am not understanding the question correctly (given that others do not seem to either).
This is not specific to focal loss; all Keras loss functions take y_true and y_pred. You do not have to worry about where these arguments come from; they are supplied automatically by Keras.
@user202729 Because the question needed clarification; it is still not clear to me and seems to be just a misunderstanding.