Can Keras use sklearn inside a custom metric to compute the micro f1_score? (Python)
I found this version on Stack Overflow:
from keras import backend as K

def f1(y_true, y_pred):
    def recall(y_true, y_pred):
        """Recall metric.

        Only computes a batch-wise average of recall.
        Computes the recall, a metric for multi-label classification of
        how many relevant items are selected.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        """Precision metric.

        Only computes a batch-wise average of precision.
        Computes the precision, a metric for multi-label classification of
        how many selected items are relevant.
        """
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision

    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=[f1])
But can I create a custom metric using sklearn's f1_score instead? I want to use the average of the macro f1_score and the micro f1_score. Can anyone help? Thanks.

I think you can use the code shown above during training. Since it computes the F1 score for each batch, you can see it printed in the terminal log:
 1/13 [=>...........] - ETA: 4s - loss: 0.2646 - f1: 0.2927
 2/13 [==>..........] - ETA: 4s - loss: 0.2664 - f1: 0.1463
13/13 [=============] - 7s 505ms/step - loss: 0.2615 - f1: 0.1008 - val_loss: 0.2887 - val_f1: 0.1464
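Keep in mind that averaging per-batch F1 scores is not the same as computing F1 over the whole epoch, which is why an epoch-level callback can be worth the trouble. A small standalone illustration with sklearn (toy labels, assumed for the example):

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([1, 0, 0, 0, 1, 1, 1, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 1])

# F1 computed over the full set
full = f1_score(y_true, y_pred)

# Mean of per-batch F1 scores (two batches of 4)
b1 = f1_score(y_true[:4], y_pred[:4])
b2 = f1_score(y_true[4:], y_pred[4:])
batch_mean = (b1 + b2) / 2

print(full, batch_mean)  # the two values differ
```

Here the full-set F1 is 0.8 while the mean of the two batch F1s is 16/21, so the batch-wise metric printed in the log above is only an approximation.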
If you use the fit method and want to compute F1 once per epoch, you should try writing something like the following.
import numpy as np
from keras.callbacks import Callback
from sklearn.metrics import f1_score  # , precision_score, recall_score

class Metrics(Callback):
    '''
    Define your own callback: computes F1 on the validation set
    at the end of every epoch.
    '''
    def on_train_begin(self, logs={}):
        self.val_f1s = []
        self.val_recalls = []
        self.val_precisions = []

    def on_epoch_end(self, epoch, logs={}):
        # For binary labels you could round the predictions instead:
        # val_predict = np.asarray(self.model.predict(self.validation_data[0])).round()
        val_predict = np.argmax(np.asarray(self.model.predict(self.validation_data[0])), axis=1)
        # One-hot targets: take the argmax to recover class indices
        val_targ = np.argmax(self.validation_data[1], axis=1)
        _val_f1 = f1_score(val_targ, val_predict, average='macro')
        # _val_recall = recall_score(val_targ, val_predict)
        # _val_precision = precision_score(val_targ, val_predict)
        self.val_f1s.append(_val_f1)
        # self.val_recalls.append(_val_recall)
        # self.val_precisions.append(_val_precision)
        # print(' — val_f1: %f — val_precision: %f — val_recall %f' % (_val_f1, _val_precision, _val_recall))
        print(' — val_f1:', _val_f1)
        return
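To get the average of macro and micro F1 that the question asks for, the `f1_score(..., average='macro')` call in the callback can be replaced with a small helper like this (`averaged_f1` is a hypothetical name, not part of sklearn or Keras):

```python
import numpy as np
from sklearn.metrics import f1_score

def averaged_f1(y_true_onehot, y_pred_probs):
    """Mean of macro and micro F1, given one-hot targets
    and predicted class probabilities."""
    y_pred = np.argmax(y_pred_probs, axis=1)
    y_true = np.argmax(y_true_onehot, axis=1)
    macro = f1_score(y_true, y_pred, average='macro')
    micro = f1_score(y_true, y_pred, average='micro')
    return (macro + micro) / 2
```

Inside `on_epoch_end` you would then compute `_val_f1 = averaged_f1(self.validation_data[1], self.model.predict(self.validation_data[0]))`.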
Fit with the callback:
metrics = Metrics()
model.fit_generator(generator=generator_train,
                    steps_per_epoch=len(generator_train),
                    validation_data=generator_val,
                    validation_steps=len(generator_val),
                    epochs=epochs,
                    callbacks=[metrics])
A few things to note: if you train with fit_generator(), you can only use the batch-wise metric shown first; if you train with fit(), you can also try the callback approach. That's all.
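For reference, the macro and micro averages the question mentions generally give different values; a quick standalone sketch with toy labels (values assumed for illustration):

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 2, 2, 2, 1, 0])

# Macro: unweighted mean of per-class F1 scores
f1_macro = f1_score(y_true, y_pred, average='macro')
# Micro: F1 from the global TP/FP/FN counts
f1_micro = f1_score(y_true, y_pred, average='micro')
avg_f1 = (f1_macro + f1_micro) / 2

print(f1_macro, f1_micro, avg_f1)
```

Macro weights each class equally regardless of support, while micro is dominated by the frequent classes (and equals accuracy in the single-label multiclass case), so their average blends the two views.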