Python Scikit-learn and Yellowbrick give different scores

Tags: python, machine-learning, scikit-learn, yellowbrick

I am using sklearn to calculate the average precision and roc_auc of a classifier, and yellowbrick to plot the roc_auc and precision-recall curves. The problem is that the packages give different scores for both metrics, and I do not know which one is correct.

The code used:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ROCAUC
from yellowbrick.classifier import PrecisionRecallCurve
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.metrics import average_precision_score

seed = 42

# generate the data
X, y = make_classification(n_samples=1000, n_features=2, n_redundant=0,
                           n_informative=2, random_state=seed)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf_lr = LogisticRegression(random_state=seed)
clf_lr.fit(X_train, y_train)

y_pred = clf_lr.predict(X_test)
roc_auc = roc_auc_score(y_test, y_pred)
avg_precision = average_precision_score(y_test, y_pred)
print(f"ROC_AUC: {roc_auc}")
print(f"Average_precision: {avg_precision}")
print('='*20)

# visualizations
viz3 = ROCAUC(LogisticRegression(random_state=seed))
viz3.fit(X_train, y_train) 
viz3.score(X_test, y_test)
viz3.show()
viz4 = PrecisionRecallCurve(LogisticRegression(random_state=seed))
viz4.fit(X_train, y_train)
viz4.score(X_test, y_test)
viz4.show()
This code produces the following output:

[print output of the scikit-learn scores, followed by the ROCAUC and PrecisionRecallCurve plots rendered by Yellowbrick]
As can be seen, the metrics take different values depending on the package: the values in the print statements are computed by scikit-learn, while the values shown in the plots are computed by Yellowbrick.

Since you are using scikit-learn's predict method, your predictions y_pred are hard class memberships, and not probabilities:

np.unique(y_pred)
# array([0, 1])
But this should not be the case for the ROC and precision-recall calculations; the predictions passed to these methods should be probabilities, and not hard classes. From the average_precision_score documentation:

y_score : array, shape = [n_samples] or [n_samples, n_classes]

    Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by "decision_function" on some classifiers).

where non-thresholded means exactly not hard classes. The situation is similar for roc_auc_score.
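
For instance, besides probabilities, the non-thresholded output of decision_function also qualifies; a minimal sketch (assuming the clf_lr fitted above; for LogisticRegression the decision function is a monotone transform of the positive-class probability, so the AUC comes out identical):

# non-thresholded scores (log-odds for LogisticRegression),
# also accepted by roc_auc_score
scores = clf_lr.decision_function(X_test)
roc_auc_score(y_test, scores)  # same AUC as with the probabilities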

Correct this with the following code, to make your scikit-learn results identical to the ones returned by Yellowbrick:

y_pred = clf_lr.predict_proba(X_test)     # get probabilities
y_prob = np.array([x[1] for x in y_pred]) # keep the prob for the positive class 1
roc_auc = roc_auc_score(y_test, y_prob)
avg_precision = average_precision_score(y_test, y_prob)
print(f"ROC_AUC: {roc_auc}")
print(f"Average_precision: {avg_precision}")
Result:

ROC_AUC: 0.9545954595459546
Average_precision: 0.9541994473779806
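
As a side note, predict_proba returns an array of shape (n_samples, 2) in the binary case, so the list comprehension above can be replaced by plain NumPy slicing; an equivalent one-liner:

y_prob = clf_lr.predict_proba(X_test)[:, 1]  # probability of the positive class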
Since Yellowbrick handles all these computational details internally (and transparently), it does not suffer from the mistake made in the manual scikit-learn procedure shown here.


Please note that, in the binary case (as here), you can (and should) make your plots less cluttered with the binary=True argument:

viz3 = ROCAUC(LogisticRegression(random_state=seed), binary=True) # similarly for the PrecisionRecall curve
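
Putting this together, a minimal sketch of the full visualizer workflow with the less cluttered binary plot (same data, imports, and seed as above):

viz = ROCAUC(LogisticRegression(random_state=seed), binary=True)
viz.fit(X_train, y_train)
viz.score(X_test, y_test)   # note: returns accuracy, not AUC (see below)
viz.show()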
And, contrary to what one might intuitively expect, at least for the binary case, the score method of ROCAUC will not return the AUC, but the accuracy, as demonstrated below:


viz3.score(X_test, y_test)
# 0.88

# verify this is the accuracy:

from sklearn.metrics import accuracy_score
accuracy_score(y_test, clf_lr.predict(X_test))
# 0.88
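
So, if you need the actual AUC value programmatically rather than just reading it off the plot, the safest route is to compute it directly on the probabilities with scikit-learn, as done earlier:

roc_auc_score(y_test, clf_lr.predict_proba(X_test)[:, 1])
# 0.9545954595459546 -- matching the corrected scikit-learn result above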