Python: why does the roc_auc value on the test set differ between GridSearchCV's 'roc_auc' scoring and roc_auc_score?


I have the following data pipeline, but I am somewhat confused when interpreting its output. Any help is greatly appreciated.

# tune the hyperparameters via a cross-validated, randomized grid search
from time import time

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

print("[INFO] tuning hyperparameters via grid search")
params = {"max_depth": [3, None],
          "max_features": [1, 2, 3, 4],
          "min_samples_split": [2, 3, 10],
          "min_samples_leaf": [1, 3, 10],
          "bootstrap": [True, False],
          "criterion": ["gini", "entropy"]}

model = RandomForestClassifier(n_estimators=50)
grid = RandomizedSearchCV(model, params, cv=10, scoring='roc_auc')
start = time()
grid.fit(X_train, y_train)

# evaluate the best searched model on the training data
print("[INFO] grid search took {:.2f} seconds".format(time() - start))
acc = grid.score(X_train, y_train)  # with scoring='roc_auc' this is ROC AUC, not accuracy
print("[INFO] grid search ROC AUC: {:.2f}%".format(acc * 100))
print("[INFO] grid search best parameters: {}".format(grid.best_params_))
Looking at the training score of the cross-validated search:

rf_score_train = grid.score(X_train, y_train)
rf_score_train

0.87845540607872441
Now using this trained model to predict on the test set:

rf_score_test = grid.score(X_test, y_test)
rf_score_test

0.72482993197278911
However, when I take this model's predictions as an array and compare them to the actual outcomes using the standalone roc_auc_score metric, I get a score that is completely different from the GridSearchCV 'roc_auc' score on the test set above:

model_prediction = grid.predict(X_test)
model_prediction

array([0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 
0, 0,0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 
0,0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 
0,0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0,0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 
0,0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 
0,0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 
0,0, 1, 0, 0, 0, 0, 0, 0])
The actual outcomes:

actual_outcome = np.array(y_test)
actual_outcome

array([0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 
0, 0,0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 
1,1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 
0,0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 
0,0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 
1,0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 
0,0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 
0,0, 0, 1, 0, 0, 0, 1, 0])
Using roc_auc_score outside of the grid search:

from sklearn.metrics import roc_auc_score
roc_accuracy = roc_auc_score(actual_outcome, model_prediction)*100
roc_accuracy

59.243197278911566

So with the cross-validated 'roc_auc' scoring in the grid search I get roughly 72 on the test set, whereas roc_auc_score on the same predictions gives me 59. Which one is correct? I'm confused. Am I doing something wrong? Any help is greatly appreciated.
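
For what it's worth, the gap is consistent with how scikit-learn's 'roc_auc' scorer works: grid.score(X_test, y_test) ranks the test samples by the continuous scores from predict_proba, whereas roc_auc_score(actual_outcome, model_prediction) is being fed the hard 0/1 labels from predict(), which discards most of the ranking information. A minimal sketch of the two calculations, assuming the fitted grid and the same X_test / y_test as above:

from sklearn.metrics import roc_auc_score

# AUC from the predicted probability of the positive class -- this is what
# the 'roc_auc' scorer inside the search computes, so it should match
# grid.score(X_test, y_test) (~0.72 here)
proba_auc = roc_auc_score(y_test, grid.predict_proba(X_test)[:, 1])

# AUC from the hard 0/1 labels returned by predict() -- this is the ~0.59
# number, since thresholded labels carry far less ranking information
label_auc = roc_auc_score(y_test, grid.predict(X_test))

print("AUC from predict_proba: {:.4f}".format(proba_auc))
print("AUC from predict:       {:.4f}".format(label_auc))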

Comments: In the GridSearch, scoring='roc_auc is actually scoring='roc_auc' in the code; that was just a typo above. When you make a typo like that you can edit your post; I've done it for you this time. Any chance of getting a link to your dataset?