
Python: K-fold cross-validation on the whole dataset


I would like to know whether my current procedure is correct or whether I might have data leakage. After importing the dataset, I split it 80/20:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0, stratify=y)
Then, after defining the CatBoostClassifier, I run a grid search with cross-validation on the training set:

from catboost import CatBoostClassifier

clf = CatBoostClassifier(leaf_estimation_iterations=1, border_count=254, scale_pos_weight=1.67)
grid = {'learning_rate': [0.001, 0.003, 0.006, 0.01, 0.03, 0.06, 0.1, 0.3, 0.6, 0.9],
        'depth': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
        'l2_leaf_reg': [1, 3, 5, 7, 9, 11, 13, 15],
        'iterations': [50, 150, 250, 350, 450, 600, 800, 1000]}
clf.grid_search(grid,
                X=X_train,
                y=y_train, cv=10)
Now I want to evaluate my model. Can I perform k-fold cross-validation on the whole dataset to evaluate it, as in the code below?

from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

kf = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scoring = ['accuracy', 'f1', 'roc_auc', 'recall', 'precision']
scores = cross_validate(
    clf, X, y, scoring=scoring, cv=kf, return_train_score=True)
print("Accuracy TEST: %0.2f (+/- %0.2f) Accuracy TRAIN: %0.2f (+/- %0.2f)" %
      (scores['test_accuracy'].mean(), scores['test_accuracy'].std() * 2, scores['train_accuracy'].mean(), scores['train_accuracy'].std() * 2))
print("F1 TEST: %0.2f (+/- %0.2f) F1 TRAIN : %0.2f (+/- %0.2f) " %
      (scores['test_f1'].mean(), scores['test_f1'].std() * 2, scores['train_f1'].mean(), scores['train_f1'].std() * 2))
print("AUROC TEST: %0.2f (+/- %0.2f) AUROC TRAIN : %0.2f (+/- %0.2f)" %
      (scores['test_roc_auc'].mean(), scores['test_roc_auc'].std() * 2, scores['train_roc_auc'].mean(), scores['train_roc_auc'].std() * 2))
print("recall TEST: %0.2f (+/- %0.2f) recall TRAIN: %0.2f (+/- %0.2f)" %
      (scores['test_recall'].mean(), scores['test_recall'].std() * 2, scores['train_recall'].mean(), scores['train_recall'].std() * 2))
print("Precision TEST: %0.2f (+/- %0.2f) Precision TRAIN: %0.2f (+/- %0.2f)" %
      (scores['test_precision'].mean(), scores['test_precision'].std() * 2, scores['train_precision'].mean(), scores['train_precision'].std() * 2))

Or should I perform the k-fold cross-validation only on the training set?

You normally use cross-validation as part of the training procedure: its purpose is to find good hyperparameters for your model. Only afterwards, at the very end, should you evaluate the model on the test set, i.e. on data the model has never seen before, not even during cross-validation. That way you do not leak any data.


So yes, you should perform cross-validation only on the training set, and use the test set solely for the final evaluation.
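
As a rough sketch of that workflow (reusing the split, grid, and clf from the question, and assuming grid_search is left at its default refit=True so that clf ends up fitted with the best parameters; the metrics shown are just examples):

from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# 1) Hyperparameters are tuned with cross-validation on the training set only
clf.grid_search(grid, X=X_train, y=y_train, cv=10)

# 2) One final evaluation on the held-out test set,
#    which was never touched during tuning
y_pred = clf.predict(X_test)
y_proba = clf.predict_proba(X_test)[:, 1]
print("Accuracy: %0.2f" % accuracy_score(y_test, y_pred))
print("F1:       %0.2f" % f1_score(y_test, y_pred))
print("AUROC:    %0.2f" % roc_auc_score(y_test, y_proba))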

The fact that this way you never actually use the test set should already ring alarm bells. See the answer below.

Thank you for your answer. But I am talking about k-fold CV, which is different from a simple train/validation split. Doesn't this technique split the data into k folds, each time producing a training set and a validation set, so that the model is always evaluated on unseen data?
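
For reference, a minimal sketch of what a stratified k-fold split produces (using sklearn's StratifiedKFold on the same X, y as above; each iteration yields disjoint train/validation index sets):

from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    # the validation rows of each fold are unseen by that fold's training rows
    print("fold %d: %d train rows, %d validation rows"
          % (fold, len(train_idx), len(val_idx)))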