Why do I get different values with a pipeline and without a pipeline in sklearn in Python?

Tags: python, machine-learning, scikit-learn, pipeline, cross-validation

I am using recursive feature elimination with cross-validation (RFECV) together with GridSearchCV and a RandomForest classifier, once with a pipeline and once without a pipeline.

My code with the pipeline is as follows:

from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.pipeline import Pipeline

X = df[my_features_all]
y = df['gold_standard']

#get development and testing sets
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

#cross validation setting
k_fold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
#this is the classifier used for feature selection
clf_featr_sele = RandomForestClassifier(random_state = 42, class_weight="balanced")
rfecv = RFECV(estimator=clf_featr_sele, step=1, cv=k_fold, scoring='roc_auc')

param_grid = {'n_estimators': [200, 500],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_depth' : [3,4,5]
    }

#you can have different classifier for your final classifier
clf = RandomForestClassifier(random_state = 42, class_weight="balanced")
CV_rfc = GridSearchCV(estimator=clf, param_grid=param_grid, cv= k_fold, scoring = 'roc_auc', verbose=10, n_jobs = 5)

pipeline  = Pipeline([('feature_sele',rfecv),('clf_cv',CV_rfc)])

pipeline.fit(x_train, y_train)
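The selected features and the best hyperparameters can then be read off the fitted pipeline. This is a minimal sketch, assuming pipeline has been fitted as above (named_steps, n_features_, support_ and best_params_ are standard scikit-learn attributes; the print-out itself is just illustrative):

#inspect the fitted pipeline (sketch)
fitted_rfecv = pipeline.named_steps['feature_sele']   #the fitted RFECV step
fitted_grid = pipeline.named_steps['clf_cv']          #the fitted GridSearchCV step

print('optimal number of features:', fitted_rfecv.n_features_)
print('selected feature mask:', fitted_rfecv.support_)
print('best hyperparameters:', fitted_grid.best_params_)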
The results are (with the pipeline):

My code without the pipeline is as follows:

from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV

X = df[my_features_all]
y = df['gold_standard']

#get development and testing sets
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

#cross validation setting
k_fold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

clf = RandomForestClassifier(random_state = 42, class_weight="balanced")

rfecv = RFECV(estimator=clf, step=1, cv=k_fold, scoring='roc_auc')

param_grid = {'estimator__n_estimators': [200, 500],
    'estimator__max_features': ['auto', 'sqrt', 'log2'],
    'estimator__max_depth' : [3,4,5]
    }

CV_rfc = GridSearchCV(estimator=rfecv, param_grid=param_grid, cv= k_fold, scoring = 'roc_auc', verbose=10, n_jobs = 5)
CV_rfc.fit(x_train, y_train)
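Likewise, for the version without the pipeline, the selected features and the best hyperparameters can be read from the fitted grid search. Another minimal sketch, assuming CV_rfc has been fitted as above (best_estimator_ is the RFECV refit with the best hyperparameters; the attributes used are standard scikit-learn, the print-out is just illustrative):

#inspect the fitted grid search over RFECV (sketch)
best_rfecv = CV_rfc.best_estimator_   #the RFECV refit with the best hyperparameters
print('optimal number of features:', best_rfecv.n_features_)
print('selected feature mask:', best_rfecv.support_)
print('best hyperparameters:', CV_rfc.best_params_)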
The results are (without the pipeline):

Even though the two approaches are conceptually similar, I get different results and a different set of selected features (as shown in the results sections above). However, I get the same hyperparameter values.

I am just wondering why this difference occurs. Which approach (without the pipeline, or with the pipeline?) is best suited for the task above?

I am happy to provide more details if needed.

In the pipeline case:

Feature selection (RFECV) is performed once, with the base model (RandomForestClassifier(random_state=42, class_weight="balanced")), before GridSearchCV is applied to the final estimator on the already-selected features.

In the case without the pipeline:

For each combination of hyperparameters, the corresponding estimator is used for the feature selection (RFECV); with the grid above that means 2 × 3 × 3 = 18 separate RFECV runs, so the selected features need not match those of the pipeline version and the whole procedure is far more time-consuming.
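To make the difference concrete, here is a rough manual equivalent of what the pipeline version does, reusing the variable names from the first snippet (a sketch for illustration only, not the internals of Pipeline):

#1) feature selection is done once, with the fixed base model
rfecv.fit(x_train, y_train)
x_train_selected = rfecv.transform(x_train)

#2) the hyperparameter search then runs only on the already-selected features
CV_rfc.fit(x_train_selected, y_train)

In the version without the pipeline, GridSearchCV(estimator=rfecv, ...) instead refits RFECV, and therefore repeats the feature selection, for every hyperparameter candidate and every cross-validation fold, and the final feature set comes from an RFECV refit with the best hyperparameters rather than from the default base model.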

Thank you very much for your answer. I really appreciate it :)