Nested cross-validation for an sklearn Pipeline + KNN regression in Python


I am trying to figure out how to build a workflow for sklearn.neighbors.KNeighborsRegressor that includes:

  • normalize the features
  • feature selection (the best subset of the 20 numeric features, with no fixed number to keep)
  • cross-validate the hyperparameter K over the range 1 to 20
  • cross-validate the model
  • use RMSE as the error metric
There are so many different options in scikit-learn that I'm a bit overwhelmed trying to decide which classes I need.

Besides sklearn.neighbors.KNeighborsRegressor, I think I also need:

sklearn.pipeline.Pipeline  
sklearn.preprocessing.Normalizer
sklearn.model_selection.GridSearchCV
sklearn.model_selection.cross_val_score

sklearn.feature_selection.SelectKBest
OR
sklearn.feature_selection.SelectFromModel
Can anyone show me what defining this pipeline/workflow might look like? I believe it should be something like this:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score, GridSearchCV

# build regression pipeline
pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and feature count from 1 to the number of features
parameters = {'kbest__k':  list(range(1, X.shape[1]+1)),
              'regressor__n_neighbors': list(range(1,21))}

# outer cross-validation on model, inner cross-validation on hyperparameters
scores = cross_val_score(GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10), 
                         X, y, cv=10, scoring="neg_mean_squared_error", verbose=2)

# scores are negative MSE values; take absolute values, then the square root
rmses = np.abs(scores)**(1/2)
avg_rmse = np.mean(rmses)
print(avg_rmse)
It doesn't seem to throw any errors, but some of my concerns are:

  • Did I perform the nested cross-validation correctly, so that the RMSE estimate is unbiased?
  • If I want the final model to be selected according to the best RMSE, should I use scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV?
  • Is SelectKBest with f_classif the best option for selecting features for a KNeighborsRegressor model?
  • How can I see:
    • which feature subset was selected as best
    • which K was selected as best
Any help is greatly appreciated!

Your code seems okay.

As for using scoring="neg_mean_squared_error" for both cross_val_score and GridSearchCV, I would do the same to make sure things run consistently, but the only way to really test this is to remove one of the two and see whether the results change.
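To make that check concrete, here is a minimal sketch (reusing the pipeline, parameters, X and y defined in the question) that runs the outer loop once with the inner scoring set explicitly and once with GridSearchCV's default scoring (the regressor's R²), then compares the resulting outer RMSEs:

inner_explicit = GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10)
inner_default = GridSearchCV(pipeline, parameters, cv=10)  # default scoring: the regressor's R^2

for name, inner in [("explicit", inner_explicit), ("default", inner_default)]:
    scores = cross_val_score(inner, X, y, cv=10, scoring="neg_mean_squared_error")
    print(name, np.mean(np.abs(scores) ** 0.5))  # average outer-fold RMSE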

SelectKBest is a good approach, but you could also use SelectFromModel or other methods you can find.
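For example, a hedged sketch of swapping the kbest step for SelectFromModel (the Lasso and its alpha=0.01 below are placeholder choices; any estimator exposing coef_ or feature_importances_ would do):

from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

# variant of the pipeline where the Lasso's nonzero coefficients pick the features
pipeline_sfm = Pipeline([('normalize', Normalizer()),
                         ('select', SelectFromModel(Lasso(alpha=0.01))),
                         ('regressor', KNeighborsRegressor())])

Note also that for a regression target, f_regression is arguably a more natural score_func for SelectKBest than f_classif, which is intended for classification.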

Finally, to get the best parameters and the feature scores, I modified the code as follows:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import GridSearchCV


pipeline = Pipeline([('normalize', Normalizer()),
                     ('kbest', SelectKBest(f_classif)),
                     ('regressor', KNeighborsRegressor())])

# try regressor__n_neighbors from 1 to 20, and feature count from 1 to the number of features
parameters = {'kbest__k':  list(range(1, X.shape[1]+1)),
              'regressor__n_neighbors': list(range(1,21))}

# changes here

grid = GridSearchCV(pipeline, parameters, cv=10, scoring="neg_mean_squared_error")

grid.fit(X, y)

# get the best parameters and the best estimator
print("the best estimator is \n {} ".format(grid.best_estimator_))
print("the best parameters are \n {}".format(grid.best_params_))

# get the features scores rounded in 2 decimals
pip_steps = grid.best_estimator_.named_steps['kbest']

features_scores = ['%.2f' % elem for elem in pip_steps.scores_ ]
print("the features scores are \n {}".format(features_scores))

feature_scores_pvalues = ['%.3f' % elem for elem in pip_steps.pvalues_]
print("the feature_pvalues is \n {} ".format(feature_scores_pvalues))

# create a tuple of feature names, scores and pvalues, name it "features_selected_tuple"

featurelist = ['age', 'weight']

features_selected_tuple = [(featurelist[i], features_scores[i], feature_scores_pvalues[i])
                           for i in pip_steps.get_support(indices=True)]

# Sort the tuple by score, in reverse order

features_selected_tuple = sorted(features_selected_tuple,
                                 key=lambda feature: float(feature[1]), reverse=True)

# Print
print('Selected Features, Scores, P-Values')
print(features_selected_tuple)
The results with my data:

the best estimator is
Pipeline(steps=[('normalize', Normalizer(copy=True, norm='l2')), ('kbest', SelectKBest(k=2, score_func=<function f_classif at 0x0000000004ABC898>)), ('regressor', KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',
         metric_params=None, n_jobs=1, n_neighbors=18, p=2,
         weights='uniform'))])

the best parameters are
{'kbest__k': 2, 'regressor__n_neighbors': 18}

the features scores are
['8.98', '8.80']

the feature_pvalues is
['0.000', '0.000']

Selected Features, Scores, P-Values
[('correlation', '8.98', '0.000'), ('gene', '8.80', '0.000')]
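As a further sketch (again assuming the pipeline, parameters, X and y from the question, and scikit-learn 0.20 or newer), cross_validate with return_estimator=True keeps each outer fold's fitted GridSearchCV, so you can see which k and which n_neighbors every fold selected under the nested scheme, rather than only after a final refit on all the data:

from sklearn.model_selection import cross_validate

outer = cross_validate(
    GridSearchCV(pipeline, parameters, scoring="neg_mean_squared_error", cv=10),
    X, y, cv=10, scoring="neg_mean_squared_error", return_estimator=True)

# one fitted inner GridSearchCV per outer fold
for fold, fitted_grid in enumerate(outer["estimator"]):
    print("fold", fold, fitted_grid.best_params_)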

Your code seems fine. And the approach looks correct to me. Are you getting any errors or unexpected results?
Hey, thanks for the comment. I updated my post with more information about my concerns. Thanks! I see that it shows the number of parameters used for kbest__k, but is there a way to see which specific columns were used? Does SelectKBest try only the first column, then the first and second columns, and so on, or does it try permutations of all the features in the selected range?
@Jake I edited my post. I added code for the feature p-values and scores. I think it is based on permutations, as you said in your comment.
@Jake I updated my answer a second time. Now you can get the selected features.
Thanks! @Jake Glad I could help.
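On the permutation question, a minimal self-contained illustration (the demo data below is made up): SelectKBest scores each feature independently with score_func and keeps the k highest-scoring ones; it does not search over feature subsets or permutations.

from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression

# made-up demo data: 5 features, only 2 of them informative
X_demo, y_demo = make_regression(n_samples=100, n_features=5,
                                 n_informative=2, random_state=0)
kbest = SelectKBest(f_regression, k=2).fit(X_demo, y_demo)
print(kbest.scores_)                     # one univariate score per feature
print(kbest.get_support(indices=True))  # indices of the 2 top-scoring features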