Python: how to get feature importances with variable labels

I am training a decision tree regressor, but when I get the feature importances only the values appear.

Does anyone know how to get a DataFrame with the variable names?

Here is the main part of the code:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV

# numeric columns: impute the median, then standardize
num_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="median")),
    ('std_scaler', StandardScaler()),
])

# categorical columns: impute the most frequent value, then one-hot encode
cat_pipeline = Pipeline([
    ('imputer', SimpleImputer(strategy="most_frequent")),
    ('oneHot', OneHotEncoder(handle_unknown='ignore')),
])

# split the column names by dtype
num_attribs = x_train.select_dtypes(include=np.number).columns.tolist()
cat_attribs = x_train.select_dtypes(include='object').columns.tolist()

# apply each pipeline to its own set of columns
full_pipeline = ColumnTransformer([
    ("num", num_pipeline, num_attribs),
    ("cat", cat_pipeline, cat_attribs),
])

train_prepared = full_pipeline.fit_transform(x_train)

param_grid = {'max_leaf_nodes': list(range(2, 100)), 'min_samples_split': [2, 3, 4], 'max_depth': list(range(3, 20))}

# tune the tree with 5-fold cross-validation, scored by negative MSE
dtr = DecisionTreeRegressor()
grid_search = GridSearchCV(dtr, param_grid, cv=5, scoring='neg_mean_squared_error', verbose=1, return_train_score=True, n_jobs=-1)
grid_search = grid_search.fit(train_prepared, y_train)

grid_search.best_estimator_.feature_importances_

Here is the output of the feature importances:

array([2.59182901e-03, 5.08807106e-04, 1.46808641e-03, 2.20756886e-03,
       1.48878361e-01, 5.65411415e-03, 5.16351699e-03, 9.37444882e-03,
       0.00000000e+00, 7.19228983e-03, 1.00581364e-03, 1.05073934e-03,
       2.63424620e-03, 9.41587243e-03, 7.22742602e-02, 0.00000000e+00,
       2.41075666e-03, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
       0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.12861715e-02,
       3.39987538e-03, 5.27924849e-04, 2.20562317e-03, 4.14808367e-03,
       5.82557008e-04, 1.40134963e-03, 0.00000000e+00, 0.00000000e+00,
       1.08351677e-03, 0.00000000e+00, 0.00000000e+00, 1.58022433e-03,
       0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.79779634e-02,
       5.94436576e-01, 3.72725666e-02, 1.11665462e-03, 2.39049915e-03,
       0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 1.15314788e-03,
       0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
       0.00000000e+00,...])

Although you cannot call a method on the model to get the labels directly, they are the same as x_train and indexed in the same way, so you can get the names with:

x_train.select_dtypes(include=np.number).columns

You can also create a dictionary, for example:

feature_importances = {x_train.select_dtypes(include=np.number).columns[x]: grid_search.best_estimator_.feature_importances_[x]
                       for x in range(len(grid_search.best_estimator_.feature_importances_))}
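
If the pipeline adds one-hot encoded columns, the transformed matrix has more columns than x_train has numeric columns, so a more robust option is to take the names from the fitted transformers themselves. A minimal sketch, assuming a scikit-learn version where OneHotEncoder.get_feature_names_out is available (>= 1.0):

import pandas as pd

# the ColumnTransformer outputs the "num" columns first, in their original order,
# followed by the one-hot columns produced by the fitted encoder
ohe = full_pipeline.named_transformers_['cat'].named_steps['oneHot']
feature_names = num_attribs + list(ohe.get_feature_names_out(cat_attribs))

importances = pd.Series(grid_search.best_estimator_.feature_importances_,
                        index=feature_names).sort_values(ascending=False)
print(importances.head(10))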

Thanks for your answer. I am running the dictionary method and it gives the error "index 82 is out of bounds for axis 0 with size 82". What could it be? I can put the full code in the post if necessary.

Sorry for the delay; yes, more information about x_train would probably help.

Hey! Following your suggestion I managed to solve it. Here is a solution:

feats = {}  # a dict to hold feature_name: feature_importance
for feature, importance in zip(x_train.columns, grid_search.best_estimator_.feature_importances_):
    feats[feature] = importance  # add the name/value pair
importances = pd.DataFrame.from_dict(feats, orient='index').rename(columns={0: 'Gini-importance'}).sort_values(by='Gini-importance', ascending=False)
importances[:10]
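
As a side note, the "index 82 is out of bounds for axis 0 with size 82" error above is what you would expect when the dictionary comprehension loops over every importance (one per transformed column) but indexes into the numeric-only column list, which is shorter once the one-hot encoder has added columns. A quick sanity check, as a sketch using the variables defined earlier:

n_importances = len(grid_search.best_estimator_.feature_importances_)
n_numeric = len(x_train.select_dtypes(include=np.number).columns)
print(n_importances, n_numeric)  # the first exceeds the second whenever cat_attribs is non-empty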