Scikit-learn: how to access the regressor's weights in a scikit-learn Pipeline

I used a Keras regressor to fit a regression to my data. I used the scikit-learn wrapper and a Pipeline to first standardize the data and then fit it with the Keras regressor. Something like this:

from sklearn.grid_search import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.cross_validation import cross_val_score
from sklearn.cross_validation import KFold
from sklearn.externals import joblib
import cPickle
import numpy as np
import pandas as pd
import os
from create_model import *

estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasRegressor(build_fn=create_model, nb_epoch=50, batch_size=5, verbose=0, neurons=1)))
pipeline = Pipeline(estimators)
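
(The create_model function is imported from a separate module that is not shown in the question. For completeness, here is a minimal, assumed sketch of what such a build_fn could look like; it is hypothetical, not the asker's actual code, but it must accept every hyperparameter tuned through the grid below: neurons, optimizer, activation, learn_rate.)

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD, Adam, RMSprop

def create_model(neurons=1, optimizer='adam', activation='relu', learn_rate=0.01, input_dim=8):
    # Minimal assumed build_fn: one hidden layer followed by a linear output.
    # input_dim=8 is a placeholder and must match the number of features in X.
    model = Sequential()
    model.add(Dense(neurons, input_dim=input_dim, activation=activation))
    model.add(Dense(1))  # single linear output unit for regression

    # Map the optimizer name to an instance so learn_rate is actually applied.
    opts = {'sgd': SGD(lr=learn_rate), 'adam': Adam(lr=learn_rate), 'rmsprop': RMSprop(lr=learn_rate)}
    model.compile(loss='mean_squared_error', optimizer=opts[optimizer])
    return model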
Then I search for the best fit via GridSearchCV and store the best estimator in a variable:

batch_size = [60, 80, 100, 200]
epochs = [2, 4, 6, 8, 10, 50]
neurons = np.arange(3,10,1)
optimizer = ['sgd', 'adam', 'rmsprop']
activation = ['relu', 'tanh']
lr = [0.001, 0.01, 0.1]
param_grid = dict(mlp__neurons=neurons, mlp__batch_size=batch_size, mlp__nb_epoch=epochs,
                  mlp__optimizer=optimizer, mlp__activation=activation, mlp__learn_rate=lr)
kfold = KFold(n=X.shape[0], n_folds=10)  # kfold was not defined in the question; assuming a 10-fold split here
grid = GridSearchCV(estimator=pipeline, param_grid=param_grid, cv=kfold, scoring='mean_squared_error')
grid_result = grid.fit(X, Y)
clf = grid_result.best_estimator_
The clf variable contains the two steps defined in the pipeline. My question is how to extract the weights and biases of the Keras regressor for this best fit (clf), for example through the get_params function:

clf.get_params()

I could not find good documentation on this.

weights = clf.named_steps['mlp'].model.layers[0].get_weights()[0]
biases = clf.named_steps['mlp'].model.layers[0].get_weights()[1]
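
Here the fitted KerasRegressor is reached through the pipeline's named_steps attribute, using the step name 'mlp' from the pipeline definition above; its underlying Keras model is then available as .model after fitting. A sketch of the same idea extended to every layer of the fitted best estimator (assuming clf from the question and Dense layers) could look like this:

# Reach the fitted Keras model inside the best pipeline found by the grid search.
mlp = clf.named_steps['mlp']   # the fitted KerasRegressor step
keras_model = mlp.model        # the underlying compiled Keras model

# For a Dense layer, get_weights() returns [weights, biases].
for i, layer in enumerate(keras_model.layers):
    params = layer.get_weights()
    if len(params) == 2:
        w, b = params
        print('layer %d: weights %s, biases %s' % (i, w.shape, b.shape))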