Python: building a Pipeline with StandardScaler and a Keras regressor


I am trying to use GridSearchCV to search over epochs and batch size with the following code:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, shuffle=False)

X_train2 = X_train.values.reshape((X_train.shape[0], 1, X_train.shape[1]))
y_train2 = np.ravel(y_train.values)

X_test2 = X_test.values.reshape((X_test.shape[0], 1, X_test.shape[1]))
y_test2 = np.ravel(y_test.values)

def build_model():
    model = Sequential()
    model.add(LSTM(500, input_shape=(1, X_train.shape[1])))
    model.add(Dense(1))
    model.compile(loss="mse", optimizer="adam")
    return model


new_model = KerasRegressor(build_fn=build_model, verbose=0)

pipe = Pipeline([('s', StandardScaler()), ('reg', new_model)])
param_gridd = {'reg__epochs': [5, 6], 'reg__batch_size': [71, 72]}
model = GridSearchCV(estimator=pipe, param_grid=param_gridd)

# ------------------ if the following two lines are uncommented the code works -> problem with Pipeline?
# param_gridd = {'epochs':[5,6], 'batch_size': [71, 72]}
# model = GridSearchCV(estimator=new_model, param_grid=param_gridd)


fitted = model.fit(X_train2, y_train2, validation_data=(X_test2, y_test2), verbose=2, shuffle=False)
and get the following error:

File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_search.py", line 722, in fit
 self._run_search(evaluate_candidates)   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_search.py", line 1191, in _run_search
 evaluate_candidates(ParameterGrid(self.param_grid))   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_search.py", line 711, in evaluate_candidates
 cv.split(X, y, groups)))   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 917, in __call__
 if self.dispatch_one_batch(iterator):   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 759, in dispatch_one_batch
 self._dispatch(tasks)   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 716, in _dispatch
 job = self._backend.apply_async(batch, callback=cb)   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/externals/oblib/_parallel_backends.py", line 182, in apply_async
 result = ImmediateResult(func)   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 549, in __init__
 self.results = batch()   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 225, in __call__
 for func, args, kwargs in self.items]   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 225, in <listcomp>
 for func, args, kwargs in self.items]   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 528, in _fit_and_score
 estimator.fit(X_train, y_train, **fit_params)   
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/pipeline.py", line 265, in fit
 Xt, fit_params = self._fit(X, y, **fit_params)    
File "/home/geo/anaconda3/lib/python3.6/site-packages/sklearn/pipeline.py", line 202, in _fit
 step, param = pname.split('__', 1)

ValueError: not enough values to unpack (expected 2, got 1)

I suspect this has to do with the naming in param_gridd, but I am not entirely sure what is going on. Note that when I remove make_pipeline from the code and use GridSearchCV directly on new_model, the code works fine.

I think the problem lies in how the fit parameters for the KerasRegressor are being passed in. validation_data and shuffle are not parameters of GridSearchCV but of the reg step. Try this:

fitted = model.fit(X_train2, y_train2, **{'reg__validation_data': (X_test2, y_test2), 'reg__verbose': 2, 'reg__shuffle': False})
EDIT: Based on @Vivek Kumar's finding, I wrote a wrapper for your preprocessing:

from sklearn.preprocessing import StandardScaler

class custom_StandardScaler():
    def __init__(self):
        self.scaler = StandardScaler()

    def fit(self, X, y=None):
        self.scaler.fit(X)
        return self

    def transform(self, X, y=None):
        # Scale first, then add the extra time-step dimension the LSTM expects
        X_new = self.scaler.transform(X)
        X_new = X_new.reshape((X.shape[0], 1, X.shape[1]))
        return X_new

This lets you apply the standard scaler while also creating the new dimension. Keep in mind that the evaluation dataset has to be transformed before it is supplied through the fit params, hence a separate scaler (offline_scaler) is used to transform it:

from sklearn.datasets import load_boston
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from keras.layers import LSTM
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
import numpy as np

seed = 1

boston = load_boston()
X, y = boston['data'], boston['target']

X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.2, random_state=42)


def build_model():
    model = Sequential()
    model.add(LSTM(5, input_shape=(1, X_train.shape[1])))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='Adam', metrics=['mae'])
    return model


new_model = KerasRegressor(build_fn=build_model, verbose=0)

param_gridd = {'reg__epochs': [2, 3], 'reg__batch_size': [16, 32]}
pipe = Pipeline([('s', custom_StandardScaler()), ('reg', new_model)])

offline_scaler = custom_StandardScaler()
offline_scaler.fit(X_train)
X_eval2 = offline_scaler.transform(X_eval)

model = GridSearchCV(estimator=pipe, param_grid=param_gridd, cv=3)
fitted = model.fit(X_train, y_train, **{'reg__validation_data': (X_eval2, y_eval), 'reg__verbose': 2, 'reg__shuffle': False})

As @AI_Learning said, this line should work:

fitted = model.fit(X_train2, y_train2, 
                   reg__validation_data=(X_test2, y_test2), 
                   reg__verbose=2, reg__shuffle=False)
A Pipeline requires its parameters to be named as 'component__parameter'. So prepending reg__ to the parameters works.
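To make that naming rule concrete, here is a small sketch of how the names are split and how an un-prefixed set of fit parameters can be rewritten (fit_params is just an illustrative helper dict built from the arguments in the question; this only addresses the naming, the dimensionality issue is discussed next):

# Pipeline routes every fit parameter to a step by splitting its name on '__'.
# 'reg__shuffle' splits into ('reg', 'shuffle'); a bare 'shuffle' cannot be split
# into two parts, which is exactly the "not enough values to unpack" error above.
step, param = 'reg__shuffle'.split('__', 1)   # ('reg', 'shuffle')

# The same prefixing can be applied to a plain dict of fit parameters:
fit_params = {'validation_data': (X_test2, y_test2), 'verbose': 2, 'shuffle': False}
fit_params = {'reg__' + key: value for key, value in fit_params.items()}
fitted = model.fit(X_train2, y_train2, **fit_params)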

However, even with the correct naming this will not work, because the StandardScaler will complain about the dimensions of the data. You see, when you do this:

X_train2 = X_train.values.reshape((X_train.shape[0], 1, X_train.shape[1]))
...

X_test2 = X_test.values.reshape((X_test.shape[0], 1, X_test.shape[1]))
you turn X_train2 and X_test2 into three-dimensional data. You do that so it works for the LSTM, but it does not work for the StandardScaler, which requires two-dimensional data of shape (n_samples, n_features).
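As a quick illustration of that constraint (random numbers standing in for real features, and 13 features chosen only because the Boston example above happens to have 13), StandardScaler rejects a 3-D array outright:

import numpy as np
from sklearn.preprocessing import StandardScaler

X3d = np.random.rand(100, 1, 13)   # (n_samples, 1, n_features), the shape fed to the LSTM
StandardScaler().fit(X3d)          # raises ValueError: Found array with dim 3 ... expected <= 2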

If you remove the StandardScaler from the pipeline like this:

pipe = Pipeline([('reg', new_model)])
and try the code suggested by me and @AI_Learning, it will work. This shows that the issue has nothing to do with the Pipeline itself, but with using an incompatible transformer alongside it.

You can take the StandardScaler out of the pipeline and do the following:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, shuffle=False)

std = StandardScaler()
X_train = std.fit_transform(X_train)
X_test = std.transform(X_test)

X_train2 = X_train.reshape((X_train.shape[0], 1, X_train.shape[1]))  # X_train is already a NumPy array after fit_transform
y_train2 = np.ravel(y_train.values)

...
...
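
A hedged sketch of how the rest of that approach could look, assuming build_model from the question and that X_test/y_test come from the same train_test_split; without a Pipeline, neither the grid keys nor the fit arguments need a reg__ prefix:

X_test2 = X_test.reshape((X_test.shape[0], 1, X_test.shape[1]))   # X_test is a NumPy array after std.transform
y_test2 = np.ravel(y_test.values)

new_model = KerasRegressor(build_fn=build_model, verbose=0)
param_gridd = {'epochs': [5, 6], 'batch_size': [71, 72]}          # no 'reg__' prefix needed here
model = GridSearchCV(estimator=new_model, param_grid=param_gridd)
fitted = model.fit(X_train2, y_train2,
                   validation_data=(X_test2, y_test2), verbose=2, shuffle=False)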

I don't think the code snippet you posted is what causes the actual problem. Could you provide the original snippet?

Hi Chris, thanks for your comment. I have updated the code above; as I said before, the code works fine without the pipeline. When I print pipe.get_params().keys() it only contains dict_keys(['memory', 'steps', 's', 'reg', 's__copy', 's__with_mean', 's__with_std', 'reg__verbose', 'reg__build_fn']), so reg__epochs and reg__batch_size are not valid keys at all.

Hi George, you may not see those parameters listed in pipe.get_params() because they are .fit() parameters. Does the suggested line give you the same error?

I have added an example to show that it works on a sample dataset.

So the StandardScaler cannot transform the 3D arrays that Keras takes; yes, that makes sense. Thanks for your help!

Thanks for the clarification, @AI_Learning! So all the transformations need to be done before I build the 3D arrays, which makes a Pipeline redundant for Keras applications. Thanks for your answer, Vivek.

@GeorgeM Well, you could put an intermediate step between the StandardScaler and the Keras regressor that converts its 2D output into the 3D input Keras expects, as @AI_Learning suggested in his edit (a sketch of such a step follows after these comments). But since you are passing validation data to the KerasRegressor, which requires a fitted StandardScaler outside the pipeline, using a Pipeline is redundant and, in your case, incorrect.

@GeorgeM Because the validation data needs the same scaling as X_train and X_test, but if you include the StandardScaler in the pipeline the data will be scaled again (X_train will also be split again because of GridSearchCV), which would be wrong. So don't use the StandardScaler inside the pipeline here.

Yes, I agree with that. Thanks.
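
To illustrate the intermediate step mentioned above, here is a minimal sketch of a reshape-only transformer (the name To3D is made up for illustration) that could sit between the StandardScaler and the KerasRegressor; the caveat about the validation data needing a separately fitted scaler still applies:

from sklearn.base import BaseEstimator, TransformerMixin

class To3D(BaseEstimator, TransformerMixin):
    # Turn the scaler's 2-D (n_samples, n_features) output into the
    # 3-D (n_samples, 1, n_features) input the LSTM layer expects.
    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X.reshape((X.shape[0], 1, X.shape[1]))

pipe = Pipeline([('s', StandardScaler()), ('to3d', To3D()), ('reg', new_model)])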