What is the correct way to do cross-validation with an LSTM in Python?
I am fairly new to LSTMs and deep learning. I am trying to train a multivariate LSTM for time-series forecasting and I want to use cross-validation. I tried two different approaches and got very different results:

- using kfold.split
- using KerasRegressor and cross_val_score

The first option gives better results, with an RMSE of about 3.5, while the second gives an RMSE of 5.7 (after inverse normalization). I tried to search for LSTM examples that use the KerasRegressor wrapper, but I did not find many, and they did not seem to run into the same problem (or perhaps did not check for it). I am wondering whether KerasRegressor is messing something up.
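One thing worth checking before comparing the two RMSE numbers: scikit-learn's 'neg_root_mean_squared_error' scorer returns RMSE with the sign flipped (scorers follow a greater-is-better convention), so the values from cross_val_score need to be negated before comparing them with a hand-computed RMSE. A minimal sketch with a plain LinearRegression standing in for the LSTM (synthetic data, illustrative only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# synthetic regression data, just to demonstrate the sign convention
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

scores = cross_val_score(LinearRegression(), X, y,
                         cv=KFold(n_splits=8, shuffle=False),
                         scoring="neg_root_mean_squared_error")
rmse_per_fold = -scores  # flip the sign to get the actual RMSE per fold
print(rmse_per_fold.mean())
```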
Using kfold.split:

    from numpy.random import seed
    import tensorflow as tf
    from sklearn.model_selection import KFold
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    seed(1)
    tf.random.set_seed(2)

    kfold = KFold(n_splits=8, shuffle=False)
    epochs = 20
    batch_size = 100

    model = Sequential()
    model.add(LSTM(200, activation="relu",
                   input_shape=(X_train.shape[1], X_train.shape[2])))
    model.add(Dense(1, activation="sigmoid"))
    # compile model (rmse is a custom metric defined elsewhere)
    model.compile(optimizer="adam", loss="mse", metrics=[rmse])

    for trn, val in kfold.split(X_train, Y_train):
        model.fit(X_train[trn], Y_train[trn], epochs=epochs,
                  batch_size=batch_size, verbose=1)
        # generate generalization metrics
        scores = model.evaluate(X_train[val], Y_train[val], verbose=1)

    # refit on the full training set and make predictions
    history = model.fit(X_train, Y_train, epochs=epochs,
                        batch_size=batch_size, verbose=1)
    Y_pred = model.predict(X_test)
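One difference between the two set-ups that may explain the gap: in the loop above, the model is built and compiled once, outside the fold loop, so each successive fold keeps training the weights left over from earlier folds, whereas KerasRegressor rebuilds the model from scratch on every fold. A minimal sketch of that effect, using scikit-learn's SGDRegressor with warm_start as a lightweight stand-in for reusing one Keras model across folds (synthetic data; all names are illustrative):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

kfold = KFold(n_splits=4, shuffle=False)

# Reused estimator: each fit continues from the previous fold's weights,
# so later folds are evaluated with extra accumulated training.
reused = SGDRegressor(max_iter=5, tol=None, warm_start=True, random_state=0)
reused_scores = []
for trn, val in kfold.split(X):
    reused.fit(X[trn], y[trn])  # continues from previous fold's weights
    reused_scores.append(reused.score(X[val], y[val]))

# Fresh estimator per fold: each fold is an independent measurement.
fresh_scores = []
for trn, val in kfold.split(X):
    fresh = SGDRegressor(max_iter=5, tol=None, warm_start=False, random_state=0)
    fresh.fit(X[trn], y[trn])
    fresh_scores.append(fresh.score(X[val], y[val]))

print(np.mean(reused_scores), np.mean(fresh_scores))
```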
Using KerasRegressor and cross_val_score:

    from numpy.random import seed
    import tensorflow as tf
    from sklearn.model_selection import KFold, cross_val_score
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense
    from keras.wrappers.scikit_learn import KerasRegressor

    epochs = 20
    batch_size = 100

    # model function
    def lstm_model(unit=200):
        seed(1)
        tf.random.set_seed(2)
        # define model
        model = Sequential()
        model.add(LSTM(unit, activation="relu",
                       input_shape=(X_train.shape[1], X_train.shape[2])))
        model.add(Dense(1, activation="sigmoid"))
        # compile model (rmse is a custom metric defined elsewhere)
        model.compile(optimizer="adam", loss="mse", metrics=[rmse])
        model.summary()
        return model

    model = KerasRegressor(build_fn=lstm_model, epochs=epochs,
                           batch_size=batch_size, verbose=1)
    kfold = KFold(n_splits=8, shuffle=False)
    cv_scores = cross_val_score(model, X_train, Y_train, cv=kfold,
                                scoring="neg_root_mean_squared_error")

    # fit the model on the full training set
    model.fit(X_train, Y_train)
    # predict on the test set
    Y_pred = model.predict(X_test)
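Separately from the wrapper question: for time-series forecasting, plain KFold (even with shuffle=False) validates on folds that come earlier in time than part of the training data, which leaks future information. scikit-learn's TimeSeriesSplit keeps every validation window strictly after its training window. A small self-contained sketch of how its folds are laid out:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=4)
X_dummy = np.arange(20).reshape(-1, 1)  # 20 time steps, illustrative only

folds = list(tscv.split(X_dummy))
for i, (trn, val) in enumerate(folds):
    # every validation index comes strictly after every training index
    print(f"fold {i}: train [{trn[0]}..{trn[-1]}]  val [{val[0]}..{val[-1]}]")
```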