How do you supply the batch size to an LSTM in Keras?
I have been searching for how to get this batch shape (the `batch_input_shape` argument marked in the code below) into the correct form. I know the input is supplied as (samples, timesteps, features). My sequence has 1231 timesteps. I want to build a merged model in which one LSTM is stateful and the other is stateless:
from numpy import array
from numpy import hstack
from keras.models import Model
from keras.layers import Input
from keras.layers import Dense
from keras.layers import LSTM  # LSTM is used below but was not imported
from keras.layers import concatenate  # keras.layers.merge is removed in newer Keras
def split_sequences(sequences, n_steps):
    X, y = list(), list()
    for i in range(len(sequences)):
        # end index of the current window
        end_ix = i + n_steps
        if end_ix > len(sequences):
            break
        # inputs: all columns except the last; target: the last column
        seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1, -1]
        X.append(seq_x)
        y.append(seq_y)
    return array(X), array(y)
in_seq1 = X_left
in_seq2 = X_right
out_seq = Y_train# array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
in_seq1 = in_seq1.reshape((len(in_seq1), 5))
in_seq2 = in_seq2.reshape((len(in_seq2), 1))
out_seq = out_seq.reshape((len(out_seq), 1))
dataset = hstack((in_seq1, in_seq2, out_seq))
n_steps = 1
# convert into input/output
X, y = split_sequences(dataset, n_steps)
X1 = X[:, :, 0].reshape(-1, n_steps, 1)  # add the feature axis the LSTM expects
X2 = X[:, :, 1].reshape(-1, n_steps, 1)
visible1 = Input(shape=(n_steps, 1))
# the batch_input_shape below (bold in the original post) is the argument in question
lstm1 = LSTM(100, input_shape=(100, 1), batch_input_shape=(32, 100, 1), stateful=True)(visible1)
visible2 = Input(shape=(n_steps, 1))
lstm2 = LSTM(100, input_shape=(100, 1))(visible2)
merge = concatenate([lstm1, lstm2])
output = Dense(1)(merge)
model = Model(inputs=[visible1, visible2], outputs=output)
model.compile(optimizer='adam', loss='mape')  # accuracy is not a meaningful metric for regression
# fit model
model.fit([X1, X2], y, epochs=2000, verbose=1, batch_size=64,
validation_split=0.1)
yhat = model.predict([X1, X2], verbose=0)
print(yhat)
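A sketch of how the merged model above could be written so that the stateful branch actually receives a fixed batch size: in the functional API the batch size belongs on the `Input` layer via `batch_shape`, not in a `batch_input_shape` argument on the `LSTM` call. The batch size of 32 and the random training data here are assumptions for illustration only.

```python
import numpy as np
from keras.models import Model
from keras.layers import Input, LSTM, Dense, Concatenate

n_steps, batch_size = 1, 32

# Stateful branch: the fixed batch size is declared on the Input layer.
visible1 = Input(batch_shape=(batch_size, n_steps, 1))
lstm1 = LSTM(100, stateful=True)(visible1)

# Stateless branch: given the same fixed batch shape so the two
# inputs stay compatible when merged.
visible2 = Input(batch_shape=(batch_size, n_steps, 1))
lstm2 = LSTM(100)(visible2)

merged = Concatenate()([lstm1, lstm2])
output = Dense(1)(merged)
model = Model(inputs=[visible1, visible2], outputs=output)
model.compile(optimizer='adam', loss='mape')

# Stateful training must use that same batch_size, a sample count
# divisible by it, and (usually) shuffle=False.
X1 = np.random.rand(batch_size * 4, n_steps, 1)
X2 = np.random.rand(batch_size * 4, n_steps, 1)
y = np.random.rand(batch_size * 4, 1)
model.fit([X1, X2], y, epochs=1, batch_size=batch_size, shuffle=False, verbose=0)
```

Note that `validation_split` is awkward with stateful models, since both the training and the validation portions would have to be multiples of the batch size; it is omitted in this sketch.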
Thanks.

(Moderator note: I have removed your invalid [] markup. Please read the descriptions of the tags you use before editing them again. If you want to include a more appropriate tag, feel free to add it.)
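On the data side, a stateful model additionally requires every batch to contain exactly `batch_size` samples, so the dataset is usually trimmed to a multiple of the batch size before fitting. A small sketch, using the 1231-sample count from the question and an assumed batch size of 32:

```python
import numpy as np

batch_size = 32
X = np.random.rand(1231, 1, 1)  # 1231 samples of 1 timestep, 1 feature

# Keep only as many samples as fit into whole batches.
n_usable = (len(X) // batch_size) * batch_size  # 38 full batches -> 1216
X_trimmed = X[:n_usable]
print(X_trimmed.shape)  # (1216, 1, 1)
```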