Keras Dense expects 2D input but gets 3D from LSTM
keras, neural-network, lstm, recurrent-neural-network, lstm-stateful

In my model:
Xtrain shape : (62, 30, 100)
Ytrain shape : (62, 1, 100)
Xtest shape : (16, 30, 100)
Ytest shape : (16, 1, 100)
When I build the model:
model = Sequential()
model.add(LSTM(units=100, return_sequences=True, input_shape=(x_train.shape[1], X_train.shape[2])))
model.add(LSTM(units=100, return_sequences=True))
model.add(Dense(units=100))
model.fit(x_train,y_train,epochs=5,batch_size=13)
When I try to fit it, it throws an error:
ValueError: Error when checking target: expected dense_1 to have 2 dimensions, but got array with shape (62, 1, 100)
I need to predict the next single timestep for all 100 features.
What changes do I need to make?

The posted code seems to be different from the one that generated the error. Print your model.summary(). You will see:
- LSTM 1: (None, 30, 100)
- LSTM 2: (None, 30, 100)
- Dense: (None, 30, 100)

(None, 30, 100) vs (62, 1, 100)
To eliminate the timesteps dimension, you need return_sequences=False in the last LSTM, so your model becomes:
- (None, 30, 100)
- (None, 100)
- (None, 100)

Ytrain.shape == (62, 100)
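A minimal sketch of the target reshape this implies, using NumPy (the dummy array here stands in for the question's Ytrain):

```python
import numpy as np

# Dummy targets with the question's shape: (62, 1, 100)
Ytrain = np.zeros((62, 1, 100))

# Drop the length-1 middle axis so the targets match the
# (None, 100) output of Dense once return_sequences=False
Ytrain_2d = np.squeeze(Ytrain, axis=1)

print(Ytrain_2d.shape)  # (62, 100)
```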
If you really do need the middle dimension == 1, just add after the Dense layer:

Lambda(lambda x: K.expand_dims(x, 1))
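The shape effect of that Lambda can be checked with NumPy's equivalent of K.expand_dims (the array here is a stand-in for the Dense output):

```python
import numpy as np

# Output of the Dense layer for a batch of 62 samples: (62, 100)
dense_out = np.zeros((62, 100))

# K.expand_dims(x, 1) inserts a length-1 axis at position 1,
# just like np.expand_dims here, restoring (62, 1, 100)
restored = np.expand_dims(dense_out, 1)

print(restored.shape)  # (62, 1, 100)
```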
If you want to process the result of each LSTM step individually (the most common way to place a Dense layer after an LSTM or another RNN), you need to wrap it like this:
model = Sequential()
model.add(LSTM(units=100, return_sequences=True, input_shape=(x_train.shape[1], X_train.shape[2])))
model.add(LSTM(units=100, return_sequences=True))
model.add(TimeDistributed(Dense(units=100)))
Each timestep's output will be fed to the Dense layer separately (it is, of course, the same layer: all weights are shared between each of its "instances").
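What TimeDistributed(Dense(...)) computes can be sketched in plain NumPy: a single shared weight matrix applied independently at every timestep. Shapes follow the question; the random weights W and b are hypothetical stand-ins for the Dense kernel and bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# LSTM output: (batch=62, timesteps=30, features=100)
lstm_out = rng.normal(size=(62, 30, 100))

# One shared Dense kernel and bias mapping 100 -> 100 features
W = rng.normal(size=(100, 100))
b = rng.normal(size=(100,))

# Batched matmul applies the same W and b at every timestep
out = lstm_out @ W + b
print(out.shape)  # (62, 30, 100)

# The explicit per-timestep loop reuses the same weights,
# confirming there are no per-timestep parameters
looped = np.stack([lstm_out[:, t] @ W + b for t in range(30)], axis=1)
print(np.allclose(out, looped))  # True
```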