Python: Error when checking input: expected input_39 to have 2 dimensions, but got array with shape (100, 50, 780)


I'm trying to add word embeddings to my encoder-decoder model, but there seems to be a dimension problem.

from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

# Encoder: integer token IDs -> pretrained embeddings -> final LSTM states
encoder_inputs = Input(shape=(None,))
x = Embedding(nEncoderToken, embedding_dim, weights=[encoder_embedding_matrix])(encoder_inputs)
x, state_h, state_c = LSTM(embedding_dim, return_state=True)(x)
encoder_states = [state_h, state_c]

# Decoder: integer token IDs -> embeddings -> LSTM initialised with the encoder states
decoder_inputs = Input(shape=(None,))
y = Embedding(nDecoderToken, embedding_dim, weights=[decoder_embedding_matrix], input_length=max_len)(decoder_inputs)
y = LSTM(embedding_dim, return_sequences=True)(y, initial_state=encoder_states)
decoder_outputs = Dense(nDecoderToken, activation='softmax')(y)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
So far there are no errors, and the model summary looks like this:

Model: "model_17"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_39 (InputLayer)           (None, None)         0                                            
__________________________________________________________________________________________________
input_40 (InputLayer)           (None, None)         0                                            
__________________________________________________________________________________________________
embedding_41 (Embedding)        (None, None, 300)    234000      input_39[0][0]                   
__________________________________________________________________________________________________
embedding_42 (Embedding)        (None, 50, 300)      239100      input_40[0][0]                   
__________________________________________________________________________________________________
lstm_33 (LSTM)                  [(None, 300), (None, 721200      embedding_41[0][0]               
__________________________________________________________________________________________________
lstm_34 (LSTM)                  (None, 50, 300)      721200      embedding_42[0][0]               
                                                                 lstm_33[0][1]                    
                                                                 lstm_33[0][2]                    
__________________________________________________________________________________________________
dense_17 (Dense)                (None, 50, 797)      239897      lstm_34[0][0]                    
==================================================================================================
Total params: 2,155,397
Trainable params: 2,155,397
Non-trainable params: 0
__________________________________________________________________________________________________
But when I try to fit the model:

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([trainInputEncoded, trainInputDecoded], trainTargetDecoded, epochs=100)
I get: Error when checking input: expected input_39 to have 2 dimensions, but got array with shape (100, 50, 780)
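One quick way to see the mismatch (a debugging sketch added here, not part of the original post; the commented shapes are guesses based on the error above and the summary) is to print what the model expects next to what is actually passed:

print(model.input_shape)         # [(None, None), (None, None)]: two 2D inputs of integer token IDs
print(trainInputEncoded.shape)   # (100, 50, 780) according to the error, i.e. a 3D array
print(trainInputDecoded.shape)   # presumably (100, 50, nDecoderToken) or (100, 50)
print(trainTargetDecoded.shape)  # should be (samples, 50, nDecoderToken) for categorical_crossentropy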


I've been trying to change things all day and getting nowhere. The only way I could get it to run was to use Input(shape=(None, nEncoderToken)) and drop the embedding layer.
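A plausible reading of the shapes (an assumption, not something stated in the post): embedding_41 has 234,000 parameters = 780 × 300, so nEncoderToken is presumably 780, which matches the last dimension of the failing array. That suggests trainInputEncoded is one-hot encoded, while an Embedding layer fed from Input(shape=(None,)) expects 2D integer token indices. A minimal sketch of the conversion, under that assumption:

import numpy as np

# Assumption: the input arrays are one-hot encoded, shape (samples, 50, vocab_size).
# The Embedding layers want integer token indices of shape (samples, 50) instead,
# so collapse the one-hot axis with argmax before fitting.
trainInputEncodedIdx = np.argmax(trainInputEncoded, axis=-1)   # (100, 50, 780) -> (100, 50)
trainInputDecodedIdx = np.argmax(trainInputDecoded, axis=-1)   # (100, 50, 797) -> (100, 50)

# The target can stay one-hot, since the model ends in softmax + categorical_crossentropy.
model.fit([trainInputEncodedIdx, trainInputDecodedIdx], trainTargetDecoded, epochs=100)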

Don't remove the embedding layer; keep Input(shape=(None, nEncoderToken)). What error do you get then?
Input 39 is incompatible with layer lstm_33: expected ndim=3, found ndim=4
That's because you are passing batch_size=None in the shape argument. So change it to Input(shape=(,)) or Input(batch_shape=(None,))
Input(shape=(,)) is invalid syntax, and Input(batch_shape=(None,)) leads to the same error =/ What is the shape of trainInputEncoded?
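For what it's worth, a minimal sketch (using the same layer types as in the question, with illustrative sizes) of why the ndim=4 error above appears: Embedding adds one axis to its input, so a 2D integer input becomes the 3D tensor an LSTM expects, while a 3D input such as Input(shape=(None, nEncoderToken)) becomes 4D.

from keras.layers import Input, Embedding, LSTM

vocab, dim = 780, 300

ok = Input(shape=(None,))           # (batch, timesteps): integer token IDs
x = Embedding(vocab, dim)(ok)       # (batch, timesteps, 300): ndim=3, what LSTM expects
x = LSTM(dim)(x)                    # builds fine

bad = Input(shape=(None, vocab))    # (batch, timesteps, 780): e.g. one-hot rows
y = Embedding(vocab, dim)(bad)      # Embedding adds an axis -> (batch, timesteps, 780, 300), ndim=4
# y = LSTM(dim)(y)                  # raises: expected ndim=3, found ndim=4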