TensorFlow: dimensions don't match between the embedding layer and the LSTM encoder layer


I am trying to build an encoder-decoder model for text generation, using LSTM layers together with an embedding layer. I have a problem with the output of the embedding layer going into the LSTM encoder layer. The error I get is:

 ValueError: Input 0 of layer lstm is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: (None, 13, 128, 512)
My encoder data has the following shape:
(40, 13, 128) = (num_observations, max_encoder_seq_length, vocab_size)
Embedding size / latent size = 512

My question is: how do I "get rid of" this fourth dimension between the embedding layer and the LSTM encoder layer, or, put differently, how should I pass this data into the LSTM layer of the encoder model? And since I am new to this topic, what should I eventually correct in the decoder LSTM layer as well?
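For intuition (a sketch of mine, not from the original post): an embedding lookup appends the embedding axis to whatever input shape it receives. Label-encoded ids of shape (batch, 13) become (batch, 13, 512), while one-hot input of shape (batch, 13, 128) becomes the 4-D (batch, 13, 128, 512) reported in the error. Plain NumPy indexing, which is all an embedding lookup does, reproduces this:

```python
import numpy as np

vocab_size, latent_dim = 128, 512
embedding_matrix = np.random.rand(vocab_size, latent_dim)

# label-encoded ids: (batch, timesteps) -> lookup yields ndim=3
ids = np.random.randint(0, vocab_size, size=(40, 13))
print(embedding_matrix[ids].shape)      # (40, 13, 512)

# one-hot ids: (batch, timesteps, vocab) -> lookup yields ndim=4
one_hot = np.eye(vocab_size, dtype=int)[ids]
print(embedding_matrix[one_hot].shape)  # (40, 13, 128, 512)
```

So the fourth dimension appears because one-hot data, not integer ids, is fed to the embedding layer.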

I have read several related posts but could not find a solution. It seems to me that the problem is not the model but the shape of my data. Any hint or comment on what might be wrong would be greatly appreciated. Thank you.

My model is the one in (); its summary is as follows:

Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 13)]         0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            [(None, 15)]         0                                            
__________________________________________________________________________________________________
embedding (Embedding)           (None, 13, 512)      65536       input_1[0][0]                    
__________________________________________________________________________________________________
embedding_1 (Embedding)         (None, 15, 512)      65536       input_2[0][0]                    
__________________________________________________________________________________________________
lstm (LSTM)                     [(None, 512), (None, 2099200     embedding[0][0]                  
__________________________________________________________________________________________________
lstm_1 (LSTM)                   (None, 15, 512)      2099200     embedding_1[0][0]                
                                                                 lstm[0][1]                       
                                                                 lstm[0][2]                       
__________________________________________________________________________________________________
dense (Dense)                   (None, 15, 128)      65664       lstm_1[0][0]                     
==================================================================================================
Total params: 4,395,136
Trainable params: 4,395,136
Non-trainable params: 0
__________________________________________________________________________________________________
EDIT

I am formatting the data in the following way:

for i, text in enumerate(input_texts):
    words = text.split()  # text is a sentence
    for t, word in enumerate(words):
        encoder_input_data[i, t, input_dict[word]] = 1.
which, for a command such as decoder_input_data[:2], yields:

array([[[0., 1., 0., ..., 0., 0., 0.],
        [0., 0., 1., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]],
       [[0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 1., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        ...,
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.],
        [0., 0., 0., ..., 0., 0., 0.]]], dtype=float32)
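If the one-hot arrays have already been built this way, one fix (my sketch, not from the thread) is to collapse the vocabulary axis back to integer token ids with argmax, which produces the 2-D label-encoded input the Embedding layer expects:

```python
import numpy as np

# hypothetical one-hot data shaped like the question's encoder input:
# (num_observations, max_encoder_seq_length, vocab_size)
onehot = np.zeros((40, 13, 128), dtype=np.float32)
onehot[:, :, 1] = 1.0  # dummy token at every position

# collapse the vocab axis back to integer token ids
label_encoded = onehot.argmax(axis=-1)
print(label_encoded.shape)  # (40, 13), ready for Embedding(input_dim=128, ...)
```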

I am not sure what you are passing as input and output to the model, but the following works. Note the shapes of the encoder and decoder inputs I am passing; your inputs must have these shapes for the model to run.

### IMPORTS
import numpy as np
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

### INITIAL CONFIGURATION
num_observations = 40
max_encoder_seq_length = 13
max_decoder_seq_length = 15
num_encoder_tokens = 128
num_decoder_tokens = 128
latent_dim = 512
batch_size = 256
epochs = 5

### MODEL DEFINITION
encoder_inputs = Input(shape=(max_encoder_seq_length,))
x = Embedding(num_encoder_tokens, latent_dim)(encoder_inputs)
x, state_h, state_c = LSTM(latent_dim, return_state=True)(x)
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(max_decoder_seq_length,))
x = Embedding(num_decoder_tokens, latent_dim)(decoder_inputs)
x = LSTM(latent_dim, return_sequences=True)(x, initial_state=encoder_states)
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(x)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

model.summary()

model.compile(optimizer='rmsprop', loss='categorical_crossentropy')


### MODEL INPUT AND OUTPUT SHAPES
# (dummy data just to exercise the shapes; real encoder/decoder inputs
#  should be integer token ids in [0, num_tokens))
encoder_input_data = np.random.random((1000,13))
decoder_input_data = np.random.random((1000,15))
decoder_target_data = np.random.random((1000, 15, 128))

model.fit([encoder_input_data, decoder_input_data],
          decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          shuffle=True,
          validation_split=0.05)

The sequence data (text) needs to be passed to the inputs as label-encoded sequences. This can be done with a tool such as the TextVectorization layer that Keras provides; please read its documentation for more details.
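A minimal sketch of that approach (the corpus and parameter values here are made up for illustration; TextVectorization is a stable Keras layer as of TF 2.6):

```python
import tensorflow as tf

texts = ["the cat sat on the mat", "a dog ran"]  # hypothetical corpus
vectorize = tf.keras.layers.TextVectorization(
    max_tokens=128,                # vocab size, matching num_encoder_tokens
    output_mode="int",             # label-encoded integer sequences
    output_sequence_length=13)     # pad/truncate to max_encoder_seq_length
vectorize.adapt(texts)             # learn the vocabulary from the corpus

encoder_input_data = vectorize(texts)
print(encoder_input_data.shape)    # (2, 13)
```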

Comments:

Please add the values of the shapes specified in the architecture, and provide the full traceback of the error. Which line of code raises it?

Thank you very much for your reply. The problem definitely comes from my data format; please see my edited question for how I am formatting it. I will look at the link you provided, and I will accept the answer as soon as I get my model working. Thanks again.

Following your link, binary classification makes no sense here, so I need to modify that. If I may ask, how should I encode decoder_target_data? I ask because my understanding of what your post explains only covers encoder_input_data and decoder_input_data (TextVectorization(..., output_mode='int')), which are 2-D. But decoder_target_data is 3-D, so can TextVectorization be used for it as well, and if so, how? Thank you for your response.

The encoder and decoder share the same vocabulary, so both need to be label-encoded with that same vocabulary as part of the embedding layer. That is why I used 128 as the vocab size for both of them.
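For the decoder_target_data question in the comments, one common approach (my sketch, not stated in the thread): keep the targets as label-encoded integer sequences as well, then either one-hot them to 3-D for loss='categorical_crossentropy', or leave them 2-D and compile with loss='sparse_categorical_crossentropy':

```python
import numpy as np

num_decoder_tokens, max_decoder_seq_length = 128, 15
# hypothetical label-encoded targets (decoder input shifted one step ahead)
target_ids = np.random.randint(0, num_decoder_tokens,
                               size=(40, max_decoder_seq_length))

# Option A: one-hot to 3-D for loss='categorical_crossentropy'
decoder_target_data = np.eye(num_decoder_tokens, dtype=np.float32)[target_ids]
print(decoder_target_data.shape)  # (40, 15, 128)

# Option B: keep target_ids as-is, shape (40, 15), and compile the model
# with loss='sparse_categorical_crossentropy' instead.
```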
Running the code above yields the following model summary and training log:
Model: "functional_210"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_176 (InputLayer)          [(None, 13)]         0                                            
__________________________________________________________________________________________________
input_177 (InputLayer)          [(None, 15)]         0                                            
__________________________________________________________________________________________________
embedding_33 (Embedding)        (None, 13, 512)      65536       input_176[0][0]                  
__________________________________________________________________________________________________
embedding_34 (Embedding)        (None, 15, 512)      65536       input_177[0][0]                  
__________________________________________________________________________________________________
lstm_94 (LSTM)                  [(None, 512), (None, 2099200     embedding_33[0][0]               
__________________________________________________________________________________________________
lstm_95 (LSTM)                  (None, 15, 512)      2099200     embedding_34[0][0]               
                                                                 lstm_94[0][1]                    
                                                                 lstm_94[0][2]                    
__________________________________________________________________________________________________
dense_95 (Dense)                (None, 15, 128)      65664       lstm_95[0][0]                    
==================================================================================================
Total params: 4,395,136
Trainable params: 4,395,136
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/5
4/4 [==============================] - 3s 853ms/step - loss: 310.7389 - val_loss: 310.3570
Epoch 2/5
4/4 [==============================] - 3s 638ms/step - loss: 310.6186 - val_loss: 310.3362
Epoch 3/5
4/4 [==============================] - 3s 852ms/step - loss: 310.6126 - val_loss: 310.3345
Epoch 4/5
4/4 [==============================] - 3s 797ms/step - loss: 310.6111 - val_loss: 310.3369
Epoch 5/5
4/4 [==============================] - 3s 872ms/step - loss: 310.6117 - val_loss: 310.3352