Python 3.x, TensorFlow 1: encoding and decoding with attention


I joined a project that uses TensorFlow 1.13.2.
The project does encoding and decoding of time series.

The encoding and decoding are done with a bidirectional RNN. I have this part in the code:

    # [batch, input_length, 1] time-series placeholders
    self.encoder_input = tf.placeholder(dtype=tf.float32, shape=(None, opts['input_length'], 1), name='encoder_input')
    self.decoder_input = tf.placeholder(dtype=tf.float32, shape=(None, opts['input_length'], 1), name='decoder_input')
    self.classification_labels = tf.placeholder(dtype=tf.float32, shape=(None, 2), name='classification_labels')

    # seq2seq
    with tf.variable_scope('seq2seq'):
        self.D_ENCODER = dilated_encoder(opts)
        self.h = self.D_ENCODER.encoder(self.encoder_input)  # summary state of the encoder

        self.S_DECOER = single_layer_decoder(opts)
        recons_input = self.S_DECOER.decoder(self.h, self.decoder_input)  # reconstruction
Here is the encoder and decoder code:

    def encoder(self, inputs):  # method of dilated_encoder
        # forward dilated RNN stack
        cell_fw_list = [tf.nn.rnn_cell.GRUCell(num_units=units) for units in self.hidden_units]
        # states_fw: one final state per layer, each of shape [batch_size, units]
        outputs_fw, states_fw = drnn.multi_dRNN_with_dilations(cell_fw_list, inputs, self.dilations, scope='forward_drnn')

        batch_axis = 0
        time_axis = 1
        inputs_bw = array_ops.reverse(inputs, axis=[time_axis])

        # backward dilated RNN stack on the time-reversed inputs
        cell_bw_list = [tf.nn.rnn_cell.GRUCell(num_units=units) for units in self.hidden_units]
        outputs_bw, states_bw = drnn.multi_dRNN_with_dilations(cell_bw_list, inputs_bw, self.dilations, scope='backward_drnn')
        outputs_bw = array_ops.reverse(outputs_bw, axis=[time_axis])  # reverse back so the outputs align with the forward time order

        states_fw = tf.concat(states_fw, axis=1)  # [batch_size, units1 + units2 + units3]
        states_bw = tf.concat(states_bw, axis=1)  # [batch_size, units1 + units2 + units3]
        final_states = tf.concat([states_fw, states_bw], axis=1)  # [batch_size, 2*(units1 + units2 + units3)]

        return final_states
    
    class single_layer_decoder():
        def __init__(self, opts):
            # GRU size matches the encoder's concatenated bidirectional state
            self.hidden_units = 2 * sum(opts['encoder_hidden_units'])

        def decoder(self, init_state, init_input):
            cell = tf.nn.rnn_cell.GRUCell(self.hidden_units)

            outputs, _ = tf.nn.dynamic_rnn(cell=cell, inputs=init_input, initial_state=init_state)

            # keep only the first output feature as the reconstruction: [batch, time, 1]
            recons = outputs[:, :, 0]
            recons = tf.expand_dims(recons, axis=2)

            return recons
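
For example, if opts['encoder_hidden_units'] were [100, 50, 50] (hypothetical numbers), final_states would be [batch, 400] and the decoder GRU would also have 400 units, so the encoder's final_states can be fed straight into the decoder as its initial_state.
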
I am now trying to replace the bidirectional-RNN encoder and decoder with attention-based layers.
I came across this AttentionWithContext code for TensorFlow 1.13:
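
(A minimal sketch of the usual AttentionWithContext pattern; my exact copy may differ in details. The key point is that it softmax-pools over the time axis, so its output is [batch, features] with no time dimension left.)

    import tensorflow as tf

    class AttentionWithContext(tf.keras.layers.Layer):
        # attention pooling over the time axis:
        #   input [batch, time, features] -> output [batch, features]
        def build(self, input_shape):
            dim = int(input_shape[-1])
            self.W = self.add_weight(name='W', shape=(dim, dim), initializer='glorot_uniform')
            self.b = self.add_weight(name='b', shape=(dim,), initializer='zeros')
            self.u = self.add_weight(name='u', shape=(dim,), initializer='glorot_uniform')
            super(AttentionWithContext, self).build(input_shape)

        def call(self, x):
            uit = tf.tanh(tf.tensordot(x, self.W, axes=1) + self.b)  # [batch, time, dim]
            ait = tf.tensordot(uit, self.u, axes=1)                  # [batch, time]
            a = tf.nn.softmax(ait, axis=1)                           # weights over the time axis
            return tf.reduce_sum(x * tf.expand_dims(a, -1), axis=1)  # [batch, dim]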

Then I replaced the encoder:

    # seq2seq
    with tf.variable_scope('seq2seq'):
        self.D_ENCODER = AttentionWithContext()
        self.h = self.D_ENCODER(self.encoder_input)  # now [batch, features], not an RNN state

        self.S_DECOER = single_layer_decoder(opts)
        recons_input = self.S_DECOER.decoder(self.h, self.decoder_input)
        
My question is: how do I use the decoder with this attention encoder? When I try to run the program, I get the following error:

ValueError: Dimensions must be equal, but are 2 and 401 for 'seq2seq/rnn/while/gru_cell/MatMul' (op: 'MatMul') with input shapes: [?,2], [401,800].
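
If I read the shapes correctly (my own interpretation): the kernel [401, 800] means the decoder GRUCell was built with 400 units (2 * sum(opts['encoder_hidden_units'])) and an input depth of 1, so inside the loop it expects the concatenation of input and state to have width 1 + 400 = 401. The actual concatenation is only [?, 2], meaning the state I pass in is [batch, 1]: AttentionWithContext collapses the time axis and returns [batch, features], and my feature dimension is 1, so self.h no longer matches the 400-wide initial_state the decoder expects.

One workaround I am considering (a hypothetical sketch, the layer name attn_to_state is mine) is to project the attention output up to the decoder's state size before handing it over:

    # hypothetical: project the [batch, 1] attention output to the decoder state size
    state_size = 2 * sum(opts['encoder_hidden_units'])
    with tf.variable_scope('seq2seq'):
        self.D_ENCODER = AttentionWithContext()
        attn_out = self.D_ENCODER(self.encoder_input)                          # [batch, 1]
        self.h = tf.layers.dense(attn_out, state_size, name='attn_to_state')   # [batch, state_size]

        self.S_DECOER = single_layer_decoder(opts)
        recons_input = self.S_DECOER.decoder(self.h, self.decoder_input)

Is that the right way to feed an attention encoder into this decoder, or does the decoder itself need to change?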