
Confusion about multi-layer bidirectional RNNs in TensorFlow (Python)

Tags: python, tensorflow, lstm, rnn, seq2seq


I am building a multi-layer bidirectional RNN with TensorFlow, but I am a bit confused about the implementation.

I wrote two functions that create a multi-layer bidirectional RNN. The first one runs fine, but I am not sure about the predictions it makes, since it seems to behave like a unidirectional multi-layer RNN. Here is my implementation:

import tensorflow as tf

def encoding_layer_old(rnn_inputs, rnn_size, num_layers, keep_prob, 
                   source_sequence_length, source_vocab_size, 
                   encoding_embedding_size):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :param source_sequence_length: a list of the lengths of each sequence in the batch
    :param source_vocab_size: vocabulary size of source data
    :param encoding_embedding_size: embedding size of source data
    :return: tuple (RNN output, RNN state)
    """
    # Encoder embedding
    enc_embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
    
    def create_cell_fw(rnn_size):
        with tf.variable_scope("create_cell_fw"):
            lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1,0.1,seed=2), reuse=False)
            drop = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)
        return drop
    def create_cell_bw(rnn_size):
        with tf.variable_scope("create_cell_bw"):
            lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size,initializer=tf.random_uniform_initializer(-0.1,0.1,seed=2), reuse=False)
            drop = tf.contrib.rnn.DropoutWrapper(lstm_cell, output_keep_prob=keep_prob)
        return drop    
    
    
    enc_cell_fw = tf.contrib.rnn.MultiRNNCell([create_cell_fw(rnn_size) for _ in range(num_layers)])
    enc_cell_bw = tf.contrib.rnn.MultiRNNCell([create_cell_bw(rnn_size) for _ in range(num_layers)])
    ((encoder_fw_outputs, encoder_bw_outputs),
     (encoder_fw_final_state, encoder_bw_final_state)) = tf.nn.bidirectional_dynamic_rnn(
        enc_cell_fw, enc_cell_bw, enc_embed,
        sequence_length=source_sequence_length, dtype=tf.float32)
    encoder_outputs = tf.concat([encoder_fw_outputs, encoder_bw_outputs], 2)
    print(encoder_outputs)
    #encoder_final_state_c=[]#tf.Variable([num_layers] , dtype=tf.int32)
    #encoder_final_state_h=[]#tf.Variable([num_layers] , dtype=tf.int32)
    encoder_final_state = ()
    for x in range(num_layers):
        encoder_final_state_c = tf.concat(
            (encoder_fw_final_state[x].c, encoder_bw_final_state[x].c), 1)
        encoder_final_state_h = tf.concat(
            (encoder_fw_final_state[x].h, encoder_bw_final_state[x].h), 1)
        encoder_final_state = encoder_final_state + (
            tf.contrib.rnn.LSTMStateTuple(c=encoder_final_state_c, h=encoder_final_state_h),)
    
    #encoder_final_state = tf.contrib.rnn.LSTMStateTuple(c=encoder_final_state_c,h=encoder_final_state_h)
    print('before')
    print(encoder_fw_final_state)
    return encoder_outputs, encoder_final_state
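
For reference, here is a minimal sketch (not from the original post) of how this first function could be wired up and its shapes inspected; the placeholder names, vocabulary size, and hyperparameters are illustrative assumptions only:

# Hypothetical placeholders and sizes, for illustration only.
rnn_size, num_layers = 128, 2
source_ints = tf.placeholder(tf.int32, [None, None], name='source_ints')
source_lengths = tf.placeholder(tf.int32, [None], name='source_lengths')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')

enc_outputs, enc_state = encoding_layer_old(
    source_ints, rnn_size, num_layers, keep_prob,
    source_lengths, source_vocab_size=5000, encoding_embedding_size=64)

# Both directions are concatenated, so the feature dimension is 2 * rnn_size.
print(enc_outputs.get_shape())                      # (?, ?, 256)
# enc_state is a tuple of num_layers LSTMStateTuples whose c/h are also 2 * rnn_size wide.
print(len(enc_state), enc_state[0].c.get_shape())   # 2 (?, 256)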

  
I also found another implementation, which builds the layers by looping over and indexing into a list of cells rather than wrapping them in a MultiRNNCell.


The problem with that implementation is that I get a shape error:

Trying to share variable bidirectional_rnn/fw/lstm_cell/kernel, but specified shape (168, 224) and found shape (256, 224).
It seems other people have run into a similar problem when creating the RNN cells, and the suggested solution is to use MultiRNNCell to build the stacked cells. But if I use MultiRNNCell I cannot use the second implementation, because MultiRNNCell does not support indexing, so I cannot loop over the list of cells and create one RNN per layer.
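
As a side note, a rough sketch of one way around the indexing limitation (an assumption on my part, not something from the original post): keep the per-layer cells in a plain Python list, which can be indexed and looped over, and only wrap that list in a MultiRNNCell where a single stacked cell is required. This assumes the create_cell_fw / create_cell_bw helpers from the first function are available in scope:

# Plain Python lists of per-layer cells; lists support indexing and iteration.
cells_fw = [create_cell_fw(rnn_size) for _ in range(num_layers)]
cells_bw = [create_cell_bw(rnn_size) for _ in range(num_layers)]

# Individual layers stay reachable for per-layer wiring ...
first_fw_cell = cells_fw[0]

# ... while the same lists can still be collapsed into stacked cells
# wherever an API expects a single multi-layer cell.
enc_cell_fw = tf.contrib.rnn.MultiRNNCell(cells_fw)
enc_cell_bw = tf.contrib.rnn.MultiRNNCell(cells_bw)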

I would really appreciate any help with this.


I am using TensorFlow 1.3, and both pieces of code look a bit overly complicated to me. In any case, I tried a much simpler version and it worked. In your code, try again after removing reuse=tf.AUTO_REUSE from create_cell_fw and create_cell_bw. Below is my simple implementation:

def encoding_layer(input_data, num_layers, rnn_size, sequence_length, keep_prob):

    # The concatenated output of each bidirectional layer becomes the input of
    # the next layer; each layer gets its own variable scope.
    output = input_data
    for layer in range(num_layers):
        with tf.variable_scope('encoder_{}'.format(layer),reuse=tf.AUTO_REUSE):

            cell_fw = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.truncated_normal_initializer(-0.1, 0.1, seed=2))
            cell_fw = tf.contrib.rnn.DropoutWrapper(cell_fw, input_keep_prob = keep_prob)

            cell_bw = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.truncated_normal_initializer(-0.1, 0.1, seed=2))
            cell_bw = tf.contrib.rnn.DropoutWrapper(cell_bw, input_keep_prob = keep_prob)

            outputs, states = tf.nn.bidirectional_dynamic_rnn(cell_fw, 
                                                              cell_bw, 
                                                              output,
                                                              sequence_length,
                                                              dtype=tf.float32)
            # Concatenate forward and backward outputs along the feature axis;
            # this feeds the next layer (or is returned after the last layer).
            output = tf.concat(outputs, 2)
            state = tf.concat(states, 2)

    return output, state
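
A minimal usage sketch for this version (again an assumption, not part of the answer; it reuses the hypothetical placeholders from the earlier sketch and mirrors the embedding step of the first function):

# Embed the integer-encoded source sequence before feeding the encoder.
enc_embed = tf.contrib.layers.embed_sequence(source_ints, vocab_size=5000, embed_dim=64)

enc_output, enc_state = encoding_layer(enc_embed,
                                       num_layers=2,
                                       rnn_size=128,
                                       sequence_length=source_lengths,
                                       keep_prob=keep_prob)

# As with the first function, each time step ends up with 2 * rnn_size features.
print(enc_output.get_shape())   # (?, ?, 256)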

This does work. I tried something similar yesterday and it worked fine, but it returns results similar to my first function, the one that uses MultiRNNCell. Any idea what the difference is between using MultiRNNCell and chaining several bidirectional dynamic RNNs together? And please do not hesitate to say so if you think this should be asked as a separate Stack Overflow question.

No problem @mousaalsulaimi, I believe that post should clear up all your doubts; it is explained very well there.
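
On the difference the comment asks about: stacking a bidirectional_dynamic_rnn per layer (as in the answer) lets each layer's forward and backward cells read the concatenated outputs of the layer below, while wrapping MultiRNNCell stacks in a single bidirectional_dynamic_rnn runs one purely forward stack and one purely backward stack whose outputs only meet at the final concatenation. In TensorFlow 1.x the per-layer form is also available as tf.contrib.rnn.stack_bidirectional_dynamic_rnn; a rough sketch, reusing the assumed enc_embed and source_lengths from the earlier sketches:

# Per-layer stacking: layer k+1's forward and backward cells both consume the
# concatenated forward/backward outputs of layer k.
cells_fw = [tf.contrib.rnn.LSTMCell(128) for _ in range(2)]
cells_bw = [tf.contrib.rnn.LSTMCell(128) for _ in range(2)]
outputs, states_fw, states_bw = tf.contrib.rnn.stack_bidirectional_dynamic_rnn(
    cells_fw, cells_bw, enc_embed,
    sequence_length=source_lengths, dtype=tf.float32)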