Outputs and states of MultiRNNCell in TensorFlow (Python)


I have a stacked MultiRNNCell defined as follows:

batch_size = 256
rnn_size = 512
keep_prob = 0.5

lstm_1 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_1 = tf.nn.rnn_cell.DropoutWrapper(lstm_1, output_keep_prob=keep_prob)

lstm_2 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_2 = tf.nn.rnn_cell.DropoutWrapper(lstm_2, output_keep_prob=keep_prob)

stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_dropout_1, lstm_dropout_2])

rnn_inputs = tf.nn.embedding_lookup(embedding_matrix, ques_placeholder)

init_state = stacked_lstm.zero_state(batch_size, tf.float32)
rnn_outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm, rnn_inputs, initial_state=init_state)
In this code there are two RNN layers. I only want to process the final state of this dynamic RNN, and I expect the state to be a 2-D tensor of shape [batch_size, rnn_size * 2].

Instead, final_state has the 4-D shape [2, 2, 256, 512].


Can someone explain why the state ends up like this? Also, how do I process this tensor so that it can be passed through a fully connected layer?
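For context, the 4-D view reported above can be reproduced without TensorFlow: the state of a stacked cell is a tuple of per-layer state tuples, and viewing it as a single array yields [num_layers, c/h, batch_size, rnn_size]. A minimal sketch, where LSTMStateTuple is a hypothetical stand-in for tf.nn.rnn_cell.LSTMStateTuple:

```python
from collections import namedtuple

import numpy as np

# Stand-in for tf.nn.rnn_cell.LSTMStateTuple, used only to mimic
# the nesting of the state returned by tf.nn.dynamic_rnn.
LSTMStateTuple = namedtuple("LSTMStateTuple", ["c", "h"])

batch_size, rnn_size = 256, 512

# One LSTMStateTuple per layer of the MultiRNNCell; c and h each
# have shape (batch_size, rnn_size).
layer_state = LSTMStateTuple(
    c=np.zeros((batch_size, rnn_size), dtype=np.float32),
    h=np.zeros((batch_size, rnn_size), dtype=np.float32),
)
final_state = (layer_state, layer_state)  # two stacked layers

# Stacking the nested tuples into one array shows the 4-D shape:
# (num_layers, c/h, batch_size, rnn_size)
print(np.array(final_state).shape)  # (2, 2, 256, 512)
```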

I cannot reproduce the [2, 2, 256, 512] shape, but with this code:

rnn_size = 512
batch_size = 256
time_size = 5
input_size = 2
keep_prob = 0.5

lstm_1 = tf.nn.rnn_cell.LSTMCell(rnn_size)
lstm_dropout_1 = tf.nn.rnn_cell.DropoutWrapper(lstm_1, output_keep_prob=keep_prob)

lstm_2 = tf.nn.rnn_cell.LSTMCell(rnn_size)

stacked_lstm = tf.nn.rnn_cell.MultiRNNCell([lstm_dropout_1, lstm_2])

rnn_inputs = tf.placeholder(tf.float32, shape=[None, time_size, input_size])
# Shape of the rnn_inputs is (batch_size, time_size, input_size)

init_state = stacked_lstm.zero_state(batch_size, tf.float32)
rnn_outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm, rnn_inputs, initial_state=init_state)
print(rnn_outputs)
print(final_state)
I get the correct shape for rnn_outputs: (batch_size, time_size, rnn_size).

final_state is indeed a pair of LSTMStateTuples (one for each of the two cells, lstm_dropout_1 and lstm_2):


Tensor("rnn/transpose_1:0", shape=(256, 5, 512), dtype=float32)
(LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_3:0' shape=(256, 512) dtype=float32>, h=<tf.Tensor 'rnn/while/Exit_4:0' shape=(256, 512) dtype=float32>),
 LSTMStateTuple(c=<tf.Tensor 'rnn/while/Exit_5:0' shape=(256, 512) dtype=float32>, h=<tf.Tensor 'rnn/while/Exit_6:0' shape=(256, 512) dtype=float32>))

This matches the dynamic_rnn documentation:

  # 'outputs' is a tensor of shape [batch_size, max_time, 256]
  # 'state' is an N-tuple where N is the number of LSTMCells, containing a
  # tf.contrib.rnn.LSTMStateTuple for each cell

Not enough reputation to comment, so adding it here: the final state is indexed as [depth, LSTMStateTuple's c and h, batch_size, rnn_size].

Thanks a lot for your answer! The only potential error could be the input to dynamic_rnn, i.e. rnn_inputs. I have a sequence of words that I want to embed before passing it to dynamic_rnn. Based on your code, rnn_inputs needs to have the dimensions [batch_size, max_sentence_length, embedding_length], which is already the case, so I'm not sure what could go wrong.
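To get the [batch_size, rnn_size * 2] tensor the question asks for, one option (a sketch, not taken from the answer above) is to concatenate the h members of the two LSTMStateTuples along the feature axis; in TF 1.x that would be tf.concat([final_state[0].h, final_state[1].h], axis=1), whose result can then feed a fully connected layer. The shape arithmetic, mimicked in NumPy with a hypothetical stand-in for the state tuple:

```python
from collections import namedtuple

import numpy as np

# Stand-in for tf.nn.rnn_cell.LSTMStateTuple.
LSTMStateTuple = namedtuple("LSTMStateTuple", ["c", "h"])

batch_size, rnn_size = 256, 512
final_state = tuple(
    LSTMStateTuple(c=np.zeros((batch_size, rnn_size)),
                   h=np.zeros((batch_size, rnn_size)))
    for _ in range(2)  # one state tuple per stacked layer
)

# Concatenate the hidden states of both layers; the result has the
# [batch_size, rnn_size * 2] shape expected by a dense layer.
fc_input = np.concatenate([final_state[0].h, final_state[1].h], axis=1)
print(fc_input.shape)  # (256, 1024)
```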