Tensorflow dynamic_rnn propagates NaNs when batch size is greater than 1

Hoping someone can help me understand an issue I have been having using the LSTM with dynamic_rnn in Tensorflow. As per this MWE, when I have a batch size of 1 and the sequence is incomplete (I pad the short tensor with nans rather than zeros to highlight the issue), everything operates as normal: the nans in the short sequence are ignored as expected.

import tensorflow as tf
import numpy as np

batch_1 = np.random.randn(1, 10, 8)
batch_2 = np.random.randn(1, 10, 8)

batch_1[0, 6:] = np.nan # pad the single sample in batch_1 with nans after step 6, making it a short sequence of length 6

seq_lengths_batch_1 = [6]
seq_lengths_batch_2 = [10]

tf.reset_default_graph()

input_vals = tf.placeholder(shape=[1, 10, 8], dtype=tf.float32)
lengths = tf.placeholder(shape=[1], dtype=tf.int32)

cell = tf.nn.rnn_cell.LSTMCell(num_units=5)
outputs, states  = tf.nn.dynamic_rnn(cell=cell, dtype=tf.float32, sequence_length=lengths, inputs=input_vals)
last_relevant_value = states.h
fake_loss = tf.reduce_mean(last_relevant_value)
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(fake_loss)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
_, fl, lrv = sess.run([optimizer, fake_loss, last_relevant_value], feed_dict={input_vals: batch_1, lengths: seq_lengths_batch_1})
print(fl, lrv)
_, fl, lrv = sess.run([optimizer, fake_loss, last_relevant_value], feed_dict={input_vals: batch_2, lengths: seq_lengths_batch_2})
print(fl, lrv)

sess.close()
which outputs correctly populated values of the following sort:

0.00659429 [[ 0.11608966  0.08498846 -0.02892204 -0.01945034 -0.1197343 ]]
-0.080244 [[-0.03018401 -0.18946587 -0.19128899 -0.10388547  0.11360413]]
However, when I increase the batch size to, say, 3, the first batch executes correctly, but then the second batch causes nans to start propagating:

import tensorflow as tf
import numpy as np

batch_1 = np.random.randn(3, 10, 8)
batch_2 = np.random.randn(3, 10, 8)

batch_1[1, 6:] = np.nan # second sample in batch 1 has length 6
batch_2[0, 8:] = np.nan # first sample in batch 2 has length 8

seq_lengths_batch_1 = [10, 6, 10]
seq_lengths_batch_2 = [8, 10, 10]

tf.reset_default_graph()

input_vals = tf.placeholder(shape=[3, 10, 8], dtype=tf.float32)
lengths = tf.placeholder(shape=[3], dtype=tf.int32)

cell = tf.nn.rnn_cell.LSTMCell(num_units=5)
outputs, states  = tf.nn.dynamic_rnn(cell=cell, dtype=tf.float32, sequence_length=lengths, inputs=input_vals)
last_relevant_value = states.h
fake_loss = tf.reduce_mean(last_relevant_value)
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(fake_loss)

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
_, fl, lrv = sess.run([optimizer, fake_loss, last_relevant_value], feed_dict={input_vals: batch_1, lengths: seq_lengths_batch_1})
print(fl, lrv)
_, fl, lrv = sess.run([optimizer, fake_loss, last_relevant_value], feed_dict={input_vals: batch_2, lengths: seq_lengths_batch_2})
print(fl, lrv)

sess.close()
which gives

0.0533635 [[ 0.33622459 -0.0284576   0.11914439  0.14402215 -0.20783389]
 [ 0.20805927  0.17591488 -0.24977767 -0.03432769  0.2944448 ]
 [-0.04508523  0.11878576  0.07287208  0.14114542 -0.24467923]]
nan [[ nan  nan  nan  nan  nan]
 [ nan  nan  nan  nan  nan]
 [ nan  nan  nan  nan  nan]]
I find this behaviour very strange, since I expected all values beyond the sequence lengths to be ignored, as happens with a batch size of 1, but it does not work with a batch size of 2 or more.

Obviously, the nans are not propagated if I use 0 as the padding value, but this does not inspire confidence that dynamic_rnn is functioning as I expect.
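For example, something along these lines (a minimal sketch of the zero-padding variant, not part of my actual pipeline) avoids the propagation simply by zeroing out the padding before feeding:

import numpy as np

batch_2 = np.random.randn(3, 10, 8)
batch_2[0, 8:] = np.nan                  # first sample only has 8 valid steps
batch_2_zeroed = np.nan_to_num(batch_2)  # nan -> 0.0, valid values unchanged

# feeding batch_2_zeroed instead of batch_2 in the sess.run() calls above
# no longer produces nans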


I should also mention that the problem does not occur if I remove the optimization step, so now I am properly confused: after a day of trying many different permutations, I cannot see why the batch size would make any difference here. I have not traced it down to the exact op, but I believe that is what is happening.

Why are values beyond the sequence length not ignored? They are ignored in the sense that, in certain operations, they are multiplied by 0 (they are masked out). Mathematically, the result is then always zero, so they should have no effect. Unfortunately, nan * 0 = nan, so if you supply nan values in your examples, they propagate. You may wonder why TensorFlow does not skip those entries entirely rather than just masking them. The reason is performance on modern hardware: running an operation over one large, regular shape containing a bunch of zeros is much easier than over several small shapes (which is what you would get by decomposing a ragged shape).

Why does it only happen with the second batch? In the first batch, the loss and last hidden state are computed from the original variable values, and these are fine. But because you also run the optimizer update inside that same sess.run(), the variables are updated in the first call and become nan. In the second call, the nans then spread from the variables into the loss and hidden state.
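You can see this directly by also fetching the gradients on the first call. The following sketch assumes the graph, session and batches from your second script (illustrative only):

# build gradient ops for the trainable variables of the LSTM
grads = tf.gradients(fake_loss, tf.trainable_variables())
g_vals, loss_val = sess.run([grads, fake_loss],
        feed_dict={input_vals: batch_1, lengths: seq_lengths_batch_1})
print(loss_val)                             # still finite on the first call
print([np.isnan(g).any() for g in g_vals])  # but the gradients already contain nans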

How can I be confident that the values beyond the sequence length are really masked? I modified your example to reproduce the issue, but also made it deterministic.

import tensorflow as tf
import numpy as np

batch_1 = np.ones((3, 10, 2))

batch_1[1, 7:] = np.nan

seq_lengths_batch_1 = [10, 7, 10]

tf.reset_default_graph()

input_vals = tf.placeholder(shape=[3, 10, 2], dtype=tf.float32)
lengths = tf.placeholder(shape=[3], dtype=tf.int32)

cell = tf.nn.rnn_cell.LSTMCell(num_units=3, initializer=tf.constant_initializer(1.0))
init_state = tf.nn.rnn_cell.LSTMStateTuple(*[tf.ones([3, c]) for c in cell.state_size])
outputs, states  = tf.nn.dynamic_rnn(cell=cell, dtype=tf.float32, sequence_length=lengths, inputs=input_vals,
        initial_state=init_state)
last_relevant_value = states.h
fake_loss = tf.reduce_mean(last_relevant_value)
optimizer = tf.train.AdamOptimizer(learning_rate=0.1).minimize(fake_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1):
        _, fl, lrv = sess.run([optimizer, fake_loss, last_relevant_value],
                feed_dict={input_vals: batch_1, lengths: seq_lengths_batch_1})
        print "VARIABLES:", sess.run(tf.trainable_variables())
        print "LOSS and LAST HIDDEN:", fl, lrv

If you replace the np.nan in batch_1[1, 7:] = np.nan with any number (try e.g. -1M, 1M, 0), you will see that the resulting values are the same. You can also run the loop for more iterations. As a further sanity check, if you set seq_lengths_batch_1 to something "wrong", e.g. [10, 8, 10], you can see that the value you used in batch_1[1, 7:] = np.nan now does affect the output.
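In other words, the sanity checks amount to small edits of the deterministic script above, e.g. (sketch):

batch_1[1, 7:] = 1e6               # any value here gives the same output as the nan run
seq_lengths_batch_1 = [10, 7, 10]  # with the correct lengths the padding is fully masked
# seq_lengths_batch_1 = [10, 8, 10]  # a "wrong" length lets the padded value leak into the result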

Thank you very much for taking the time to look at this and for such a detailed explanation. It makes much more sense to me now; the deterministic example using LSTMStateTuple was especially helpful.