Python: TensorFlow loss is nan when training an RNN


When I run an RNN with a single GRU cell, I get the following:

Traceback (most recent call last):
  File "language_model_test.py", line 15, in <module>
    test_model()
  File "language_model_test.py", line 12, in test_model
    model.train(random_data, s)
  File "/home/language_model/language_model.py", line 120, in train
    train_pp = self._run_epoch(data, sess, inputs, rnn_ouputs, loss, trainOp, verbose)
  File "/home/language_model/language_model.py", line 92, in _run_epoch
    loss, _= sess.run([loss, trainOp], feed_dict=feed)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 952, in _run
    fetch_handler = _FetchHandler(self._graph, fetches, feed_dict_string)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 408, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 230, in for_fetch
    return _ListFetchMapper(fetch)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 337, in __init__
    self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 238, in for_fetch
    return _ElementFetchMapper(fetches, contraction_fn)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 271, in __init__
    % (fetch, type(fetch), str(e)))
TypeError: Fetch argument nan has invalid type <type 'numpy.float32'>, must be a string or Tensor. (Can not convert a float32 into a Tensor or Operation.)
This is `_run_epoch`; any loss it computes comes back as `nan`:

def _run_epoch(self, data, session, inputs, rnn_ouputs, loss, trainOp, verbose=10):
    with session.as_default() as sess:
        total_steps = sum(1 for x in data_iterator(data, self._batch_size, self._max_steps))
        train_loss = []
        for step, (x,y, l) in enumerate(data_iterator(data, self._batch_size, self._max_steps)):
            print "step - {0}".format(step)
            feed = {
                self.input_placeholder: x,
                self.label_placeholder: y,
                self.sequence_length: l,
                self._dropout_placeholder: self._dropout,
            }
            loss, _= sess.run([loss, trainOp], feed_dict=feed)
            print "loss - {0}".format(loss)
            train_loss.append(loss)
            if verbose and step % verbose == 0:
                sys.stdout.write('\r{} / {} : pp = {}'. format(step, total_steps, np.exp(np.mean(train_loss))))
                sys.stdout.flush()
            if verbose:
                sys.stdout.write('\r')

        return np.exp(np.mean(train_loss))
This happens when I test the code with

random_data = np.random.normal(0, 100, size=[42068, 46])

which is meant to simulate word IDs as input. The rest of my code can be found below.

Edit: here is how I run the test suite when this problem occurs:

def test_model():
    model = Language_model(vocab=range(0,101))
    s = tf.Session()
    #1 more than step size to accommodate for the <eos> token at the end
    random_data = np.random.normal(0, 100, size=[42068, 46])
    # file = "./data/ptb.test.txt"
    print "Fitting started"
    model.train(random_data, s)

if __name__ == "__main__":
    test_model() 

If I feed `random_data` into other language models, they also output `nan` as the cost. My understanding is that TensorFlow should take the numeric values passed in through the feed dict and retrieve the embedding vectors corresponding to those IDs, so I don't understand why `random_data` makes the other models produce `nan` as well.
There are a couple of problems with the code above.

Let's start with this line:
random_data = np.random.normal(0, 100, size=[42068, 46])
`np.random.normal(...)` does not generate discrete ID values; it generates floats. Let's try the example above, but with a size small enough to inspect:

>>> np.random.normal(0, 100, size=[5])
array([-53.12407229,  39.57335574, -98.25406749,  90.81471139, -41.05069646])
The model cannot learn from these, because they are the inputs to an embedding lookup and we are getting negative, floating-point values where non-negative integer IDs are required.
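To see why this matters: an embedding lookup is essentially row indexing into a table, and rows can only be selected by non-negative integers. A small NumPy sketch (the table shape here is made up for illustration):

```python
import numpy as np

embeddings = np.random.randn(101, 8)   # vocab_size x embed_dim lookup table
ids = np.array([27, 47, 33])           # non-negative integer word IDs
vectors = embeddings[ids]              # integer IDs select rows: works
assert vectors.shape == (3, 8)

float_ids = np.random.normal(0, 100, size=[3])
try:
    embeddings[float_ids]              # float "IDs" cannot index rows
    indexed_ok = True
except IndexError:
    indexed_ok = False                 # NumPy rejects non-integer indices
```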

What is actually needed is the following:

random_data = np.random.randint(0, 101, size=...)
Check the output we get:

>>> np.random.randint(0, 100, size=[5])
array([27, 47, 33, 12, 24])
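Putting that together, here is a sketch of generating valid word-ID data for the shapes in the question. It uses NumPy's newer `default_rng` API rather than the `np.random.randint` call above; either works, as long as the values are integers inside the vocabulary range:

```python
import numpy as np

vocab_size = 101                        # matches vocab=range(0, 101) in the question
rng = np.random.default_rng(0)          # seeded for reproducibility
# same shape as in the question; values are valid word IDs in [0, vocab_size)
random_data = rng.integers(0, vocab_size, size=(42068, 46))

assert random_data.dtype.kind == 'i'    # integer dtype, not float
assert random_data.min() >= 0
assert random_data.max() < vocab_size
```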
Next, a line in the code below is creating a subtle problem:

def _run_epoch(self, data, session, inputs, rnn_ouputs, loss, train, verbose=10):
    with session.as_default() as sess:
        total_steps = sum(1 for x in data_iterator(data, self._batch_size, self._max_steps))
        train_loss = []
        for step, (x,y, l) in enumerate(data_iterator(data, self._batch_size, self._max_steps)):
            print "step - {0}".format(step)
            feed = {
                self.input_placeholder: x,
                self.label_placeholder: y,
                self.sequence_length: l,
                self._dropout_placeholder: self._dropout,
            }
            loss, _= sess.run([loss, train], feed_dict=feed)
            print "loss - {0}".format(loss)
            train_loss.append(loss)
            if verbose and step % verbose == 0:
                sys.stdout.write('\r{} / {} : pp = {}'. format(step, total_steps, np.exp(np.mean(train_loss))))
                sys.stdout.flush()
            if verbose:
                sys.stdout.write('\r')

        return np.exp(np.mean(train_loss))
`loss` is both a function parameter and a local variable: after the first call to `sess.run`, the name `loss` is rebound to a `numpy.float32`, so on the next iteration it is no longer a tensor and cannot be fetched in the session. That is exactly the `TypeError` in the traceback. The fix is to bind the fetched value to a different name (e.g. `loss_val`):
def _run_epoch(self, data, session, inputs, rnn_ouputs, loss, train, verbose=10):
    with session.as_default() as sess:
        total_steps = sum(1 for x in data_iterator(data, self._batch_size, self._max_steps))
        train_loss = []
        for step, (x, y, l) in enumerate(data_iterator(data, self._batch_size, self._max_steps)):
            print "step - {0}".format(step)
            feed = {
                self.input_placeholder: x,
                self.label_placeholder: y,
                self.sequence_length: l,
                self._dropout_placeholder: self._dropout,
            }
            # fetch into a new name so `loss` keeps referring to the tensor
            loss_val, _ = sess.run([loss, train], feed_dict=feed)
            print "loss - {0}".format(loss_val)
            train_loss.append(loss_val)
            if verbose and step % verbose == 0:
                sys.stdout.write('\r{} / {} : pp = {}'.format(step, total_steps, np.exp(np.mean(train_loss))))
                sys.stdout.flush()
            if verbose:
                sys.stdout.write('\r')

        return np.exp(np.mean(train_loss))
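The rebinding failure can be reproduced without TensorFlow at all. In this sketch, `fake_sess_run` stands in for `sess.run`: it accepts only "tensors" (modeled here as strings) and returns a numeric value for each fetch, just as the real session does. Rebinding `loss` to the returned float makes the second iteration fail with the same kind of `TypeError` as the traceback:

```python
def fake_sess_run(fetches):
    # stand-in for sess.run: only "tensors" (strings here) may be fetched
    for fetch in fetches:
        if not isinstance(fetch, str):
            raise TypeError(
                "Fetch argument %r has invalid type %s, must be a string or Tensor."
                % (fetch, type(fetch)))
    return [0.5 for _ in fetches]       # a numeric result per fetch

loss = "loss_tensor"  # plays the role of the loss tensor passed into _run_epoch
caught = None
for step in range(2):
    try:
        loss, = fake_sess_run([loss])   # step 0 rebinds `loss` to a float
    except TypeError as exc:
        caught = exc                    # step 1 fails, mirroring the traceback
        break
```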