TensorFlow RNN slowdown phenomenon

I have found a peculiar property of TensorFlow's LSTM cell (not limited to LSTM, but this is the only case I checked) which, as far as I know, has not been reported. I am not sure whether it is really intended behaviour, so I am leaving this post here. Below is a toy code that reproduces the problem:

import tensorflow as tf
import numpy as np
import time

def network(input_list):
    input,init_hidden_c,init_hidden_m = input_list
    cell = tf.nn.rnn_cell.BasicLSTMCell(256, state_is_tuple=True)
    init_hidden = tf.nn.rnn_cell.LSTMStateTuple(init_hidden_c, init_hidden_m)
    states, hidden_cm = tf.nn.dynamic_rnn(cell, input, dtype=tf.float32, initial_state=init_hidden)
    net = [v for v in tf.trainable_variables()]
    return states, hidden_cm, net

def action(x, h_c, h_m):
    t0 = time.time()
    # NOTE: the [:,-1:,:] slice in the fetches below is the part in question
    outputs, output_h = sess.run([rnn_states[:,-1:,:], rnn_hidden_cm], feed_dict={
        rnn_input:x,
        rnn_init_hidden_c: h_c,
        rnn_init_hidden_m: h_m
    })
    dt = time.time() - t0
    return outputs, output_h, dt

rnn_input = tf.placeholder("float", [None, None, 512])
rnn_init_hidden_c = tf.placeholder("float", [None,256])
rnn_init_hidden_m = tf.placeholder("float", [None,256])
rnn_input_list = [rnn_input, rnn_init_hidden_c, rnn_init_hidden_m]
rnn_states, rnn_hidden_cm, rnn_net = network(rnn_input_list)

feed_input = np.random.uniform(low=-1.,high=1.,size=(1,1,512))
feed_init_hidden_c = np.zeros(shape=(1,256))
feed_init_hidden_m = np.zeros(shape=(1,256))

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(10000):
    _, output_hidden_cm, deltat = action(feed_input, feed_init_hidden_c, feed_init_hidden_m)
    if i % 10 == 0:
        print('Running time: ' + str(deltat))
    (feed_init_hidden_c, feed_init_hidden_m) = output_hidden_cm
    feed_input = np.random.uniform(low=-1.,high=1.,size=(1,1,512))
[Not important] What this code does is generate one output from the 'network()' function, which contains an LSTM; the time dimension of the input is 1, so that of the output is 1 as well, and the initial state is pulled out and fed back in at every step of the run.

[Important] Look at the 'sess.run()' part. In my real code I happened, for some reason, to put [:,-1:,:] on 'rnn_states'. What happens then is that the time each 'sess.run()' takes keeps increasing. From some checks of my own, I found that this slowdown stems from the [:,-1:,:]; I only wanted the output of the last time step. If I instead run 'outputs, output_h = sess.run([rnn_states, rnn_hidden_cm], feed_dict=...)' without the [:,-1:,:] and do 'last_output = outputs[:,-1:,:]' after the 'sess.run()', the slowdown does not occur.

I do not know why this exponential growth in running time happens when [:,-1:,:] is part of the run. Is this an undocumented but particularly slow characteristic of TensorFlow (perhaps it keeps adding more nodes to the graph by itself?)?
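One way to check that guess (a sketch added for illustration, not part of the original post) is to count the operations in the default graph before and after a batch of calls; if the slice inside 'sess.run()' is the culprit, the count grows with every call:

# Hypothetical check, not from the original post: if this number keeps growing,
# every call to action() is adding new ops (a StridedSlice for [:,-1:,:]) to the graph.
num_ops_before = len(tf.get_default_graph().get_operations())
for _ in range(100):
    action(feed_input, feed_init_hidden_c, feed_init_hidden_m)
num_ops_after = len(tf.get_default_graph().get_operations())
print('ops before: ' + str(num_ops_before) + ', ops after: ' + str(num_ops_after))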
Thanks, and I hope this post keeps other users from running into this mistake.

As mentioned above, the following version, which does not do the slicing inside 'sess.run()', works fine for this case:

def action(x, h_c, h_m):
    t0 = time.time()
    outputs, output_h = sess.run([rnn_states, rnn_hidden_cm], feed_dict={
        rnn_input:x,
        rnn_init_hidden_c: h_c,
        rnn_init_hidden_m: h_m
    })
    outputs = outputs[:,-1:,:]  # slice in ordinary Python, after the run
    dt = time.time() - t0
    return outputs, output_h, dt
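
An alternative (a sketch added for illustration, not from the original post) is to build the slice into the graph once, at construction time, and fetch that tensor instead; 'rnn_last_state' below is a name introduced here, created a single time so that repeated runs do not add new ops:

# Hypothetical variant: the slice is a graph op created once, next to the other tensors.
rnn_last_state = rnn_states[:, -1:, :]

def action(x, h_c, h_m):
    t0 = time.time()
    outputs, output_h = sess.run([rnn_last_state, rnn_hidden_cm], feed_dict={
        rnn_input: x,
        rnn_init_hidden_c: h_c,
        rnn_init_hidden_m: h_m
    })
    dt = time.time() - t0
    return outputs, output_h, dt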


I ran into the same problem of TensorFlow getting slower with every run, and found this question while debugging. Here is a short description of my situation and how I solved it, for future reference. Hopefully it can point someone in the right direction and save them some time.

In my case the problem was mainly that I was not using feed_dict to supply the network state when executing sess.run(). Instead, I redeclared outputs, final_state and prediction on every iteration. The answer at the time made me realize how dumb that was... I kept creating new graph nodes in every iteration, making it slower and slower. The problematic code looked like this:

# defining the network
lstm_layer = rnn.BasicLSTMCell(num_units, forget_bias=1)
outputs, final_state = rnn.static_rnn(lstm_layer, input, initial_state=rnn_state, dtype='float32')
prediction = tf.nn.softmax(tf.matmul(outputs[-1], out_weights)+out_bias)

for input_data in data_seq:
    # redeclaring, stupid stupid...
    outputs, final_state = rnn.static_rnn(lstm_layer, input, initial_state=rnn_state, dtype='float32')
    prediction = tf.nn.softmax(tf.matmul(outputs[-1], out_weights)+out_bias)
    p, rnn_state = sess.run((prediction, final_state), feed_dict={x: input_data})

The solution, of course, was to declare the nodes only once at the beginning and to supply new data with feed_dict. The code went from being half-slow (> 15 ms at the start) and getting slower with every iteration, to executing each iteration in around 1 ms. My new code looks like this:

out_weights = tf.Variable(tf.random_normal([num_units, n_classes]), name="out_weights")
out_bias = tf.Variable(tf.random_normal([n_classes]), name="out_bias")

# placeholder for the network state
state_placeholder = tf.placeholder(tf.float32, [2, 1, num_units])
rnn_state = tf.nn.rnn_cell.LSTMStateTuple(state_placeholder[0], state_placeholder[1])

x = tf.placeholder('float', [None, 1, n_input])
input = tf.unstack(x, 1, 1)

# defining the network
lstm_layer = rnn.BasicLSTMCell(num_units, forget_bias=1)
outputs, final_state = rnn.static_rnn(lstm_layer, input, initial_state=rnn_state, dtype='float32')

prediction = tf.nn.softmax(tf.matmul(outputs[-1], out_weights)+out_bias)

# actual network state, which we input with feed_dict
_rnn_state = tf.nn.rnn_cell.LSTMStateTuple(np.zeros((1, num_units), dtype='float32'), np.zeros((1, num_units), dtype='float32'))

it = 0
for input_data in data_seq:
    encl_input = [[input_data]]
    p, _rnn_state = sess.run((prediction, final_state), feed_dict={x: encl_input, rnn_state: _rnn_state})
    print("{} - {}".format(it, p))
    it += 1

Moving the declarations out of the for loop also solved sdrop2002's problem of doing the slice outputs[-1] inside sess.run() within the for loop.
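
As a general guard against this class of bug (a sketch added for illustration, not part of the original answer), the graph can be frozen once construction is finished; any later attempt to create ops, whether a slice in the fetches or a static_rnn redeclared inside the loop, then raises an error instead of silently slowing everything down:

# Hypothetical guard: freeze the graph after all nodes (including the initializer op) exist.
# From here on, creating any new op raises a RuntimeError instead of growing the graph.
sess.run(tf.global_variables_initializer())
sess.graph.finalize()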


You just need to move that slice outside of the for loop.

@Aaron: I guess the 'for' loop is not the point. 'action()' returns the output at the last time step of 'outputs', and what I posted is whether the command that slices that last output out of 'outputs' can be done inside 'sess.run()' (which turned out to be problematic) or not.