Python: model inference runtime increases after repeated inference

Tags: python, performance, tensorflow

I am writing a TensorFlow project in which I manually edit each weight and bias, so I set up the weights and biases with dictionaries rather than using tf.layers.dense and letting TensorFlow take care of updating the weights. (This is the cleanest approach I could come up with, though it may not be ideal.)

I feed the same data to a fixed model on every iteration, yet the runtime keeps increasing over the course of the program's execution.

I stripped almost everything out of the code so I could isolate the problem, but I still cannot work out what is causing the runtime growth:

---Games took   2.6591222286224365 seconds ---
---Games took   3.290001153945923 seconds ---
---Games took   4.250034332275391 seconds ---
---Games took   5.190149307250977 seconds ---
Edit: I have managed to reduce the runtime by using a placeholder, which avoids adding extra nodes to the graph, but the runtime still grows, just at a slower rate. I would like to eliminate this runtime growth entirely. (It goes from 0.1 s to over 1 s after a while.)

Here is my full code:

import numpy as np
import tensorflow as tf
import time

n_inputs = 9
n_class = 9

n_hidden_1 = 20

population_size = 10
weights = []
biases = []
game_steps = 20 #so we can see performance loss faster

# 2 games per individual
games_in_generation = population_size/2


def generate_initial_population(my_population_size):
    my_weights = []
    my_biases = []

    for key in range(my_population_size):
        layer_weights = {
            'h1': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_hidden_1, n_class], seed=key))
        }
        layer_biases = {
            'b1': tf.Variable(tf.truncated_normal([n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_class], seed=key))
        }
        my_weights.append(layer_weights)
        my_biases.append(layer_biases)
    return my_weights, my_biases


weights, biases = generate_initial_population(population_size)
data = tf.placeholder(dtype=tf.float32) #will add shape later

def model(x):
    out_layer = tf.add(tf.matmul([biases[1]['b1']], weights[1]['out']),  biases[1]['out'])
    return out_layer


def play_game():
    model_input = [0] * 9
    model_out = model(data)

    for game_step in range(game_steps):
        move = sess.run(model_out, feed_dict={data: model_input})[0]


sess = tf.Session()
sess.run(tf.global_variables_initializer())
while True:
    start_time = time.time()
    for _ in range(int(games_in_generation)):
        play_game()
    print("---Games took   %s seconds ---" % (time.time() - start_time))

There are a few strange things in this code, so it is hard to give you an answer that really addresses the underlying problem. I can, however, address the runtime growth you are observing. Below, I have modified your code so that the input pattern generation and the call to model are hoisted out of the game loop:

import numpy as np
import tensorflow as tf
import time

n_inputs = 9
n_class = 9

n_hidden_1 = 20

population_size = 10
weights = []
biases = []
game_steps = 20 #so we can see performance loss faster

# 2 games per individual
games_in_generation = population_size/2


def generate_initial_population(my_population_size):
    my_weights = []
    my_biases = []

    for key in range(my_population_size):
        layer_weights = {
            'h1': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_hidden_1, n_class], seed=key))
        }
        layer_biases = {
            'b1': tf.Variable(tf.truncated_normal([n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_class], seed=key))
        }
        my_weights.append(layer_weights)
        my_biases.append(layer_biases)
    return my_weights, my_biases


weights, biases = generate_initial_population(population_size)


def model(x):
    out_layer = tf.add(tf.matmul([biases[1]['b1']], weights[1]['out']),  biases[1]['out'])
    return out_layer


def play_game():

    # Extract input pattern generation.
    model_input = np.float32([[0]*9])
    model_out = model(model_input)

    for game_step in range(game_steps):
        start_time = time.time()
        move = sess.run(model_out)[0]
        # print("---Step took   %s seconds ---" % (time.time() - start_time))


sess = tf.Session()
sess.run(tf.global_variables_initializer())
for _ in range(5):
    start_time = time.time()
    for _ in range(int(games_in_generation)):
        play_game()
    print("---Games took   %s seconds ---" % (time.time() - start_time))

If you run it, this code will print something like:

---Games took   0.42223644256591797 seconds ---
---Games took   0.13168787956237793 seconds ---
---Games took   0.2452383041381836 seconds ---
---Games took   0.20023465156555176 seconds ---
---Games took   0.19905781745910645 seconds ---
Obviously, this resolves the runtime growth you were observing. It also cut the largest observed runtime by an order of magnitude! The reason is that every time you called model, you were actually creating a whole new set of objects and adding them to the graph. This misunderstanding is common, and it comes from trying to use tensors in imperative Python code as if they were Python variables. I would suggest reviewing how TensorFlow graphs and sessions work before moving on.


It is also worth noting that this is not the right way to pass values into the graph in TensorFlow. I can see that you want to pass different values to your model during each iteration of the game, but you cannot achieve that by passing values to a Python function. You have to create a placeholder in your model graph and load the values you want the model to process through that placeholder. There are several ways to do this, and examples are easy to find. I hope this helps!
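To make that failure mode concrete, here is a framework-agnostic sketch in plain Python (no TensorFlow; the ToyGraph class and its costs are invented for illustration). A graph that only ever accumulates nodes makes every later run of the same computation pay bookkeeping cost over a larger graph, which is exactly the growth pattern in the timings above:

```python
# Toy stand-in for a TF graph: it only ever accumulates nodes.
class ToyGraph:
    def __init__(self):
        self.ops = []

    def add_matmul(self):
        # Like calling model(x): each call registers brand-new ops.
        op_id = len(self.ops)
        self.ops.append('matmul_%d' % op_id)
        return op_id

    def run(self, op_id):
        # Simulated cost: bookkeeping scales with total graph size,
        # so later runs of the *same* computation get slower.
        return len(self.ops)

# Rebuilding inside the loop (the original play_game pattern):
g = ToyGraph()
costs = [g.run(g.add_matmul()) for _ in range(4)]
assert costs == [1, 2, 3, 4]   # every iteration pays more

# Building once outside the loop (the fix above):
g = ToyGraph()
op = g.add_matmul()
costs = [g.run(op) for _ in range(4)]
assert costs == [1, 1, 1, 1]   # cost stays flat
```

The modified code above corresponds to the second half of the sketch: build the op once, then run it many times.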

I am adding another answer because the recent edit to the question changed things substantially. Your runtime is still growing because you are still calling model multiple times within a single sess; you have merely reduced how often you add nodes to the graph. What you need to do is create a new session for each model you build, and close each session when you are done with it. I have modified your code as follows:

import numpy as np
import tensorflow as tf
import time


n_inputs = 9
n_class = 9

n_hidden_1 = 20

population_size = 10
weights = []
biases = []
game_steps = 20 #so we can see performance loss faster

# 2 games per individual
games_in_generation = population_size/2


def generate_initial_population(my_population_size):
    my_weights = []
    my_biases = []

    for key in range(my_population_size):
        layer_weights = {
            'h1': tf.Variable(tf.truncated_normal([n_inputs, n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_hidden_1, n_class], seed=key))
        }
        layer_biases = {
            'b1': tf.Variable(tf.truncated_normal([n_hidden_1], seed=key)),
            'out': tf.Variable(tf.truncated_normal([n_class], seed=key))
        }
        my_weights.append(layer_weights)
        my_biases.append(layer_biases)
    return my_weights, my_biases



def model(x):
    out_layer = tf.add(tf.matmul([biases[1]['b1']], weights[1]['out']),  biases[1]['out'])
    return out_layer


def play_game(sess):

    model_input = [0] * 9

    model_out = model(data)

    for game_step in range(game_steps):

        move = sess.run(model_out, feed_dict={data: model_input})[0]

while True:

    for _ in range(int(games_in_generation)):

        # Reset the graph.
        tf.reset_default_graph()

        weights, biases = generate_initial_population(population_size)
        data = tf.placeholder(dtype=tf.float32) #will add shape later

        # Create session.
        with tf.Session() as sess:

            sess.run(tf.global_variables_initializer())

            start_time = time.time()

            play_game(sess)

            print("---Games took   %s seconds ---" % (time.time() - start_time))

            sess.close()

What I have done here is wrap the call to play_game in a session defined in a with scope, and close that session with sess.close() after each call to play_game. I also reset the default graph. I have run this a few hundred times now, and the runtime does not increase.
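The reset-per-model pattern can be sketched the same way, without TensorFlow. Everything below (the module-level ops registry and the helper names) is an illustrative stand-in for tf.reset_default_graph and the graph-building calls, not real TF API:

```python
# Illustrative stand-in for the default graph: a module-level op registry.
ops = []

def reset_default_graph():
    # Mirrors tf.reset_default_graph(): throw away all accumulated nodes.
    ops.clear()

def build_model():
    # Each build registers some nodes, like
    # generate_initial_population + model in the code above.
    ops.extend(['h1', 'out', 'b1', 'b_out'])
    return len(ops)  # "size" of the graph the session has to manage

sizes = []
for _ in range(5):
    reset_default_graph()        # fresh graph per model...
    sizes.append(build_model())  # ...so graph size never compounds

assert sizes == [4, 4, 4, 4, 4]
```

Without the reset_default_graph() call in the loop, sizes would come out as [4, 8, 12, 16, 20]: each model would inherit all of its predecessors' nodes.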

Thanks! I know I should be passing in a placeholder, but it doesn't affect performance in this case, right?

No, you actually are affecting the runtime of your game steps, because your calls to model are creating tf.Tensors, and that takes time. If you refactor your code to use the feed_dict argument of sess.run with a placeholder for the model input, everything should perform as expected.

Thanks, I removed the placeholder while trimming down the code. The code still had some other issues causing performance loss (though the loss was less severe, so it took longer to notice). Thank you for taking the time; I am still learning, and this helped a lot.

You are very welcome! I wish you success, and I am glad this helped.
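As a closing aside on the "strange things" the first answer mentions: model ignores its argument x entirely; its output depends only on the second individual's variables (biases[1]['b1'], weights[1]['out'], biases[1]['out']). A plain NumPy sketch of what the graph actually computes (the random values here are arbitrary stand-ins for the initialized variables):

```python
import numpy as np

n_hidden_1, n_class = 20, 9
rng = np.random.default_rng(0)

# Stand-ins for the TF variables of individual 1 (values are arbitrary here).
b1 = rng.standard_normal(n_hidden_1).astype(np.float32)                # biases[1]['b1']
w_out = rng.standard_normal((n_hidden_1, n_class)).astype(np.float32)  # weights[1]['out']
b_out = rng.standard_normal(n_class).astype(np.float32)                # biases[1]['out']

def model_np(x):
    # Mirrors tf.add(tf.matmul([b1], w_out), b_out): x is never used,
    # so the "model" returns the same (1, 9) row no matter what is fed in.
    return np.matmul(b1[None, :], w_out) + b_out

out_a = model_np(np.zeros(9, dtype=np.float32))
out_b = model_np(np.ones(9, dtype=np.float32))
assert out_a.shape == (1, n_class)
assert np.array_equal(out_a, out_b)  # the input has no effect on the output
```

Feeding different inputs through the placeholder therefore cannot change the output until model is rewritten to actually use x (presumably via the otherwise-unused 'h1' weights).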