Python: Tensorflow session is None if object is saved to a global variable and retrieved


Scenario:

I am running into an issue with a Flask application that tries to load a Tensorflow model from disk into memory (the global variable generator) inside load_model() when the application first starts.

When a user hits the /test endpoint, the preloaded model generator is used to generate some data.
Problem:

Calling generator.run() works fine inside load_model(). However, the exact same generator.run() code throws an error when it is called inside test().

I managed to narrow the problem down to the tf.compat.v1.Session() variable sess.

When called from load_model(), sess is a valid Session object. But when called from test(), sess is None.

Does anyone know how to fix this? Ideally, the large model should be loaded into memory only once (it takes about 10 seconds to load) and then used each time the endpoint is queried.

Thanks, everyone!

import pickle

import numpy as np
import dnnlib.tflib as tflib   # provides init_tf(); see dnnlib/tflib/network.py in the traceback below
from flask import Flask

app = Flask(__name__)
generator = None

# EVERYTHING WORKS WELL HERE
def load_model():
    tflib.init_tf()

    # Load model from disk
    with open(model_path, "rb") as f:
        _G, _D, Gs = pickle.load(f, encoding='latin1')

    # Update global variable generator
    global generator
    generator = Gs

    # Run model
    latent = np.random.randn(1, generator.input_shape[1])
    img = generator.run(latent)[0]              # `sess` is <tensorflow.python.client.session.Session object at 0x12d317fd0>
    print(img.shape)                            # prints: (512, 512, 3)


# PROBLEM OCCURS HERE
@app.route('/test')
def test():
    print('generator: ', generator)             # prints: <dnnlib.tflib.network.Network object at 0x13a80e5d0>
    print('generator.run: ', generator.run)     # prints: <bound method Network.run of <dnnlib.tflib.network.Network object at 0x13a80e5d0>>

    # Run model
    latent = np.random.randn(1, generator.input_shape[1])
    img = generator.run(latent)[0]              # `sess` is None


if __name__ == '__main__':
    load_model()                                # Loads the model from disk
    app.run()
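A quick way to dig into the symptom, assuming Flask's threaded development server, is to log which thread handles the request and what the default session is at that point. The /debug route below is a hypothetical addition, not part of the posted app:

import threading
import tensorflow as tf

@app.route('/debug')
def debug():
    # Under Flask's threaded dev server this typically reports a worker thread
    # (e.g. "Thread-1") and None, while the same two calls made right after
    # load_model() in the main thread report "MainThread" and a Session object.
    return "thread=%s, default_session=%s" % (
        threading.current_thread().name,
        tf.get_default_session(),            # tf.compat.v1.get_default_session() on newer TF
    )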
The failing sess.run() call lives inside dnnlib/tflib/network.py; the full error traceback and the relevant excerpt from network.py are included below.

I think your problem is that Flask tasks can run in different threads/processes. Don't use a global variable; try some other type of storage, such as shared memory. See this answer.
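For context: in the TF 1.x API the default-session stack consulted by tf.get_default_session() is thread-local, so a session entered on the main thread is invisible from any other thread, which is consistent with the thread/process explanation above. A minimal standalone sketch of this behaviour (assuming the TF 1.x API, not taken from the original post):

import threading
import tensorflow as tf   # TF 1.x API; on TF 2.x use tensorflow.compat.v1

def worker():
    # Runs on a different thread, much like a request handler on a threaded server.
    print("worker thread:", tf.get_default_session())   # prints: None

with tf.Session().as_default():
    print("main thread:", tf.get_default_session())     # prints: <...Session object at 0x...>
    t = threading.Thread(target=worker)
    t.start()
    t.join()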
[2019-09-18 23:26:33,083] ERROR in app: Exception on /test [GET]
Traceback (most recent call last):
  File "/anaconda3/envs/ml/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/anaconda3/envs/ml/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/anaconda3/envs/ml/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/anaconda3/envs/ml/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/anaconda3/envs/ml/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/anaconda3/envs/ml/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "app.py", line 71, in test
    img = generator.run(latent)[0]
  File "/Users/x/foo/dnnlib/tflib/network.py", line 460, in run
    mb_out = sess.run(out_expr, dict(zip(in_expr, mb_in)))
AttributeError: 'NoneType' object has no attribute 'run'
...
        for mb_begin in range(0, num_items, minibatch_size):
            if print_progress:
                print("\r%d / %d" % (mb_begin, num_items), end="")

            mb_end = min(mb_begin + minibatch_size, num_items)
            mb_num = mb_end - mb_begin
            mb_in = [src[mb_begin : mb_end] if src is not None else np.zeros([mb_num] + shape[1:]) for src, shape in zip(in_arrays, self.input_shapes)]
            # Run
            # [<tf.Tensor 'Gs/_Run/concat:0' shape=(?, 3, 512, 512) dtype=float32>]

            # <tf.Tensor 'Gs/_Run/labels_in:0' shape=<unknown> dtype=float32>: array([], shape=(1, 0), dtype=float64)
            # <tf.Tensor 'Gs/_Run/latents_in:0' shape=<unknown> dtype=float32>: latents.shape (1, 512)
            #mb_out = tf.get_default_session().run(out_expr, dict(zip(in_expr, mb_in)))
            sess = tf.get_default_session()
            #writer = tf.summary.FileWriter("logs/", sess.graph)
            mb_out = sess.run(out_expr, dict(zip(in_expr, mb_in)))

            for dst, src in zip(out_arrays, mb_out):
                dst[mb_begin: mb_end] = src
...
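Building on that diagnosis, a commonly used workaround with this kind of code (a sketch only, reusing the names from the question rather than the shared-memory storage suggested in the answer) is to keep an explicit reference to the session created by tflib.init_tf() and re-enter it, together with its graph, inside the request handler, so that tf.get_default_session() inside network.py is no longer None:

import tensorflow as tf   # in addition to the imports already used by app.py

sess = None                # saved alongside the global `generator`

def load_model():
    global generator, sess
    tflib.init_tf()
    sess = tf.get_default_session()          # the session init_tf() installed on the main thread

    with open(model_path, "rb") as f:
        _G, _D, generator = pickle.load(f, encoding='latin1')


@app.route('/test')
def test():
    latent = np.random.randn(1, generator.input_shape[1])
    # Re-enter the saved graph and session on this worker thread before running the model.
    with sess.graph.as_default(), sess.as_default():
        img = generator.run(latent)[0]
    return str(img.shape)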