Android: keep a TensorFlow session open in a Kivy application


I'm trying to run an application made with Kivy together with a TensorFlow session, and to keep the session from being loaded every time I make a prediction. More precisely, I want to know how to call a function from inside the session.

Here is the code for the session:

# Imports used by the snippets below; create_model, data_utils, brain,
# _buckets, gConfig, Message and KatApp come from the project itself.
import os
import sys
import threading

import numpy as np
import tensorflow as tf

def decode():
    # Only allocate part of the gpu memory when predicting.
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
    config = tf.ConfigProto(gpu_options=gpu_options)

    with tf.Session(config=config) as sess:
        # Create model and load parameters.
        model = create_model(sess, True)
        model.batch_size = 1

        enc_vocab_path = os.path.join(gConfig['working_directory'],"vocab%d.enc" % gConfig['enc_vocab_size'])
        dec_vocab_path = os.path.join(gConfig['working_directory'],"vocab%d.dec" % gConfig['dec_vocab_size'])

        enc_vocab, _ = data_utils.initialize_vocabulary(enc_vocab_path)
        _, rev_dec_vocab = data_utils.initialize_vocabulary(dec_vocab_path)

        # !!! This is the function that I'm trying to call. !!!
        def answersqs(sentence):
            token_ids = data_utils.sentence_to_token_ids(tf.compat.as_bytes(sentence), enc_vocab)
            bucket_id = min([b for b in xrange(len(_buckets))
                            if _buckets[b][0] > len(token_ids)])
            encoder_inputs, decoder_inputs, target_weights = model.get_batch(
                {bucket_id: [(token_ids, [])]}, bucket_id)
            _, _, output_logits = model.step(sess, encoder_inputs, decoder_inputs,
                                            target_weights, bucket_id, True)
            outputs = [int(np.argmax(logit, axis=1)) for logit in output_logits]
            if data_utils.EOS_ID in outputs:
                outputs = outputs[:outputs.index(data_utils.EOS_ID)]

            return " ".join([tf.compat.as_str(rev_dec_vocab[output]) for output in outputs])
Here is where I call the function:

def resp(self, msg):
    def p():
        if len(msg) > 0:
            # If I try to do decode().answersqs(msg), it starts a new session.
            ansr = answersqs(msg)
            ansrbox = Message()
            ansrbox.ids.mlab.text = str(ansr)
            ansrbox.ids.mlab.color = (1, 1, 1)
            ansrbox.pos_hint = {'x': 0}
            ansrbox.source = './icons/ansr_box.png'
            self.root.ids.chatbox.add_widget(ansrbox)
            self.root.ids.scrlv.scroll_to(ansrbox)

    threading.Thread(target=p).start()
And here is the last part:

if __name__ == "__main__":
    if len(sys.argv) - 1:
        gConfig = brain.get_config(sys.argv[1])
    else:
        # get configuration from seq2seq.ini
        gConfig = brain.get_config()

    threading.Thread(target=decode()).start()

    KatApp().run()

Also, should I change the session from GPU to CPU before porting it to Android?

You should have two variables: a graph and a session.

When loading the model, you can do something like this:

graph = tf.Graph()
session = tf.Session(config=config)
with graph.as_default(), session.as_default():
  # The rest of your model loading code.
When you need to make a prediction:

with graph.as_default(), session.as_default():
  return session.run([your_result_tensor])
What happens is that the session is loaded once and kept in memory; you just tell the system that this is the context in which you want to run.

In your code, move def answersqs outside the with block. It should bind automatically to the graph and session from the surrounding scope (but you need to make them available outside the with); see the sketch below.
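
A minimal sketch of that restructuring, reusing the names from your question (create_model, data_utils, _buckets, gConfig); splitting it into module-level load_model()/answersqs() is just my illustration, not the only way to do it:

graph = tf.Graph()
session = None
model = None
enc_vocab = None
rev_dec_vocab = None

def load_model():
    # Build the model once and keep graph/session in module-level
    # variables so that every prediction reuses them.
    global session, model, enc_vocab, rev_dec_vocab
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
    config = tf.ConfigProto(gpu_options=gpu_options)
    session = tf.Session(graph=graph, config=config)
    with graph.as_default(), session.as_default():
        model = create_model(session, True)
        model.batch_size = 1
        enc_vocab, _ = data_utils.initialize_vocabulary(
            os.path.join(gConfig['working_directory'],
                         "vocab%d.enc" % gConfig['enc_vocab_size']))
        _, rev_dec_vocab = data_utils.initialize_vocabulary(
            os.path.join(gConfig['working_directory'],
                         "vocab%d.dec" % gConfig['dec_vocab_size']))

def answersqs(sentence):
    # Re-enter the stored context; nothing gets reloaded here.
    with graph.as_default(), session.as_default():
        token_ids = data_utils.sentence_to_token_ids(
            tf.compat.as_bytes(sentence), enc_vocab)
        bucket_id = min([b for b in xrange(len(_buckets))
                         if _buckets[b][0] > len(token_ids)])
        encoder_inputs, decoder_inputs, target_weights = model.get_batch(
            {bucket_id: [(token_ids, [])]}, bucket_id)
        _, _, output_logits = model.step(session, encoder_inputs,
                                         decoder_inputs, target_weights,
                                         bucket_id, True)
        outputs = [int(np.argmax(logit, axis=1)) for logit in output_logits]
        if data_utils.EOS_ID in outputs:
            outputs = outputs[:outputs.index(data_utils.EOS_ID)]
        return " ".join([tf.compat.as_str(rev_dec_vocab[output])
                         for output in outputs])

With this layout, __main__ only needs threading.Thread(target=load_model).start() (note: target=load_model without parentheses, otherwise the function runs immediately on the main thread) before KatApp().run(), and resp() can call answersqs(msg) directly.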


For the second part: normally, if you follow the guides, the exported model contains no hardware-binding information, and when you load it TensorFlow will find a good place to run it (probably the GPU, if one is available and powerful enough).
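
If you nevertheless want to pin the session to the CPU before porting, one standard TF1 option (not something the exported model requires) is to hide the GPUs via the session config:

config = tf.ConfigProto(device_count={'GPU': 0})  # expose no GPUs to this session
session = tf.Session(graph=graph, config=config)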

Shouldn't it be in there? Also, does this remove the need for threading.Thread(target=decode()).start()?