Python can't pickle _thread.lock objects when using TensorFlow and multiprocessing


I use TensorFlow to construct tensors. To speed things up, I added multiprocessing. However, I get the error "can't pickle _thread.lock objects". Here is the code:

import math
import tensorflow as tf
import numpy as np
import time
import multiprocessing
import os,sys
from multiprocessing import Queue

def mp_factorizer(input_layer, chunksize, nprocs):
    def worker(input_layer, chunksize, out_q):       

        temp = []
        with tf.device('/cpu:0'):
            for ii in range(chunksize):

                temp.append(tf.gather_nd(input_layer,[[ii,0]])) 
                #temp.append(1) 
        out_q.put(temp)
        print('ok')

    # Each worker gets 'chunksize' items and a queue to put its
    # output list into
    out_q = Queue()
    procs = []

    for i in range(nprocs):
        p = multiprocessing.Process(
                target=worker,
                args=(input_layer,
                      chunksize,
                      out_q))
        procs.append(p)
        p.start()

    # Collect all results into a single list. We know how many
    # lists to expect.
    resultdict = []
    for i in range(nprocs):
        resultdict = out_q.get() + resultdict

    # Wait for all worker processes to finish
    for p in procs:
        p.join()

    return resultdict


input_layer = tf.placeholder(tf.float64, shape=[64,4], name='input_layer')

nprocs=4
chunksize=16

aa=time.time()
collector = mp_factorizer(input_layer, chunksize, nprocs)
loss = tf.add_n(collector, name='loss')
If I change

    temp.append(tf.gather_nd(input_layer, [[ii, 0]]))

to

    temp.append(1)

no error is raised.

It seems the TensorFlow tensors are what blocks the pickling. Does anyone know how to solve this problem?
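The root cause can be reproduced without TensorFlow: multiprocessing pickles every argument handed to a Process, and TensorFlow graph objects (such as the placeholder tensor here) hold a threading.Lock internally, which is not picklable. A minimal sketch of the failure and one common workaround, using only the standard library (the `worker_indices` helper is hypothetical, just for illustration):

```python
import pickle
import threading

# multiprocessing pickles every argument passed to a Process.
# Any object that holds a lock internally fails exactly like
# the tensor in the question:
lock = threading.Lock()
try:
    pickle.dumps(lock)
except TypeError as exc:
    print("pickling failed:", exc)

# A common workaround: send only plain, picklable data to the
# workers (e.g. the index ranges), and build the actual tensors
# in the parent process after collecting the results.
def worker_indices(start, chunksize):
    # Plain ints and lists pickle fine across process boundaries.
    return [[ii, 0] for ii in range(start, start + chunksize)]

print(worker_indices(0, 4))  # → [[0, 0], [1, 0], [2, 0], [3, 0]]
```

With this split, each worker computes only the `[[ii, 0]]` index lists, and the parent process calls `tf.gather_nd` itself, so no unpicklable graph object ever crosses a process boundary.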