
Python multiprocessing Pipe "Deadlock"


I am facing a problem with the following example code:

from multiprocessing import Process, Queue

def worker(work_queue, done_queue):
    for item in iter(work_queue.get, 'STOP'):
        print("adding ", item, "to done queue")
        #this works: done_queue.put(item*10)
        done_queue.put(item*1000) #this doesn't!
    return True

def main():
    workers = 4
    work_queue = Queue()
    done_queue = Queue()
    processes = []

    for x in range(10):
        work_queue.put("hi"+str(x))

    for w in range(workers):
        p = Process(target=worker, args=(work_queue, done_queue))
        p.start()
        processes.append(p)
        work_queue.put('STOP')

    for p in processes:
        p.join()

    done_queue.put('STOP')

    for item in iter(done_queue.get, 'STOP'):
        print(item)


if __name__ == '__main__':
    main()
When the done queue becomes big enough (the limit is around 64k, I think), the whole thing freezes without any further notice.


What is the general approach to this situation, when the queue becomes too large? Is there some way to remove elements on the fly once they have been processed? In the real application I cannot estimate when the processes will finish. Is there any simple solution besides an endless loop using .get_nowait()?
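This is the pitfall described in the multiprocessing programming guidelines under "Joining processes that use queues": a process that has put items on a queue will not terminate until its background feeder thread has flushed them into the underlying pipe. A minimal sketch of the same hang, reduced to a single child process (the ~1 MB payload is just an assumption large enough to overflow a typical ~64 kB pipe buffer):

from multiprocessing import Process, Queue

def child(queue):
    # put() only hands the data to a feeder thread; with nothing reading
    # on the other end, that thread blocks once the pipe buffer fills up.
    queue.put('x' * 1000000)

def main():
    queue = Queue()
    p = Process(target=child, args=(queue,))
    p.start()
    p.join()                 # deadlock: the child cannot exit while its feeder thread is stuck
    print(len(queue.get()))  # never reached; calling get() before join() avoids the hang

if __name__ == '__main__':
    main()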

This works for me on 3.4.0alpha4, 3.3, 3.2, 3.1 and 2.6. By the way, it tracebacks on 2.7 and 3.0. Here is what I came up with:

#!/usr/local/cpython-3.3/bin/python

'''SSCCE for a queue deadlock'''

import sys
import multiprocessing

def worker(workerno, work_queue, done_queue):
    '''Worker function'''
    #reps = 10 # this worked for the OP
    #reps = 1000 # this worked for me
    reps = 10000 # this didn't

    for item in iter(work_queue.get, 'STOP'):
        print("adding", item, "to done queue")
        #this works: done_queue.put(item*10)
        for thing in item * reps:
            #print('workerno: {}, adding thing {}'.format(workerno, thing))
            done_queue.put(thing)
    done_queue.put('STOP')
    print('workerno: {0}, exited loop'.format(workerno))
    return True

def main():
    '''main function'''
    workers = 4
    work_queue = multiprocessing.Queue(maxsize=0)
    done_queue = multiprocessing.Queue(maxsize=0)
    processes = []

    for integer in range(10):
        work_queue.put("hi"+str(integer))

    for workerno in range(workers):
        process = multiprocessing.Process(target=worker, args=(workerno, work_queue, done_queue))
        process.start()
        processes.append(process)
        work_queue.put('STOP')

    itemno = 0
    stops = 0
    while True:
        item = done_queue.get()
        itemno += 1
        sys.stdout.write('itemno {0}\r'.format(itemno))
        if item == 'STOP':
            stops += 1
            if stops == workers:
                break
    print('exited done_queue empty loop')


    for workerno, process in enumerate(processes):
        print('attempting process.join() of workerno {0}'.format(workerno))
        process.join()

    done_queue.put('STOP')

if __name__ == '__main__':
    main()

HTH

Comments:

This works for me on CPython 2.6, 2.7, 3.0, 3.1, 3.2, 3.3 and 3.4alpha4; 2.5 doesn't include the multiprocessing module. What version of Python are you using?

I'm using 3.3. Try increasing the number from 1000 to something higher; the pipe size limit depends on your OS.

Have you seen "this means that whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined" in the multiprocessing docs? There is even example code there that is expected to deadlock. done_queue must be empty before calling p.join().

Remove the p.join() calls. Or: add try: ... finally: done_queue.put('STOP') in the worker and repeat the iter(done_queue.get, 'STOP') loop len(processes) times.

That seems to work when using range(len(processes)+1), thanks!

@Stefan: you should probably remove the done_queue.put('STOP') in the main process; then len(processes) times is enough. By the way, why don't you ...?

Thanks for the answer; after looking at Pool, though, that seems to be an easier way to solve this problem.