Python multiprocessing pipe "deadlock"
I am facing a problem with the following example code:
from multiprocessing import Lock, Process, Queue, current_process

def worker(work_queue, done_queue):
    for item in iter(work_queue.get, 'STOP'):
        print("adding ", item, "to done queue")
        #this works: done_queue.put(item*10)
        done_queue.put(item*1000) #this doesnt!
    return True

def main():
    workers = 4
    work_queue = Queue()
    done_queue = Queue()
    processes = []
    for x in range(10):
        work_queue.put("hi"+str(x))
    for w in range(workers):
        p = Process(target=worker, args=(work_queue, done_queue))
        p.start()
        processes.append(p)
        work_queue.put('STOP')
    for p in processes:
        p.join()
    done_queue.put('STOP')
    for item in iter(done_queue.get, 'STOP'):
        print(item)

if __name__ == '__main__':
    main()
When the done queue gets large enough (I believe the limit is somewhere around 64k), the whole thing freezes without any further notice.

What is the general approach when the queue grows too large? Is there some way to remove elements on the fly once they have been processed? In the real application I cannot estimate when the processes will be finished. Is there any simple solution other than looping indefinitely with .get_nowait()?

This works for me with 3.4.0alpha4, 3.3, 3.2, 3.1 and 2.6. It raises a traceback on 2.7 and 3.0. By the way, here is what I got:
#!/usr/local/cpython-3.3/bin/python

'''SSCCE for a queue deadlock'''

import sys
import multiprocessing

def worker(workerno, work_queue, done_queue):
    '''Worker function'''
    #reps = 10 # this worked for the OP
    #reps = 1000 # this worked for me
    reps = 10000 # this didn't
    for item in iter(work_queue.get, 'STOP'):
        print("adding", item, "to done queue")
        #this works: done_queue.put(item*10)
        for thing in item * reps:
            #print('workerno: {}, adding thing {}'.format(workerno, thing))
            done_queue.put(thing)
    done_queue.put('STOP')
    print('workerno: {0}, exited loop'.format(workerno))
    return True

def main():
    '''main function'''
    workers = 4
    work_queue = multiprocessing.Queue(maxsize=0)
    done_queue = multiprocessing.Queue(maxsize=0)
    processes = []
    for integer in range(10):
        work_queue.put("hi"+str(integer))
    for workerno in range(workers):
        dummy = workerno
        process = multiprocessing.Process(target=worker, args=(workerno, work_queue, done_queue))
        process.start()
        processes.append(process)
        work_queue.put('STOP')
    itemno = 0
    stops = 0
    while True:
        item = done_queue.get()
        itemno += 1
        sys.stdout.write('itemno {0}\r'.format(itemno))
        if item == 'STOP':
            stops += 1
            if stops == workers:
                break
    print('exited done_queue empty loop')
    for workerno, process in enumerate(processes):
        print('attempting process.join() of workerno {0}'.format(workerno))
        process.join()
    done_queue.put('STOP')

if __name__ == '__main__':
    main()
HTH. This works for me on CPython 2.6, 2.7, 3.0, 3.1, 3.2, 3.3 and 3.4alpha4; 2.5 does not include the multiprocessing module. What version of Python are you using?

I'm using 3.3. Try increasing the number from 1000 to something higher; the pipe size limit depends on the OS.

Have you seen "this means that whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined"? There is even example code that is expected to deadlock. done_queue must be empty before you call p.join(). Remove the p.join(), add try: ... finally: done_queue.put('STOP') in the worker, and repeat the iter(done_queue.get, 'STOP') loop len(processes) times.

Using range(len(processes) + 1) seems to work, thanks.
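The fix described in the comment above, with the sentinel moved into a finally: block and the result queue drained before joining, might look roughly like this. This is a hedged reconstruction, not code from the thread; the results counter is my addition for illustration:

```python
from multiprocessing import Process, Queue

def worker(work_queue, done_queue):
    # Sentinel goes in finally: so the consumer is always unblocked,
    # even if the worker dies mid-loop.
    try:
        for item in iter(work_queue.get, 'STOP'):
            done_queue.put(item * 1000)
    finally:
        done_queue.put('STOP')

def main():
    workers = 4
    work_queue = Queue()
    done_queue = Queue()
    processes = []
    for x in range(10):
        work_queue.put("hi" + str(x))
    for _ in range(workers):
        p = Process(target=worker, args=(work_queue, done_queue))
        p.start()
        processes.append(p)
        work_queue.put('STOP')  # one sentinel per worker
    # Drain done_queue BEFORE joining: each worker contributes exactly one
    # 'STOP', so run the iter() loop once per worker.
    results = 0
    for _ in range(len(processes)):
        for _item in iter(done_queue.get, 'STOP'):
            results += 1
    # Safe to join now: the queues' feeder threads have been fully drained.
    for p in processes:
        p.join()
    return results

if __name__ == '__main__':
    print(main())  # 10 work items -> 10 results
```

Because the main process empties done_queue first, the workers' feeder threads can flush their pipes and the join() calls no longer hang, regardless of how large the results are.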
放在主进程中,然后len(进程)
次就足够了。顺便说一句,你为什么不呢?谢谢你的回答,不过在查看池后,这似乎是一个更容易解决问题的方法