
Python: Why does a "Broken pipe" error occur only in a specific multiprocessing scenario when accessing a shared list?

Tags: python, python-2.7, concurrency, ipc, python-multiprocessing

Before I begin my question, let me mention that I already know the multiprocessing code below is broken. It has bugs. The code exists for my own learning, so that I can understand more about how it breaks. So my question is about one specific aspect of this broken code. First, let me show the code.

For now, you can completely ignore worker_b, because we are not using it anywhere yet. We will come back to it later.

import Queue
import multiprocessing
import time

lock = multiprocessing.Lock()

def pprint(s):
    lock.acquire()
    print(s)
    lock.release()

def worker_a(i, stack):
    if stack:
        data = stack.pop()
        pprint('worker %d got %d' % (i, data))
        time.sleep(2)
        pprint('worker %d exiting ...' % i)
    else:
        pprint('worker %d has nothing to do!' % i)

def worker_b(i, stack):
    if stack:
        data = stack.pop()
        pprint('worker %d got %d (stack length: %d)' % (i, data, len(stack)))
        time.sleep(2)
        pprint('worker %d exiting ... (stack length: %d)' % (i, len(stack)))
    else:
        pprint('worker %d has nothing to do!' % i)

manager = multiprocessing.Manager()
stack = manager.list()

def master():
    for i in range(5):
        stack.append(i)
        pprint('master put %d' % i)

    i = 0
    while stack:
        t = multiprocessing.Process(target=worker_a, args=(i, stack))
        t.start()
        time.sleep(1)
        i += 1

    pprint('master returning ...')

master()

pprint('master returned!')
The buggy code above appears to work fine:

$ python mplifo.py 
master put 0
master put 1
master put 2
master put 3
master put 4
worker 0 got 4
worker 1 got 3
worker 0 exiting ...
worker 2 got 2
worker 1 exiting ...
worker 3 got 1
worker 2 exiting ...
worker 4 got 0
worker 3 exiting ...
master returning ...
master returned!
worker 4 exiting ...
However, if I call worker_b instead of worker_a, i.e. change

        t = multiprocessing.Process(target=worker_a, args=(i, stack))

to

        t = multiprocessing.Process(target=worker_b, args=(i, stack))

then the following error occurs:

$ python mplifo.py
master put 0
master put 1
master put 2
master put 3
master put 4
worker 0 got 4 (stack length: 4)
worker 1 got 3 (stack length: 3)
worker 0 exiting ... (stack length: 3)
worker 2 got 2 (stack length: 2)
worker 1 exiting ... (stack length: 2)
worker 3 got 1 (stack length: 1)
worker 2 exiting ... (stack length: 1)
worker 4 got 0 (stack length: 0)
worker 3 exiting ... (stack length: 0)
master returning ...
master returned!
Process Process-6:
Traceback (most recent call last):
  File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "mplifo.py", line 27, in worker_b
    pprint('worker %d exiting ... (stack length: %d)' % (i, len(stack)))
  File "<string>", line 2, in __len__
  File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/managers.py", line 758, in _callmethod
    conn.send((self._id, methodname, args, kwds))
IOError: [Errno 32] Broken pipe
  • Why does this error occur only with worker_b?
  • Why does this error occur only at the second pprint() call in worker_b, and not at the first pprint() call?

    • This part of the traceback gives you a hint:

        File "mplifo.py", line 27, in worker_b
          pprint('worker %d exiting ... (stack length: %d)' % (i, len(stack)))
        File "<string>", line 2, in __len__
        File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/managers.py", line 758, in _callmethod
          conn.send((self._id, methodname, args, kwds))
      
      In the worker processes, stack is not a Python list. It is a proxy created by multiprocessing.Manager, wrapping a list that lives in the master process. Every evaluation of len(stack) therefore sends a request over a pipe to the master. The first pprint() call in worker_b evaluates len(stack) while the master is still running, so it succeeds. The last worker_b then sleeps for two seconds; when it wakes up and evaluates len(stack) for the second pprint() call, the master has already exited, and the communication pipe to it is broken.
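The same failure mode can be reproduced in isolation. The sketch below is my own minimal example, not the asker's script: it creates a manager-backed list, then shuts the manager down explicitly, so that the next proxy call has no process to talk to and fails inside conn.send / conn.recv, much like the traceback above.

```python
import multiprocessing

def probe():
    # Create a manager and a shared list proxy, just like `stack` above.
    manager = multiprocessing.Manager()
    stack = manager.list([1, 2, 3])
    before = len(stack)       # the proxy sends a __len__ request to the manager
    manager.shutdown()        # the manager process exits; its pipe is now dead
    try:
        len(stack)            # the same request now has nobody listening
        after = 'ok'
    except Exception as exc:  # typically a broken-pipe IOError/OSError or EOFError
        after = type(exc).__name__
    return before, after

if __name__ == '__main__':
    print(probe())
```

Which exception class you get (IOError on Python 2.7, BrokenPipeError or EOFError on Python 3) depends on the platform and on whether the send or the receive half of the round trip fails first.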

      This does not happen with worker_a, because worker_a never tries to evaluate len(stack) before exiting.

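One way to avoid the race, sketched here as my own suggestion rather than something taken from the answer: keep the Process handles and join() them before the script ends, so the manager process outlives every worker. This sketch also guards the pop() itself, since the original `if stack:` check-then-pop can race between two workers.

```python
import multiprocessing
import time

def worker(i, stack):
    try:
        data = stack.pop()   # pop directly; two workers may race past an `if stack` check
    except IndexError:
        print('worker %d has nothing to do!' % i)
        return
    print('worker %d got %d (stack length: %d)' % (i, data, len(stack)))

def main():
    manager = multiprocessing.Manager()
    stack = manager.list(range(3))
    procs = []
    i = 0
    while stack:
        p = multiprocessing.Process(target=worker, args=(i, stack))
        p.start()
        procs.append(p)
        time.sleep(1)
        i += 1
    for p in procs:
        p.join()             # every proxy call has finished before the manager dies
    return list(stack)

if __name__ == '__main__':
    print(main())
```

With the join() in place, no worker can outlive the master, so no worker is left holding a proxy to a manager that has already gone away.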