Strange behavior of Python multiprocessing when adding a queue to the apply_async func?


I want to put the results generated by the func in the pool.apply_async method into a queue. Everything looked fine, but the error confuses me.

My goal is to have multiple asynchronous producers and multiple consumers, which may not be the right approach here.

Here is my toy example:

from multiprocessing import Pool
import multiprocessing
from threading import Thread

from six.moves import xrange
pool = Pool(processes=2, maxtasksperchild=1000)


# resp_queue = multiprocessing.Queue(1000)
manager = multiprocessing.Manager()
resp_queue = manager.Queue()

rang = 10000


def fetch_page(url):
    resp_queue.put(url)


def parse_response():
    url = resp_queue.get()
    print(url)

r_threads = []


def start_processing():
    for i in range(2):
        r_threads.append(Thread(target=parse_response))
        print("start %s thread.." % i)
        r_threads[-1].start()


urls = map(lambda x: "this is url %s" % x, xrange(rang))
for i in xrange(rang):
    pool.apply_async(fetch_page, (urls[i],))

start_processing()

pool.close()
pool.join()
The error shows:

Process PoolWorker-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
    task = get()
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 376, in get
    return recv()
AttributeError: 'module' object has no attribute 'fetch_page'
(PoolWorker-2 prints an identical traceback, interleaved with the one above)
start 0 thread..
start 1 thread..
I have read related answers, but I find it strange that this does not work on my Ubuntu machine.


Any advice would be greatly appreciated. Many thanks.
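For context on where the AttributeError comes from: Pool.apply_async pickles the task for the worker, and a function is pickled by reference, i.e. only its module and attribute name travel; each worker looks the name up again on its side. The Pool above is created before fetch_page is defined, so the forked children never execute that def, and the lookup fails. A small standalone sketch of the by-reference behavior (not from the original post; it pickles a stdlib function just to show what is stored):

```python
import pickle
from os import getcwd

# Pickling a function stores only a reference (module name plus attribute
# name), not the function's code. The receiver must re-import it by name.
payload = pickle.dumps(getcwd)
print(b"getcwd" in payload)  # True: the name travels, the implementation does not
```

A pool worker unpickling a task does the equivalent of getattr(module, 'fetch_page'); when the child's copy of the module predates the def, that is exactly the AttributeError shown above.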

Look at the code below. The changes I made to your version:

I use map_async instead of apply_async, because it takes an iterable and distributes the work nicely among the workers.
I added a while loop to the parse_response function (now get_url), so that each thread keeps getting values from the queue.
The Pool instantiation and the calls come after if __name__ == '__main__':, which is the hack required for multiprocessing on Windows (as far as I know; this may be wrong, I am on Ubuntu).

from multiprocessing import Pool
import multiprocessing
from threading import Thread

manager = multiprocessing.Manager()
url_queue = manager.Queue()

rang = 10000


def put_url(url):
    url_queue.put(url)


def get_url(thread_id):
    while not url_queue.empty():
        print('Thread {0} got url {1}'.format(str(thread_id), url_queue.get()))


r_threads = []


def start_threading():
    for i in range(2):
        r_threads.append(Thread(target=get_url, args=(i,)))
        print("start %s thread.." % i)
        r_threads[-1].start()
    for i in r_threads:
        i.join()


urls = ["url %s" % x for x in range(rang)]


if __name__ == '__main__':
    pool = Pool(processes=2, maxtasksperchild=1000)
    pool.map_async(put_url, urls)
    start_threading()
    pool.close()
    pool.join()
This prints:

start 0 thread..
start 1 thread..
Thread 0 got url url 0
Thread 0 got url url 1
Thread 1 got url url 2
Thread 0 got url url 3
Thread 0 got url url 4
Thread 1 got url url 5
Thread 0 got url url 6
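One caveat about the consumer loop above: while not url_queue.empty() is racy. Because map_async returns immediately, a thread can observe an empty queue before the pool has produced anything and exit at once. A more robust pattern (a sketch with illustrative names, using plain threads and a local queue; under Python 2 the module is Queue, or use six.moves.queue) ends each consumer with an explicit sentinel:

```python
import threading
import queue

SENTINEL = None  # special marker telling a consumer to stop

def consumer(q, results):
    # Block on get() instead of polling empty(); stop only on the sentinel.
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        results.append(item)

q = queue.Queue()
results = []
threads = [threading.Thread(target=consumer, args=(q, results)) for _ in range(2)]
for t in threads:
    t.start()

for i in range(10):
    q.put("url %s" % i)
for _ in threads:
    q.put(SENTINEL)  # one sentinel per consumer thread
for t in threads:
    t.join()

print(len(results))  # all 10 items were consumed
```

With sentinels the consumers cannot exit early, and they shut down cleanly once the producers are done.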


Define your worker function before declaring the pool. — @scriptboy and georgexsh, you are right! I learned that when you create a pool, the workers are created by forking the current process. Thank you very much for the improvement!
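To make the conclusion in the comments concrete, the asker's original version can be repaired simply by moving the Pool creation below the function definition, so the fork happens after fetch_page exists. A minimal sketch (range shortened to 10, fork-based start method assumed, as on the asker's Ubuntu machine):

```python
from multiprocessing import Pool
import multiprocessing

manager = multiprocessing.Manager()
resp_queue = manager.Queue()

def fetch_page(url):
    resp_queue.put(url)

def main():
    # The fork happens here -- after fetch_page is defined -- so the
    # children inherit it and the name lookup in the workers succeeds.
    pool = Pool(processes=2)
    for i in range(10):
        pool.apply_async(fetch_page, ("url %s" % i,))
    pool.close()
    pool.join()  # all producers are done before we drain the queue
    results = []
    while not resp_queue.empty():
        results.append(resp_queue.get())
    return results

if __name__ == '__main__':
    print(sorted(main()))
```

Draining the queue only after pool.join() avoids the empty-queue race, since by then every task has finished its put.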