
Python - multithreading is running sequentially

I don't understand why this process is running sequentially:

from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor
import threading
import time
import random

pool = ThreadPoolExecutor(max_workers=3)
to_crawl = Queue()

#Import urls
for i in range(100):
    to_crawl.put(str(i))

def scraping(random_sleep):
    time.sleep(random_sleep)
    return

def post_scrape(url):
    print('URL %s finished' % url)

def my_crawler():
    while True:
        try:
            target_url = to_crawl.get()
            random_sleep = random.randint(1, 5)
            print("Current URL: %s, sleep: %s" % (format(target_url), random_sleep))
            executor = pool.submit(scraping(random_sleep))
            executor.add_done_callback(post_scrape(target_url))
        except Empty:
            return
        except Exception as e:
            print(e)
            continue

if __name__ == '__main__':
    my_crawler()
Expected output:

Current URL: 0, sleep: 5
Current URL: 1, sleep: 1
Current URL: 2, sleep: 2
URL 1 finished
URL 2 finished
URL 0 finished
Actual output:

Current URL: 0, sleep: 5
URL 0 finished
Current URL: 1, sleep: 1
URL 1 finished
Current URL: 2, sleep: 2
URL 2 finished

The problem is in the way you call pool.submit:

pool.submit(scraping(random_sleep))
This says "submit the result of scraping(random_sleep) to the pool"; in fact, I'm surprised it doesn't raise an error (the short demo after the fix below shows why it stays silent). What you want is to submit the scraping function itself, with random_sleep as its argument, which is done like this:

pool.submit(scraping, random_sleep)
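As a quick aside (my own demonstration, not part of the original answer), here is why the broken form runs sequentially yet stays silent: scraping(random_sleep) executes to completion in the calling thread, and its None return value is what actually gets submitted. submit() accepts the non-callable without complaint; the resulting TypeError is stored on the future and only surfaces if you ever retrieve it.

from concurrent.futures import ThreadPoolExecutor
import time

pool = ThreadPoolExecutor(max_workers=3)

def scraping(random_sleep):
    time.sleep(random_sleep)  # blocks the *calling* thread when invoked directly

future = pool.submit(scraping(1))  # sleeps 1 s here, then submits None
print(future.exception())          # TypeError: 'NoneType' object is not callable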
Similarly, the callback registration on the next line should be:

executor.add_done_callback(post_scrape)
and the callback itself should be declared as:

def post_scrape(executor):
where executor will be the future itself (despite the name, not the executor from the rest of the code). Note that there is no simple way to attach user arguments to this callback, so you can do something like the following instead and drop the add_done_callback entirely:

def scraping(random_sleep, url):
    time.sleep(random_sleep)
    print('URL %s finished' % url)
    return

#...

pool.submit(scraping, random_sleep, target_url)
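For completeness, here is a minimal end-to-end sketch of the corrected crawler (my own assembly of the pieces above, not from the original answer). It also switches to get_nowait(), since the blocking get() in the original means the Empty branch can never fire once the queue drains, and it calls shutdown(wait=True) so the program waits for the outstanding futures.

from queue import Queue, Empty
from concurrent.futures import ThreadPoolExecutor
import time
import random

pool = ThreadPoolExecutor(max_workers=3)
to_crawl = Queue()

for i in range(100):
    to_crawl.put(str(i))

def scraping(random_sleep, url):
    # Runs on a worker thread; the sleep stands in for real network I/O.
    time.sleep(random_sleep)
    print('URL %s finished' % url)

def my_crawler():
    while True:
        try:
            # get_nowait() raises Empty once the queue is drained,
            # unlike the blocking get(), which would wait forever.
            target_url = to_crawl.get_nowait()
        except Empty:
            return
        random_sleep = random.randint(1, 5)
        print("Current URL: %s, sleep: %s" % (target_url, random_sleep))
        # Callable and arguments are passed separately, so the call
        # happens on a worker thread rather than in this loop.
        pool.submit(scraping, random_sleep, target_url)

if __name__ == '__main__':
    my_crawler()
    pool.shutdown(wait=True)  # block until all submitted work finishes

If you would rather keep a separate done-callback and still know which URL it belongs to, functools.partial(post_scrape, target_url) is one standard way to bind the URL at submission time; the future is then passed to post_scrape as its final argument.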