
Python: combining multithreading and multiprocessing with concurrent.futures


I have a function that is both heavily I/O-bound and CPU-intensive. I tried to parallelize it by combining multiprocessing and multithreading, but it gets stuck. This question has been asked before, but in a different context. My function is fully independent and returns nothing. Why does it get stuck, and how can it be fixed?

import concurrent.futures
import os
import numpy as np
import time


ids = [1,2,3,4,5,6,7,8]

def f(x):
    time.sleep(1)
    x**2

def multithread_accounts(AccountNumbers, f, n_threads = 2):

    slices = np.array_split(AccountNumbers, n_threads)
    slices = [list(i) for i in slices]

    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(f, slices)



def parallelize_distribute(AccountNumbers, f, n_threads = 2, n_processors = os.cpu_count()):

    slices = np.array_split(AccountNumbers, n_processors)
    slices = [list(i) for i in slices]

    with concurrent.futures.ProcessPoolExecutor(max_workers=n_processors) as executor:
        executor.map( lambda x: multithread_accounts(x, f, n_threads = n_threads) , slices)
        
parallelize_distribute(ids, f, n_processors=2, n_threads=2)

Sorry, I don't have time to explain all of this, so I'll just give code "that works". I urge you to start with something simpler, because the learning curve is non-trivial. Leave numpy out of it at the start; stick to threads only at first; then move to processes only; and unless you're an expert, don't try to parallelize anything other than named module-level functions (no, not function-local anonymous lambdas).
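To illustrate the point about lambdas: ProcessPoolExecutor has to pickle the callable to ship it to worker processes, and a lambda has no importable name, so pickling it fails. A minimal sketch (the `square` helper is just an illustration, not from the original code):

```python
import pickle

def square(x):          # named module-level function: picklable by reference
    return x * x

# Pickle stores a module-level function as its qualified name and
# round-trips back to the very same object.
assert pickle.loads(pickle.dumps(square)) is square

# A lambda has no importable name, so pickling it fails -- which is why
# handing one to ProcessPoolExecutor silently breaks the original code.
try:
    pickle.dumps(lambda x: x * x)
    raised = False
except Exception:
    raised = True
print("lambda picklable?", not raised)   # lambda picklable? False
```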

As so often happens, the error messages you "should" be getting are suppressed, because they occur asynchronously and there's no good way to report them. Add `print()` statements liberally to see how far you're getting.
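A minimal sketch of that suppression, using a hypothetical always-failing `boom()`: a worker's exception is stored on its future, and `map()` only re-raises it when you iterate the results, which the original code never does:

```python
import concurrent.futures as cf

def boom(x):
    raise ValueError(f"failed on {x}")

# Nothing visible happens here: the results of map() are never consumed,
# so the ValueError stays buried in the futures.
with cf.ThreadPoolExecutor(max_workers=2) as executor:
    executor.map(boom, [1, 2])

# Iterating the results re-raises the first worker exception.
with cf.ThreadPoolExecutor(max_workers=2) as executor:
    try:
        list(executor.map(boom, [1, 2]))
    except ValueError as e:
        print("surfaced:", e)        # surfaced: failed on 1
```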

Note: I removed numpy, and added what's needed so that it also runs on Windows. I expect numpy's `array_split()` would work fine too, but the machine I was on at the time didn't have numpy installed.

import concurrent.futures as cf
import os
import time

def array_split(xs, n):
    # Plain-Python stand-in for numpy's array_split(): split xs into n
    # contiguous chunks whose sizes differ by at most one.
    from itertools import islice
    it = iter(xs)
    result = []
    q, r = divmod(len(xs), n)
    for i in range(r):
        result.append(list(islice(it, q+1)))
    for i in range(n - r):
        result.append(list(islice(it, q)))
    return result

ids = range(1, 11)

def f(x):
    print(f"called with {x}")
    time.sleep(5)
    x**2

def multithread_accounts(AccountNumbers, f, n_threads=2):
    with cf.ThreadPoolExecutor(max_workers=n_threads) as executor:
        for slice in array_split(AccountNumbers, n_threads):
            executor.map(f, slice)

def parallelize_distribute(AccountNumbers, f, n_threads=2, n_processors=os.cpu_count()):
    slices = array_split(AccountNumbers, n_processors)
    print("top slices", slices)
    with cf.ProcessPoolExecutor(max_workers=n_processors) as executor:
        # Pass a named module-level function (picklable, unlike a lambda);
        # the extra argument lists supply f and n_threads for every call.
        executor.map(multithread_accounts, slices,
                                           [f] * len(slices),
                                           [n_threads] * len(slices))

if __name__ == "__main__":    # this guard is required on Windows
    parallelize_distribute(ids, f, n_processors=2, n_threads=2)
By the way, I'd suggest that this makes more sense for the threaded part:

def multithread_accounts(AccountNumbers, f, n_threads=2):
    with cf.ThreadPoolExecutor(max_workers=n_threads) as executor:
        executor.map(f, AccountNumbers)
That is, there's really no need to split the list yourself here: the threading machinery will portion it out itself. You may have missed that in your original attempt because the `ThreadPoolExecutor()` call in the code you posted forgot to specify the `max_workers` argument.
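So the whole thread layer collapses to a single `map()` call over the raw sequence; with `max_workers=2` the executor queues the items and the two threads pull from the queue as each becomes free. A minimal sketch, with a trivial `square()` standing in for the real `f`:

```python
import concurrent.futures as cf

def square(x):
    return x * x

ids = range(1, 11)

with cf.ThreadPoolExecutor(max_workers=2) as executor:
    # No array_split needed: the executor feeds items to its 2 threads
    # itself, and map() yields results in input order.
    results = list(executor.map(square, ids))

print(results)   # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```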