Fast queue of read-only numpy arrays in Python


I have a multiprocessing job where I queue read-only numpy arrays as part of a producer-consumer pipeline.

Currently they are being pickled, because that is the default behaviour of multiprocessing.Queue, and it is slowing down performance.

Is there any pythonic way to pass references to shared memory instead of pickling the arrays?

Unfortunately, the arrays are generated after the consumers have been started, and there is no easy way around that. (So the global-variable approach would be ugly...)

[Note that in the code below we are not expecting h(x0) and h(x1) to be computed in parallel. Instead, we would like to see h(x0) and g(h(x1)) computed in parallel (like pipelining in a CPU).]
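For reference, here is a minimal sketch of the kind of pipeline being described (the stage names and array sizes are made up, not the real code): every q.put() pickles the array and every q.get() unpickles it, and that serialization is the overhead in question.

from multiprocessing import Process, Queue
import numpy as np

def producer(q):
    # each array is pickled when it is put on the queue
    for i in range(10):
        q.put(np.random.uniform(0, 1, (500, 2000)))
    q.put(None)  # sentinel marking the end of the stream

def square_stage(q_in, q_out):
    while True:
        x = q_in.get()        # unpickled here
        if x is None:
            q_out.put(None)
            break
        q_out.put(x * x)      # pickled again for the next stage

if __name__ == '__main__':
    q1, q2 = Queue(2), Queue(2)
    Process(target=producer, args=(q1,)).start()
    Process(target=square_stage, args=(q1, q2)).start()
    total = 0.0
    while True:
        r = q2.get()
        if r is None:
            break
        total += r.sum()
    print(total)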


Your example doesn't seem to run on my computer, although that may have to do with the fact that I'm running Windows (there are issues pickling anything outside the __main__ namespace, i.e. anything decorated)... Would something like this help? (You would have to put pack and unpack inside each of f(), g(), and h().)

Note: I'm not sure this will actually be faster... just a stab at what others have suggested.

from multiprocessing import Process, freeze_support
from multiprocessing.sharedctypes import Value, Array
import numpy as np

def package(arr):
    shape = Array('i', arr.shape, lock=False)

    if arr.dtype == float:
        ctype = Value('c', b'd')  # 'd' for double, 'f' for single
    if arr.dtype == int:
        ctype = Value('c', b'i')  # the if statements could be avoided if the dtype is always the same
    # ctype.value is a bytes object, so decode it to get the typecode string that Array expects
    data = Array(ctype.value.decode(), arr.reshape(-1), lock=False)

    return data, shape

def unpack(data, shape):
    # copies the data out of the shared buffer into a regular numpy array
    return np.array(data[:]).reshape(shape[:])

#test
def f(args):
    print(unpack(*args))

if __name__ == '__main__':
    freeze_support()

    a = np.array([1,2,3,4,5])
    a_packed = package(a)
    print('array has been packaged')

    p = Process(target=f, args=(a_packed,))
    print('passing to parallel process')
    p.start()

    print('joining to parent process')
    p.join()
    print('finished')
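
One possible refinement (an assumption on my part, not part of the code above): since the data already lives in shared memory, the consumer could wrap the buffer with np.frombuffer instead of copying it out in unpack(). A sketch, assuming package() is used with lock=False exactly as above:

import numpy as np

def unpack_view(data, shape):
    # hypothetical zero-copy variant of unpack(): wraps the shared ctypes
    # buffer directly, so the returned array is a view into shared memory
    # (valid because package() creates the Array with lock=False;
    # with lock=True you would need data.get_obj() first)
    return np.frombuffer(data, dtype=np.dtype(data._type_)).reshape(shape[:])
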
Share memory between threads or processes

Use threading instead of multiprocessing

Since you're using numpy, you can take advantage of the fact that numpy releases the global interpreter lock (GIL) during many of its computations. This means you can do parallel processing with standard threads and shared memory, instead of multiprocessing and inter-process communication. Here is a version of your code, tweaked to use threading.Thread and Queue.Queue instead of multiprocessing.Process and multiprocessing.Queue (the full listing appears with the other code blocks further down). It passes a numpy ndarray through a queue without pickling it. On my computer this runs about 3 times faster than your code. (However, it is only about 20% faster than the serial version of your code; some other approaches are suggested further below.)
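A quick way to see that claim in action (a self-contained sketch added for illustration; the matrix size and thread count are arbitrary) is to time a numpy-heavy function run twice serially and then on two threads:

import threading, time
import numpy as np

def work():
    # numpy releases the GIL inside the dot product, so two threads
    # can execute this concurrently on separate cores
    a = np.random.uniform(0, 1, (2000, 2000))
    a.dot(a)

t0 = time.time()
work(); work()
print('serial:   %.2fs' % (time.time() - t0))

t0 = time.time()
threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print('threaded: %.2fs' % (time.time() - t0))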

Store numpy arrays in shared memory

Another option, close to what you asked for, is to keep using the multiprocessing package but pass the data between processes using arrays stored in shared memory. The code below (the ArrayQueue listing, included with the other code blocks further down) defines a new ArrayQueue class to do this. The ArrayQueue object should be created before the subprocesses are spawned. It creates and manages a pool of numpy arrays backed by shared memory. When a result array is pushed onto the queue, ArrayQueue copies the data from that array into an existing shared-memory array and then passes the id of the shared-memory array through the queue. This is much faster than sending the whole array through the queue, because it avoids pickling the arrays. It has performance similar to the threaded version above (about 10% slower), and may scale better if the global interpreter lock is an issue (i.e., you run a lot of Python code inside the functions).

Parallel processing of samples instead of functions

The code above is only about 20% faster than a single-threaded version (12.2s vs. the 14.8s of the serial version shown below). That is because each function runs in a single thread or process, and most of the work is done by xs(). The execution time of the example above is nearly the same as if you had just run %time print sum(1 for x in xs()).

If your real project has many more intermediate functions and/or they are more complex than the ones you showed, then the workload may be spread better across the processors, and this may not be a problem. However, if your workload really does resemble the code you provided, you may want to refactor your code to allocate one sample to each thread instead of one function to each thread. That looks like the code further down (both threading and multiprocessing versions are shown):

The threaded version of this code is only slightly faster than the first example I gave, and only about 30% faster than the serial version. That is less of a speedup than I had expected; maybe Python is still getting partly bogged down by the GIL?

The multiprocessing version performs significantly faster than the original multiprocessing code, mainly because all of the functions get chained together in a single process rather than queueing (and pickling) intermediate results. However, it is still slower than the serial version, because all of the result arrays have to be pickled (in the worker process) and unpickled (in the main process) before being returned by imap_unordered. If you can arrange for your pipeline to return aggregate results instead of the complete arrays, though, you can avoid that pickling overhead, and then the multiprocessing version is fastest: about 43% faster than the serial version.

OK, for completeness, here is a version of the second example that uses multiprocessing with your original generator functions instead of the finer-grained functions shown above. It uses some tricks to spread the samples across multiple processes, which may make it unsuitable for many workflows. But using generators does seem to be slightly faster than using the finer-grained functions, and this approach can get you up to a 54% speedup over the serial version shown above. However, that is only possible if you don't need to return the full arrays from the worker functions.

import multiprocessing, itertools, math
import numpy as np

def f(xs):
    for x in xs:
        yield x + 1.0

def g(xs):
    for x in xs:
        yield x * 3

def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

def final():
    return f(g(h(xs())))

def final_sum():
    for x in f(g(h(xs()))):
        yield x.sum()

def get_chunk(args):
    """Retrieve n values (n=args[1]) from a generator function (f=args[0]) and return them as a list. 
    This runs in a worker process and does all the computation."""
    return list(itertools.islice(args[0](), args[1]))

def parallelize(gen_func, max_items, n_workers=4, chunk_size=50):
    """Pull up to max_items items from several copies of gen_func, in small groups in parallel processes.
    chunk_size should be big enough to improve efficiency (one copy of gen_func will be run for each chunk)
    but small enough to avoid exhausting memory (each worker will keep chunk_size items in memory)."""

    pool = multiprocessing.Pool(n_workers)

    # how many chunks will be needed to yield at least max_items items?
    n_chunks = int(math.ceil(float(max_items)/float(chunk_size)))

    # generate a suitable series of arguments for get_chunk()
    args_list = itertools.repeat((gen_func, chunk_size), n_chunks)

    # chunk_gen will yield a series of chunks (lists of results) from the generator function, 
    # totaling n_chunks * chunk_size items (which is >= max_items)
    chunk_gen = pool.imap_unordered(get_chunk, args_list)

    # parallel_gen flattens the chunks, and yields individual items
    parallel_gen = itertools.chain.from_iterable(chunk_gen)

    # limit the output to max_items items 
    return itertools.islice(parallel_gen, max_items)


# in this case, the parallel version is slower than a single process, probably
# due to overhead of gathering numpy arrays in imap_unordered (via pickle?)
print "serial, return arrays:"  # 15.3s
%time print sum(r.sum() for r in final())
print "parallel, return arrays:"  # 24.2s
%time print sum(r.sum() for r in parallelize(final, max_items=1000))


# in this case, the parallel version is more than twice as fast as the single-thread version
print "serial, return result:"  # 15.1s
%time print sum(r for r in final_sum())
print "parallel, return result:"  # 6.8s
%time print sum(r for r in parallelize(final_sum, max_items=1000))

Check out pathos (and its multiprocessing fork), which avoids the standard multiprocessing reliance on pickling. This should let you get around the inefficiency of pickling and give you access to common memory for read-only shared resources. Note that while pathos is nearing deployment as a complete pip package, in the interim I'd recommend installing it with pip install git+https://github.com/uqfoundation/pathos
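
A minimal sketch of what that might look like for arrays like the ones in the question (this assumes pathos's ProcessingPool API and is not tested against the actual pipeline):

from pathos.multiprocessing import ProcessingPool as Pool
import numpy as np

def h(x):
    return x * x

if __name__ == '__main__':
    samples = [np.random.uniform(0, 1, (500, 2000)) for _ in range(8)]
    pool = Pool()
    # pathos serializes with dill instead of pickle, which lifts the
    # usual restrictions on what can be sent to worker processes
    results = pool.map(h, samples)
    print(sum(r.sum() for r in results))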

Can you share some code? Hmm, not the actual code; I'll mock up something similar. As long as it's a… Not sure whether it applies in your case, but I suppose you could avoid pickling the arrays by copying them into shared memory; that might be possible (if the arrays you want to share contain more…
from threading import Thread
from Queue import Queue
import numpy as np

class __EndToken(object):
    pass

def parallel_pipeline(buffer_size=50):
    def parallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())

        def parallel_generator(f_xs):
            q = Queue(buffer_size)
            consumer_process = Thread(target=consumer,args=(f_xs,q,))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x

        def f_wrapper(xs):
            return parallel_generator(f(xs))

        return f_wrapper
    return parallel_pipeline_with_args

@parallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

rs = f(g(h(xs())))
%time print sum(r.sum() for r in rs)  # 12.2s
from multiprocessing import Process, Queue, Array
import numpy as np

class ArrayQueue(object):
    def __init__(self, template, maxsize=0):
        if type(template) is not np.ndarray:
            raise ValueError('ArrayQueue(template, maxsize) must use a numpy.ndarray as the template.')
        if maxsize == 0:
            # this queue cannot be infinite, because it will be backed by real objects
            raise ValueError('ArrayQueue(template, maxsize) must use a finite value for maxsize.')

        # find the size and data type for the arrays
        # note: every ndarray put on the queue must be this size
        self.dtype = template.dtype
        self.shape = template.shape
        self.byte_count = len(template.data)

        # make a pool of numpy arrays, each backed by shared memory, 
        # and create a queue to keep track of which ones are free
        self.array_pool = [None] * maxsize
        self.free_arrays = Queue(maxsize)
        for i in range(maxsize):
            buf = Array('c', self.byte_count, lock=False)
            self.array_pool[i] = np.frombuffer(buf, dtype=self.dtype).reshape(self.shape)
            self.free_arrays.put(i)

        self.q = Queue(maxsize)

    def put(self, item, *args, **kwargs):
        if type(item) is np.ndarray:
            if item.dtype == self.dtype and item.shape == self.shape and len(item.data)==self.byte_count:
                # get the ID of an available shared-memory array
                id = self.free_arrays.get()
                # copy item to the shared-memory array
                self.array_pool[id][:] = item
                # put the array's id (not the whole array) onto the queue
                new_item = id
            else:
                raise ValueError(
                    'ndarray does not match type or shape of template used to initialize ArrayQueue'
                )
        else:
            # not an ndarray
            # put the original item on the queue (as a tuple, so we know it's not an ID)
            new_item = (item,)
        self.q.put(new_item, *args, **kwargs)

    def get(self, *args, **kwargs):
        item = self.q.get(*args, **kwargs)
        if type(item) is tuple:
            # unpack the original item
            return item[0]
        else:
            # item is the id of a shared-memory array
            # copy the array
            arr = self.array_pool[item].copy()
            # put the shared-memory array back into the pool
            self.free_arrays.put(item)
            return arr

class __EndToken(object):
    pass

def parallel_pipeline(buffer_size=50):
    def parallel_pipeline_with_args(f):
        def consumer(xs, q):
            for x in xs:
                q.put(x)
            q.put(__EndToken())

        def parallel_generator(f_xs):
            q = ArrayQueue(template=np.zeros((500, 2000)), maxsize=buffer_size)  # template must match the arrays yielded by xs()
            consumer_process = Process(target=consumer,args=(f_xs,q,))
            consumer_process.start()
            while True:
                x = q.get()
                if isinstance(x, __EndToken):
                    break
                yield x

        def f_wrapper(xs):
            return parallel_generator(f(xs))

        return f_wrapper
    return parallel_pipeline_with_args


@parallel_pipeline(3)
def f(xs):
    for x in xs:
        yield x + 1.0

@parallel_pipeline(3)
def g(xs):
    for x in xs:
        yield x * 3

@parallel_pipeline(3)
def h(xs):
    for x in xs:
        yield x * x

def xs():
    for i in range(1000):
        yield np.random.uniform(0,1,(500,2000))

print "multiprocessing with shared-memory arrays:"
%time print sum(r.sum() for r in f(g(h(xs()))))   # 13.5s
import multiprocessing
import threading, Queue
import numpy as np

def f(x):
    return x + 1.0

def g(x):
    return x * 3

def h(x):
    return x * x

def final(i):
    return f(g(h(x(i))))

def final_sum(i):
    return f(g(h(x(i)))).sum()

def x(i):
    # produce sample number i
    return np.random.uniform(0, 1, (500, 2000))

def rs_serial(func, n):
    for i in range(n):
        yield func(i)

def rs_parallel_threaded(func, n):
    todo = range(n)
    q = Queue.Queue(2*n_workers)

    def worker():
        while True:
            try:
                # the global interpreter lock ensures only one thread does this at a time
                i = todo.pop()
                q.put(func(i))
            except IndexError:
                # none left to do
                q.put(None)
                break

    threads = []
    for j in range(n_workers):
        t = threading.Thread(target=worker)
        t.daemon=False
        threads.append(t)   # in case it's needed later
        t.start()

    while True:
        x = q.get()
        if x is None:
            break
        else:
            yield x

def rs_parallel_mp(func, n):
    pool = multiprocessing.Pool(n_workers)
    return pool.imap_unordered(func, range(n))

n_workers = 4
n_samples = 1000

print "serial:"  # 14.8s
%time print sum(r.sum() for r in rs_serial(final, n_samples))
print "threaded:"  # 10.1s
%time print sum(r.sum() for r in rs_parallel_threaded(final, n_samples))

print "mp return arrays:"  # 19.6s
%time print sum(r.sum() for r in rs_parallel_mp(final, n_samples))
print "mp return results:"  # 8.4s
%time print sum(r_sum for r_sum in rs_parallel_mp(final_sum, n_samples))