Sharing a numpy array with multiprocessing

I need to share a numpy array between processes so that they can store some results in it. I'm not entirely sure that what I have done so far is correct. Here is my simplified code:

from multiprocessing import Process, Lock, Array
import numpy as np

def worker(shared, lock):
    # wrap the shared buffer in a numpy array (no copy is made)
    numpy_arr = np.frombuffer(shared.get_obj())
    # do some work ...
    with lock:
        for i in range(10):
            numpy_arr[0] += 1
        numpy_arr += 1
    return

if __name__ == '__main__':

    jobs = []
    lock = Lock()

    # shared buffer of one million doubles ('d' = C double)
    shared_array = Array('d', 1000000)

    for process in range(4):
        p = Process(target=worker, args=(shared_array, lock))
        jobs.append(p)
        p.start()

    for process in jobs:
        process.join()

    m = np.frombuffer(shared_array.get_obj())
    np.save('data', m)
    print(m[:5])
This code gives me the expected results, but I'm not sure it is the right way to do it. Finally, what is the difference between multiprocessing.Array and multiprocessing.sharedctypes.Array?
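
For what it's worth, my current understanding (which may be wrong) is that multiprocessing.Array is just a convenience function that forwards to multiprocessing.sharedctypes.Array via the default context, so both return the same kind of synchronized ctypes array. Below is the small sketch I used to check that assumption; the RawArray variant at the end is only there for illustration:

from multiprocessing import Array, sharedctypes
import numpy as np

# Assumption: multiprocessing.Array simply delegates to
# multiprocessing.sharedctypes.Array, so both give the same wrapper type.
a = Array('d', 10)
b = sharedctypes.Array('d', 10)
print(type(a).__name__, type(b).__name__)   # SynchronizedArray SynchronizedArray

# A RawArray (or lock=False) has no synchronized wrapper, so it can be
# viewed with numpy directly, without calling get_obj() first.
raw = sharedctypes.RawArray('d', 10)
view = np.frombuffer(raw, dtype=np.float64)
view[:] = 1.0
print(view[:5])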