How do I store the return value of a function run with multiprocessing in Python?

Tags: python, multithreading, multiprocessing

I am writing a Python program that executes functions in parallel. Here is the code:

from multiprocessing import Process

def sqr(args):
    results = []
    for i in args:
        results.append(i*i)
    return results

def cube(args):
    results = []
    for i in args:
        results.append(i*i*i)
    return results

def main():
    data = [1,2,3,4,5]
    p1 = Process(target=sqr, args=(data,))
    p1.start()
    p2 = Process(target=cube, args=(data,))
    p2.start()
    p1.join()
    p2.join()


main()
I cannot figure out how to get the return values of the sqr and cube functions.

I tried this:

from multiprocessing import Process
from queue import Queue

def sqr(args, q):
    results = []
    for i in args:
        results.append(i*i)
    q.put(results)

def cube(args, q):
    results = []
    for i in args:
        results.append(i*i*i)
    q.put(results)

def main():
    q = Queue()
    data = [1,2,3,4,5]
    p1 = Process(target=sqr, args=(data, q))
    p1.start()
    p2 = Process(target=cube, args=(data, q))
    p2.start()
    p1.join()
    p2.join()
    print(q.get())

main()
This program hangs indefinitely, and I don't understand what is going wrong here.
Can anyone help me store the return values of these functions? Any help would be appreciated.

What about making results global? This should work:

from multiprocessing import Process

results = []
def sqr(args):
    for i in args:
        results.append(i*i)
    return results

def cube(args):
    for i in args:
        results.append(i*i*i)
    return results

def main():
    data = [1,2,3,4,5]
    p1 = Process(target=sqr, args=(data,))
    p1.start()
    p2 = Process(target=cube, args=(data,))
    p2.start()
    p1.join()
    p2.join()


main()

I would use multiprocessing.Pool, run cube() and sqr() sequentially within each task, and use Pool.map() to parallelize across the inputs. That removes the need for a queue and simplifies things considerably:

from multiprocessing import Pool, cpu_count

def sqr(n):
    return (n*n)

def cube(n):
    return (n*n*n)

def main(n):
    return (cube(n),sqr(n))

with Pool(cpu_count()) as p:
    inputs = [1,2,3,4,5,6,7,8,9]
    results = p.map(main, inputs)

print(list(results))

[(1, 1), (8, 4), (27, 9), (64, 16), (125, 25), (216, 36), (343, 49), (512, 64), (729, 81)]
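If you would rather keep the original list-processing signatures of sqr and cube, Pool.apply_async is another option: its AsyncResult.get() hands back each function's return value directly. This is a sketch, not part of the answer above:

```python
from multiprocessing import Pool

def sqr(args):
    return [i * i for i in args]

def cube(args):
    return [i * i * i for i in args]

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5]
    with Pool(2) as p:
        # apply_async schedules the call and returns an AsyncResult;
        # .get() blocks until the worker finishes, then returns the
        # function's return value.
        r1 = p.apply_async(sqr, (data,))
        r2 = p.apply_async(cube, (data,))
        print(r1.get())  # [1, 4, 9, 16, 25]
        print(r2.get())  # [1, 8, 27, 64, 125]
```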

You have three options for sharing resources between processes:

  • Files (e.g. pickle, cPickle, marshal, etc.)
  • Shared memory (e.g. queues)
  • Pipes
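As a sketch of the third option, a Pipe gives you a two-way channel between exactly two processes; the child sends its result through one end and the parent receives it at the other:

```python
from multiprocessing import Process, Pipe

def sqr(args, conn):
    # Compute the squares and send the list back through the pipe.
    conn.send([i * i for i in args])
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=sqr, args=([1, 2, 3, 4, 5], child_conn))
    p.start()
    print(parent_conn.recv())  # [1, 4, 9, 16, 25]
    p.join()
```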

  • Are you using Python 2 or Python 3? — I am using Python 3. Which operating system? — I am on Ubuntu.
  • @roganjosh Just tested with multiprocessing.Queue and the original code works fine.
  • By Ubuntu, does @Amit mean real Linux, or the Windows 10 Bash layer?
  • The separate processes do not share the same results list, because they have separate memory spaces.
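Following up on that last comment: the hang in the question's attempt comes from using queue.Queue, which only works between threads inside one process; a child process puts results into its own copy, so the parent's q.get() blocks forever. A minimal sketch of the corrected version, swapping in multiprocessing.Queue:

```python
from multiprocessing import Process, Queue  # not queue.Queue

def sqr(args, q):
    q.put([i * i for i in args])

def cube(args, q):
    q.put([i * i * i for i in args])

if __name__ == "__main__":
    q = Queue()
    data = [1, 2, 3, 4, 5]
    p1 = Process(target=sqr, args=(data, q))
    p2 = Process(target=cube, args=(data, q))
    p1.start()
    p2.start()
    # Drain the queue before join(); the two results may
    # arrive in either order, since the processes race.
    first, second = q.get(), q.get()
    p1.join()
    p2.join()
    print(first, second)
```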