Python Multiprocessing Question

Tags: python, python-3.x, multiprocessing

I am learning Python's multiprocessing module and want my code to use all of my CPU resources. This is the code I wrote:

from multiprocessing import Process
import time

def work():
    for i in range(1000):
        x = 5
        y = 10
        z = x + y

if __name__ == '__main__':
    start1 = time.time()
    for i in range(100):
        p = Process(target=work)
        p.start()
        p.join()
    end1 = time.time()
    start = time.time()
    for i in range(100):
        work()
    end = time.time()
    print(f'With Parallel {end1-start1}')
    print(f'Without Parallel {end-start}')
The output I get is:

 With Parallel 0.8802454471588135
 Without Parallel 0.00039649009704589844
I have tried different range values in the for loops, and using only a print statement inside the work function, but the non-parallel version runs faster every time. Am I missing something?


Thanks in advance.

There is a problem with your benchmarking method:

for i in range(100):
    p = Process(target=work)
    p.start()
    p.join()
I guess you expected the 100 processes to run in parallel, but because Process.join() is called inside the loop, each process finishes before the next one starts, so they actually run serially. In addition, running more busy processes than you have CPU cores causes heavy CPU contention, which is itself a performance penalty. And, as a comment pointed out, your work() function is far too cheap compared with the overhead of creating a Process.

A better version:

import multiprocessing
import time


def work():
    for i in range(2000000):
        pow(i, 10)


if __name__ == '__main__':
    n_processes = multiprocessing.cpu_count()  # 8 on my machine
    total_runs = n_processes * 4
    ps = []
    n = total_runs

    start1 = time.time()
    while n:
        # keep at most n_processes workers alive at any time
        ps = [p for p in ps if p.is_alive()]
        if len(ps) < n_processes:
            p = multiprocessing.Process(target=work)
            p.start()
            ps.append(p)
            n -= 1
        else:
            time.sleep(0.01)
    # wait for all remaining processes to finish
    while any(p.is_alive() for p in ps):
        time.sleep(0.01)
    end1 = time.time()

    start = time.time()
    for i in range(total_runs):
        work()
    end = time.time()

    print(f'With Parallel {end1-start1:.4f}s')
    print(f'Without Parallel {end-start:.4f}s')
    print(f'Acceleration factor {(end-start)/(end1-start1):.2f}')

Again, the original work() function is too cheap to be representative: in your benchmark you were mostly measuring the overhead of instantiating Process objects rather than any useful computation. With the heavier work() above, the results on my 8-core machine were:
With Parallel 4.2835s
Without Parallel 33.0244s
Acceleration factor 7.71