How do I run a parallel function inside a serial function in Python?


Maybe this is really simple, but I am having a little trouble understanding it.

The challenge I am facing is to execute child parallel functions from inside a mother function. That mother function should run only once while waiting for the results of the child parallel function calls.

I wrote a small example that illustrates my dilemma:

import random
import string
from joblib import Parallel, delayed
import multiprocessing

def jobToDoById(id):
    #do some other logic based on the ID given
    rand_str  = ''.join(random.choice(string.ascii_lowercase + string.ascii_uppercase + string.digits) for i in range(10))
    return [id, rand_str]


def childFunctionParallel(jobs):
    num_cores = multiprocessing.cpu_count()
    num_cores = num_cores - 1

    if __name__ == '__main__':
        p = Parallel(n_jobs=num_cores)(delayed(jobToDoById)(i) for i in jobs)
        return p

def childFunctionSerial(jobs):
    result = []
    for job in jobs:
        job_result = jobToDoById(job)
        result.append(job_result)
    return result



def motherFunction(countries_cities, doInParallel):
    result = []
    print("Start mainLogic")
    for country in countries_cities:
        city_list = countries_cities[country]
        if(doInParallel):
            cities_result = childFunctionParallel(city_list)
        else:
            cities_result = childFunctionSerial(city_list)
        result.append(cities_result)
        # ..... do some more logic

    # ..... do some more logic before returning
    print("End mainLogic")
    return result



print("Start Program")

countries_cities = {
    "United States" : ["Alabama", "Hawaii", "Mississippi", "Pennsylvania"],
    "United Kingdom" : ["Cambridge", "Coventry", "Gloucester", "Nottingham"],
    "France" : ["Marseille", "Paris", "Saint-Denis", "Nanterre", "Aubervilliers"],
    "Denmark" : ["Aarhus", "Slagelse", "Nykøbing F", "Rønne", "Odense"],
    "Australia" : ["Sydney", "Townsville", "Bendigo", "Bathurst", "Busselton"],
}
result_mother = motherFunction(countries_cities, doInParallel=True) # should be executed only once
print(result_mother) 
print("End Program")
If you switch `doInParallel` between `True` and `False`, you can see the problem. When run with `childFunctionSerial()`, `motherFunction()` runs only once. But when we run with `childFunctionParallel`, `motherFunction()` is executed multiple times. Both give the same result, but my problem is that `motherFunction()` should be executed only once.

Two questions:

1. How can I restructure the program so that the mother function is executed only once, starting the parallel jobs from inside it, without running multiple instances of that same mother function?
2. How can I pass a second parameter to `jobToDoById()` besides `id`?
Ad 2: Put the additional parameters into a tuple and pass `( id, ... )`. This is simple and commonly used, so you will meet it in many examples:

def jobToDoById( aTupleOfPARAMs = ( -1, ) ): # jobToDoById(id):
    #                                        #    do some other logic based on the ID given
    if not type( aTupleOfPARAMs ) is tuple:  # FUSE PROTECTION
       return [-1, "call interface violated"]
    if aTupleOfPARAMs[0] == -1:              # FUSE PROTECTION
       return [-1, None]
    # .......................................# GO GET PROCESSED:
    rand_str  = ''.join( random.choice( string.ascii_lowercase
                                      + string.ascii_uppercase
                                      + string.digits
                                        )
                                  for i in range( 10 )
                         )
    return [aTupleOfPARAMs[0], rand_str]     # the id from the tuple, not the id() builtin
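A quick serial usage sketch of the tuple interface above. The second tuple element (a string length here) is an assumption for illustration; with joblib, the same tuples would travel through `delayed(jobToDoById)( ( id, 10 ) )`.

```python
import random
import string

def jobToDoById(aTupleOfPARAMs=(-1,)):
    if not type(aTupleOfPARAMs) is tuple:    # FUSE PROTECTION
        return [-1, "call interface violated"]
    if aTupleOfPARAMs[0] == -1:              # FUSE PROTECTION
        return [-1, None]
    job_id, length = aTupleOfPARAMs          # second parameter rides inside the tuple
    rand_str = ''.join(random.choice(string.ascii_lowercase
                                     + string.ascii_uppercase
                                     + string.digits)
                       for _ in range(length))
    return [job_id, rand_str]

print(jobToDoById(("Paris", 10)))            # normal call
print(jobToDoById("Paris"))                  # fuse: not a tuple
print(jobToDoById())                         # fuse: default sentinel
```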
The first question is a bit harder, but all the more interesting, since system design is not always respected in popular media (and sometimes not even in academia).


Ad 1: You may be surprised, but this will never happen in the current version. Your code explicitly mentions joblib.Parallel and the multiprocessing module, but the documentation says:

By default, `Parallel` uses the Python `multiprocessing` module to fork separate Python worker processes to execute tasks concurrently on separate CPUs. This is a reasonable default for generic Python programs, but it induces some overhead, as the input and output data need to be serialized in a queue for communication with the worker processes.

There are two implications here, and your processing will pay a double cost:

1) The whole Python interpreter, including its data and internal state, is fully forked (so you get as many copies as instructed, each running just one process-flow; this is done so as not to lose performance on GIL-round-robin fragmentation, the Only-1-runs-All-Others-have-to-wait type of GIL-blocking/stepping that would appear among any 1+ processing-flows if they were created in a thread-based pool, etc.)

2) Besides the full Python interpreter + state re-instantiation that has to take place as noted above, ALL of the following also happens:

----------------------------MAIN-starts-to-escape-from-pure-[SERIAL]-processing--
  0:                        MAIN forks self
                                 [1]
                                 [2]
                                 ...
                                 [n_jobs] - as many copies of self as requested
   -------------------------MAIN-can-continue-in-"just"-[CONCURRENT]-after:
  1st-Data-IN-SERialised-in-MAIN's-"__main__"  
+ 2nd-Data-IN-QUEueed    in MAIN
+ 3rd-Data-IN-DEQueued              [ith_job]s
+ 4th-Data-IN-DESerialised          [ith_job]s
+ ( ...process operated the useful  [ith_job]s -<The PAYLOAD>-planned... )  
+ 5th-Data-OUT-SERialised           [ith_job]s
+ 6th-Data-OUT-QUEued               [ith_job]s
+ 7th-Data-OUT-DEQueued     in-MAIN
+ 8th-Data-OUT-DESerialised-in-MAIN's-"__main__"  
-------------------------------MAIN-can-continue-in-pure-[SERIAL]-processing-----
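The SER/DES legs in the diagram above are where the hidden costs accumulate. A small standard-library sketch to get a feel for them (timings are machine-dependent; `pickle` is what the multiprocessing queues use under the hood for the serialisation steps):

```python
import pickle
import time

payload = list(range(1_000_000))     # a stand-in for real job data

t0 = time.perf_counter()
blob = pickle.dumps(payload)         # the Data-SERialised leg
t1 = time.perf_counter()
back = pickle.loads(blob)            # the Data-DESerialised leg
t2 = time.perf_counter()

print(f"SER {t1 - t0:.4f} s, DES {t2 - t1:.4f} s, {len(blob)} bytes")
assert back == payload               # the round-trip is lossless, but never free
```

Every job's inputs and outputs pay this toll twice (once in, once out), on top of the interpreter forking shown above.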

Thanks @user3666197 for the answer :) I see now that it is not as simple as I thought... Well, in that case I will have to completely redesign the structure of my program so that it can accommodate parallel jobs that are started not inside a function but outside of it.

Well, simplicity is not the goal here (it can be as simple as a few SLOC-s, but at the cost of performance losses); efficiency is. Next, if your computations need many repetitions over some part of the processing graph, a semi-persistent computing infrastructure, started once and reused many times, may suit the many calls better than any ad-hoc re-instantiation. I use this, for example, in large-scale AI/ML computations, where the overhead costs would otherwise be immense due to the unnecessary add-on latencies and the astronomic data-size SER/ENQ/DEQ/DES costs....
----------------------------MAIN-starts-escape-from-processing---in-pure-[SERIAL]
  0:                        MAIN forks self                     -in-pure-[SERIAL]
                                 [1]                            -in-pure-[SERIAL]
                                 [2]                            -in-pure-[SERIAL]
                                 ...                            -in-pure-[SERIAL]
                                 [n_jobs] as many copies of self-in-pure-[SERIAL]
                                          as requested          -in-pure-[SERIAL]
  --------------------------MAIN-can-continue-in-"just"-[CONCURRENT]after[SERIAL]
+ 1st-Data-IN-SERialised-in-MAIN's-"__main__"  , job(2), .., job(n_jobs):[SERIAL]
+ 2nd-Data-IN-QUEueed    in MAIN for all job(1), job(2), .., job(n_jobs):[SERIAL]
+ 3rd-Data-IN-DEQueued              [ith_job]s:       "just"-[CONCURRENT]||X||X||
+ 4th-Data-IN-DESerialised          [ith_job]s:       "just"-[CONCURRENT]|X||X|||
+ ( ...process operated the useful  [ith_job]s-<The PAYLOAD>-planned... )||X|||X|
+ 5th-Data-OUT-SERialised           [ith_job]s:       "just"-[CONCURRENT]||||X|||
+ 6th-Data-OUT-QUEued               [ith_job]s:       "just"-[CONCURRENT]|X|X|X||
+ 7th-Data-OUT-DEQueued     in-MAIN <--l job(1), job(2), .., job(n_jobs):[SERIAL]
+ 8th-Data-OUT-DESerialised-in-MAIN's-"__main__" job(2), .., job(n_jobs):[SERIAL]
-------------------------------MAIN-can-continue-processing------in-pure-[SERIAL]
...                                                             -in-pure-[SERIAL]
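The "start once, reuse many times" idea from the comment above can be sketched with any pool that outlives the individual calls. A thread pool from the standard library is used here so the sketch runs anywhere; joblib offers the same worker reuse for processes via its `with Parallel(...) as parallel:` context manager. The `ReusableEngine` name and the squaring payload are illustrative assumptions, not part of any library API.

```python
from concurrent.futures import ThreadPoolExecutor

def payload(x):
    # stand-in for the real per-job work
    return x * x

class ReusableEngine:
    """Semi-persistent compute infrastructure: workers start once,
    then serve many .run() calls without any re-instantiation."""
    def __init__(self, n_workers=4):
        self._pool = ThreadPoolExecutor(max_workers=n_workers)

    def run(self, jobs):
        # repeated calls reuse the already-running workers
        return list(self._pool.map(payload, jobs))

    def shutdown(self):
        self._pool.shutdown()

engine = ReusableEngine()
print(engine.run([1, 2, 3]))     # first call: pool is already up
print(engine.run([4, 5, 6]))     # repeated call: no re-instantiation cost
engine.shutdown()
```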