Python joblib doesn't call the function when n_jobs > 1
Tags: python, python-3.x, parallel-processing, joblib, parallelism-amdahl

I have an example with some data. As you can see from the code, every call of the function fit_by_idx() has to print "here", but in fact it does not. When n_jobs=1 everything works fine, but if n_jobs is greater than 1, joblib does not call the function.
Code: (shown further below)
Q: "If n_jobs is greater than 1, joblib does not call the function."

It does call it (you can check the PID and PPID numbers), it just does not show the printed result ("here").

Using the definition from the API documentation:

print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False)
Start by enforcing flush=True.
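As a minimal illustration of the flush=True remedy (my own sketch, not code from the original answer; an io.StringIO stream stands in for the worker's stdout):

```python
import io

# print() buffers its stream when stdout is not a terminal, as happens
# inside a joblib worker, so 'here' may sit in the buffer and be lost
# when the worker is torn down; flush=True forces it out immediately.
stream = io.StringIO()
print('here', flush=True, file=stream)
captured = stream.getvalue()  # -> 'here\n'
```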
More joblib troubles lie ahead, though: joblib
- spawns processes (unless forced otherwise; falling back to a pure, GIL-governed re-[SERIAL]-isation would adversely impact performance), and
- executes the code for whatever n_jobs; re-running it step by step like this makes no sense, as you pay all the instantiation costs and other overheads, yet receive no speed-up benefit from them, do you?
Also check, respectively, your O/S, your actual joblib version and the (hidden) pickling SER/DES tools' versions.
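A quick way to collect the version details just mentioned (my own sketch, not part of the original answer; joblib is imported defensively in case it is absent):

```python
import sys
import platform
import pickle

# Report the interpreter, O/S and (de)serialisation tool versions that
# joblib depends on -- mismatches here are a common source of trouble.
print('python :', sys.version.split()[0])
print('O/S    :', platform.platform())
print('pickle :', pickle.format_version)
try:
    import joblib
    print('joblib :', joblib.__version__)
except ImportError:
    print('joblib : not installed')
```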
import statsmodels.tsa.holtwinters as holtwinters
import pandas as pd
import numpy as np
from joblib import Parallel, delayed

train = pd.read_csv('train.csv').drop(columns=['id'])

def iter_predict(data, model, steps, fit_args=[], fit_kwargs={}):  # steps - the number of points to predict
    def fit_by_idx(idx):
        print('here')
        endog = data.iloc[idx]
        fitted = model(endog).fit(*fit_args, optimized=False, **fit_kwargs)
        res[idx, :] = fitted.forecast(steps)
    res = np.zeros((data.shape[0], steps))
    Parallel(n_jobs=2)(delayed(fit_by_idx)(idx) for idx in range(data.shape[0]))
    return res

iter_predict(train, holtwinters.SimpleExpSmoothing, 2, fit_kwargs={'smoothing_level': 0.5})
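A separate note on why res can come back all zeros (as in the printout further below): under a process-based backend, fit_by_idx assigns into a pickled copy of res inside the worker, so the parent's array is never touched. The usual pattern is to return results and let Parallel collect them; sketched here with a hypothetical forecast_row stand-in for the actual fitting work:

```python
import numpy as np
from joblib import Parallel, delayed

def forecast_row(idx, steps=2):
    # Hypothetical stand-in for fitting a model on row `idx` and
    # forecasting `steps` points; the worker RETURNS its result
    # instead of assigning into a shared `res` array.
    return np.full(steps, float(idx))

rows = Parallel(n_jobs=2)(delayed(forecast_row)(i) for i in range(4))
res = np.vstack(rows)  # rows come back in submission order
```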
import os
import numpy as np
from joblib import Parallel, delayed, parallel_backend

N_JOBS = 2  # the demo runs two jobs per backend

def iter_preDEMO( data, # Pandas DF-alike data
                  #other args removed for MCVE-clarity
                  ):
    def fit_by_idx( idx ): #-----------------------------------[FUNCTION]-def-<start> To be transferred to each remote-joblib-initiated process(es)
        print( 'here[{0:_>4d}(PPID:PID={1:_>7d}:{2::>7d})]'.format( idx,
                                                                    os.getppid(), # test joblib-[FUNCTION]-def-transfer here with: lambda x = "_{0:}_" : x.format( os.getppid() )
                                                                    os.getpid()   # test joblib-[FUNCTION]-def-transfer here with: lambda x = "_{0:}_" : x.format( os.getpid() )
                                                                    ),
               end   = "\t",
               flush = True
               )
    #----------------------------------------------------------[FUNCTION]-def-<end>
    res = np.zeros( ( data.shape[0], 3 ) )
    for aBackEND in ( 'threading', 'loky', 'multiprocessing' ):
        try:
            print( "\n____________________________Going into ['{0:}']-backend".format( aBackEND ) )
            with parallel_backend( aBackEND, n_jobs = N_JOBS ):
                Parallel( n_jobs = N_JOBS )( delayed( fit_by_idx )( pickled_SER_DES_copy_of_idx )
                                             for pickled_SER_DES_copy_of_idx in range( data.shape[0] )
                                             )
        finally:
            print( "\n_____________________________Exit from ['{0:}']-backend".format( aBackEND ) )
    return res
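The single shared PID in the ['threading'] section of the printout below can be reproduced with a small sketch (my own illustration, assuming joblib is installed): the threading backend runs every task inside the calling process, so os.getpid() is the same for all of them.

```python
import os
from joblib import Parallel, delayed, parallel_backend

def whoami(idx):
    # Report which process actually ran this task.
    return idx, os.getpid()

with parallel_backend('threading', n_jobs=2):
    seen = Parallel(n_jobs=2)(delayed(whoami)(i) for i in range(4))
# Under the threading backend, every reported PID equals the caller's PID.
```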
Output:
START: PID=_____22528
____________________________Going into ['threading']-backend
here[___0(PPID:PID=__22527:::22528)] here[___1(PPID:PID=__22527:::22528)] here[___2(PPID:PID=__22527:::22528)] here[___3(PPID:PID=__22527:::22528)] here[___4(PPID:PID=__22527:::22528)] here[___5(PPID:PID=__22527:::22528)] here[___6(PPID:PID=__22527:::22528)] here[___7(PPID:PID=__22527:::22528)] here[___8(PPID:PID=__22527:::22528)] here[___9(PPID:PID=__22527:::22528)] here[__10(PPID:PID=__22527:::22528)] here[__11(PPID:PID=__22527:::22528)] here[__12(PPID:PID=__22527:::22528)] here[__13(PPID:PID=__22527:::22528)] here[__14(PPID:PID=__22527:::22528)] here[__15(PPID:PID=__22527:::22528)] here[__16(PPID:PID=__22527:::22528)]
_____________________________Exit from ['threading']-backend
____________________________Going into ['loky']-backend
here[___0(PPID:PID=__22527:::22528)] here[___1(PPID:PID=__22527:::22528)] here[___2(PPID:PID=__22527:::22528)] here[___3(PPID:PID=__22527:::22528)] here[___4(PPID:PID=__22527:::22528)] here[___5(PPID:PID=__22527:::22528)] here[___6(PPID:PID=__22527:::22528)] here[___7(PPID:PID=__22527:::22528)] here[___8(PPID:PID=__22527:::22528)] here[___9(PPID:PID=__22527:::22528)] here[__10(PPID:PID=__22527:::22528)] here[__11(PPID:PID=__22527:::22528)] here[__12(PPID:PID=__22527:::22528)] here[__13(PPID:PID=__22527:::22528)] here[__14(PPID:PID=__22527:::22528)] here[__15(PPID:PID=__22527:::22528)] here[__16(PPID:PID=__22527:::22528)]
_____________________________Exit from ['loky']-backend
____________________________Going into ['multiprocessing']-backend
here[___0(PPID:PID=__22527:::22528)] here[___1(PPID:PID=__22527:::22528)] here[___2(PPID:PID=__22527:::22528)] here[___3(PPID:PID=__22527:::22528)] here[___4(PPID:PID=__22527:::22528)] here[___5(PPID:PID=__22527:::22528)] here[___6(PPID:PID=__22527:::22528)] here[___7(PPID:PID=__22527:::22528)] here[___8(PPID:PID=__22527:::22528)] here[___9(PPID:PID=__22527:::22528)] here[__10(PPID:PID=__22527:::22528)] here[__11(PPID:PID=__22527:::22528)] here[__12(PPID:PID=__22527:::22528)] here[__13(PPID:PID=__22527:::22528)] here[__14(PPID:PID=__22527:::22528)] here[__15(PPID:PID=__22527:::22528)] here[__16(PPID:PID=__22527:::22528)]
_____________________________Exit from ['multiprocessing']-backend
[[0. 0. 0.]
[0. 0. 0.]
...
]