Python: how do I extract the returned "object" from multiprocessing?
I need to multiprocess a function that operates on numpy arrays. (I just can't speed the function up with numba.) It takes an array as input and feeds the data into a multiprocessing.Pool to execute the function func on it.

func performs some operations and returns a sub-array stacked with its number locations (= its positions in the original array).

The pool returns the results of func, processed in parallel, in the iterable object out. How do I convert this iterable back into a numpy array?

Reproducible code:
import numpy as np, time, multiprocessing as mp, pandas as pd; min_ = 0.7

def sub_sub_func(X, newmin, newmax):
    if len(X) > 1:
        if (X[0] == X[1:]).all():
            X.fill(newmax)
        else:
            X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
            return X_std * (newmax - newmin) + newmin
    elif len(X) == 1:
        X[0] = newmax
    return X

def sub_func(arr):
    if np.min(arr) <= 0:
        arr[arr <= 0] = sub_sub_func(arr[arr <= 0], min_*0.65, min_)
    elif np.min(arr) < min_:
        arr[arr <= min_] = sub_sub_func(arr[arr <= min_], min_*0.80, min_)
    return arr

def func(mol_subrange, arr):
    result = np.array([slice_ + slice for slice_ in arr[mol_subrange]])
    return np.column_stack((numberlocations, result))  # return it with its numberlocations

def mp_list_o_arr_comprehension(func, full_arr_to_process, numThreads=4, mpBatches=1):
    molecule_subrange = np.array(range(len(full_arr_to_process))),
    parts = linParts(len(molecule_subrange), numThreads*mpBatches)
    jobs = []
    for i in range(1, len(parts)):
        job = {'mol_subrange': molecule_subrange[parts[i-1]:parts[i]],
               'arr': full_arr_to_process[parts[i-1]:parts[i]],
               'func': func}
        jobs.append(job)
    pool = mp.Pool(processes=numThreads)
    outputs = pool.imap_unordered(expandCall, jobs)
    out_list = []
    for out_ in outputs:
        out_list.append(out_.get())
    pool.close(); pool.join()  # this is needed to prevent memory leaks
    locs_arr, out_arr = np.array([]), np.array([])
    for out_ in out_list:
        out_locs = np.asarray(out_)[:, 0]
        out_vals = np.asarray(out_)[:, 1]
        out_arr = np.concatenate((out_arr, out_vals))
        locs_arr = np.concatenate((locs_arr, out_locs))
    # sort order by converting it into a pandas Series
    result = pd.Series(out_arr, index=locs_arr).sort_index()
    return np.array(result)

def linParts(numAtoms, numThreads):
    # partition of atoms with a single loop
    parts = np.linspace(0, numAtoms, min(numThreads, numAtoms) + 1)
    parts = np.ceil(parts).astype(int)
    return parts

def expandCall(kargs):
    # Expand the arguments of a callback function, kargs['func']
    func = kargs['func']
    del kargs['func']
    out = func(**kargs)
    return out

if __name__ == '__main__':
    LEN = 10000; temp = np.random.randint(1, high=100, size=LEN)
    a = [np.random.uniform(size=rand) for rand in temp]
    result = mp_list_o_arr_comprehension(func, a, numThreads=4, mpBatches=10)
When I run this, I get the following error:
RemoteTraceback:
"""
Traceback (most recent call last):
File "c:\...\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "D:\... .py", line 67, in expandCall
out=func(**kargs)
File "D:\... .py", line 22, in func
result= np.array([slice_+slice for slice_ in arr[mol_subrange] ])
TypeError: list indices must be integers or slices, not tuple
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\... .py", line 73, in <module>
result = mp_list_o_arr_comprehension(func, a, numThreads=4, mpBatches=10)
File "D:\... ", line 40, in mp_list_o_arr_comprehension
for out_ in outputs:
File "c:\...\pool.py", line 868, in next
raise value
TypeError: list indices must be integers or slices, not tuple
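The traceback points at a separate bug before the result collection is ever reached: the trailing comma in `molecule_subrange = np.array(range(len(full_arr_to_process))),` makes molecule_subrange a one-element tuple, slicing a tuple yields a tuple, and `a` is a plain Python list, so `arr[mol_subrange]` ends up indexing a list with a tuple. Lists only accept integer or slice indices; converting to an ndarray first enables fancy indexing with an integer array. A minimal sketch of the difference (variable names are illustrative):

```python
import numpy as np

data = [10, 20, 30, 40]      # a plain Python list, like `a` in the question
idx = np.array([0, 2])       # an integer index array

# data[idx,] is data[(idx,)]: a tuple index, like arr[mol_subrange] above
try:
    data[idx,]
except TypeError as e:
    msg = str(e)             # "list indices must be integers or slices, not tuple"

# an ndarray supports fancy indexing with an integer array
picked = np.asarray(data)[idx]
```

Dropping the trailing comma (or calling np.asarray on the list before slicing) removes this particular TypeError.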