Parallel reading of numpy arrays in Python
Consider the following:
fine = np.random.uniform(0,100,10)
fine[fine<20] = 0 # introduce some intermittency
coarse = np.sum(fine.reshape(-1,2),axis=1)
So w_xx(fine, coarse) will return an array of shape (5, 2), where the elements along axis=1 are the weights of fine with respect to each coarse value.
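The question does not show the original w_xx; a plausible loop-based sketch consistent with the description above (a hypothetical reconstruction, not the asker's actual code) might look like:

```python
import numpy as np

def w_xx(fine, coarse):
    # One row per coarse value; the two columns are the weights of the
    # two fine values that sum to that coarse value.
    weights = np.zeros((coarse.size, 2))
    for i, c in enumerate(coarse):
        if c > 0:  # only operate on values greater than 0
            weights[i, 0] = fine[2 * i] / c
            weights[i, 1] = fine[2 * i + 1] / c
    return weights

fine = np.random.uniform(0, 100, 10)
fine[fine < 20] = 0
coarse = np.sum(fine.reshape(-1, 2), axis=1)
print(w_xx(fine, coarse).shape)  # (5, 2)
```

Each pure-Python loop iteration does only a couple of scalar divisions, which is exactly the pattern that becomes slow at ~60k elements times 300+ iterations.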
This all works fine for small time series, but I'm running this analysis on a fine array of ~60k elements, inside a loop of 300+ iterations.
I've been trying to run this in parallel with the multiprocessing library in Python 2.7, but I haven't managed to get it working. I need to read both time series at the same time in order to get, for every value of coarse, the corresponding values of fine, and to operate only on values greater than 0, which is what my analysis requires.
I'd appreciate suggestions on a better approach. I imagine that if I can define a mapping function to use with Pool.map in multiprocessing, I should be able to parallelize this? I've only just started with multiprocessing, so I don't know whether there's another way.
Thanks.
You can get the same result in vectorized form simply by doing:
>>> (fine / np.repeat(coarse, 2)).reshape(-1, 2)
You can then use np.isfinite to filter out the rows where coarse is zero, since the output is inf or nan wherever coarse is zero.
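Putting the division and the np.isfinite filter together on a small deterministic example (my own sample values, not from the question):

```python
import numpy as np

fine = np.array([30.0, 50.0, 0.0, 0.0, 10.0, 40.0])
fine[fine < 20] = 0                           # the 10.0 drops to 0.0
coarse = np.sum(fine.reshape(-1, 2), axis=1)  # [80., 0., 40.]

# Where coarse is zero the division produces nan (or inf), so
# np.isfinite keeps exactly the rows with a nonzero coarse value.
with np.errstate(divide='ignore', invalid='ignore'):
    w = (fine / np.repeat(coarse, 2)).reshape(-1, 2)
weights = w[np.isfinite(w)].reshape(-1, 2)
print(weights)  # rows: [0.375, 0.625] and [0., 1.]
```

The np.errstate context only suppresses the divide-by-zero warnings; the nan/inf values themselves are what the isfinite mask removes.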
Excellent! I didn't know about np.repeat, thank you very much!
To answer my question in the form it was originally posed, I also managed to parallelize this with multiprocessing:
import numpy as np
from multiprocessing import Pool
fine = np.random.uniform(0,100,100000)
fine[fine<20] = 0
coarse = np.sum(fine.reshape(-1,2),axis=1)
def wfunc(zipped):
    return zipped[0] / zipped[1]

def wpar(zipped, processes):
    p = Pool(processes)
    calc = np.asarray(p.map(wfunc, zipped))  # use the argument, not the globals
    p.close()
    p.join()
    return calc[np.isfinite(calc)].reshape(-1, 2)
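One note on this design: Pool.map receives one tiny (fine, coarse) pair per task, so nearly all the time goes into pickling and inter-process messaging rather than arithmetic. Handing each worker a large contiguous slice amortizes that overhead. A sketch of that idea (wfunc_chunk and wchunked are my own names, not from the original answer; it assumes processes <= coarse.size so no chunk is empty):

```python
import numpy as np
from multiprocessing import Pool

def wfunc_chunk(pair):
    # pair = (fine_chunk, coarse_chunk): whole subarrays, pickled once each
    f, c = pair
    with np.errstate(divide='ignore', invalid='ignore'):
        return f / np.repeat(c, 2)

def wchunked(fine, coarse, processes):
    # Split on coarse indices and take the matching fine slice
    # (2 fine values per coarse value), so the chunks stay aligned.
    idx = np.array_split(np.arange(coarse.size), processes)
    pairs = [(fine[2 * i[0]:2 * (i[-1] + 1)], coarse[i]) for i in idx]
    p = Pool(processes)
    try:
        parts = p.map(wfunc_chunk, pairs)
    finally:
        p.close()
        p.join()
    w = np.concatenate(parts)
    return w[np.isfinite(w)].reshape(-1, 2)

if __name__ == '__main__':
    fine = np.random.uniform(0, 100, 100000)
    fine[fine < 20] = 0
    coarse = np.sum(fine.reshape(-1, 2), axis=1)
    print(wchunked(fine, coarse, 4).shape)
```

With one chunk per process there are only a handful of pickling round-trips instead of tens of thousands, though for an operation this cheap the single-process vectorized version is still likely to win.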
Thanks again!
In addition to the NumPy expression proposed by @behzad.nouri, you can get an extra speedup by using the Pythran compiler:
$ cat w_xx.py
#pythran export w_xx(float[], float[])
import numpy as np
def w_xx(fine, coarse):
    w = (fine / np.repeat(coarse, 2))
    return w[np.isfinite(w)].reshape(-1, 2)
$ python -m timeit -s 'import numpy as np; fine = np.random.uniform(0, 100, 100000); fine[fine<20] = 0; coarse = np.sum(fine.reshape(-1, 2), axis=1); from w_xx import w_xx' 'w_xx(fine, coarse)'
1000 loops, best of 3: 1.5 msec per loop
$ pythran w_xx.py -fopenmp -march=native # yes, this generates parallel code
$ python -m timeit -s 'import numpy as np; fine = np.random.uniform(0, 100, 100000); fine[fine<20] = 0; coarse = np.sum(fine.reshape(-1, 2), axis=1); from w_xx import w_xx' 'w_xx(fine, coarse)'
1000 loops, best of 3: 867 usec per loop
Great! For reference, here are the timings I get:
def w_opt(fine, coarse):
    w = (fine / np.repeat(coarse, 2))
    return w[np.isfinite(w)].reshape(-1, 2)

# using some IPython magic
%timeit w_opt(fine,coarse)
1000 loops, best of 3: 1.88 ms per loop
%timeit w_xx(fine,coarse)
1 loops, best of 3: 342 ms per loop
%timeit wpar(zip(fine,np.repeat(coarse,2)),6) #I've 6 cores at my disposal
1 loops, best of 3: 1.76 s per loop