Python: Segmentation fault with mpi4py
I am using mpi4py to scatter a processing task across a cluster of cores. My code looks like this:
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

'''Perform processing operations with each processor returning
two arrays of equal size, array1 and array2'''

all_data1 = comm.gather(array1, root=0)
all_data2 = comm.gather(array2, root=0)
This returns the following error:
SystemError: Negative size passed to PyString_FromStringAndSize
I believe this error means that the array of data gathered into all_data1 exceeds the maximum size of an array in Python, which is quite possible.
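For context on that negative-size error: mpi4py's lowercase methods (gather, isend) serialize Python objects with pickle, and the resulting message length appears to be handled as a C int, so a payload over 2**31 - 1 bytes overflows into a negative size. A rough back-of-the-envelope check (a standalone sketch, not part of the MPI code; the array and the derived bound are illustrative only):

```python
import pickle

import numpy as np

INT_MAX = 2**31 - 1  # largest length a signed C int can hold (~2 GiB)

# Measure pickle's fixed overhead on a small float64 array:
arr = np.zeros(1000, dtype=np.float64)
pickled = len(pickle.dumps(arr, protocol=2))  # raw bytes plus a small header
overhead = pickled - arr.nbytes

# Roughly how many float64 values fit in one picklable message:
max_elements = (INT_MAX - overhead) // 8
print(max_elements)  # on the order of 2.6e8 doubles per message
```

Gathering more than that onto rank 0 in a single lowercase call would trip the limit, which is consistent with the SystemError above.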
I tried breaking it into smaller chunks, as follows:
comm.isend(array1, dest=0, tag=rank+1)
comm.isend(array2, dest=0, tag=rank+2)

if rank == 0:
    for proc in xrange(size):
        partial_array1 = comm.irecv(source=proc, tag=proc+1)
        partial_array2 = comm.irecv(source=proc, tag=proc+2)
but this returns the following error:
[node10:20210] *** Process received signal ***
[node10:20210] Signal: Segmentation fault (11)
[node10:20210] Signal code: Address not mapped (1)
[node10:20210] Failing at address: 0x2319982b
followed by a long stream of unintelligible path information and a final message:
mpirun noticed that process rank 0 with PID 0 on node node10 exited on signal 11 (Segmentation fault).
This seems to happen no matter how many processors I use.
For similar problems in C, the solution seems to be a subtle change in how the arguments in the recv call are parsed. The syntax is different in Python, so I would appreciate it if someone could clarify why this error appears and how to fix it.

I solved the problem I was having by doing the following:
if rank != 0:
    comm.Isend([array1, MPI.FLOAT], dest=0, tag=77)
    # Non-blocking send; allows code to continue before data is received.

if rank == 0:
    final_array1 = array1
    for proc in xrange(1, size):
        partial_array1 = np.empty(len(array1), dtype=float)
        comm.Recv([partial_array1, MPI.FLOAT], source=proc, tag=77)
        # A blocking receive is necessary here to avoid a segfault.
        final_array1 += partial_array1

if rank != 0:
    comm.Isend([array2, MPI.FLOAT], dest=0, tag=135)

if rank == 0:
    final_array2 = array2
    for proc in xrange(1, size):
        partial_array2 = np.empty(len(array2), dtype=float)
        comm.Recv([partial_array2, MPI.FLOAT], source=proc, tag=135)
        final_array2 += partial_array2

comm.barrier()  # This barrier call resolves the segfault.

if rank == 0:
    return final_array1, final_array2
else:
    return None
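For what it's worth, the accumulation rank 0 performs above is an elementwise sum of the per-rank arrays, which is exactly the operation a buffer-based comm.Reduce with op=MPI.SUM computes in a single call. The arithmetic can be sketched without MPI (the sample arrays below are made up for illustration):

```python
import numpy as np

# Pretend these are array1 as produced on ranks 0, 1 and 2:
per_rank = [np.array([1.0, 2.0]),
            np.array([3.0, 4.0]),
            np.array([5.0, 6.0])]

# Rank 0 starts from its own array and folds in each received one,
# mirroring the xrange(1, size) loop in the answer:
final_array1 = per_rank[0].copy()
for partial in per_rank[1:]:
    final_array1 += partial

print(final_array1)  # same result as np.sum(per_rank, axis=0)
```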
Could you pickle and gzip the arrays on one side and gunzip/unpickle them on the other? Correct me if I'm wrong, but isn't that how mpi4py works out of the box? As far as I know, the data being communicated is serialized behind the scenes. In theory, it should... Can you send anything at all from one side to the other? Are all the entities in the cluster similar? I just tried sending a test object and printing it on the other side, but it showed up as
How do I unpickle this object? That may be the cause of the problem. How are you sending and printing it?
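The pickle/gzip round trip suggested in the comments can be checked without MPI at all; a minimal sketch (the array contents are arbitrary):

```python
import gzip
import pickle

import numpy as np

arr = np.arange(5, dtype=np.float64)

# One side: serialize the array, then compress the byte string to send.
blob = gzip.compress(pickle.dumps(arr, protocol=2))

# Other side: decompress the received bytes, then unpickle the array.
restored = pickle.loads(gzip.decompress(blob))

print(restored)  # identical to arr
```

If this round trip works locally but the object still prints wrong after an MPI send, the problem is more likely in how the bytes are transferred than in the pickling itself.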