
Overlapping write tasks with Python multiprocessing (pool.map)


When I run the code below with multiprocessing, the writes from the different processes sometimes overlap in the output files.

def spectrum(i):
    # Every worker writes directly to file handles shared with the other
    # workers -- this is what lets the records interleave.
    for j in range(num_x):
        coordinate = data[:, j, i]
        filtered = filter(lambda a: a != 0, coordinate)  # a list on Python 2
        occupancy = float(len(filtered)) / framespfile
        output = str([j, i]) + "\n" + str(filtered) + "\n"
        if filtered == [] or filtered[0] > 500:
            badpixelfile.write(output)
        else:
            coordinatefile.write(output)


pool2 = multiprocessing.Pool(multiprocessing.cpu_count())
pool2.map(spectrum, range(num_y))
pool2.close()
pool2.join()
It should log records like these:

 [14,0]
 [50, 51, 84]
 [0, 314]
 [60, 74, 12, 202, 129]
But sometimes the processes overlap and the file ends up looking like this (it only happens occasionally, but it breaks the analysis):


[149, 27]
[27, 34, 26, 25, 19, 45, 32, 36, 46, 29, 25, 25, 40, 62, 24, 31, 23, 46, 33, 35, 60, 33, 8, 24, 49, 29, 29, 42, 8, 22, 31, 28, 25, 25, 56, 32, 31, 27, 11, 20, 29, 23, 51, 28, 31, 29, 28, 30, 23, 16, 34, 36, 25, 17, 25, 19, 19, 51, 27, 37, 9, 32, 26, 28, 27, 3, 44, 4, 38, 20, 34, 28, 22, 26, 26, 19, 21, 25, 25, 48, 24, 29, 22, 20, 23, 29, 15, 32, 42, 3, 23, 26, 34, 28, 26, 39, 17, [0, 123]
[20, 43, 33, 34, 18, 44, 15, 22, 33, 20, 45, 30, 21, 33, 32, 43, 30, 8, 37, 54, 9, 46, 33, 16, 27, 29, 31, 47, 26, 38, 40, 29, 34, 38, 17, 33, 47, 28, 24, 33, 40, 47, 16, 32, 33, 21, 49, 34, 26, 21, 47, 46, 49, 13, 62, 62, 31, 41, 14, 65, 36, 49, 27, 38, 44, 54, 55, 64, 32, 50, 28, 34, 41, 49, 33, 40, 28, 32, 31, 56, 16, 35, 37, 50, 33, 41, 38, 26, 41, 26, 28, 25, 37, 27, 20, 47, 31, 35, 28, 43, 48, 37, 31, 24, 34, 36, 41, 19, 41, 41, 3, 36]
[1, 123]

So the record for [149, 27] was never finished: the process handling [0, 123] began writing before the [149, 27] record was closed.

Have you tried using a multiprocessing.Lock to protect the calls to write? In general, though, you are better off not writing the files from the child processes at all: the I/O cannot be parallelized, so doing the writes there gains you no performance. It is usually better to have each child return a tuple of j, i, and filtered, and then write the files in the parent process. If filtered can be very large, that may add too much IPC overhead, in which case synchronizing the file writes inside each child is the better option.

Thanks! That worked for me!
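For reference, here is a minimal sketch of the return-the-results approach, assuming the same globals as in the question (data, num_x, num_y, badpixelfile, coordinatefile) and Python 2.7, where filter returns a list; the unused occupancy calculation is omitted:

import multiprocessing

def spectrum(i):
    # Do no I/O in the worker: collect the records and return them instead.
    rows = []
    for j in range(num_x):
        coordinate = data[:, j, i]
        filtered = filter(lambda a: a != 0, coordinate)  # a list on Python 2
        bad = filtered == [] or filtered[0] > 500
        rows.append((j, i, filtered, bad))
    return rows

pool2 = multiprocessing.Pool(multiprocessing.cpu_count())
try:
    # Only the parent touches the files, so records can no longer interleave,
    # and pool.map returns the results in order.
    for rows in pool2.map(spectrum, range(num_y)):
        for j, i, filtered, bad in rows:
            output = str([j, i]) + "\n" + str(filtered) + "\n"
            (badpixelfile if bad else coordinatefile).write(output)
finally:
    pool2.close()
    pool2.join()

And a sketch of the multiprocessing.Lock variant, in case the filtered lists are too large to ship back over IPC. The lock has to reach the workers through the Pool initializer, since a Lock cannot be passed as a pool.map argument; flushing before releasing the lock keeps buffered output from mixing. Both sketches assume a fork-based platform, where the child processes inherit the already-open file handles:

import multiprocessing

def init_worker(shared_lock):
    # A Lock cannot be sent through pool.map arguments (it is not picklable),
    # so each worker receives it once, via the Pool initializer.
    global write_lock
    write_lock = shared_lock

def spectrum(i):
    for j in range(num_x):
        coordinate = data[:, j, i]
        filtered = filter(lambda a: a != 0, coordinate)
        output = str([j, i]) + "\n" + str(filtered) + "\n"
        target = badpixelfile if (filtered == [] or filtered[0] > 500) else coordinatefile
        with write_lock:      # only one worker writes at a time
            target.write(output)
            target.flush()    # flush before releasing so buffers cannot mix

write_lock = multiprocessing.Lock()
pool2 = multiprocessing.Pool(multiprocessing.cpu_count(),
                             initializer=init_worker, initargs=(write_lock,))
pool2.map(spectrum, range(num_y))
pool2.close()
pool2.join()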