Python: using a shared list with Pathos multiprocessing raises a "digest sent was rejected" error

I am trying to use multiprocessing to generate complex, unpicklable objects, as in the snippet below:

from multiprocessing import Manager
from pathos.multiprocessing import ProcessingPool

class Facility:

    def __init__(self):
        self.blocks = Manager().list()

    def __process_blocks(self, block):
        designer = block["designer"]
        apply_terrain = block["terrain"]
        block_type = self.__block_type_to_string(block["type"])
        block = designer.generate_block(block_id=block["id"],
                                        block_type=block_type,
                                        anchor=Point(float(block["anchor_x"]), float(block["anchor_y"]),
                                                     float(block["anchor_z"])),
                                        pcu_anchor=Point(float(block["pcu_x"]), float(block["pcu_y"]), 0),
                                        corridor_width=block["corridor"],
                                        jb_height=block["jb_connect_height"],
                                        min_boxes=block["min_boxes"],
                                        apply_terrain=apply_terrain)
        self.blocks.append(block)

    def design(self, apply_terrain=False):
        designer = FacilityBuilder(string_locator=self._string_locator, string_router=self._string_router,
                                   box_router=self._box_router, sorter=self._sorter,
                                   tracker_configurator=self._tracker_configurator, config=self._config)
        blocks = [block.to_dict() for index, block in self._store.get_blocks().iterrows()]
        for block in blocks:
            block["designer"] = designer
            block["terrain"] = apply_terrain

        with ProcessingPool() as pool:
            pool.map(self.__process_blocks, blocks)
(This is hard to reproduce with simpler code, so I am showing the actual code.)

I need to update a shared variable, so I initialized a class-level variable using multiprocessing.Manager, like so:

self.blocks = Manager().list()
This results in the "digest sent was rejected" error from the title.

As a last resort, I tried using Python's standard ThreadPool implementation to sidestep the pickling problem, but that did not go well either. I have read many similar questions but have not found a solution to this one. Is there something wrong with the way dill and pathos interface with multiprocessing.Manager?
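For contrast, here is a minimal sketch of the pattern the question is aiming for, using only the standard library (the function and variable names are mine, for illustration): Manager proxies are picklable, so they can be passed to multiprocessing.Pool workers, which can then append to the shared list.

from multiprocessing import Manager, Pool


def append_square(args):
    # Unpack the shared proxy and a value; the proxy reconnects to the
    # manager process from inside the worker.
    shared, x = args
    shared.append(x * x)


if __name__ == '__main__':
    manager = Manager()
    shared = manager.list()
    with Pool() as pool:
        pool.map(append_square, [(shared, x) for x in range(5)])
    print(list(shared))  # [0, 1, 4, 9, 16], order may vary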

EDIT: I reproduced the problem with the sample code below:

import os
import math
from multiprocessing import Manager
from pathos.multiprocessing import ProcessingPool


class MyComplex:

    def __init__(self, x):
        self._z = x * x

    def me(self):
        return math.sqrt(self._z)


class Starter:

    def __init__(self):
        manager = Manager()
        self.my_list = manager.list()

    def _f(self, value):
        print(f"{value.me()} on {os.getpid()}")
        # Append the computed value to the Manager-backed shared list
        self.my_list.append(value.me())

    def start(self):
        names = [MyComplex(x) for x in range(100)]

        with ProcessingPool() as pool:
            pool.map(self._f, names)


if __name__ == '__main__':
    starter = Starter()
    starter.start()

The error occurs with the addition of self.my_list = manager.list(), so I have worked around it for now. I would still be glad if someone like mmckerns, or anyone with more multiprocessing knowledge than me, could comment on why this is a solution.

The problem seems to be that Manager().list() is declared in __init__. The following code works without any issues:

import os
import math
from multiprocessing import Manager
from pathos.multiprocessing import ProcessingPool


class MyComplex:

    def __init__(self, x):
        self._z = x * x

    def me(self):
        return math.sqrt(self._z)


class Starter:

    def _f(self, value):
        print(f"{value.me()} on {os.getpid()}")
        return value.me()

    def start(self):
        # The Manager list is local to start(), so it is not part of the
        # dependency chain serialized with self._f (see the comments below).
        manager = Manager()
        my_list = manager.list()
        names = [MyComplex(x) for x in range(100)]

        with ProcessingPool() as pool:
            my_list.append(pool.map(self._f, names))
        print(my_list)


if __name__ == '__main__':
    starter = Starter()
    starter.start()

Here I declare the list as local to the ProcessingPool operation. If I want to, I can assign the result to a class-level variable afterwards.
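A minimal sketch of that follow-up step (the results attribute name is my own, for illustration): collect the mapped values locally, then attach them to the instance only after the pool has finished, in the parent process.

import math
from pathos.multiprocessing import ProcessingPool


class MyComplex:

    def __init__(self, x):
        self._z = x * x

    def me(self):
        return math.sqrt(self._z)


class Starter:

    def _f(self, value):
        return value.me()

    def start(self):
        names = [MyComplex(x) for x in range(100)]

        with ProcessingPool() as pool:
            results = pool.map(self._f, names)
        # Assign in the parent process only, after the workers are done,
        # so no Manager proxy or shared state ever needs to be pickled.
        self.results = results


if __name__ == '__main__':
    starter = Starter()
    starter.start()
    print(starter.results)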

Hi @Paul: the latter code passes objects with a simpler dependency chain, so it has a better chance of succeeding. If you want to see what is being passed to the serializer, you can use: import dill; dill.detect.trace(True). It will print the dependency chain as the object is serialized.

Thanks @MikeMcKerns. By the way, dill does a great job; it opens a lot of doors that pickle's limitations had kept closed.
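A minimal sketch of that tracing suggestion, reusing the MyComplex class from the question for illustration:

import math

import dill


class MyComplex:

    def __init__(self, x):
        self._z = x * x

    def me(self):
        return math.sqrt(self._z)


dill.detect.trace(True)              # print each object visited while pickling
payload = dill.dumps(MyComplex(3))   # emits the dependency chain being serialized
print(dill.loads(payload).me())      # 3.0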