
Python "destructor" to kill the child process in a module

Tags: python, python-2.7, multiprocessing

I am developing MyModule, which reads a large amount of real-time data from a C library and writes it to PyTables. Because the amount of data to collect is large (up to 300 MB/s), PyTables is accessed on a separate process, and the data packets are pushed onto a multiprocessing queue:

import multiprocessing
import Queue  # Python 2 stdlib module; provides the Empty exception raised below
import time

import numpy
import tables


class Tape(object):

    def __init__(self):
        self.writeq = multiprocessing.Queue()  # data packets to be written
        self.killq = multiprocessing.Queue()   # shutdown commands
        self.recorder = MyRecorder(self.writeq, self.killq)
        self.recorder.start()

    def Record(self, recording):
        if isinstance(recording, numpy.ndarray):
            self.writeq.put(recording)

    def Close(self):
        self.killq.put("time to go")
        self.recorder.join()
        for q in (self.writeq, self.killq):
            q._buffer.clear()  # drop unsent items so join_thread() cannot block
            q.close()
            q.join_thread()


class MyRecorder(multiprocessing.Process):

    def __init__(self, writeq, killq):
        self.writeq = writeq
        self.killq = killq
        # mode="w" is needed to create arrays; the default mode is read-only
        self.handle = tables.openFile("/tmp/recoding.h5", mode="w")
        self.number = 0
        super(MyRecorder, self).__init__()

    def run(self):
        running = True
        while running:
            record = None
            command = None
            try:
                record = self.writeq.get(block=False)
                if isinstance(record, numpy.ndarray):
                    a = self.handle.createArray("/", "record%08d" % self.number, record)
                    a._f_close(True)
                    self.number += 1
            except Queue.Empty:
                pass

            try:
                command = self.killq.get(block=False)
                if isinstance(command, basestring) and command == "time to go":
                    running = False
            except Queue.Empty:
                pass

            # nothing arrived on either queue: sleep briefly to avoid busy-waiting
            if record is None and command is None:
                time.sleep(0.001)

        self.handle.flush()
        self.handle.close()
Users would use my module like this:

tape = MyModule.Tape()
device = MyOtherModule.Device()
while True:
    record = device.get_latest_data_as_numpy_array()
    if record is not None:
        tape.Record(record)
    else:
        break

tape.Close()
exit(0)
The problem

I want to make sure that, if the user forgets to call tape.Close(), the process inside Tape is still terminated when exit is called or when the program ends in some other way. The tape.Close() call has to happen somehow. (A sketch of multiprocessing's daemon flag, the most direct built-in mechanism, follows.)
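A minimal sketch of that daemon flag, for reference: a daemonic child is terminated automatically when the parent process exits, even if Close() is never called. The trade-off is that the termination is abrupt, so the flush() and close() at the end of run() never execute and the HDF5 file can be left incomplete.

class Tape(object):

    def __init__(self):
        self.writeq = multiprocessing.Queue()
        self.killq = multiprocessing.Queue()
        self.recorder = MyRecorder(self.writeq, self.killq)
        # Daemonic children are killed when the parent exits;
        # no clean flush happens in that case.
        self.recorder.daemon = True
        self.recorder.start()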

Adding a __del__ method to the Tape class does not work, because it is never called. I would also like to avoid the with tape: context-manager syntax, since several Tape objects may be created and in use at the same time.

I have added a watchdog thread that runs a heartbeat between Tape and MyRecorder, but it only helps when one of them crashes and is torn down by an exception (a heartbeat-free variant based on the parent PID is sketched below).

The device.get_latest_data_as_numpy_array call can take several seconds to return, so adding a timer to MyRecorder that checks whether no data has arrived for x seconds is not really a solution.
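A minimal sketch of that heartbeat-free watchdog variant, assuming a POSIX system (os.getppid is Unix-only on Python 2.7): if the Tape process dies, the orphaned child is re-parented, so a change in the parent PID tells MyRecorder to shut down cleanly. Only the additions to run() are shown.

import os

class MyRecorder(multiprocessing.Process):

    def run(self):
        parent_pid = os.getppid()  # PID of the process that started us
        running = True
        while running:
            # ... existing writeq/killq handling as above ...

            # If the parent dies, the orphaned child is re-parented and
            # getppid() changes; treat that like a "time to go" command.
            if os.getppid() != parent_pid:
                running = False

        self.handle.flush()
        self.handle.close()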
Comments:

Torxed: Use a context manager and force the user to write with MyModule.Tape() as tape:

Maciek: You could also look at the atexit module.

OP: @Torxed, thanks, but as mentioned above, context managers do not work for me.

OP: @Maciek, atexit only works with static functions... I would have to maintain a global list of references to the Tape objects inside the module. Ugly, but possibly useful.
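For completeness, a minimal sketch of the atexit idea from that last comment; _open_tapes and _close_open_tapes are illustrative names, and weakref.WeakSet keeps the registry from prolonging the tapes' lifetimes. Note that atexit handlers run only on a normal interpreter exit, not when the process is killed by a signal or via os._exit().

import atexit
import weakref

_open_tapes = weakref.WeakSet()  # hypothetical registry of live Tape objects

def _close_open_tapes():
    # best-effort cleanup on normal interpreter exit
    for tape in list(_open_tapes):
        try:
            tape.Close()
        except Exception:
            pass

atexit.register(_close_open_tapes)

class Tape(object):

    def __init__(self):
        # ... existing queue and recorder setup as above ...
        _open_tapes.add(self)

    def Close(self):
        _open_tapes.discard(self)
        # ... existing shutdown logic as above ...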