Writing to files simultaneously with Python multiprocessing
I'm working on a project where I have about six sensors feeding data into a BeagleBone Black, and I want to continuously save that data to six different files. From another SO question () I learned that the multiprocessing module can do this for me, but when I run my new code I only get one file instead of six. How can I modify this code to get the six result files I need?

*I have edited my code per skrrgwasme's suggestion below to include a Manager, but now the code runs and produces nothing. No errors, no files. It just runs.

The code:
    import Queue
    import multiprocessing
    import time

    def emgacq(kill_queue, f_name, adcpin):
        with open(f_name, '+') as f:
            while True:
                try:
                    val = kill_queue.get(block=False)
                    if val == STOP:
                        return
                except Queue.Empty:
                    pass
                an_val = ADC.read(adcpin) * 1.8
                f.write("{}\t{}\n".format(ms, an_val))

    def main():
        # Timing stuff
        start = time.time()
        elapsed_seconds = time.time() - start
        ms = elapsed_seconds * 1000

        # Multiprocessing settings
        pool = multiprocessing.Pool()
        m = multiprocessing.Manager()
        kill_queue = m.Queue()

        # All the arguments we need to run thru emgacq()
        arg_list = [
            (kill_queue, 'HamLeft', 'AIN1'),
            (kill_queue, 'HamRight', 'AIN2'),
            (kill_queue, 'QuadLeft', 'AIN3'),
            (kill_queue, 'QuadRight', 'AIN4'),
            (kill_queue, 'GastLeft', 'AIN5'),
            (kill_queue, 'GastRight', 'AIN6'),
        ]
        for a in arg_list:
            pool.apply_async(emgacq, args=a)
        try:
            while True:
                time.sleep(60)
        except KeyboardInterrupt:
            for a in arg_list:
                kill_queue.put(STOP)
            pool.close()
            pool.join()
            raise
            f.close()

    if __name__ == "__main__":
        main()
Your main problem is that your list of arguments for the subprocess function is incorrect:

    f_list = [
        emgacq(kill_queue, 'HamLeft', 'AIN1'),
        # this calls the emgacq function right here - blocking the rest of your
        # script's execution

Also, your apply_async calls are wrong:

    for f in f_list:
        pool.apply_async(f, args=(kill_queue))
        # f is not a function here - the arguments to the apply_async function
        # should be the one function you want to call followed by a tuple of
        # arguments that should be provided to it

You need something like this, which also includes a Manager for the queue (see) and puts all of your code into a main function:

    # put your imports here
    # followed by the definition of the emgacq function

    def main():
        # Timing stuff
        start = time.time()
        elapsed_seconds = time.time() - start
        ms = elapsed_seconds * 1000

        pool = multiprocessing.Pool()
        m = multiprocessing.Manager()
        kill_queue = m.Queue()
        arg_list = [
            (kill_queue, 'HamLeft', 'AIN1'),
            (kill_queue, 'HamRight', 'AIN2'),
            (kill_queue, 'QuadLeft', 'AIN3'),
            (kill_queue, 'QuadRight', 'AIN4'),
            (kill_queue, 'GastLeft', 'AIN5'),
            (kill_queue, 'GastRight', 'AIN6'),
        ]
        for a in arg_list:
            pool.apply_async(emgacq, args=a)
            # this will call the emgacq function with the arguments provided in "a"

    if __name__ == "__main__":
        # you want to have all of your code in a function, because the workers
        # will start by importing the main module they are executing from,
        # and you don't want them to execute that code all over again
        main()
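The pattern above can be exercised end to end without the sensor hardware. The sketch below is an illustration, not the original code: since `ADC.read` only exists on the BeagleBone, a dummy reading stands in for it, and the file names, `STOP` sentinel value, worker count, and sleep durations are assumptions chosen for the demo. It targets Python 3, where the Python 2 `Queue` module is named `queue`:

```python
import multiprocessing
import queue   # named "Queue" on Python 2
import random
import time

STOP = "STOP"  # sentinel telling each worker to exit


def worker(kill_queue, f_name):
    # Append dummy readings to f_name until a STOP sentinel arrives.
    with open(f_name, "a") as f:
        while True:
            try:
                if kill_queue.get(block=False) == STOP:
                    return
            except queue.Empty:
                pass
            # stand-in for ADC.read(adcpin) * 1.8
            f.write("{}\n".format(random.random()))
            time.sleep(0.01)


def main():
    pool = multiprocessing.Pool(2)        # one process per file in this demo
    m = multiprocessing.Manager()
    kill_queue = m.Queue()                # proxy queue: safe to pass to pool tasks
    names = ["out1.txt", "out2.txt"]
    for name in names:
        pool.apply_async(worker, args=(kill_queue, name))
    time.sleep(0.5)                       # let the workers write for a while
    for _ in names:
        kill_queue.put(STOP)              # one sentinel per worker
    pool.close()
    pool.join()


if __name__ == "__main__":
    main()
```

After it runs, each file contains its own stream of readings, which is the six-files-at-once behavior the question is after, scaled down to two workers.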
Given the problems you've been having on both of these questions, I strongly recommend working through some basic Python tutorials. It seems like you're a bit confused about fundamental concepts like function calls, variable assignment, and argument passing. You'll have much more success if you can get a handle on those basics before diving into your next script/program.

I don't understand what's happening here. Now there seems to be an inheritance problem. Specifically, "Queue objects should only be shared between processes through inheritance." I've edited the main post so you can see what happens when I try a Manager.

Since the definition of the queue has changed, do I need to adjust the places where kill_queue is used? (kill_queue = m.Queue() instead of kill_queue = multiprocessing.Queue())
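The quoted error comes from trying to pickle a plain multiprocessing.Queue as a pool-task argument. A Manager queue is a proxy object, so it pickles fine and can be passed through apply_async exactly as the answer's code does; the places that call kill_queue.get() and kill_queue.put() don't need to change. A minimal sketch of the difference (the value 42 is just an illustration):

```python
import multiprocessing


def get_one(q):
    # Runs in a worker process; reads one item from the shared queue.
    return q.get()


def main():
    m = multiprocessing.Manager()
    mq = m.Queue()  # a picklable proxy, safe to pass as a pool-task argument
    mq.put(42)
    with multiprocessing.Pool(1) as pool:
        return pool.apply(get_one, (mq,))
    # By contrast, pool.apply(get_one, (multiprocessing.Queue(),)) fails with
    # "RuntimeError: Queue objects should only be shared between processes
    # through inheritance", because a plain Queue cannot be pickled.


if __name__ == "__main__":
    print(main())  # prints 42
```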