Implement coroutines in Python

Say I have a C++ function result_type compute(input_type input), which I have made available to Python using Cython. My Python code performs multiple computations like this:

def compute_total_result():
  inputs = ...
  total_result = ...

  for input in inputs:
    result = compute_python_wrapper(input)
    update_total_result(result)

  return total_result

Since the computation takes long, I implemented a C++ thread pool (like this) and wrote a function std::future&lt;result_type&gt; compute_threaded(input_type input), which returns a future that becomes ready as soon as the thread pool finishes executing the computation.

What I would like to do is to use this C++ function from Python as well. A simple way would be to wrap the std::future, including its get() function, and wait for all the results like this:

def compute_total_results_parallel():
  inputs = ...
  total_result = ...
  futures = []
  for input in inputs:
    futures.append(compute_threaded_python_wrapper(input))

  for future in futures:
    update_total_result(future.get())

  return total_result

I suppose this would be good enough in this case, but it quickly becomes very complicated, because I have to pass futures around.

However, I think that conceptually, waiting for these C++ results is no different from waiting for file or network I/O. To facilitate I/O operations, the Python developers introduced the async / await keywords. If my compute_threaded_python_wrapper were part of asyncio, I could simply rewrite it as:


async def compute_total_results_async():
  inputs = ...
  total_result = ...
  for input in inputs:
    result = await compute_threaded_python_wrapper(input)
    update_total_result(result)

  return total_result
I could then execute the whole code via result = asyncio.run(compute_total_results_async()).

There are many tutorials about asynchronous programming in Python, but most of them deal with using coroutines whose foundation appears to be some call into the asyncio package, mostly invoking asyncio.sleep(delay) as a stand-in for I/O.


My question is: (how) can I implement a coroutine in Python that allows Python to await the wrapped future object (there is some mention of an __await__ method returning an iterator)?

First, a correction of an error in the question:

If my compute_threaded_python_wrapper were part of asyncio, I could simply rewrite it as [...]

The rewrite is incorrect: await means "wait until the computation is done", so the loop as written would execute the code sequentially. A rewrite that actually runs the tasks in parallel would be something like:

# a direct translation of the "parallel" version
async def compute_total_results_async():
    inputs = ...
    total_result = ...
    tasks = []
    # first spawn all the tasks
    for input in inputs:
        tasks.append(
            asyncio.create_task(compute_threaded_python_wrapper(input))
        )
    # and then await them
    for task in tasks:
        update_total_result(await task)
    return total_result
This spawn-all-then-await-all pattern is so common that asyncio provides a helper function, asyncio.gather(), which makes it much shorter, especially when combined with a list comprehension:

# a more idiomatic version
async def compute_total_results_async():
    inputs = ...
    total_result = ...
    results = await asyncio.gather(
        *[compute_threaded_python_wrapper(input) for input in inputs]
    )
    for result in results:
        update_total_result(result)
    return total_result
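As a quick check that gather really runs the computations concurrently, here is a runnable toy version; fake_threaded_wrapper is a hypothetical stand-in that sleeps instead of computing, so the timing is observable:

```python
import asyncio
import time

async def fake_threaded_wrapper(input):
    # hypothetical stand-in for the real wrapper: sleep instead of computing
    await asyncio.sleep(0.1)
    return input * 2

async def compute_total_results_async():
    inputs = range(10)
    # spawn all ten "computations" and await them together
    results = await asyncio.gather(
        *[fake_threaded_wrapper(i) for i in inputs]
    )
    return sum(results)

start = time.monotonic()
total = asyncio.run(compute_total_results_async())
elapsed = time.monotonic() - start
print(total)          # 90 (= 2 * sum(range(10)))
print(elapsed < 0.5)  # True: ~0.1 s overall, not 10 * 0.1 s sequentially
```

Had the wrapper been awaited inside the loop instead, the run would take the full second.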
With that out of the way, we can proceed to the main question:

My question is: (how) can I implement a coroutine in Python that allows Python to await the wrapped future object (there is some mention of an __await__ method returning an iterator)?

Yes, awaitable objects are implemented using iterators that yield to indicate suspension. But that is way too low-level a tool for what you actually need. You don't need just any awaitable, but one that works with the asyncio event loop, which has specific expectations of the underlying iterator. You also need a mechanism to resume the awaiting coroutine when the result is ready, and there again you depend on asyncio.
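To see that low-level mechanism in isolation anyway, here is a minimal hand-rolled awaitable. This is only a sketch: ManualAwaitable and consumer are hypothetical names, and the coroutine is driven by hand via send() rather than by an event loop:

```python
class ManualAwaitable:
    """Awaitable whose result is supplied from the outside."""
    def __init__(self):
        self.result = None
        self.done = False

    def set_result(self, value):
        self.result = value
        self.done = True

    def __await__(self):
        # yield to signal suspension until a result is available
        while not self.done:
            yield
        return self.result

async def consumer(awaitable):
    value = await awaitable
    return value + 1

# drive the coroutine by hand, playing the role of an event loop
aw = ManualAwaitable()
coro = consumer(aw)
coro.send(None)       # runs until the awaitable yields (suspends)
aw.set_result(41)
try:
    coro.send(None)   # resumes; the coroutine now finishes
except StopIteration as e:
    result = e.value
print(result)  # 42
```

A real event loop does essentially this, plus scheduling: it keeps calling send() on suspended coroutines when the things they await become ready.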

Asyncio已经提供了可以从外部分配值的可等待对象:。asyncio未来表示将在将来某个时间点可用的异步值。它们与C++的期货相关,但在语义上不等同于C++期货,不应该与<代码>并发。期货< /C> > STDLIB模块。< /P> 要创建一个被另一个线程中发生的事件激活的等待对象,您需要创建一个未来,然后启动线程外任务,指示它在完成执行时将未来标记为已完成。由于asyncio未来不是线程安全的,因此必须使用asyncio为此类情况提供的事件循环方法来完成。在Python中,可以这样做:


def run_async():
    loop = asyncio.get_event_loop()
    future = loop.create_future()
    def on_done(result):
        # when done, notify the future in a thread-safe manner
        loop.call_soon_threadsafe(future.set_result, result)
    # start the worker in a thread owned by the pool
    pool.submit(_worker, on_done)
    # returning a future makes run_async() awaitable, and
    # passable to asyncio.gather() etc.
    return future

def _worker(on_done):
    # this runs in a different thread
    ... processing goes here ...
    result = ...
    on_done(result)

In your case the worker would presumably interface with C++ through Cython.
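Putting the pieces together, here is a self-contained runnable sketch of the same pattern, using concurrent.futures.ThreadPoolExecutor as a hypothetical stand-in for the C++ thread pool (the real code would dispatch to the Cython wrapper instead of squaring a number):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def _worker(input, on_done):
    # this runs in a pool thread; simulate the long C++ computation
    result = input * input
    on_done(result)

def run_async(input):
    loop = asyncio.get_event_loop()
    future = loop.create_future()
    def on_done(result):
        # notify the asyncio future in a thread-safe manner
        loop.call_soon_threadsafe(future.set_result, result)
    # start the worker in a thread owned by the pool
    pool.submit(_worker, input, on_done)
    # returning a future makes run_async() awaitable
    return future

async def main():
    # gather preserves input order regardless of completion order
    return await asyncio.gather(*[run_async(i) for i in range(5)])

squares = asyncio.run(main())
print(squares)  # [0, 1, 4, 9, 16]
```

The key point is that the pool thread never touches the asyncio future directly; it only schedules set_result onto the event loop's own thread.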

I appreciate this answer, and of course I shouldn't await the individual results. I tried to assemble an MWE following your suggestion. The problem is that, as I understand it, the C++ task assigned to the thread pool needs to call back into Python in order to set the future, correct? When I try that, I see Fatal Python error: PyThreadState_Get: no current thread.

@hfhc2 Correct. You also need to acquire the GIL using PyGILState_Ensure before doing anything with Python from a non-Python thread (and release it with PyGILState_Release afterwards); see the documentation for details. If you use a C++ wrapper around the Python C API, it might also provide an RAII-style guard.

@hfhc2 "of course I shouldn't await the individual results" - apologies if I pointed out something you already knew. Many asyncio beginners, especially those without prior exposure to async/await in other languages, expect await to automatically parallelize their code, and are unpleasantly surprised to find that it does almost exactly the opposite.

Thanks for your help, I have added a working example here: