
Python 3.x RuntimeError: Event loop is closed


I'm doing something wrong with aiohttp/asyncio. The code below works fine if I only run it as a one-off, but it fails when I try to call run_my_job() in a loop from another file:

main.py
========================================
count = 0
batch_count = math.ceil(abc.get_count() / 100)
print("there are {} batches to complete.".format(batch_count))
while count < batch_count:
    print("starting batch {}...".format(count))
    abc.run_my_job()
    print("batch {} completed...".format(count))
    count += 1


abc.py
===============================
def run_my_job(self):
    self.queue_manager(self.do_stuff(all_the_tasks))

def queue_manager(self, method):
    print('starting event queue')
    loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(method)
    loop.run_until_complete(future)
    loop.close()

async def async_post(self, resource, session, data):
    async with session.post(self.api_attr.api_endpoint + resource, headers=self.headers, data=data) as response:
        resp = await response.read()
    return resp

async def do_stuff(self, data):
    print('queueing tasks')

    tasks = []
    async with aiohttp.ClientSession() as session:
        for row in data:
            task = asyncio.ensure_future(self.async_post('my_api_endpoint', session, row))
            tasks.append(task)
        result = await asyncio.gather(*tasks)
        self.load_results(result)
# goes on to load_results() method that parses json and updates the DB.

I get the following error:

Traceback (most recent call last):
  File "C:/usr/PycharmProjects/api_framework/api_framework.py", line 37, in <module>
starting event queue
    abc.run_my_job()
  File "C:\usr\PycharmProjects\api_framework\api\abc\abc.py", line 77, in run_eligibility
    self.queue_manager(self.verify_eligibility(json_data))
  File "C:\usr\PycharmProjects\api_framework\api\abc\abc.py", line 187, in queue_manager
    future = asyncio.ensure_future(method)
  File "C:\Python36x64\lib\asyncio\tasks.py", line 512, in ensure_future
    task = loop.create_task(coro_or_future)
  File "C:\Python36x64\lib\asyncio\base_events.py", line 282, in create_task
    self._check_closed()
  File "C:\Python36x64\lib\asyncio\base_events.py", line 357, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
sys:1: RuntimeWarning: coroutine 'Consumer.run_my_job' was never awaited
Look at this function:

def queue_manager(self, method):
    print('starting event queue')
    loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(method)
    loop.run_until_complete(future)
    loop.close()
This is the method you call to schedule each job, and at the end of it you close the event loop. So after the first job runs, the event loop is closed.

If you then try to run more jobs, you are obviously trying to run them on a closed event loop (and you do have a loop that runs more jobs). Hence the error:

RuntimeError: Event loop is closed
Just remove the loop.close() call and the problem should go away.
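The failure is easy to reproduce in isolation. A minimal sketch of the same mistake (the job coroutine here is a hypothetical stand-in, not code from the question):

```python
import asyncio

async def job(n):
    # stand-in for the real work done by do_stuff()
    return n * 2

loop = asyncio.new_event_loop()
print(loop.run_until_complete(job(1)))  # first job runs fine
loop.close()                            # ...but now the loop is gone

try:
    # second job: the same situation queue_manager() creates
    loop.run_until_complete(job(2))
except RuntimeError as err:
    print(err)  # Event loop is closed
```

Note that this also triggers the same "coroutine ... was never awaited" RuntimeWarning seen in the traceback, because the second coroutine is never scheduled.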


I'm not sure that's enough to make your program work correctly, since you haven't given us anything close to a runnable example. Also, in your actual code run_my_job is apparently a coroutine, but it isn't one in the code you posted here. I don't see any other obvious mistakes in what you posted, but I don't know what that discrepancy means.

That fixed it. I was under the impression I needed to close the loop to prevent problems. So do I need to close the loop at all? – @hyphen You do want to close the loop, but at the end of your program after all the jobs have finished, not after each job.
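In other words, keep one loop alive for the whole batch run and close it exactly once, after the last job. A minimal sketch of that pattern (do_batch is a hypothetical stand-in for the work run_my_job does):

```python
import asyncio

async def do_batch(n):
    # stand-in for do_stuff(): post a batch and collect the responses
    await asyncio.sleep(0)
    return "batch {} completed".format(n)

loop = asyncio.new_event_loop()
try:
    for n in range(3):
        # reuse the same loop for every job...
        print(loop.run_until_complete(do_batch(n)))
finally:
    loop.close()  # ...and close it once, after all jobs are done
```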