Python 3.x: Fetching multiple URLs with aiohttp in Python

Tags: python-3.x, api, web-scraping, python-asyncio, aiohttp

In a previous post, a user suggested the following way of using aiohttp to fetch multiple URLs (API calls):

import asyncio
import aiohttp


url_list = ['https://api.pushshift.io/reddit/search/comment/?q=Nestle&size=30&after=1530396000&before=1530436000', 'https://api.pushshift.io/reddit/search/comment/?q=Nestle&size=30&after=1530436000&before=1530476000']

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.json()['data']


async def fetch_all(session, urls, loop):
    results = await asyncio.gather(*[loop.create_task(fetch(session, url)) for url in urls], return_exceptions=True)
    return results

if __name__=='__main__':
    loop = asyncio.get_event_loop()
    urls = url_list
    with aiohttp.ClientSession(loop=loop) as session:
        htmls = loop.run_until_complete(fetch_all(session, urls, loop))
    print(htmls)
However, this only results in attribute errors being returned:

[AttributeError('__aexit__',), AttributeError('__aexit__',)]

(That is with return_exceptions=True enabled, otherwise it just crashes.) I really hope someone here can help; I'm still finding it hard to find resources on asyncio and the like. The returned data is in JSON format. In the end, I'd like to put all the JSON dicts in a single list.
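For what it's worth, AttributeError('__aexit__') usually means that `async with` was applied to an object that does not implement the asynchronous context-manager protocol (which is how very old aiohttp responses behaved). A minimal, library-free sketch of the same failure:

```python
import asyncio

class NotAnAsyncCM:
    """Defines neither __aenter__ nor __aexit__."""

async def main():
    try:
        async with NotAnAsyncCM():
            pass
    except (AttributeError, TypeError) as exc:
        # Older Python versions raise AttributeError('__aexit__');
        # Python 3.11+ raises a TypeError with a clearer message.
        return type(exc).__name__

print(asyncio.run(main()))
```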

Working example:

import asyncio
import aiohttp
import ssl

url_list = ['https://api.pushshift.io/reddit/search/comment/?q=Nestle&size=30&after=1530396000&before=1530436000',
            'https://api.pushshift.io/reddit/search/comment/?q=Nestle&size=30&after=1530436000&before=1530476000']


async def fetch(session, url):
    async with session.get(url, ssl=ssl.SSLContext()) as response:
        return await response.json()


async def fetch_all(urls, loop):
    async with aiohttp.ClientSession(loop=loop) as session:
        results = await asyncio.gather(*[fetch(session, url) for url in urls], return_exceptions=True)
        return results


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    urls = url_list
    htmls = loop.run_until_complete(fetch_all(urls, loop))
    print(htmls)
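To get from the list that fetch_all returns to the single flat list of JSON dicts the question asks for, the per-URL results can be merged afterwards. A sketch that does not touch the live API, assuming each successful response wraps its comments in a "data" key as the pushshift responses do (flatten_results and the stand-in data are illustrative, not part of the original code):

```python
def flatten_results(results):
    """Collect every comment dict from all responses into one list,
    skipping entries that came back as exceptions because the fetch
    used return_exceptions=True."""
    all_comments = []
    for result in results:
        if isinstance(result, Exception):
            continue  # a failed request; could be logged instead
        # pushshift-style responses wrap the comments in a "data" key
        all_comments.extend(result.get("data", []))
    return all_comments

# Stand-in data shaped like the gathered results:
fake = [{"data": [{"body": "a"}, {"body": "b"}]},
        RuntimeError("timeout"),
        {"data": [{"body": "c"}]}]
print(flatten_results(fake))  # [{'body': 'a'}, {'body': 'b'}, {'body': 'c'}]
```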

Comments:

- You can also use: async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(verify_ssl=False)), no need to cry anymore
- @YuriiKramarenko Unfortunately, this code doesn't work for me, it throws the following error:
- @Jannik Looks strange, which version of aiohttp/Python are you using?
- @YuriiKramarenko I'm using aiohttp==0.7.2, asyncio 3.4.3 and Python 3.6
- @Jannik Try updating your aiohttp version to 3.3.2 (the latest one), since yours is very old.
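Regarding the connector suggestion in the comments: configuring SSL on the connector applies it to every request made through the session, so the per-request ssl argument can be dropped. A minimal sketch (note that in recent aiohttp releases the verify_ssl flag is deprecated in favour of ssl=False):

```python
import asyncio
import aiohttp

async def main():
    # ssl=False disables certificate verification for every request
    # made through this session (the modern spelling of
    # verify_ssl=False).
    connector = aiohttp.TCPConnector(ssl=False)
    async with aiohttp.ClientSession(connector=connector) as session:
        return type(session).__name__

print(asyncio.run(main()))
```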