Concurrency: Can we replace urlopen with the requests library in this code?

Can we replace urlopen with the requests library in this concurrent-requests example, on Python 2.7?

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
Thanks

Yes, you can.

Your code performs a simple HTTP GET with a timeout, so the requests equivalent of load_url is:

import requests

def load_url(url, timeout):
    # Fetch the page with requests, honoring the same timeout
    r = requests.get(url, timeout=timeout)
    return r.content
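
For completeness, here is a minimal sketch of the whole example with requests dropped in. It assumes requests is installed (pip install requests); note that concurrent.futures and urllib.request as used in the question are Python 3 modules, so on Python 2.7 you would also install the concurrent.futures backport (pip install futures). As an optional refinement, it catches requests.exceptions.RequestException instead of a bare Exception:

import concurrent.futures

import requests

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    # requests.get() raises on connection errors and timeouts;
    # raise_for_status() additionally turns HTTP 4xx/5xx into exceptions
    r = requests.get(url, timeout=timeout)
    r.raise_for_status()
    return r.content

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except requests.exceptions.RequestException as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))

One behavioral difference to keep in mind: urlopen raises HTTPError for 4xx/5xx responses by default, while requests.get() returns the response regardless of status, which is why the sketch calls raise_for_status() to match the original error handling.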