
Python 3.x: SSL-induced execution bottleneck in Python code - how to optimize?

Tags: python-3.x, optimization

I want to improve the performance of a Python script and have been using cProfile to generate a performance report:

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   75   23.514    0.314   23.514    0.314 {method 'read' of '_ssl._SSLSocket' objects}
   75    8.452    0.113    8.452    0.113 {method 'do_handshake' of '_ssl._SSLSocket' objects}
   75    2.113    0.028    2.113    0.028 {method 'load_verify_locations' of '_ssl._SSLContext' objects}
   75    1.479    0.020    1.479    0.020 {method 'connect' of '_socket.socket' objects}
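The profile above shows `do_handshake` and `load_verify_locations` each called 75 times, i.e. once per request, which suggests a fresh TLS connection is being set up for every call. Before going concurrent, connection reuse alone can cut this cost. A minimal sketch using a `requests.Session` (the URL, payload, and helper name are placeholders, not from the original script):

```python
import requests

# Sketch, assuming the script posts to the same host repeatedly: a
# single Session pools connections, so the expensive do_handshake /
# load_verify_locations work from the profile runs once per host
# instead of once per request.
session = requests.Session()

def fetch_inventory(url, payload=None):
    # Hypothetical helper; url and payload are assumptions for
    # illustration only.
    response = session.post(url, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()
```
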
Sample code:

import requests
from collections import defaultdict

# Added for multiprocessing (multiprocessing.dummy runs the pool as
# threads, which suits I/O-bound HTTP calls)
from multiprocessing.dummy import Pool as ThreadPool

# Placeholder list of (store, url) pairs to query
stores = [('store1', 'https://example.com/api/1'),
          ('store2', 'https://example.com/api/2')]

def fetch(item):
    store, url = item
    response = requests.post(url)
    return store, response.json()

# Make the Pool of workers
pool = ThreadPool(4)

# Post to the URLs in their own threads and collect the results
results = defaultdict(list)
for store, data in pool.map(fetch, stores):
    results[store].append(data)

# Close the pool and wait for the work to finish
pool.close()
pool.join()

for store, data in results.items():
    print('Store: {}'.format(store), end=', ')
    if data:
        for inventory in data:
            print(inventory)

You are effectively measuring the response time of the remote website, which is probably not what you want. To maximize throughput (HTTP requests sent or data received per second), you should issue multiple requests concurrently instead of one synchronous request at a time. You can use an asynchronous HTTP library such as aiohttp, or just native Python asyncio/asyncore.
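The native-asyncio fan-out the answer suggests can be sketched as follows. Here `fake_fetch` is a stand-in for a real async HTTP call (e.g. via aiohttp) so the example runs without network access; the store names and delays are made up:

```python
import asyncio

# Stand-in for an async HTTP request: it just sleeps, then returns a
# fabricated (store, inventory) pair.
async def fake_fetch(store, delay):
    await asyncio.sleep(delay)
    return store, 'inventory for {}'.format(store)

async def main():
    tasks = [fake_fetch('store{}'.format(i), 0.1) for i in range(5)]
    # All five "requests" wait concurrently: total wall time is about
    # 0.1 s rather than 5 * 0.1 s.
    return dict(await asyncio.gather(*tasks))

results = asyncio.run(main())
print(results)
```

With a real client you would replace `fake_fetch` with a coroutine that posts through a shared `aiohttp.ClientSession`, which also reuses connections and so avoids a TLS handshake per request.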

Hi! Welcome! This question belongs on the sister site codereview.stackexchange.com - check it out!

I added sample code above. I'll have to look into aiohttp - I'm not familiar with it, nor with how to modify the code to send multiple requests at once.

The simplest approach is to run several threads, each handling a requests.post(). It is less efficient than asynchronous requests, but it can still speed you up by tens of times. By the way, don't hammer a single website at full speed; if you generate too much load, your IP may get blocked. A good crawler polls many websites, sending each one a slow stream of requests.

I tried adding multiprocessing/threading based on examples, but hit a wall because I couldn't find an example that uses a loop. Any pointers would be appreciated (my additions to the code are commented above). Thanks!
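The thread-per-request suggestion from the comments can also be written with `concurrent.futures`, which handles pool shutdown automatically. In this sketch `do_post` is a stand-in for `requests.post` so the example runs without network access, and the URLs are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a blocking requests.post call; returns a fabricated
# (url, response) pair.
def do_post(url):
    return url, 'response for {}'.format(url)

urls = ['https://example.com/store/{}'.format(i) for i in range(4)]

# The with-block closes the pool and waits for the work to finish,
# replacing the manual pool.close() / pool.join() dance.
with ThreadPoolExecutor(max_workers=4) as executor:
    # map preserves input order while the calls run concurrently.
    results = dict(executor.map(do_post, urls))

print(results)
```
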