Flask: why is a raw WSGI application slower than a Flask application?


I wrote two simple applications, one a raw WSGI app and the other built with Flask, both running on gevent's WSGI server (a sketch of the raw version is given below).
As I expected, the raw WSGI app is faster than the Flask app when the handler does no network IO, but once the handler makes a network call the raw WSGI app becomes much slower than the Flask app.
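A minimal sketch of the raw version, assuming the per-request network call is requests.get('http://www.baidu.com') as discussed in the comments under the answer (the exact handler body and headers are assumptions):

from gevent import monkey
monkey.patch_all()

import requests

def application(environ, start_response):
    # Hypothetical handler: one blocking HTTP fetch per request.
    resp = requests.get('http://www.baidu.com')
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [resp.content]

from gevent.wsgi import WSGIServer  # gevent.pywsgi on newer gevent
http_server = WSGIServer(('', 8080), application)
http_server.serve_forever()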

I'm benchmarking both the raw and the Flask versions with ab:

$ ab -c10 -n10000 http://127.0.0.1:8080/
Here are the results for the raw WSGI app:

Concurrency Level:      10
Time taken for tests:   306.216 seconds
Requests per second:    1.52 [#/sec] (mean)
Time per request:       6585.299 [ms] (mean)
Time per request:       658.530 [ms] (mean, across all concurrent requests)

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.4      0       7
Processing:  1084 6499 3050.3   5951   15963
Waiting:       96 5222 3051.4   4577   15096
Total:       1085 6500 3050.2   5951   15963

Percentage of the requests served within a certain time (ms)
  50%   5938
  66%   7584
  75%   8597
  80%   9186
  90%  10829
  95%  12033
  98%  13209
  99%  14722
 100%  15963 (longest request)

And for the Flask app:

Concurrency Level:      10
Time taken for tests:   19.909 seconds
Requests per second:    502.28 [#/sec] (mean)
Time per request:       19.909 [ms] (mean)
Time per request:       1.991 [ms] (mean, across all concurrent requests)

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       2
Processing:     3   20   9.0     19      87
Waiting:        2   20   8.9     19      86
Total:          3   20   9.0     19      87

Percentage of the requests served within a certain time (ms)
  50%     19
  66%     23
  75%     25
  80%     27
  90%     31
  95%     36
  98%     41
  99%     45
 100%     87 (longest request)
So I'm wondering: what is Flask doing, and what can I do to make a simple frameworkless WSGI application just as fast?

I think the problem you describe doesn't actually exist. The biggest mistake you made is introducing an IO component (network IO and disk IO), which has nothing to do with the performance of the web framework.

To prove this, I simplified your demo to:

import json
from gevent import monkey
monkey.patch_all()  # monkey patch for both apps

def application(environ, start_response):
    res = dict(hello='world')

    start_response('200 OK', [('Content-Type', 'application/json')])
    # WSGI expects an iterable of body chunks, not a bare string.
    return [json.dumps(res)]

from gevent.wsgi import WSGIServer
http_server = WSGIServer(('', 8088), application)
http_server.serve_forever()
and the Flask version to:

from gevent import monkey
monkey.patch_all()  # monkey patch for both apps

import json
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    res = dict(hello='world')
    return json.dumps(res), 200

from gevent.wsgi import WSGIServer

http_server = WSGIServer(('', 8088), app)
http_server.serve_forever()
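A side note on running these snippets: on newer gevent releases the gevent.wsgi module is no longer available, and the same server class lives in gevent.pywsgi. A minimal sketch, assuming a recent gevent (here "application" stands for either app object above):

# If `from gevent.wsgi import WSGIServer` raises ImportError on a newer
# gevent, the equivalent server class is gevent.pywsgi.WSGIServer.
from gevent.pywsgi import WSGIServer

http_server = WSGIServer(('', 8088), application)
http_server.serve_forever()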

My result for the raw WSGI app is:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       5
Processing:     1    5   0.7      5      23
Waiting:        1    5   0.7      5      23
Total:          1    6   0.7      5      24
WARNING: The median and mean for the total time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%      5
  66%      6
  75%      6
  80%      6
  90%      6
  95%      6
  98%      7
  99%      8
 100%     24 (longest request)
and for the Flask app it is:

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:     1    6   0.6      6      11
Waiting:        1    6   0.6      6      11
Total:          2    6   0.6      6      11

Percentage of the requests served within a certain time (ms)
  50%      6
  66%      6
  75%      6
  80%      6
  90%      7
  95%      7
  98%      7
  99%      8
 100%     11 (longest request)

Ignoring the longest 1% of requests, you will find that raw WSGI is about 20% faster than Flask, which seems reasonable.
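To see how heavily a blocking fetch would dominate these numbers, it can be timed on its own; a rough sketch, assuming the requests library is installed and the target host is reachable:

import time
import requests

start = time.time()
requests.get('http://www.baidu.com')
elapsed_ms = (time.time() - start) * 1000
# One blocking fetch typically costs tens to hundreds of milliseconds,
# far more than the ~1 ms per-request overhead measured above, so the
# fetch dominates whatever ab reports for either app.
print('one blocking fetch: %.0f ms' % elapsed_ms)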

Comments:

"Don't use requests.get('http://www.baidu.com') to benchmark these applications! Both apps waste most of their time on network IO, so the numbers don't tell you anything."

"@kxxoling OK, but note that I got the same result."

"Did you only comment out return resp.content? I think you should comment out the assignment as well. By the way, the database fetch is also an IO operation, and it will affect the result to some extent."

"Actually I wrote two tests, one doing the DB connection and one doing the request; I put them together in the code above just to simplify the question. Sorry for the confusion."