Python: Uvicorn + Gunicorn + Starlette serving gets stuck, can't restart the service without SIGKILL

Tags: Python, Gunicorn, Supervisord, Uvicorn, Starlette

I'm serving a model on a virtual machine via gunicorn + uvicorn.

It is started automatically by supervisord, which runs api.sh. api.sh contains:

source /home/asd/.virtual_envs/myproject/bin/activate

/home/asd/.virtual_envs/myproject/bin/gunicorn --max-requests-jitter 30 -w 6 -b 0.0.0.0:4080 api:app -k uvicorn.workers.UvicornWorker
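For context, the supervisord program entry driving api.sh might look like the sketch below; the program name and paths are assumptions, not taken from the actual setup. The stopasgroup/killasgroup options are worth noting here, because they make supervisord signal the whole process group on stop, including any children the workers fork:

```ini
; hypothetical supervisord entry; adjust name and paths to your environment
[program:api]
command=/bin/bash /home/asd/api.sh
directory=/home/asd
autostart=true
autorestart=true
; give gunicorn time to shut its workers down gracefully
stopwaitsecs=30
; send the stop/kill signal to the whole process group,
; not just the shell that supervisord launched
stopasgroup=true
killasgroup=true
```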
Without going too deep into api.py, these are its main parts:

from starlette.applications import Starlette
from starlette.responses import UJSONResponse  # needed for the responses below
from models import SomeModelClass


app = Starlette(debug=False)
model = SomeModelClass()


@app.route('/do_things', methods=['GET', 'POST', 'HEAD'])
async def add_styles(request):
    if request.method == 'GET':
        params = request.query_params
    elif request.method == 'POST':
        params = await request.json()
    elif request.method == 'HEAD':
        return UJSONResponse([])

    # Doing things
    result = model(params)
    return UJSONResponse(result)
What happens is that after the api has been running for a few days, I start getting the following errors:

[INFO] Starting gunicorn 20.0.3
[ERROR] Connection in use: ('0.0.0.0', 4080)
[ERROR] Retrying in 1 second.
[ERROR] Connection in use: ('0.0.0.0', 4080)
[ERROR] Retrying in 1 second.
[ERROR] Connection in use: ('0.0.0.0', 4080)
[ERROR] Retrying in 1 second.
[ERROR] Connection in use: ('0.0.0.0', 4080)
[ERROR] Retrying in 1 second.
...
Restarting the api in supervisord does nothing; I get the same messages as above. The only procedure I've found that works is:

  • Stop the api in supervisord
  • Find the PID listening on port 4080 (a python3.8 process): sudo netstat -tulpn | grep LISTEN
  • Kill it with kill -9 [PID]
  • Repeat steps 2-3 one or two more times, until nothing is left on port 4080
  • Start the api in supervisord

Do you have any ideas on how to fix this?

    The code actually in use relied on multiprocessing, and that was most likely the cause of the problem.

    For example:

    from starlette.applications import Starlette
    from starlette.responses import UJSONResponse  # needed for the responses below
    from models import SomeModelClass
    from multiprocessing import Pool
    from utils import myfun
    
    
    app = Starlette(debug=False)
    model = SomeModelClass()
    
    
    @app.route('/do_things', methods=['GET', 'POST', 'HEAD'])
    async def add_styles(request):
        if request.method == 'GET':
            params = request.query_params
        elif request.method == 'POST':
            params = await request.json()
        elif request.method == 'HEAD':
            return UJSONResponse([])
    
        # Doing things
        result = model(params)
        # Start of the offending code
        pool = Pool(4)
        result = pool.map(myfun, result, chunksize=1)
        # End of the offending code
        return UJSONResponse(result)
    
    The solution was to replace multiprocessing with concurrent.futures:

    from starlette.applications import Starlette
    from starlette.responses import UJSONResponse  # needed for the responses below
    from models import SomeModelClass
    import concurrent.futures
    from utils import myfun
    
    
    app = Starlette(debug=False)
    model = SomeModelClass()
    
    
    @app.route('/do_things', methods=['GET', 'POST', 'HEAD'])
    async def add_styles(request):
        if request.method == 'GET':
            params = request.query_params
        elif request.method == 'POST':
            params = await request.json()
        elif request.method == 'HEAD':
            return UJSONResponse([])
    
        # Doing things
        result = model(params)
        # Start of the fix
        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
            result = executor.map(myfun, result)
        result = list(result)
        # End of the fix
        return UJSONResponse(result)
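A caveat on this fix: executor.map is still called synchronously inside an async handler, so it blocks the event loop for the duration of the work. If that matters, the same thread pool can be driven through asyncio's run_in_executor instead; a sketch under the assumption that myfun is a plain synchronous function (here replaced by a hypothetical stand-in):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def myfun(x):
    # hypothetical stand-in for the worker function from utils
    return x + 1


async def process(items):
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor(max_workers=4) as executor:
        # schedule each item on the pool and yield control
        # back to the event loop while the work runs
        futures = [loop.run_in_executor(executor, myfun, item) for item in items]
        return await asyncio.gather(*futures)


result = asyncio.run(process([1, 2, 3]))
```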