Python Celery connection error
I am deploying a Django application with Celery to Heroku and am struggling with a connection error. My AMQP provider claims the resource is reachable on their end, and I have connections to spare. I think my setup finds the Celery app in my dedicated worker process, but fails to pick up the correct settings in the shell. Is there a way to tell what connection URL a task call will use? Shouldn't my Celery app (below) pick that up and handle things correctly?
The failing call:
$ heroku run bash
~$ python <app>/manage.py shell
>>> from <app>.management.tasks.tasks import <task>
>>> t = <task>()
>>> dt = '20140101'
>>> t.delay(dt=dt)
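To answer "which URL will this call use": with the app importable, `app.connection().as_uri()` prints the broker URL with the password masked, and in Celery 3.x `app.conf.BROKER_URL` shows the resolved setting. As a dependency-free sketch of the same idea, here is how the shell's environment-derived URL can be inspected; the example URL and the masking format are made up for illustration:

```python
# Sketch: show the broker URL a shell session would pick up from the
# environment, with the password masked. The URL below is hard-coded
# purely for the example.
import os
from urllib.parse import urlparse

os.environ['CLOUDAMQP_URL'] = 'amqp://user:secret@host.cloudamqp.com/vhost'

broker_url = os.environ['CLOUDAMQP_URL']
parts = urlparse(broker_url)
# Mask the password before printing, so credentials don't leak into logs.
masked = broker_url.replace(parts.password, '********') if parts.password else broker_url
print(masked)  # amqp://user:********@host.cloudamqp.com/vhost
```

If the shell prints a different host or vhost than the worker logs show, the two processes are reading different settings.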
The core problem was the different expected formats of DJANGO_SETTINGS_MODULE between Django, Celery, and gunicorn. Changing
DJANGO_SETTINGS_MODULE=settings.production
to DJANGO_SETTINGS_MODULE=<app>.settings.production
fixed the shell broker connection but broke my web and worker processes. The working Procfile spec for this is
web: cd <app> && gunicorn <app>.wsgi -w 1 --log-file -
worker: celery worker --app=<app> -E -Q <app>,celery --loglevel=INFO -c 1 --workdir=<app>
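A sketch of why the two DJANGO_SETTINGS_MODULE formats behave differently (my framing, not the author's): the settings path is resolved by a normal Python import, so whether `settings.production` or `<app>.settings.production` works depends on what is on sys.path, which is exactly what `cd <app>`, `--workdir=<app>`, and the PYTHONPATH entries change. A minimal stdlib illustration, with a hypothetical module name:

```python
# Sketch: a dotted settings path is found (or not) via the normal
# import machinery, so the working directory / sys.path decide its fate.
import importlib.util

def resolvable(dotted_name):
    """True if `dotted_name` can be imported with the current sys.path."""
    try:
        return importlib.util.find_spec(dotted_name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is missing.
        return False

print(resolvable('json'))                             # stdlib module: found
print(resolvable('no_such_app.settings.production'))  # hypothetical: not found
```

Running the same check from the repo root and from inside `<app>` would show one form resolving in each location.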
The cd before web is dumb, but it works.

I ran into this exception when deploying a Django project to a Tornado server. Here is the deployment code:
import os

import tornado.httpserver
import tornado.ioloop
import tornado.wsgi
from django.core.wsgi import get_wsgi_application

# add this line when using celery.
# import app

def main():
    os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings'  # path to your settings module
    application = get_wsgi_application()
    container = tornado.wsgi.WSGIContainer(application)
    http_server = tornado.httpserver.HTTPServer(container)
    http_server.listen(8888)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    main()
When I added `import app` (the package containing the Celery app's init file), everything worked.

Can you check
app.conf.BROKER_URL
? Have you read it?
CELERYD_CONCURRENCY=1
CELERY_IGNORE_RESULT=True
CELERYD_TASK_TIME_LIMIT=60
CLOUDAMQP_URL=<amqp url>
RESULT_EXPIRY_RATE=600
BROKER_CONNECTION_TIMEOUT=10
PWD=/app
DJANGO_SETTINGS_MODULE=settings.production
DJANGO_PROJECT_DIR=/app/<app>
BROKER_POOL_LIMIT=1
HOME=/app
PYTHONPATH=/app:/app/<app>:/app/<app>/<app>
web: gunicorn <app>.<app>.wsgi -w 1 --log-file -
worker: celery worker --app=<app>.<app> -E -Q <app>,celery --loglevel=INFO -c 1 --workdir=<app>
from __future__ import absolute_import

from os import getenv

from kombu import Exchange, Queue
from django.conf import settings

from celery import Celery

app = Celery('<app>')

class Config(object):
    # List of modules to import when celery starts.
    CELERY_IMPORTS = ("<imports>",)
    BROKER_CONNECTION_RETRY = True
    API_RATE_LIMIT = getenv('API_RATE_LIMIT')
    BROKER_POOL_LIMIT = int(getenv('BROKER_POOL_LIMIT', 1))
    BROKER_URL = getenv('CLOUDAMQP_URL')
    BROKER_CONNECTION_TIMEOUT = int(getenv('BROKER_CONNECTION_TIMEOUT'))
    CELERYD_CONCURRENCY = int(getenv('CELERYD_CONCURRENCY'))

app.config_from_object(Config)

if __name__ == '__main__':
    app.start()
from __future__ import absolute_import

# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery_app import app as celery_app  # noqa
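One fragility worth noting in the Config class above: `int(getenv('BROKER_CONNECTION_TIMEOUT'))` raises a TypeError when the variable is unset, because getenv returns None. A defensive sketch with explicit fallbacks (the default values here are illustrative, not from the original config):

```python
# Sketch: cast env vars with explicit defaults so an unset variable
# doesn't crash the Celery app at import time. Defaults are illustrative.
import os

# Simulate an environment where the variables are not set.
os.environ.pop('BROKER_CONNECTION_TIMEOUT', None)
os.environ.pop('CELERYD_CONCURRENCY', None)

timeout = int(os.getenv('BROKER_CONNECTION_TIMEOUT', 10))
concurrency = int(os.getenv('CELERYD_CONCURRENCY', 1))
print(timeout, concurrency)  # 10 1
```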