Python Django Celery periodic tasks run, but RabbitMQ queues aren't consumed

Tags: python, django, heroku, rabbitmq, celery

After running tasks through Celery's periodic task scheduler, beat, why are there so many unconsumed queues left over in RabbitMQ?

Setup
  • Django web app running on Heroku
  • Tasks scheduled via Celery beat
  • Tasks run via Celery workers
  • Message broker is RabbitMQ from CloudAMQP

Program files: settings.py, tasks.py

Result
Every time a task runs, I get (via the RabbitMQ web interface):

  • An additional message in the "Ready" state under my "Queued messages"
  • An additional queue with a single message in the "Ready" state
    • This queue has no consumers listed

It ended up being my result backend setting.

Previously it was:

CELERY_RESULT_BACKEND = 'amqp'
After I changed it to the following, there were no more unconsumed messages/queues in RabbitMQ:

CELERY_RESULT_BACKEND = 'database'
It appears that after a task executes, Celery sends information about that task back through RabbitMQ; however, nothing was set up to consume these response messages, so a pile of unread messages accumulated in the queues.

Note: this means Celery will add database records that log each task's result. To keep useless records from piling up in the database, I added:

# Delete result records ("tombstones") from database after 4 hours
# http://docs.celeryproject.org/en/latest/configuration.html#celery-task-result-expires
CELERY_TASK_RESULT_EXPIRES = 14400
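The value above is just 4 hours expressed in seconds; a small sketch of the arithmetic (the `HOURS` constant is illustrative, not a Celery setting):

```python
# 4 hours * 60 minutes * 60 seconds = 14400 seconds
HOURS = 60 * 60  # seconds in one hour (illustrative constant)
CELERY_TASK_RESULT_EXPIRES = 4 * HOURS

print(CELERY_TASK_RESULT_EXPIRES)  # 14400
```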

Relevant parts of settings.py
It looks like you're getting back responses from your consumed tasks.

You can avoid this with:

@celery.task(ignore_result=True)

Comments:
  • Hi, I'm trying to implement the same stack on my app, but I can't get it working correctly. Could you please post all of your Celery- and RabbitMQ-related settings? I'd really appreciate it, and it would help other newcomers too.
  • Sure thing, added. It's based on an excellent recommendation.
  • This is correct. I had the same problem.
########## CELERY CONFIGURATION
import djcelery
# https://github.com/celery/django-celery/
djcelery.setup_loader()

INSTALLED_APPS = INSTALLED_APPS + (
    'djcelery',
)

# Compress all the messages using gzip
# http://celery.readthedocs.org/en/latest/userguide/calling.html#compression
CELERY_MESSAGE_COMPRESSION = 'gzip'

# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-transport
BROKER_TRANSPORT = 'amqplib'

# Set this number to the amount of allowed concurrent connections on your AMQP
# provider, divided by the amount of active workers you have.
#
# For example, if you have the 'Little Lemur' CloudAMQP plan (their free tier),
# they allow 3 concurrent connections. So if you run a single worker, you'd
# want this number to be 3. If you had 3 workers running, you'd lower this
# number to 1, since 3 workers each maintaining one open connection = 3
# connections total.
#
# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-pool-limit
BROKER_POOL_LIMIT = 3

# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-connection-max-retries
BROKER_CONNECTION_MAX_RETRIES = 0

# See: http://docs.celeryproject.org/en/latest/configuration.html#broker-url
BROKER_URL = os.environ.get('CLOUDAMQP_URL')

# Previously, had this set to 'amqp'; this resulted in many unread /
# unconsumed queues and messages in RabbitMQ
# See: http://docs.celeryproject.org/en/latest/configuration.html#celery-result-backend
CELERY_RESULT_BACKEND = 'database'

# Delete result records ("tombstones") from database after 4 hours
# http://docs.celeryproject.org/en/latest/configuration.html#celery-task-result-expires
CELERY_TASK_RESULT_EXPIRES = 14400
########## END CELERY CONFIGURATION
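The `BROKER_POOL_LIMIT` comment above boils down to integer division; a quick sketch of that arithmetic, using the plan figures quoted in the comment:

```python
# CloudAMQP 'Little Lemur' free tier allows 3 concurrent connections.
allowed_connections = 3

# A single worker may use all of them:
print(allowed_connections // 1)  # 3

# Three workers, each keeping one connection open, get 1 apiece:
print(allowed_connections // 3)  # 1
```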