Python Django Celery Beat: periodic tasks are not visible on Elastic Beanstalk

I have configured a Celery worker and Celery Beat on EB. There are no errors in the logs during deployment and the Celery worker works fine, but the periodic tasks are not visible. On my local machine everything works fine.

This is my Celery config file:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Create required directories
      sudo mkdir -p /var/log/celery/
      sudo mkdir -p /var/run/celery/

      # Create group called 'celery'
      sudo groupadd -f celery
      # add the user 'celery' if it doesn't exist and add it to the group with same name
      id -u celery &>/dev/null || sudo useradd -g celery celery
      # add permissions to the celery user for r+w to the folders just created
      sudo chown -R celery:celery /var/log/celery/
      sudo chmod -R 777 /var/log/celery/
      sudo chown -R celery:celery /var/run/celery/
      sudo chmod -R 777 /var/run/celery/

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/%/%%/g' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
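      # The tr above leaves a trailing comma; the line below strips that last character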
      celeryenv=${celeryenv%?}

      # Create CELERY configuration script
      celeryconf="[program:celeryd]
      directory=/opt/python/current/app
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery worker -A config.celery:app --loglevel=INFO --logfile="/var/log/celery/celery_worker.log" --pidfile="/var/run/celery/celery_worker_pid.pid"

      user=celery
      numprocs=1
      stdout_logfile=/var/log/std_celery_worker.log
      stderr_logfile=/var/log/std_celery_worker_errors.log
      autostart=true
      autorestart=true
      startsecs=10
      startretries=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=998

      environment=$celeryenv"


      # Create CELERY BEAT configuration script
      celerybeatconf="[program:celerybeat]
      directory=/opt/python/current/app
      ; Set full path to celery program if using virtualenv
      command=/opt/python/run/venv/bin/celery beat -A config.celery:app --loglevel=INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler --logfile="/var/log/celery/celery_beat.log" --pidfile="/var/run/celery/celery_beat_pid.pid"

      user=celery
      numprocs=1
      stdout_logfile=/var/log/std_celery_beat.log
      stderr_logfile=/var/log/std_celery_beat_errors.log
      autostart=true
      autorestart=true
      startsecs=10
      startretries=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 60

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      priority=999

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf
      echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
        then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: uwsgi.conf celery.conf celerybeat.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Enable supervisor to listen for HTTP/XML-RPC requests.
      # supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
      # Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
      if ! grep -Fxq "[inet_http_server]" /opt/python/etc/supervisord.conf
        then
          echo "[inet_http_server]" | tee -a /opt/python/etc/supervisord.conf
          echo "port = 127.0.0.1:9001" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
      supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat

commands:
  01_kill_other_beats:
    command: "ps auxww | grep 'celery beat' | awk '{print $2}' | sudo xargs kill -9 || true"
    ignoreErrors: true
  02_restart_beat:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf restart celerybeat"
    leader_only: true
  03_upgrade_pip_global:
    command: "if test -e /usr/bin/pip; then sudo /usr/bin/pip install --upgrade pip; fi"
  04_upgrade_pip_global:
    command: "if test -e /usr/local/bin/pip; then sudo /usr/local/bin/pip install --upgrade pip; fi"
  05_upgrade_pip_for_venv:
    command: "if test -e /opt/python/run/venv/bin/pip; then sudo /opt/python/run/venv/bin/pip install --upgrade pip; fi"
Can someone tell me where the error is?

I start the periodic tasks like this:

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    pass
Update: supervisor logs

2018-07-10 12:56:18,683 INFO stopped: celerybeat (terminated by SIGTERM)
2018-07-10 12:56:18,691 INFO spawned: 'celerybeat' with pid 1626
2018-07-10 12:56:19,181 INFO stopped: celerybeat (terminated by SIGTERM)
2018-07-10 12:56:20,187 INFO spawned: 'celerybeat' with pid 1631
2018-07-10 12:56:30,200 INFO success: celerybeat entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2018-07-10 12:56:30,466 INFO stopped: celeryd (terminated by SIGTERM)
2018-07-10 12:56:31,472 INFO spawned: 'celeryd' with pid 1638
2018-07-10 12:56:41,486 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2018-07-10 13:28:32,572 CRIT Supervisor running as root (no user in config file)
2018-07-10 13:28:32,573 WARN No file matches via include "/opt/python/etc/uwsgi.conf"
2018-07-10 13:28:32,573 WARN Included extra file "/opt/python/etc/celery.conf" during parsing
2018-07-10 13:28:32,573 WARN Included extra file "/opt/python/etc/celerybeat.conf" during parsing
2018-07-10 13:28:32,591 INFO RPC interface 'supervisor' initialized
2018-07-10 13:28:32,591 CRIT Server 'inet_http_server' running without any HTTP authentication checking

When trying to set up the periodic tasks, the source of the problem was the import:

celery.py

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    from my_app.tasks.task1 import some_task
    sender.add_periodic_task(
        60.0, some_task.s(), name='call every 60 seconds'
    )
The solution is to wrap the call in a task defined in the Celery app itself:

celery.py

@app.task
def temp_task():
    from my_app.tasks.task1 import some_task
    some_task()
So setting up the periodic task looks like this:

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(
        60.0, temp_task.s(), name='call every 60 seconds'
    )
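
For reference, here is a minimal sketch of what the resulting celery.py could look like with these pieces combined. The module paths config.celery and my_app.tasks.task1 come from the question; the settings module name config.settings is an assumption and may differ in your project.

# celery.py -- minimal sketch; config.settings is an assumed name, adjust to your project
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

app = Celery('config')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

@app.task
def temp_task():
    # Import inside the task so the Django app registry is fully loaded when it runs
    from my_app.tasks.task1 import some_task
    some_task()

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    # Register the wrapper task defined in this module
    sender.add_periodic_task(
        60.0, temp_task.s(), name='call every 60 seconds'
    )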

It was hard to find the source of the problem, because there were no error logs and the regular logs were empty: Celery simply never started.
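
In a situation like this, a quick way to confirm whether beat actually started is to ask supervisord for its process status and to read the log files declared in the generated conf files; the paths below are the ones defined in the script above, so adjust them if yours differ.

# Ask supervisord which programs it manages and whether they are RUNNING
supervisorctl -c /opt/python/etc/supervisord.conf status

# Inspect the beat logs declared in celerybeat.conf
tail -n 50 /var/log/celery/celery_beat.log
tail -n 50 /var/log/std_celery_beat_errors.log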

Comments:

First restart all the Celery tasks with supervisor (sudo supervisorctl) and then check. Usually you can define periodic tasks with the decorator @periodic_task(run_every=(crontab(minute='*/30')), name="nameoftask", ignore_result=True).

Without sudo I get: http://localhost:9001 refused connection.

On my local machine everything works fine and I see the periodic tasks on the admin page. On EB there is no information about the periodic tasks in the admin page. In your case, restart all tasks with this command: supervisorctl -c /opt/python/etc/supervisord.conf restart all

Error: [Errno 13] Permission denied: file: /usr/lib64/python2.7/socket.py line: 228

Which Celery version are you using?