Python Celery Django deployment to Elastic Beanstalk fails with ImportError: cannot import name 'Celery' (ElasticBeanstalk::ExternalInvocationError)

I am getting an error while trying to deploy a Django app after configuring Celery. It works fine in the local environment. Neither celery beat nor the worker appears to start; I get the error when trying to run the celery worker through supervisord:

[i-063a3b57f40eb2ffa] [2019-02-27T13:04:39.139Z] INFO  [22820] - [Application update app-8bc8-190227_130333@187/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_django_brain_dev/Command 04_start_celery_beat] : Completed activity. Result:
  celeryd-beat: ERROR (not running)
  celeryd-beat: ERROR (abnormal termination)

[i-063a3b57f40eb2ffa] [2019-02-27T13:04:40.021Z] INFO  [22820] - [Application update app-8bc8-190227_130333@187/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_django_brain_dev/Command 05_start_celery_worker] : Starting activity...
[i-063a3b57f40eb2ffa] [2019-02-27T13:04:42.397Z] INFO  [22820] - [Application update app-8bc8-190227_130333@187/AppDeployStage0/EbExtensionPostBuild/Infra-EmbeddedPostBuild/postbuild_0_django_brain_dev/Command 05_start_celery_worker] : Completed activity. Result:
  celeryd-worker: ERROR (not running)
  celeryd-worker: ERROR (abnormal termination)


  from celery import Celery
  File "/opt/python/current/app/django_app/celery.py", line 3, in <module>
  from celery import Celery
  ImportError: cannot import name 'Celery'
   (ElasticBeanstalk::ExternalInvocationError)
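
For comparison, the standard Django integration from the Celery docs puts something like the following in the project's celery.py; the module and settings names below are placeholders inferred from the paths above, not the actual project layout. If the deployed celery.py ends up importing itself instead of the installed package (for example because the app directory itself lands on sys.path, or on Python 2 without absolute imports), then from celery import Celery fails with exactly this "cannot import name 'Celery'" error.

# django_app/celery.py -- sketch of the layout from the Celery/Django docs;
# 'django_app' and 'django_app.settings' are placeholder names.
from __future__ import absolute_import, unicode_literals
import os

from celery import Celery

# Default Django settings module for the 'celery' command-line program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'django_app.settings')

app = Celery('django_app')

# Read CELERY_* settings from Django's settings file.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Auto-discover tasks.py modules in installed Django apps.
app.autodiscover_tasks()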
The celery configuration .txt file contains:

#!/usr/bin/env bash

# Get django environment variables
celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g' | sed 's/%/%%/g'`
celeryenv=${celeryenv%?}
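# Note: '%' is doubled above because supervisord treats '%' as a formatting
# character in config values, and ${celeryenv%?} strips the trailing ',' left by tr.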

# Create celery configuration script
celeryworkerconf="[program:celeryd-worker]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery worker -A djangobrain --loglevel=INFO

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv"

celerybeatconf="[program:celeryd-beat]
; Set full path to celery program if using virtualenv
command=/opt/python/run/venv/bin/celery beat -A djangobrain --loglevel=INFO --workdir=/tmp -S django --pidfile /tmp/celerybeat.pid

directory=/opt/python/current/app
user=nobody
numprocs=1
stdout_logfile=/var/log/celery-beat.log
stderr_logfile=/var/log/celery-beat.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv"

# Create the celery supervisord conf script
echo "$celeryworkerconf" | tee /opt/python/etc/celeryworker.conf
echo "$celerybeatconf" | tee /opt/python/etc/celerybeat.conf

# Add configuration script to supervisord conf (if not there already)
if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
  then
  echo "[include]" | tee -a /opt/python/etc/supervisord.conf
  echo "files: celerybeat.conf celeryworker.conf" | tee -a /opt/python/etc/supervisord.conf
fi

# reread the supervisord config
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf reread
# update supervisord in cache without restarting all services
/usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf update

# Start/Restart celeryd through supervisord
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-beat
supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-worker
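
For reference, celery worker -A djangobrain resolves the Celery app through the project package, so the Celery docs also have the package's __init__.py load the app when Django starts. A minimal sketch, with the package name assumed from the -A flag above (the traceback also mentions django_app, so adjust to whichever package actually holds celery.py):

# djangobrain/__init__.py -- sketch following the Celery docs; package name
# assumed from '-A djangobrain' in the supervisord commands above.
from __future__ import absolute_import, unicode_literals

# Make sure the Celery app is imported when Django starts so that
# @shared_task decorators bind to it.
from .celery import app as celery_app

__all__ = ('celery_app',)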

This is not an answer, but I gave up on the Celery route and went with a serverless deployment as my solution.

Did you install Celery with pip install celery?
Yes, it is installed via requirements.txt.
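
Since the package installs fine from requirements.txt, one quick check is whether the Elastic Beanstalk virtualenv actually imports the installed package or a local celery.py. A rough diagnostic sketch; the file name is hypothetical and the interpreter/app paths are taken from the logs above, so they may differ on your environment:

# check_celery_import.py -- hypothetical diagnostic, not part of the project.
# Run with the EB virtualenv interpreter from the app directory, e.g.:
#   cd /opt/python/current/app && /opt/python/run/venv/bin/python check_celery_import.py
import sys
import traceback

try:
    import celery
except ImportError:
    # If this reproduces "cannot import name 'Celery'", the traceback shows
    # which local celery.py is shadowing the installed package.
    traceback.print_exc()
    sys.exit(1)

# Should point into the virtualenv's site-packages, not into the project tree.
print("celery imported from:", celery.__file__)
print("celery version:", getattr(celery, "__version__", "<unknown>"))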