Celery workers can't connect to RabbitMQ in Docker Swarm, but they work fine in standard Docker Compose
I am trying to run my application in Docker Swarm on a single-node VPS. The stack looks like this:
- Frontend: React.js build, static files served by Node using the `serve -s build` command
- Backend: Django REST Framework API running under gunicorn
- RabbitMQ: used as the message broker for the Celery workers
- Celery workers: same image as the backend, but starts the Celery workers using a multi-start command
- Celery beat: same image as the backend, but starts the Celery beat scheduler
- Database: externally hosted cloud database
- Nginx: reverse proxy, using docker-gen to automatically generate the reverse-proxy configuration for the containers
- Nginx-letsencrypt: companion to the proxy that creates the SSL certificates
When I run `docker-compose -f docker-compose.staging.yml up --build`, all containers work as expected: the Celery workers connect to the rabbitmq container and start their workers, the backend connects to the database, the frontend serves the static files, and the reverse proxy routes requests correctly.
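For reference, the workers reach the broker by the `rabbitmq` service name. A simplified sketch of how the broker URL is assembled (the variable names here are illustrative, not my exact settings file):

```python
import os

# Illustrative sketch: the Celery broker URL is built from the Compose/Swarm
# service name "rabbitmq", so DNS resolution of that name must work inside
# the worker containers for them to start.
RABBIT_USER = os.environ.get("RABBITMQ_DEFAULT_USER", "example")
RABBIT_PASS = os.environ.get("RABBITMQ_DEFAULT_PASS", "password")
RABBIT_HOST = os.environ.get("RABBITMQ_HOST", "rabbitmq")  # service name, not container_name

CELERY_BROKER_URL = f"amqp://{RABBIT_USER}:{RABBIT_PASS}@{RABBIT_HOST}:5672//"
```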
However, when I try to run this in the swarm with `docker stack deploy -c docker-compose.staging.yml app`, the Celery workers never start. All of the Celery workers exit with signal 9. The logs from the Celery beat scheduler container show that the worker containers cannot resolve the hostname of the broker (RabbitMQ). What changes in networking between Docker Compose and Docker Swarm prevent the workers from connecting and starting?
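My understanding is that `docker stack deploy` attaches services to a stack-scoped overlay network (here `app_default`) instead of the bridge network Compose creates. One thing I considered is declaring an explicit overlay network, roughly like this (a sketch only; I have not verified that this fixes it):

```yaml
# Sketch (unverified): pin the services to one explicit attachable overlay
# network so every task resolves "rabbitmq" on the same network.
services:
  rabbitmq:
    networks:
      - app_net
  celery_worker:
    networks:
      - app_net
networks:
  app_net:
    driver: overlay
    attachable: true
```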
My compose file (redacted): docker-compose.staging.yml
version: "3.3"
services:
  # RabbitMQ - queue
  rabbitmq:
    container_name: rabbit
    image: rabbitmq:3-management
    expose:
      - "15672"
      - "5672"
    environment:
      - RABBITMQ_DEFAULT_USER=example
      - RABBITMQ_DEFAULT_PASS=password
      - VIRTUAL_PORT=15672
      - VIRTUAL_HOST=rabbit.example.com
      - LETSENCRYPT_HOST=rabbit.example.com
      - LETSENCRYPT_EMAIL=example@me.com
    restart: on-failure
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        max_replicas_per_node: 1
        constraints:
          - "node.role==manager"
  # Backend API and admin portal
  backend:
    container_name: backend
    image: mycustomprivate/backend-image
    command:
      [
        "gunicorn",
        "--bind",
        "0.0.0.0:8000",
        "main.wsgi:application",
      ]
    entrypoint: /backend/production-entrypoint.sh
    expose:
      - "8000"
    depends_on:
      - rabbitmq
    restart: on-failure
    env_file:
      - ./backend/.env
    environment:
      - DJANGO_SETTINGS_MODULE=main.staging
      - VIRTUAL_PORT=8000
      - VIRTUAL_HOST=api.example.com
      - LETSENCRYPT_HOST=api.example.com
      - LETSENCRYPT_EMAIL=example@me.com
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure
  # Static files for the React.js frontend
  frontend:
    container_name: frontend
    image: mycustomprivate/frontend-image
    command: serve -s build
    entrypoint: /frontend/staging-entrypoint.sh
    expose:
      - "5000"
    environment:
      - VIRTUAL_HOST=example.com
      - VIRTUAL_PORT=5000
      - LETSENCRYPT_HOST=example.com
      - LETSENCRYPT_EMAIL=example@me.com
    stdin_open: true
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure
  # Celery - workers
  celery_worker:
    container_name: celery_worker
    image: mycustomprivate/backend-image
    command:
      [
        "./wait-for-it",
        "rabbitmq:5672",
        "--",
        "../configuration/docker_run_workers.sh",
      ]
    depends_on:
      - backend
      - rabbitmq
    restart: on-failure
    env_file:
      - ./backend/.env
    environment:
      - DJANGO_SETTINGS_MODULE=main.staging
    volumes:
      - ./configuration:/configuration
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      resources:
        limits:
          cpus: "0.50"
          memory: 50M
        reservations:
          cpus: "0.25"
          memory: 20M
  # Celery - beat
  celery_beat:
    container_name: celery_beat
    image: mycustomprivate/backend-image
    command:
      [
        "./wait-for-it",
        "rabbitmq:5672",
        "--",
        "../configuration/docker_run_beat.sh",
      ]
    depends_on:
      - backend
      - rabbitmq
      - celery_worker
    restart: on-failure
    environment:
      - DJANGO_SETTINGS_MODULE=main.staging
    volumes:
      - ./configuration:/configuration
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        max_replicas_per_node: 1
        constraints:
          - "node.role==manager"