nginx 502 breaks EB worker and SQS, causing the visibility timeout to be ignored
My symptoms are the same as on EB Docker preconfigured Python (i.e., the visibility timeout is not respected). First, my queue visibility timeout (configured in both EB and SQS) is 1800 s. I get a 502 after 60 seconds, because my messages take longer than 60 seconds to process (and after 60 s the queue of course retries the message, since it received a 502). I tried the .ebextensions proxy.conf solution (mentioned in ecd_bm's link) with no success.

My /var/log/nginx/access.log gives:
127.0.0.1 - - [18/May/2015:08:56:58 +0000] "POST /scrape-emails HTTP/1.1" 502 172 "-" "aws-sqsd/2.0"
My /var/log/nginx/error.log gives:
2015/05/18 08:56:58 [error] 12465#0: *32 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /scrape-emails HTTP/1.1", upstream: "http://172.17.0.4:8080/scrape-emails", host: "localhost"
My /var/log/aws-sqsd/default.log gives:
2015-05-18T08:56:58Z http-err: 8240b585-61c3-4fba-b99a-265ace312308 (1) 502 - 60.050
First, my /etc/nginx/nginx.conf looks like this:
# Elastic Beanstalk Nginx Configuration File
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
I used to get 504s after 60 seconds, but those disappeared (replaced by 502s) after I added timeout directives to /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf (which is included by /etc/nginx/nginx.conf above). I changed each parameter's default of 60 seconds to 1800 seconds.
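The parameters themselves are not reproduced in the post; a typical proxy-timeout block for elasticbeanstalk-nginx-docker-proxy.conf, with the defaults raised from 60 s to 1800 s as described, would look roughly like this (the exact directive selection is an assumption):

```nginx
# Hypothetical sketch: raise nginx's per-request timeouts from the 60 s
# default to match the 1800 s SQS visibility timeout.
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
send_timeout       1800s;
```

proxy_read_timeout is the one that governs how long nginx waits for the upstream response, which is what matters for a long-running worker request.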
I noticed the uwsgi log shows:

your mercy for graceful operations on workers is 60 seconds

Could this be the problem? If so, how do I fix it? If not, how do I stop the 502s?
Also, I added the following to /etc/nginx/uwsgi_params, with no effect:
uwsgi_read_timeout 1800s;
uwsgi_send_timeout 1800s;
uwsgi_connect_timeout 1800s;
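The 60-second figure in that uwsgi log line is uwsgi's own worker timeout (harakiri), which kills a worker whose request runs longer, independently of any nginx timeout. If that is the culprit, the fix would be raising it in the uwsgi configuration; a sketch, where the file location and the choice of 1800 s (matching the visibility timeout) are assumptions:

```ini
; Hypothetical fragment of the app's uwsgi ini file.
[uwsgi]
; Allow a worker request to run up to 1800 s before uwsgi kills the worker,
; instead of the 60 s reported in the startup log.
harakiri = 1800
```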
After editing the nginx config files (via ssh), I always do "Restart App Server(s)" in the EB web UI before testing.
Any ideas how to get rid of the 502s and have the visibility timeout respected while a message is being processed?

Here is what I have done so far. Not sure whether this is a "safe" way to get at the queue's visibility timeout, but for now it seems to be doing the trick in my Ruby worker environment:
packages:
  yum:
    jq: []
commands:
  match_nginx_timeout_to_sqs_timeout:
    command: |
      VISIBILITY_TIMEOUT=$(
        /opt/aws/bin/cfn-get-metadata --region `{"Ref": "AWS::Region"}` --stack `{"Ref": "AWS::StackName"}` \
          --resource AWSEBBeanstalkMetadata --key AWS::ElasticBeanstalk::Ext |
        jq -r '.Parameters.AWSEBVisibilityTimeout'
      )
      if [[ -n "${VISIBILITY_TIMEOUT}" ]]; then
        echo "proxy_read_timeout ${VISIBILITY_TIMEOUT}s;" > /etc/nginx/conf.d/worker.conf
        service nginx restart
      fi
I actually have a second use for this data, so I ended up splitting it out into a properties cache file.
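The jq extraction in the command above can be sanity-checked in isolation against a fabricated payload (the real one comes from cfn-get-metadata; the JSON shape below is the assumption the script relies on):

```shell
# Fabricated sample of the metadata payload shape the script expects.
SAMPLE='{"Parameters":{"AWSEBVisibilityTimeout":"1800"}}'

# Same jq path as in the ebextension command.
VISIBILITY_TIMEOUT=$(printf '%s' "$SAMPLE" | jq -r '.Parameters.AWSEBVisibilityTimeout')

# The line the script writes to /etc/nginx/conf.d/worker.conf.
echo "proxy_read_timeout ${VISIBILITY_TIMEOUT}s;"
# prints: proxy_read_timeout 1800s;
```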
I get the impression that updating the visibility timeout from the Beanstalk UI won't update this value until the next deploy, but I can live with that since it doesn't change often in an environment.

Have you set harakiri in your uwsgi file? Have you tried setting limit-post? Maybe client_max_body_size would also help.

I've run into similar problems before and they were hard to figure out… did you find a solution?

I have the same problem. I'd also like to see an answer. It does seem possible to configure nginx from the environment's visibility timeout with a carefully crafted ebextension.