
Django - Celery - SQS - S3 - Receiving messages


I have a Django application that uses Celery, SQS, and S3. When I run the function below with Django, Celery, and SQS, it works fine and prints "hello" every minute:

from celery.task import periodic_task
from celery.schedules import crontab
@periodic_task(run_every=crontab(hour='*', minute='*', day_of_week="*"))
def print_hello():
    print('hello world')
But the application is also linked to an S3 bucket, and the problem appears whenever a new file is saved to the bucket and a notification message is sent to the SQS queue. When the notification reaches the queue, the worker fails: it stops running the periodic task print_hello() and exits with the following error message:

[2019-11-07 22:10:57,173: CRITICAL/MainProcess] Unrecoverable error:
Error('Incorrect padding')
  ...parserinvoker/lib64/python3.7/base64.py", line 87, in b64decode
    return binascii.a2b_base64(s)
binascii.Error: Incorrect padding

The worker then exits. I have been going through the documentation and troubleshooting all week without finding a solution. I am adding my settings.py in case this is a configuration problem.

settings.py

BROKER_URL = "sqs://"
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
CELERY_DEFAULT_QUEUE = env('CELERY_DEFAULT_QUEUE')
CELERY_RESULT_BACKEND = None 
BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',
    'polling_interval':20,
    'visibility_timeout': 3600,
    'task_default_queue': env('CELERY_DEFAULT_QUEUE'),
}

The JSON payload Celery expects on its broker queue has a different format from the JSON payload that S3 sends to SQS, so Celery cannot decode the S3 notifications as task messages. Rather than pointing the S3 notifications at the Celery broker queue, you probably want a separate queue for them, plus a periodic task that regularly checks and drains it. Here is a sample 2.1 record of the message body that S3 sends to SQS:

   "Records":[  
      {  
         "eventVersion":"2.1",
         "eventSource":"aws:s3",
         "awsRegion":"us-west-2",
         "eventTime":The time, in ISO-8601 format, for example, 1970-01-01T00:00:00.000Z, when Amazon S3 finished processing the request,
         "eventName":"event-type",
         "userIdentity":{  
            "principalId":"Amazon-customer-ID-of-the-user-who-caused-the-event"
         },
         "requestParameters":{  
            "sourceIPAddress":"ip-address-where-request-came-from"
         },
         "responseElements":{  
            "x-amz-request-id":"Amazon S3 generated request ID",
            "x-amz-id-2":"Amazon S3 host that processed the request"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"ID found in the bucket notification configuration",
            "bucket":{  
               "name":"bucket-name",
               "ownerIdentity":{  
                  "principalId":"Amazon-customer-ID-of-the-bucket-owner"
               },
               "arn":"bucket-ARN"
            },
            "object":{  
               "key":"object-key",
               "size":object-size,
               "eTag":"object eTag",
               "versionId":"object version if bucket is versioning-enabled, otherwise null",
               "sequencer": "a string representation of a hexadecimal value used to determine event sequence, 
                   only used with PUTs and DELETEs"
            }
         },
         "glacierEventData": {
            "restoreEventData": {
               "lifecycleRestorationExpiryTime": "The time, in ISO-8601 format, for example, 1970-01-01T00:00:00.000Z, of Restore Expiry",
               "lifecycleRestoreStorageClass": "Source storage class for restore"
            }
         }
      }
   ]
}
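The drain-the-notification-queue approach described above can be sketched with boto3. This is a sketch, not the poster's code: the queue URL, region, and handler are assumptions, and the drain function would typically be wrapped in a Celery periodic task on the dedicated notification queue.

```python
import json


def extract_s3_objects(message_body):
    """Return (bucket, key) pairs parsed from an S3 event-notification body."""
    event = json.loads(message_body)
    return [
        (record["s3"]["bucket"]["name"], record["s3"]["object"]["key"])
        for record in event.get("Records", [])
    ]


def drain_s3_notifications(queue_url, region="us-east-1"):
    """Poll the dedicated S3-notification queue until it is empty."""
    import boto3  # imported here so the parsing helper works without boto3

    sqs = boto3.client("sqs", region_name=region)
    while True:
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
        messages = resp.get("Messages", [])
        if not messages:
            break
        for msg in messages:
            for bucket, key in extract_s3_objects(msg["Body"]):
                # Hypothetical handler -- replace with real processing
                print(f"new object: s3://{bucket}/{key}")
            # Delete only after successful processing
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )
```

Because the S3 events go to their own queue, the Celery worker on the broker queue never sees them, which avoids the base64/"Incorrect padding" crash entirely.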

Celery message format.

Thank you for the explanation. I didn't know the formats were different. I created a separate queue and everything has worked so far: one queue dedicated to Celery and one for receiving the S3 notification messages.