Python boto3: Lambda function executes multiple times when writing a file to S3

Tags: python, lambda, boto3

I am developing a utility that reads SQS messages and writes them to a file in S3, and I am testing it with boto3. It has two parts:

A: A Python client: it invokes the Lambda function, passing the file name that will be used for the S3 object name.

B: A Lambda function, which reads SQS messages, writes them (100 records) to a file (in the /tmp folder), and then uploads that file to an S3 bucket. After a message has been read from the SQS queue and written to the file, it is deleted from the queue.

Lambda function configuration: 256 MB, 5-minute timeout. No VPC.
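For concreteness, the contract between A and B is just a small JSON payload carrying the file-name suffix. Assuming the queue name "target" and an fkey of "0" (both taken from the code further down), the names work out like this:

event = {'fkey': '0'}                     # payload sent by the client
key = "target" + event['fkey'] + ".json"  # S3 object key: "target0.json"
filename = '/tmp/' + key                  # local file: "/tmp/target0.json"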

When I invoke the Lambda function from the Python client, the Lambda function executes multiple times: 200 records are deleted from the SQS queue instead of 100. I get the following in the logs:

[DEBUG] 2017-08-01T10:24:59.613Z 99fdbfb1-76a3-11e7-9b81-c76e4a7f8294 Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f78edd899a8>>

REPORT RequestId: 99fdbfb1-76a3-11e7-9b81-c76e4a7f8294 Duration: 29881.32 ms Billed Duration: 29900 ms Memory Size: 256 MB Max Memory Used: 46 MB 

[DEBUG] 2017-08-01T10:25:30.871Z ad06a4a7-76a3-11e7-b7d8-a7a246d0544c Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f78edd28770>>

REPORT RequestId: ad06a4a7-76a3-11e7-b7d8-a7a246d0544c Duration: 29847.54 ms Billed Duration: 29900 ms Memory Size: 256 MB Max Memory Used: 52 MB
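
Note that the two REPORT lines show two different RequestIds about 30 seconds apart, each with a Duration just under 30 s. One possibility (an assumption, not something the logs confirm) is that the synchronous invoke on the client side times out and is silently retried by boto3, which would start a second execution. A minimal sketch of an invoke configured so client-side retries cannot cause duplicates (the FunctionName is a placeholder, and the timeout and retry values are assumptions, not taken from the original code):

import json
import boto3
from botocore.config import Config

# Give the synchronous invoke a read timeout longer than the Lambda's
# 5-minute limit and turn off boto3's automatic retries, so a slow response
# cannot trigger a second (duplicate) invocation from the client side.
noretry_config = Config(
    read_timeout=330,            # assumption: longer than the 300 s Lambda timeout
    connect_timeout=10,
    retries={'max_attempts': 0}  # assumption: disable built-in retries
)
lambdaclient = boto3.client('lambda', config=noretry_config)

response = lambdaclient.invoke(
    FunctionName='sqstos3',      # placeholder name, not from the original post
    Payload=json.dumps({'fkey': '0'}),
)
print(response['StatusCode'], response['Payload'].read())
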
Comment: Does it copy 100 or 200 records to the file? (You said 200 records were deleted.)
Reply: 100 records are copied to the file (as per numofrecords), but 200 are deleted from the SQS queue.
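
To pin down how many messages actually leave the queue per run, one quick check (a sketch, reusing the queue name "target" from the code below) is to snapshot the approximate queue depth before and after a single invocation; a drop of about 200 instead of about 100 would confirm a duplicate execution:

import boto3

sqs = boto3.client('sqs')
queue_url = sqs.get_queue_url(QueueName='target')['QueueUrl']

# Approximate depth of the queue; compare before and after one Lambda run.
attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=['ApproximateNumberOfMessages'],
)
print("approximate messages in queue:", attrs['Attributes']['ApproximateNumberOfMessages'])
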
My Lambda function, and the client that invokes it:

# Part A: the Python client that invokes the Lambda function.
import json
import boto3

lambdaclient = boto3.client('lambda')

nooflambdafunc = 1  # number of Lambda invocations (one file per invocation)
for i in range(0, nooflambdafunc):
    num = str(i)
    print("file no:", i)
    js = json.dumps({'fkey': num})  # payload carrying the suffix for the S3 object name
    #s3response = lambdaclient.invoke(FunctionName=sqstos3lambdafuncname, Payload=js)
    print("SQS to S3 lambda function invoked for:", js)

# Part B: the Lambda function. It reads SQS messages, appends them to a file
# in /tmp, deletes each message after writing it, and uploads the file to S3.
import os
import time

import boto3
from botocore.config import Config

def lambda_handler(event, context):
    config = Config(region_name='us-east-2', connect_timeout=300, read_timeout=500)

    # SQS connection
    sqsconnclient = boto3.client('sqs', config=config)
    sourcesqsn = "target"

    # SQS queue URL
    queueurl = sqsconnclient.get_queue_url(QueueName=sourcesqsn)
    sqsstring = queueurl.get('QueueUrl')

    # boto3 S3 connection
    s3connresource = boto3.resource('s3', config=config)
    sourcebucket = "testcy"

    # get the payload passed by the Lambda invoke call
    filekey = event["fkey"]
    print("key is:", filekey)

    # build the S3 object key and the local file name from the payload
    key = str(sourcesqsn + str(filekey) + ".json")
    numofrecords = 100  # number of records to write to the file
    filename = '/tmp/' + key

    # using the client connection, write msgs (as per numofrecords) to a single file
    with open(filename, 'a') as writer:  # open once, not once per message
        for i in range(0, numofrecords):
            messages = sqsconnclient.receive_message(QueueUrl=sqsstring)
            if messages.get('Messages'):
                m = messages.get('Messages')[0]
                msg_body = m['Body'].replace('\n', ' ')
                writer.write(msg_body + '\n')  # write each msg on a new line
                # delete the message only after it has been written to the file
                sqsconnclient.delete_message(QueueUrl=sqsstring,
                                             ReceiptHandle=m['ReceiptHandle'])
            time.sleep(0.25)

    print("File size (MiB):", os.path.getsize(filename) >> 20)

    # write the file to the S3 bucket
    print("uploading file to s3 bucket...", filename)
    s3connresource.meta.client.upload_file(filename, sourcebucket, key)
    print("file uploaded to s3 bucket.")