Using multiple Python functions in an AWS Lambda script
Situation
I'm using a Lambda function that takes a CSV attachment from an incoming email and puts it into a subfolder of an S3 bucket. This part of the Lambda works fine, but I need to execute additional user-defined functions within the same Lambda function to perform follow-up tasks.
Code
import boto3
import email
import base64
import math
import pickle
import numpy as np
import pandas as pd
import io
import urllib.parse

###############################
###   GET THE ATTACHMENT   ###
###############################

FILE_MIMETYPE = 'text/csv'
#'application/octet-stream'

# destination bucket
S3_OUTPUT_BUCKETNAME = 'my_bucket'

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # source email bucket
    inBucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.quote(event['Records'][0]['s3']['object']['key'].encode('utf8'))

    try:
        response = s3.get_object(Bucket=inBucket, Key=key)
        msg = email.message_from_string(response['Body'].read().decode('utf-8'))
    except Exception as e:
        print(e)
        print('Error retrieving object {} from source bucket {}. Verify existence and ensure bucket is in same region as function.'.format(key, inBucket))
        raise e

    attachment_list = []
    try:
        # scan each part of the email
        for message in msg.walk():
            # check filename and email MIME type
            if message.get_content_type() == FILE_MIMETYPE and message.get_filename() is not None:
                attachment_list.append({
                    'original_msg_key': key,
                    'attachment_filename': message.get_filename(),
                    'body': base64.b64decode(message.get_payload())
                })
    except Exception as e:
        print(e)
        print('Error processing email for CSV attachments')
        raise e

    # if there are multiple attachments, send all of them to the bucket
    for attachment in attachment_list:
        try:
            s3.put_object(
                Bucket=S3_OUTPUT_BUCKETNAME,
                Key='attachments/' + attachment['original_msg_key'] + '-' + attachment['attachment_filename'],
                Body=attachment['body']
            )
        except Exception as e:
            print(e)
            print('Error sending object {} to destination bucket {}. Verify existence and ensure bucket is in same region as function.'.format(attachment['attachment_filename'], S3_OUTPUT_BUCKETNAME))
            raise e

#################################
###   ADDITIONAL FUNCTIONS   ###
#################################

def my_function():
    print("Hello, this is another function")
Outcome
The CSV attachment is successfully retrieved and placed in the destination specified by s3.put_object, but there is no evidence in the CloudWatch logs that my_function is running.
What I've tried
I tried using def my_function(event, context): to see whether the function needs the same signature as the first function in order to execute. I also tried making my_function() part of the first function, but that didn't seem to work either.
How can I ensure that both functions are executed in the Lambda?
Answer (based on the comments)
The problem is that my_function is never called from inside the Lambda handler. The solution is to add a call to my_function() inside lambda_handler so that my_function is actually invoked.
You are not calling my_function at all; just call it at the end of lambda_handler. The Lambda service invokes only the Python function you configured when creating or updating the Lambda function; it does not magically call every function that has a given parameter signature. In other words, call my_function() inside lambda_handler(...), exactly as you would in regular Python. I can confirm the function then executes, since its output appears in the CloudWatch logs.