Amazon web services: running an EMR Spark job from AWS Lambda using Python code
After an S3 event fires, I want to trigger an EMR Spark job from AWS Lambda using Python code. I would appreciate it if someone could share the configuration/commands to invoke an EMR Spark job from an AWS Lambda function.

Since this question is quite general, I will try to give sample code that does this. You will have to change certain parameters to your actual values.

My usual approach is to put the main handler function in a file called lambda_handler.py and all the EMR configuration and steps in a file called emr_configuration_and_steps.py. Please check the code snippet below for lambda_handler.py:
import boto3
import emr_configuration_and_steps
import logging
import traceback

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
formatter = logging.Formatter('%(levelname)s:%(name)s:%(message)s')


def create_emr(name):
    try:
        emr = boto3.client('emr')
        cluster_id = emr.run_job_flow(
            Name=name,
            VisibleToAllUsers=emr_configuration_and_steps.visible_to_all_users,
            LogUri=emr_configuration_and_steps.log_uri,
            ReleaseLabel=emr_configuration_and_steps.release_label,
            Applications=emr_configuration_and_steps.applications,
            Tags=emr_configuration_and_steps.tags,
            Instances=emr_configuration_and_steps.instances,
            Steps=emr_configuration_and_steps.steps,
            Configurations=emr_configuration_and_steps.configurations,
            ScaleDownBehavior=emr_configuration_and_steps.scale_down_behavior,
            ServiceRole=emr_configuration_and_steps.service_role,
            JobFlowRole=emr_configuration_and_steps.job_flow_role
        )
        logger.info("EMR is created successfully")
        return cluster_id['JobFlowId']
    except Exception as e:
        traceback.print_exc()
        raise Exception(e)


def lambda_handler(event, context):
    logger.info("starting the lambda function for spawning EMR")
    try:
        emr_cluster_id = create_emr('Name of Your EMR')
        logger.info("emr_cluster_id is = " + emr_cluster_id)
    except Exception as e:
        logger.error("Exception at some step in the process " + str(e))
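Note that the handler above ignores `event`, but an S3-triggered Lambda receives the bucket and key of the object that fired it. As a sketch (the helper name is hypothetical, not part of the original answer), you could extract them like this and, for example, pass them to your Spark step as arguments:

```python
# Hypothetical helper: pull (bucket, key) out of an S3 trigger event so the
# triggering object can be forwarded to the Spark job.
def parse_s3_event(event):
    """Return (bucket, key) for each S3 record in the Lambda trigger event."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]
```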
Now the second file, which contains all the configuration (emr_configuration_and_steps.py), will look like this:
visible_to_all_users = True
log_uri = 's3://your-s3-log-path-here/'
release_label = 'emr-5.29.0'
applications = [{'Name': 'Spark'}, {'Name': 'Hadoop'}]
tags = [
    {'Key': 'Project', 'Value': 'Your-Project Name'},
    {'Key': 'Service', 'Value': 'Your-Service Name'},
    {'Key': 'Environment', 'Value': 'Development'}
]
instances = {
    'Ec2KeyName': 'Your-key-name',
    'Ec2SubnetId': 'your-subnet-name',
    'InstanceFleets': [
        {
            "InstanceFleetType": "MASTER",
            "TargetOnDemandCapacity": 1,
            "TargetSpotCapacity": 0,
            "InstanceTypeConfigs": [
                {
                    "WeightedCapacity": 1,
                    "BidPriceAsPercentageOfOnDemandPrice": 100,
                    "InstanceType": "m3.xlarge"
                }
            ],
            "Name": "Master Node"
        },
        {
            "InstanceFleetType": "CORE",
            "TargetSpotCapacity": 8,
            "InstanceTypeConfigs": [
                {
                    "WeightedCapacity": 8,
                    "BidPriceAsPercentageOfOnDemandPrice": 50,
                    "InstanceType": "m3.xlarge"
                }
            ],
            "Name": "Core Node"
        },
    ],
    'KeepJobFlowAliveWhenNoSteps': False
}
steps = [
    {
        'Name': 'Setup Hadoop Debugging',
        'ActionOnFailure': 'TERMINATE_CLUSTER',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['state-pusher-script']
        }
    },
    {
        "Name": "Active Marker for digital panel",
        "ActionOnFailure": 'TERMINATE_CLUSTER',
        'HadoopJarStep': {
            "Jar": "command-runner.jar",
            "Args": [
                "spark-submit",
                "--deploy-mode",
                "cluster",
                "--driver-memory", "4g",
                "--executor-memory", "4g",
                "--executor-cores", "2",
                "--class", "your-main-class-full-path-name",
                "s3://your-jar-path-SNAPSHOT-jar-with-dependencies.jar"
            ]
        }
    }
]
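If you need to build the spark-submit step dynamically, for example to append the S3 path that triggered the Lambda as a job argument, the step dict above can be produced by a small helper. This is a sketch with a hypothetical function name, not part of the original answer:

```python
# Hypothetical helper: build a spark-submit EMR step so per-invocation
# values (such as the triggering s3://bucket/key) can be appended as
# arguments to the Spark application.
def build_spark_step(name, main_class, jar_path, extra_args=(),
                     action_on_failure="TERMINATE_CLUSTER"):
    """Return an EMR step dict that runs spark-submit via command-runner.jar."""
    return {
        "Name": name,
        "ActionOnFailure": action_on_failure,
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
                "spark-submit",
                "--deploy-mode", "cluster",
                "--class", main_class,
                jar_path,
                *extra_args,  # e.g. the S3 object path that fired the event
            ],
        },
    }
```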
configurations = [
    {
        "Classification": "spark-log4j",
        "Properties": {
            "log4j.logger.root": "INFO",
            "log4j.logger.org": "INFO",
            "log4j.logger.com": "INFO"
        }
    }
]
scale_down_behavior = 'TERMINATE_AT_TASK_COMPLETION'
service_role = 'EMR_DefaultRole'
job_flow_role = 'EMR_EC2_DefaultRole'
Please adjust the specific paths and names to your use case. To deploy this, install boto3 if needed, package/zip the two files into a single zip archive, and upload it to the Lambda function. With that in place you will be able to spawn EMR clusters.
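The packaging step can also be scripted. A minimal sketch using only the standard library (the helper name is hypothetical; note that boto3 is already bundled with the Lambda Python runtimes, so vendoring it into the zip is only required when you need to pin a specific version):

```python
# Sketch: build the Lambda deployment zip from the two files in this answer.
import zipfile


def package_lambda(zip_path, sources=("lambda_handler.py",
                                      "emr_configuration_and_steps.py")):
    """Write the given source files into a deployment zip and return its path."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for src in sources:
            zf.write(src)
    return zip_path
```

The resulting zip can then be uploaded through the console or with the AWS CLI (`aws lambda update-function-code --zip-file fileb://...`).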