Need help running spark-submit in Apache Airflow (Python)

I am new to Python and Airflow, and I am having a very difficult time getting a spark-submit to run in an Airflow task. My goal is to get the following DAG task to run successfully:

from datetime import datetime, timedelta
from airflow import DAG
from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator
from airflow.operators.bash_operator import BashOperator

default_args = {
    'owner': 'matthew',
    'start_date': datetime(2019, 7, 8)
}

dag = DAG('CustomCreate_test2',
          default_args=default_args,
          schedule_interval=timedelta(days=1))

t3 = BashOperator(
    task_id='run_test',
    bash_command='spark-submit --class CLASSPATH.CustomCreate ~/IdeaProjects/custom-create-job/build/libs/custom-create.jar',
    dag=dag
)
I know the problem lies with Airflow and not with bash, because when I run the command spark-submit --class CLASSPATH.CustomCreate ~/IdeaProjects/custom-create-job/build/libs/custom-create.jar in a terminal, it runs successfully.

I get the following error from the Airflow logs:

...
[2019-08-28 15:55:34,750] {bash_operator.py:132} INFO - Command exited with return code 1
[2019-08-28 15:55:34,764] {taskinstance.py:1047} ERROR - Bash command failed
Traceback (most recent call last):
  File "/Users/matcordo2/.virtualenv/airflow/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 922, in _run_raw_task
    result = task_copy.execute(context=context)
  File "/Users/matcordo2/.virtualenv/airflow/lib/python3.7/site-packages/airflow/operators/bash_operator.py", line 136, in execute
    raise AirflowException("Bash command failed")
airflow.exceptions.AirflowException: Bash command failed
...
I have also tried working with SparkSubmitOperator(...), but I have never had a successful run with it; I only get error logs like the following:

...
[2019-08-28 15:54:49,749] {logging_mixin.py:95} INFO - [2019-08-28 15:54:49,749] {spark_submit_hook.py:427} INFO - at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
[2019-08-28 15:54:49,803] {taskinstance.py:1047} ERROR - Cannot execute: ['spark-submit', '--master', 'yarn', '--num-executors', '2', '--total-executor-cores', '1', '--executor-cores', '1', '--executor-memory', '2g', '--driver-memory', '1g', '--name', 'CustomCreate', '--class', 'CLASSPATH.CustomCreate', '--verbose', '--queue', 'root.default', '--deploy-mode', 'cluster', '~/IdeaProjects/custom-create-job/build/libs/custom-create.jar']. Error code is: 1.
...
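(For reference, a SparkSubmitOperator task that builds the command shown in this log would look roughly like the sketch below. The executor and driver settings are taken from the log itself; the task_id is hypothetical, conn_id='spark_default' is the operator's default and an assumption here, and the --master, --queue and --deploy-mode arguments come from that Spark connection rather than from the operator.)

from airflow.contrib.operators.spark_submit_operator import SparkSubmitOperator

# Sketch only: reconstructs the spark-submit call seen in the log above
t_spark = SparkSubmitOperator(
    task_id='run_test_spark_submit',
    application='~/IdeaProjects/custom-create-job/build/libs/custom-create.jar',  # path to the jar
    java_class='CLASSPATH.CustomCreate',
    name='CustomCreate',
    conn_id='spark_default',        # connection supplies --master, --queue, --deploy-mode
    num_executors=2,
    total_executor_cores=1,
    executor_cores=1,
    executor_memory='2g',
    driver_memory='1g',
    verbose=True,
    dag=dag
)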
Is there something that must first be done with SparkSubmitOperator(...) before the spark-submit ... command can be run in a BashOperator(...) task?

Is there a way to run my spark-submit command directly from a SparkSubmitOperator(...) task?

Is there anything I need to do with spark_default on Airflow's Admin -> Connections page?

Is there anything that has to be set on the Admin -> Users page? Is there anything that has to be set to allow Airflow to run Spark, or to run a jar file created by a particular user? If so, what and how?
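(Regarding the Admin -> Connections question: SparkSubmitOperator reads the Spark master, queue and deploy mode from the connection named by its conn_id, which defaults to spark_default and ships pointing at YARN. A minimal, hypothetical sketch of registering a different Spark connection programmatically, assuming a local Spark installation and the connection id spark_local shown here, would be:)

from airflow import settings
from airflow.models import Connection

# Hypothetical connection for a local Spark installation; pass
# conn_id='spark_local' to SparkSubmitOperator to use it instead of spark_default.
spark_local = Connection(
    conn_id='spark_local',
    conn_type='spark',
    host='local[*]',                                   # Spark master URL
    extra='{"deploy-mode": "client", "spark-binary": "spark-submit"}'
)
session = settings.Session()
session.add(spark_local)
session.commit()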

A similar question has already been answered here -

I think the link above will help you.

In the future, if you want to achieve the same thing on AWS EMR / Azure, there is a nice way to schedule Spark jobs -

An example of the above - (AWS EMR):

 <airflow_EMR_task> =cover_open(json.load(open(airflow_home+'/<tasks_json_containing_all_spark_configurations>')))
 <airflow_EMR_task>['Job']['Name'] =  <airflow_EMR_task>['Job']['Name'] + <'optional_postfix'>
airflow_swperformance_cpu_creator = EmrRunJobFlowOperator(
    task_id='<task_id>',
    job_flow_overrides= <airflow_EMR_task>['Job'],
    aws_conn_id='aws_default',
    emr_conn_id='emr_default',
    retries=1,
    dag=dag
)
A simple JSON would be (the same JSON file mentioned above):

{
    "Job": {
        "Name": "<task_name>",
        "LogUri": "<task_log_uri>",
        "ReleaseLabel": "emr-5.6.0",
        "Applications": [
            {
                "Name": "Spark"
            },
            {
                "Name": "Hive"
            }
        ],
        "Tags": [
            {
                "Key" : "<any_tag>",
                "Value" : "<any_tag>"
            },
            {
                "Key" : "<any tag>",
                "Value": "<any_tag>"
            },
            {
                "Key" : "<any_tag>",
                "Value": "<any_tag value>"
            }
        ],
        "JobFlowRole": "EMR_EC2_DefaultRole_Stable",
        "ServiceRole": "EMR_DefaultRole",
        "VisibleToAllUsers": true,
        "Configurations": [
            {
                "Classification": "spark-defaults",
                "Properties": {
                    "spark.driver.extraJavaOptions":"-XX:+UseParallelGC -XX:+UseParallelOldGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:+ExitOnOutOfMemoryError -Dlog4j.configuration=log4j-custom.properties",
                    "spark.executor.extraJavaOptions":"-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:+ExitOnOutOfMemoryError -Dlog4j.configuration=log4j-custom.properties",
                    "spark.scheduler.mode": "FAIR",
                    "spark.eventLog.enabled": "true",
                    "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
                    "spark.sql.orc.filterPushdown": "true",
                    "spark.dynamicAllocation.enabled": "false"
                },
                "Configurations": []
            },
            {
                "Classification": "spark",
                "Properties": {
                    "maximizeResourceAllocation": "true"
                },
                "Configurations": []
            },
            {
                "Classification": "hive-site",
                "Properties": {
                    "javax.jdo.option.ConnectionUserName": "<HIVE USERNAME IF ANY>",
                    "javax.jdo.option.ConnectionPassword": "<<hive_connection_password>>",
                    "javax.jdo.option.ConnectionURL": "<Hive_URL_IF_ANY"
                },
                "Configurations": []
            },
            {
                "Classification": "emrfs-site",
                "Properties": {
                    "fs.s3.serverSideEncryption.kms.keyId": "<<encryption_key>>",
                    "fs.s3.enableServerSideEncryption": "true"
                },
                "Configurations": []
            },
            {
                "Classification":"spark-env",
                "Configurations":[{
                    "Classification":"export",
                    "Configurations":[],
                    "Properties": {
                        "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB",
                        "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB",
                        "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB",
                        "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB",
                        "ANY_ENV_VARIABLE_REQUIRED_FOR_SPECIFIC_JOB"
            "S3_BUCKET_NAME":"<S3_bucekt_naem_if_Required>"
                    }
                }
                ]}
        ],
        "Instances": {
            "Ec2KeyName": "<ssh_key>",
            "KeepJobFlowAliveWhenNoSteps": false,
            "Ec2SubnetId": "<subnet>",
            "EmrManagedSlaveSecurityGroup": "<security_group>",
            "EmrManagedMasterSecurityGroup": "<security_group_parameter>",
            "AdditionalSlaveSecurityGroups": [
                "<self_explanatory>"
            ],
            "AdditionalMasterSecurityGroups": [
                "<self_explanatory>"
            ],
            "InstanceGroups": [
                {
                    "InstanceCount": 4,
                    "InstanceRole": "CORE",
                    "InstanceType": "r3.xlarge",
                    "Name": "Core instance group - 2"
                },
                {
                    "InstanceCount": 1,
                    "InstanceRole": "MASTER",
                    "InstanceType": "r3.xlarge",
                    "Name": "Master instance group - 1"
                }
            ]
        },
        "BootstrapActions": [],
        "Steps": [
            {
                "Name": "download-dependencies",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                        "aws",
                        "s3",
                        "cp",
                        "<appropriate_s3_location>",
                        "/home/hadoop",
                        "--recursive"
                    ],
                    "Properties": []
                },
                "ActionOnFailure": "TERMINATE_CLUSTER"
            },
            {
                "Name": "run-script",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                        "sudo",
                        "/bin/sh",
                        "/home/hadoop/pre-executor.sh"
                    ],
                    "Properties": []
                },
                "ActionOnFailure": "TERMINATE_CLUSTER"
            },
            {
                "Name": "spark-submit",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                        "spark-submit",
                        "/home/hadoop/analytics-job.jar",
            "--run-gold-job-only"
                    ],
                    "Properties": []
                },
                "ActionOnFailure": "TERMINATE_CLUSTER"
            }
        ]
    }
}
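Another option is to run the spark-submit command over SSH on the host where Spark is installed, using an SSHOperator: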
from airflow import DAG
from airflow.contrib.operators.ssh_operator import SSHOperator
from airflow.operators.bash_operator import BashOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'matthew',
    'start_date': datetime(2019, 8, 28)
}

dag = DAG('custom-create',
          default_args=default_args,
          schedule_interval=timedelta(days=1),
          params={'project_source': '~/IdeaProjects/custom-create-job',
                  'spark_submit': '/usr/local/bin/spark-submit',
                  'classpath': 'CLASSPATH.CustomCreate',
                  'jar_file': 'build/libs/custom-create.jar'}
          )

templated_bash_command = """
    echo 'HOSTNAME: $HOSTNAME' #To check that you are properly connected to the host
    cd {{ params.project_source }}
    {{ params.spark_submit }} --class {{ classpath }} {{ jar_file }}
"""

t1 = SSHOperator(
    task_id="SSH_task",
    ssh_conn_id='ssh_connection',
    command=templated_bash_command,
    dag=dag
)
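Note that ssh_conn_id='ssh_connection' refers to an SSH connection that must already exist in Airflow (Admin -> Connections), pointing at the host where spark-submit is installed, together with its username and key or password; the connection id itself is arbitrary as long as it matches the ssh_conn_id passed to the SSHOperator.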