Apache Spark ignores the spark.master configuration of a job submitted via REST in a standalone cluster


I have a standalone Spark cluster in HA mode (two masters) and two workers registered with it.

I submitted a Spark job through the REST interface, as follows:

{
    "sparkProperties": {
        "spark.app.name": "TeraGen3",
        "spark.default.parallelism": "40",
        "spark.executor.memory": "512m",
        "spark.driver.memory": "512m",
        "spark.task.maxFailures": "3",
        "spark.jars": "file:///tmp//test//spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar",
        "spark.eventLog.enabled": "false",
        "spark.submit.deployMode": "cluster",
        "spark.driver.supervise": "true",
        "spark.master": "spark://spark-hn0:7077,spark-hn1:7077"
    },
    "mainClass": "com.github.ehiggs.spark.terasort.TeraGen",
    "environmentVariables": {
        "SPARK_ENV_LOADED": "1"
    },
    "action": "CreateSubmissionRequest",
    "appArgs": ["4g", "file:///tmp/data/teradata4g/"],
    "appResource": "file:///tmp//test//spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar",
    "clientSparkVersion": "2.1.1"
}
This request was submitted to the active Spark Master through the REST interface.
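For concreteness, here is a minimal sketch of how such a request can be POSTed to the standalone master's REST submission server. The hostname spark-hn0 and the default REST port 6066 are assumptions based on the question; the payload mirrors the JSON shown above, abridged to the relevant fields.

# Minimal sketch: POST the submission request to the standalone master's
# REST server (assumed to listen on spark-hn0:6066).
import json
import urllib.request

payload = {
    "action": "CreateSubmissionRequest",
    "clientSparkVersion": "2.1.1",
    "mainClass": "com.github.ehiggs.spark.terasort.TeraGen",
    "appResource": "file:///tmp//test//spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar",
    "appArgs": ["4g", "file:///tmp/data/teradata4g/"],
    "environmentVariables": {"SPARK_ENV_LOADED": "1"},
    "sparkProperties": {
        "spark.app.name": "TeraGen3",
        "spark.submit.deployMode": "cluster",
        "spark.driver.supervise": "true",
        "spark.jars": "file:///tmp//test//spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar",
        "spark.master": "spark://spark-hn0:7077,spark-hn1:7077",
    },
}

request = urllib.request.Request(
    "http://spark-hn0:6066/v1/submissions/create",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json;charset=UTF-8"},
)
with urllib.request.urlopen(request) as response:
    # On success the CreateSubmissionResponse JSON contains the submissionId.
    print(response.read().decode("utf-8"))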

When the driver is launched, -Dspark.master is set to "spark://spark-hn1:7077" instead of the value passed in sparkProperties, i.e. "spark://spark-hn0:7077,spark-hn1:7077".

Log from the worker node running the driver:

17/12/18 13:29:49 INFO worker.DriverRunner: Launch Command: "/usr/lib/jvm/java-8-openjdk-amd64/bin/java" "-Dhdp.version=2.6.99.200-0" "-cp" "/usr/hdp/current/spark2-client/conf/:/usr/hdp/current/spark2-client/jars/*:/etc/hadoop/conf/" "-Xmx512M" "-Dspark.driver.memory=512m" "-Dspark.master=spark://spark-hn1:7077" "-Dspark.executor.memory=512m" "-Dspark.submit.deployMode=cluster" "-Dspark.app.name=TeraGen3" "-Dspark.default.parallelism=40" "-Dspark.jars=file:///tmp//test//spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar" "-Dspark.task.maxFailures=3" "-Dspark.driver.supervise=true" "-Dspark.eventLog.enabled=false" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@172.18.0.4:40803" "/var/spark/work/driver-20171218132949-0001/spark-terasort-1.1-SNAPSHOT-jar-with-dependencies.jar" "com.github.ehiggs.spark.terasort.TeraGen" "4g" "file:///tmp/data/teradata4g/"
This causes problems during job execution when the active master goes down and the other master becomes active: since the driver only knows about one master (the old one), it cannot reach the new master and continue the job (even with spark.driver.supervise=true).


What is the correct way to pass multiple master URLs through the Spark REST interface?

It looks like this is a bug in the RestServer implementation, where spark.master is being replaced.

We can still work around this by setting spark.master in spark.driver.extraJavaOptions when submitting the job through the REST interface, as shown below:

"sparkProperties": {
        "spark.app.name": "TeraGen3",
        ...
        "spark.driver.extraJavaOptions": "-Dspark.master=spark://spark-hn0:7077,spark-hn1:7077"
    }
This worked for me.
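For completeness, a sketch of how that workaround can be folded into the submission payload from the earlier example; the master URLs are the ones from the question, everything else (the payload variable, hostnames, port) is assumed as before.

# Workaround sketch: duplicate the HA master list into a -Dspark.master
# system property via spark.driver.extraJavaOptions, then submit the
# payload exactly as in the earlier sketch.
masters = "spark://spark-hn0:7077,spark-hn1:7077"
payload["sparkProperties"]["spark.master"] = masters
payload["sparkProperties"]["spark.driver.extraJavaOptions"] = "-Dspark.master=" + masters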