Databricks spark_jar_task fails when submitted via API

I am submitting a spark_jar_task example via the Jobs API.

My sample spark_jar_task request to calculate Pi:

"libraries": [
    {
      "jar": "dbfs:/mnt/test-prd-foundational-projects1/spark-examples_2.11-2.4.5.jar"
    }
  ],
  "spark_jar_task": {
    "main_class_name": "org.apache.spark.examples.SparkPi"
  }
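
For context, a complete request body for POST /api/2.0/jobs/runs/submit would wrap the fragment above together with a cluster spec, along the lines of the sketch below; the run_name and the new_cluster values (runtime version, node type, worker count) are placeholder assumptions for a Spark 2.4.5 / Scala 2.11 JAR and need to match your workspace:

{
  "run_name": "spark-pi-example",
  "new_cluster": {
    "spark_version": "6.4.x-scala2.11",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 1
  },
  "libraries": [
    {
      "jar": "dbfs:/mnt/test-prd-foundational-projects1/spark-examples_2.11-2.4.5.jar"
    }
  ],
  "spark_jar_task": {
    "main_class_name": "org.apache.spark.examples.SparkPi"
  }
}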
The Databricks stdout (sysout) log prints the Pi value as expected:

....
(This session will block until Rserve is shut down) Spark package found in SPARK_HOME: /databricks/spark DATABRICKS_STDOUT_END-19fc0fbc-b643-4801-b87c-9d22b9e01cd2-1589148096455 
Executing command, time = 1589148103046. 
Executing command, time = 1589148115170. 
Pi is roughly 3.1370956854784273 
Heap
.....
Although the spark_jar_task prints the Pi value in the logs, the job terminates in a Failed state without stating any error. Below is the response from /api/2.0/jobs/runs/list?job_id=23.

Why is the job failing here? Any suggestions would be appreciated.

Edit: the error log says

20/05/11 18:24:15 INFO ProgressReporter$: Removed result fetcher for 740457789401555410_9000204515761834296_job-34-run-1-action-34
20/05/11 18:24:15 WARN ScalaDriverWrapper: Spark is detected to be down after running a command
20/05/11 18:24:15 WARN ScalaDriverWrapper: Fatal exception (spark down) in ReplId-a46a2-6fb47-361d2
com.databricks.backend.common.rpc.SparkStoppedException: Spark down: 
    at com.databricks.backend.daemon.driver.DriverWrapper.getCommandOutputAndError(DriverWrapper.scala:493)
    at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:597)
    at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:390)
    at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:337)
    at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:219)
    at java.lang.Thread.run(Thread.java:748)
20/05/11 18:24:17 INFO ShutdownHookManager: Shutdown hook called

I found the answer in this post: it seems we should not explicitly call

spark.stop()

when running as a JAR in Databricks. Databricks JAR jobs run on a shared SparkContext whose lifecycle the platform manages, so stopping it from job code leads the driver wrapper to detect Spark as down (the SparkStoppedException above) and mark the run as Failed, even though the job's own work completed.
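
As an illustration, here is a minimal sketch of a SparkPi-style main class adjusted for a Databricks JAR job. The package and object names are hypothetical; the substantive change is that it obtains the existing session via getOrCreate() and never calls spark.stop():

package com.example  // hypothetical package name

import scala.math.random
import org.apache.spark.sql.SparkSession

object SparkPiNoStop {  // hypothetical object name
  def main(args: Array[String]): Unit = {
    // getOrCreate() returns the shared session that Databricks already manages
    val spark = SparkSession.builder.appName("Spark Pi").getOrCreate()

    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.sparkContext.parallelize(1 until n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * count / (n - 1)}")

    // NOTE: no spark.stop() here -- Databricks owns the shared context,
    // and stopping it makes the run finish in a Failed state.
  }
}

With the stop() call removed, the run should finish with a Succeeded result state while the Pi value still appears in the stdout log.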