Apache Spark: spark-submit error when running a JAR from Azure Databricks

I am trying to issue a spark-submit from the Azure Databricks job scheduler and am currently hitting the error below. It says: File file:/tmp/spark-events does not exist. I need some pointers to understand whether this directory has to be created in the Azure Blob location (which is my storage layer) or in the Azure DBFS location.

Going by the link below, it is not clear where the directory should be created when running spark-submit from the Azure Databricks jobs scheduler.
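
For context, file:/tmp/spark-events is Spark's default value for spark.eventLog.dir when event logging is enabled, and it points at the driver's local filesystem rather than at Blob storage or DBFS. A minimal sketch of how the two settings fit together on a plain spark-submit (the main class is taken from the stack trace below; the JAR path is a placeholder, not the actual job):

    spark-submit \
      --conf spark.eventLog.enabled=true \
      --conf spark.eventLog.dir=file:/tmp/spark-events \
      --class com.dta.dl.ct.qm.hbase.reverse.pipeline.HBaseVehicleMasterLoad \
      /path/to/job.jar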

Error:

OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Warning: Ignoring non-Spark config property: eventLog.rolloverIntervalSeconds
Exception in thread "main" java.lang.ExceptionInInitializerError
    at com.dta.dl.ct.qm.hbase.reverse.pipeline.HBaseVehicleMasterLoad.main(HBaseVehicleMasterLoad.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
    at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
    at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
    at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
    at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.FileNotFoundException: File file:/tmp/spark-events does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:97)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:580)
    at com.dta.dl.ct.qm.hbase.reverse.pipeline.HBaseVehicleMasterLoad$.<init>(HBaseVehicleMasterLoad.scala:32)
    at com.dta.dl.ct.qm.hbase.reverse.pipeline.HBaseVehicleMasterLoad$.<clinit>(HBaseVehicleMasterLoad.scala)
    ... 13 more

You need to create this folder on the driver node and on the workers as well. One way of doing that is to set the property spark.history.fs.logDirectory (found in the spark-defaults.conf file) through a global init script, as described here. Please make sure that the folder defined by that property exists and can be accessed from the driver node.
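
For illustration, a minimal sketch of what such a global init script could look like, assuming the default event-log path shown in the error above (the comments and the chmod choice are assumptions, not part of the original answer):

    #!/bin/bash
    # Hypothetical global init script: runs on every node (driver and
    # workers) at cluster start, so Spark's default event-log directory
    # exists before the SparkContext tries to open it.
    mkdir -p /tmp/spark-events
    chmod 777 /tmp/spark-events

As an alternative, it should also be possible to point spark.eventLog.dir at a DBFS-backed path in the cluster's Spark config, so the event logs land on shared storage instead of each node's local disk.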