Java Spark unable to delete temporary directory


I am trying to submit a Spark program from cmd on Windows 10 using the command below:

spark-submit --class abc.Main --master local[2] C:\Users\arpitbh\Desktop\AmdocsIDE\workspace\Line_Count_Spark\target\Line_Count_Spark-0.0.1-SNAPSHOT.jar
But after running this command, I get the following error:

17/05/02 11:56:57 INFO ShutdownHookManager: Deleting directory C:\Users\arpitbh\AppData\Local\Temp\spark-03f14dbe-1802-40ca-906c-af8de0f462f9
17/05/02 11:56:57 ERROR ShutdownHookManager: Exception while deleting Spark temp dir: C:\Users\arpitbh\AppData\Local\Temp\spark-03f14dbe-1802-40ca-906c-af8de0f462f9
java.io.IOException: Failed to delete: C:\Users\arpitbh\AppData\Local\Temp\spark-03f14dbe-1802-40ca-906c-af8de0f462f9
        at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:1010)
        at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:65)
        at org.apache.spark.util.ShutdownHookManager$$anonfun$1$$anonfun$apply$mcV$sp$3.apply(ShutdownHookManager.scala:62)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at org.apache.spark.util.ShutdownHookManager$$anonfun$1.apply$mcV$sp(ShutdownHookManager.scala:62)
        at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
        at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
        at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
I have also checked the Apache Spark JIRA; the defect is marked as resolved, but no solution is mentioned there. Please help.

package abc;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;


public class Main {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub

        SparkConf conf = new SparkConf().setAppName("Line_Count").setMaster("local[2]");
        JavaSparkContext ctx = new JavaSparkContext(conf);

        JavaRDD<String> textLoadRDD = ctx.textFile("C:/spark/README.md");
        System.out.println(textLoadRDD.count());
        System.getProperty("java.io.tmpdir");

    }

}

This is probably because you are instantiating the SparkContext without SPARK_HOME or HADOOP_HOME set, which is what allows the program to find winutils.exe in the bin directory. I found that once I went from:

SparkConf conf = new SparkConf();
JavaSparkContext sc = new JavaSparkContext(conf);


to the JavaSparkContext constructor that passes SPARK_HOME explicitly, the error went away.
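If changing the machine's environment variables is not an option, a similar effect can be approximated from inside the driver. The following is only a sketch of the asker's Main class under assumptions, not the answerer's code: it relies on the hadoop.home.dir system property, which Hadoop's Shell utility checks (falling back to the HADOOP_HOME environment variable) when it looks for bin\winutils.exe, and C:\hadoop is a placeholder path.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class Main {
    public static void main(String[] args) {
        // Point Hadoop at a folder that contains bin\winutils.exe.
        // Roughly equivalent to setting HADOOP_HOME, but only for this JVM.
        // C:\hadoop is an assumed location.
        System.setProperty("hadoop.home.dir", "C:\\hadoop");

        SparkConf conf = new SparkConf().setAppName("Line_Count").setMaster("local[2]");
        JavaSparkContext ctx = new JavaSparkContext(conf);

        System.out.println(ctx.textFile("C:/spark/README.md").count());

        // Stop the context explicitly; on Windows this helps release file
        // handles before the shutdown hook tries to delete the temp directory.
        ctx.stop();
    }
}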

I believe you are trying to execute the program without setting the user variables HADOOP_HOME or SPARK_LOCAL_DIRS.
I ran into the same problem and solved it by creating those variables, e.g. HADOOP_HOME -> C:\hadoop, SPARK_LOCAL_DIRS -> C:\tmp\spark.
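The scratch-directory part of this can also be expressed in code rather than through user variables. This is a hedged sketch, not the poster's actual fix: spark.local.dir is the configuration-property form of SPARK_LOCAL_DIRS (it controls where Spark writes the spark-* scratch directory that the shutdown hook later deletes), C:/tmp/spark is a placeholder path, and the class name is hypothetical.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Hypothetical class name, for illustration only.
public class LocalDirExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("Line_Count")
                .setMaster("local[2]")
                // Roughly the same effect as SPARK_LOCAL_DIRS=C:\tmp\spark for this application:
                // Spark's temporary files go under this directory instead of java.io.tmpdir.
                .set("spark.local.dir", "C:/tmp/spark");

        JavaSparkContext ctx = new JavaSparkContext(conf);
        System.out.println(ctx.textFile("C:/spark/README.md").count());
        ctx.stop();
    }
}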

Welcome to StackOverflow. Please see here. Can you provide your code?
I have updated my question with properly formatted code. Please check.
I get the same issue just by exiting the spark-shell or running any of the examples. It is not a permission issue, because I have also tried specifying a different working directory with --conf spark.local.dir. If anyone has a solution, please share.
I have set both variables, but I am still getting the same error:
JavaSparkContext sc = new JavaSparkContext("local[*], "programname", 
System.getenv("SPARK_HOME"), System.getenv("JARS"));