Hadoop/YARN job fails - "exited with exitCode: -1000 due to: Could not find any valid local directory for nmPrivate…"

I am trying to run a MapReduce job using Hadoop, YARN, and Accumulo.

I get the output below and cannot track down the problem. It looks like a YARN issue, but I am not sure what it is looking for. I do have an nmPrivate folder at $HADOOP_PREFIX/grid/hadoop/hdfs/yarn/logs. Is that the folder it says it cannot find?

14/03/31 08:48:46 INFO mapreduce.Job: Job job_1395942264921_0023 failed with state FAILED due to: Application application_1395942264921_0023 failed 2 times due to AM Container for appattempt_1395
942264921_0023_000002 exited with  exitCode: -1000 due to: Could not find any valid local directory for nmPrivate/container_1395942264921_0023_02_000001.tokens
.Failing this attempt.. Failing the application.
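
For context, the nmPrivate directory in that message is created by the NodeManager under the paths configured in yarn.nodemanager.local-dirs, so this error usually means none of the configured local directories exist, are writable by the YARN user, or have enough free space. A minimal sketch of how that property might look in yarn-site.xml (the path below is a placeholder for illustration, not taken from this cluster):

<!-- yarn-site.xml: local scratch directories where the NodeManager creates nmPrivate/, usercache/, filecache/ -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- placeholder path; point this at a disk that exists and is writable by the YARN user -->
  <value>/grid/hadoop/hdfs/yarn/local</value>
</property>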

When testing Spark on YARN in cluster mode:

spark-submit --master yarn --deploy-mode cluster --class org.apache.spark.examples.SparkPi /usr/local/install/spark-2.2.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.0.jar 100
I got the same kind of error:

Application application_1532249549503_0007 failed 2 times due to AM Container for appattempt_1532249549503_0007_000002 exited with exitCode: -1000 Failing this attempt.Diagnostics: java.io.IOException: Resource file:/usr/local/install/spark-2.2.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.0.jar changed on src filesystem (expected 1531576498000, was 1531576511000
There is a suggestion that this kind of error can be fixed by modifying core-site.xml or other Hadoop configuration files.


In the end, I fixed this error by setting the fs.defaultFS property in $HADOOP_HOME/etc/hadoop/core-site.xml.
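
For reference, a minimal sketch of what that setting might look like in core-site.xml (the hostname and port below are placeholders, not taken from the cluster above):

<!-- core-site.xml: default filesystem URI used to resolve resource paths such as the Spark example jar -->
<property>
  <name>fs.defaultFS</name>
  <!-- placeholder NameNode address; adjust the host and port to your cluster -->
  <value>hdfs://namenode-host:9000</value>
</property>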
Just out of curiosity, what permissions does the folder you are trying to access have? Permissions in HDFS v2 are much stricter than in HDFS v1.

I created a user named hadoop that owns the hadoop folder, but I am running the job as root.

@bdparrish did you ever solve this? I am getting the same error.

Same problem here. Any suggestions?