Java Hadoop MapReduce output file exception
I am getting this error while running a single-node Hadoop cluster on an Amazon d2.2xlarge instance, and I also cannot view my output. Can someone give me the correct steps to resolve this?
"Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not
find any valid local directory for output/file.out"
These are the steps I executed:
bin/hdfs dfsadmin -safemode leave
bin/hadoop fs -mkdir /inputfiles
bin/hadoop dfsadmin -safemode leave
bin/hadoop fs -mkdir /output
bin/hdfs dfsadmin -safemode leave
bin/hadoop fs -put input1 /inputfiles
bin/hdfs dfsadmin -safemode leave
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar
wordcount /inputfiles /output
You should not create the output directory for a MapReduce job yourself; the job creates it and fails if it already exists. Remove this command:
bin/hadoop fs -mkdir /output
and change the last command to:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar
wordcount /inputfiles /output1
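If a previous run already created /output, it may also help to delete it before rerunning, since the job cannot reuse an existing output directory. A sketch of the cleanup, using the same paths as above:

# remove the stale output directory left behind by the earlier mkdir/run
bin/hadoop fs -rm -r /output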
Make sure you have permission to create output1 under /.
If not, I would prefer the directory structure below:
/home/your_user_name/input
for the input files, and
/home/your_user_name/output
for the output directory.
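As a sketch, that layout could be set up with the same commands used earlier (your_user_name is a placeholder for your actual user):

# create the per-user input directory and load the input file into it
bin/hadoop fs -mkdir -p /home/your_user_name/input
bin/hadoop fs -put input1 /home/your_user_name/input
# let the job create the output directory itself
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar
wordcount /home/your_user_name/input /home/your_user_name/output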
OK. Even after removing the output mkdir, it works fine with 100 MB and 300 MB inputs, but not with 1 GB. Is this a disk problem?
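That DiskChecker error is typically raised for the local directories that hold intermediate map output, not for HDFS, which would explain why small inputs succeed while a 1 GB input does not: the default scratch space under hadoop.tmp.dir (normally /tmp/hadoop-${user.name} on the root volume) fills up. A minimal sketch of one possible fix, assuming the d2.2xlarge instance-store disk is mounted at /mnt (the mount point is an assumption, check yours), is to point hadoop.tmp.dir at the larger volume in core-site.xml and restart the cluster:

<!-- core-site.xml: move Hadoop's local scratch space, which holds
     intermediate map output, to an assumed larger mount at /mnt -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/mnt/hadoop-tmp</value>
</property>

Note that other paths (for example yarn.nodemanager.local-dirs and the HDFS name and data directories) default to locations under hadoop.tmp.dir, so changing it on an existing cluster may require reformatting HDFS and re-putting the input files.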