
Java: Exception when using Cloudera Hadoop Maven dependencies

Java 使用cloudera hadoop maven依赖项时出现异常,java,hadoop,cloudera,Java,Hadoop,Cloudera,我试图在eclipse中运行一个mapreduce程序,使用Windows7中pom.xml中的ClouderaHadoop maven依赖项。当在pom.xml中使用2.0.0-cdh4.0.0依赖版本时,程序工作正常,但当我将版本更改为2.0.0-cdh4.6.0时,它会抛出以下警告消息 警告fs.LocalDirAllocator$AllocatorPerContext:未能创建 /tmp/hadoop-Abhijeet/mapred/local/file:/tmp/hadoop-Abhi

I am trying to run a MapReduce program in Eclipse on Windows 7, using the Cloudera Hadoop Maven dependency in my pom.xml. The program works fine with dependency version 2.0.0-cdh4.0.0, but when I change the version to 2.0.0-cdh4.6.0, it produces the following warning message:
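For context, the dependency being switched looks roughly like this in pom.xml. Note the `hadoop-client` artifact ID and the Cloudera repository URL are my assumptions (the question only names the versions), shown as a typical CDH4 setup:

```xml
<!-- Cloudera artifacts are not on Maven Central, so the Cloudera
     repository must be declared (URL assumed, standard Cloudera repo) -->
<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <!-- artifact ID assumed; the question only gives the version -->
    <artifactId>hadoop-client</artifactId>
    <!-- works: 2.0.0-cdh4.0.0; fails as described: 2.0.0-cdh4.6.0 -->
    <version>2.0.0-cdh4.6.0</version>
  </dependency>
</dependencies>
```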

 WARN fs.LocalDirAllocator$AllocatorPerContext: Failed to create /tmp/hadoop-Abhijeet/mapred/local/file:/tmp/hadoop-Abhijeet/mapred/local/localRunner/Abhijeet/job_local83327001_0001/attempt_local83327001_0001_m_000001_0

Note that the path is malformed: the `file:/` scheme prefix appears in the middle of the path.

Eventually, it throws the following exception:

 java.lang.Exception: java.lang.IllegalArgumentException: n must be positive
     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:401)
 Caused by: java.lang.IllegalArgumentException: n must be positive
     at java.util.Random.nextInt(Random.java:250)
     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:305)
     at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
     at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
     at org.apache.hadoop.mapred.MROutputFiles.getSpillFileForWrite(MROutputFiles.java:146)
     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1558)
     at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1452)
     at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:693)
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:761)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:338)
     at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:233)
     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
     at java.util.concurrent.FutureTask.run(FutureTask.java:138)
     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
     at java.lang.Thread.run(Thread.java:662)
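The root cause at the top of the trace can be reproduced in isolation. A minimal sketch — the connection to `LocalDirAllocator` is my reading of the stack trace (`confChanged` appears to pick a starting directory with `Random.nextInt(numDirs)`, and the "Failed to create" warning above suggests no valid local directory exists, making the bound 0), not something stated in the question:

```java
import java.util.Random;

public class NextIntZeroDemo {
    public static void main(String[] args) {
        try {
            // Random.nextInt requires a strictly positive bound;
            // a bound of 0 models "zero usable local directories".
            new Random().nextInt(0);
        } catch (IllegalArgumentException e) {
            // On older JDKs the message is "n must be positive",
            // matching the exception in the trace above.
            System.out.println("IllegalArgumentException: " + e.getMessage());
        }
    }
}
```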
The program also works with version 2.2.0, but I have to use version 2.0.0-cdh4.6.0. What is a possible solution in my case?