Java: getting a NoSuchMethodError when trying to run a MapReduce job

I am getting the following exception:

Exception in thread "AWT-EventQueue-0" java.lang.NoSuchMethodError: org.apache.hadoop.mapred.JobConf.setBooleanIfUnset(Ljava/lang/String;Z)V

Here is my code:

public static void CreateVector(String CBresults,
                                String outpath,
                                int nummappers,
                                int numreducers) throws IOException, ClassNotFoundException, InterruptedException {
    System.out.println("NUM_FMAP_TASKS: "     + nummappers);
    System.out.println("NUM_FREDUCE_TASKS: "  + numreducers);
    Configuration conf = new Configuration();
    Job job = new Job(conf, "VectorCreator");

    job.setJarByClass(VectorCreator.class);

    job.setNumReduceTasks(numreducers);

    FileInputFormat.addInputPath(job, new Path(CBresults));
    job.setMapperClass(VectorCreator.ClusterMapper.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);

    Path oPath = new Path(outpath);
    FileOutputFormat.setOutputPath(job, oPath);
    job.setReducerClass(VectorCreator.ClusterReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    //conf.setOutputFormat(SequenceFileOutputFormat.class);
    //conf.setOutputPath(oPath);

    // Remove results left over from a previous run (true = recursive delete)
    System.err.println("  Removing old results");
    FileSystem fs = FileSystem.get(job.getConfiguration());
    fs.delete(oPath, true);

    int code = job.waitForCompletion(true) ? 0 : 1;
    System.err.println("Create Vector Finished");
    System.exit(code); // exit last, so the status message above is actually printed
}
And these are my imports:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapred.JobConf;
And this is my pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>WWH-BIO</artifactId>
        <groupId>WWH</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>VectorCreator</artifactId>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>0.20.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.2.0</version>
        </dependency>
    </dependencies>
</project>

I am pretty sure it has something to do with my dependencies.
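A quick way to check is to print which jar the JVM actually loaded JobConf from. This is a minimal diagnostic sketch, assuming the conflict is on JobConf as the stack trace suggests; the class name JobConfLocation is made up for illustration:

import org.apache.hadoop.mapred.JobConf;

public class JobConfLocation {
    public static void main(String[] args) {
        // Prints the jar (or directory) that JobConf was loaded from. If this
        // points at hadoop-core-0.20.2.jar, that build of the class is likely
        // the one missing setBooleanIfUnset(String, boolean).
        System.out.println(JobConf.class.getProtectionDomain()
                                        .getCodeSource()
                                        .getLocation());
    }
}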

The Hadoop dependencies in the pom should have the same version. But do you really need hadoop-core, or would hadoop-common alone be enough?

If my project had two modules, each with its own pom file, could I have a different Hadoop version dependency in each of them? One of them is a Hadoop 1 based application and the second is Hadoop 2.

As your error message shows, they won't be fully compatible if they communicate with each other. Try to use a single Hadoop version, ideally the one from the target cluster.
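For example, the two mismatched dependencies could be collapsed into one Hadoop 2.x client dependency. This is a minimal sketch assuming a 2.2.0 cluster; afterwards, mvn dependency:tree can confirm that nothing else still pulls in 0.20.2:

<dependencies>
    <!-- One consistent Hadoop version; hadoop-client transitively brings in
         hadoop-common and the MapReduce client libraries. -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.2.0</version>
    </dependency>
</dependencies>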