Java wrong value class: class org.apache.hadoop.io.LongWritable is not class org.apache.hadoop.io.IntWritable


I am learning MapReduce and wrote a program that computes the total booking duration for members and non-members. I set every job configuration I thought could be needed, but when I run the hadoop command it throws a wrong value class error. I searched Stack Overflow for solutions but could not debug the issue. The map output and reducer input are correct. Can anyone help me?

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class BixiMontrealAnalysis {

    public static class BixiMapper extends Mapper <LongWritable, Text, IntWritable, IntWritable> {
        public void map(LongWritable offset, Text line, Context context) throws IOException, InterruptedException {
            String csvAttributes[] = line.toString().split(",");
            int isMember = 0;
            int duration = 0;
            try {
                duration = Integer.parseInt(csvAttributes[4]);
                isMember = Integer.parseInt(csvAttributes[5]);
            } catch (Exception e) {
                System.out.println("Will Emit 0,0");
            }
            context.write(new IntWritable(isMember), new IntWritable(duration));
        }
    }

    public static class BixiReducer extends Reducer <IntWritable, IntWritable, IntWritable, LongWritable> {
        public void reduce(IntWritable isMember, Iterable <IntWritable> combinedDurationByIsMember, Context context) throws IOException, InterruptedException {
            long sum = 0L;
            for (IntWritable duration: combinedDurationByIsMember) {
                sum = sum + (long) duration.get();
            }
            context.write(isMember, new LongWritable(sum));
        }
    }

    public static void main(String args[]) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "bix-montreal-job");
        job.setJarByClass(BixiMontrealAnalysis.class);
        job.setMapperClass(BixiMapper.class);

        job.setCombinerClass(BixiReducer.class);
        job.setReducerClass(BixiReducer.class);

        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(LongWritable.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Below is the stack trace:

Error: java.io.IOException: wrong value class: class org.apache.hadoop.io.LongWritable is not class org.apache.hadoop.io.IntWritable
    at org.apache.hadoop.mapred.IFile$Writer.append(IFile.java:194)
    at org.apache.hadoop.mapred.Task$CombineOutputCollector.collect(Task.java:1374)
    at org.apache.hadoop.mapred.Task$NewCombinerRunner$OutputConverter.write(Task.java:1691)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:105)
    at com.onboarding.hadoop.BixiMontrealAnalysis$BixiReducer.reduce(BixiMontrealAnalysis.java:43)
    at com.onboarding.hadoop.BixiMontrealAnalysis$BixiReducer.reduce(BixiMontrealAnalysis.java:37)
Following the standard word-count example, I had set the combiner class to the same class as the reducer, which it should not be here. I read up on combiners and found that a combiner's purpose is to produce intermediate records locally so that the load on the reducer is smaller. Because the framework treats combiner output as map output, a combiner must emit the map output key/value classes (here IntWritable, IntWritable), whereas BixiReducer emits LongWritable values.

Removing this line fixed the error:

    job.setCombinerClass(BixiReducer.class);
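The type rule behind the fix can be illustrated without a Hadoop cluster: a combiner may run zero, one, or many times on map output before the reduce phase, and its output is fed back into the shuffle as if the mapper had emitted it, so it must keep the mapper's key/value types. The sketch below (hypothetical class and method names, plain Java with no Hadoop dependency) mimics the pipeline: ints in the map and combine steps, widening to long only in the final reduce.

```java
import java.util.*;

// Plain-Java sketch of the combiner type contract: the combiner keeps the
// mapper's (int, int) pair shape, so its output can legally stand in for
// mapper output; only the final reducer widens the value to long.
public class CombinerTypeDemo {

    // Mapper output: (isMember, duration) pairs, both ints.
    static List<int[]> mapOutput() {
        return Arrays.asList(new int[]{1, 300}, new int[]{1, 450},
                             new int[]{0, 120}, new int[]{0, 600});
    }

    // Combiner: partial sums per key, KEEPING the int value type
    // (the Hadoop analogue would emit IntWritable, not LongWritable).
    static List<int[]> combine(List<int[]> pairs) {
        Map<Integer, Integer> partial = new TreeMap<>();
        for (int[] p : pairs) partial.merge(p[0], p[1], Integer::sum);
        List<int[]> out = new ArrayList<>();
        partial.forEach((k, v) -> out.add(new int[]{k, v}));
        return out;
    }

    // Reducer: final sums, widening to long (IntWritable -> LongWritable
    // in the Hadoop version). Widening here, at the very end, is fine.
    static Map<Integer, Long> reduce(List<int[]> pairs) {
        Map<Integer, Long> sums = new TreeMap<>();
        for (int[] p : pairs) sums.merge(p[0], (long) p[1], Long::sum);
        return sums;
    }

    public static void main(String[] args) {
        Map<Integer, Long> direct = reduce(mapOutput());
        Map<Integer, Long> viaCombiner = reduce(combine(mapOutput()));
        // Running the combiner first must not change the final result.
        System.out.println(direct.equals(viaCombiner)); // true
        System.out.println(direct); // {0=720, 1=750}
    }
}
```

In Hadoop terms: either drop `setCombinerClass` entirely, as above, or write a separate combiner whose output classes match `setMapOutputKeyClass`/`setMapOutputValueClass` (IntWritable, IntWritable).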