
Java Hadoop application can't find the Reducer


I'm trying to write a MapReduce application that reads data from an HBase table and writes the job's results to a text file. My driver code looks like this:

    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance (conf, "mr test");
    job.setJarByClass(Driverclass.class);
    job.setCombinerClass(reducername.class);
    job.setReducerClass(reducername.class);

    Scan scan = new Scan();
    scan.setCaching(500);            
    scan.setCacheBlocks(false);        

    String qualifier = "qualifname"; // comma separated
    String family= "familyname";
    scan.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier));

    TableMapReduceUtil.initTableMapperJob("tablename", 
                                           scan, 
                                           mappername.class, 
                                           Text.class, Text.class, 
                                           job);
When initTableMapperJob is called, I get a ClassNotFoundException: class reducername not found.

The class is defined in another Java file in the same package. I used almost the same configuration to run the common wordcount example and it worked fine. Then I changed the mapper's type and the way it is configured, and I got this error. Can anyone help me?

Edit: the code of the reducer class is:

package mr.roadlevelmr;
import java.io.IOException;
import java.util.ArrayList;
import java.util.StringTokenizer;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Reducer;
public class reducername extends Reducer <Text, Text, Text, Text>{
    private Text result= new Text();

    public void reduce (Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException{
       ArrayList<String> means = new ArrayList<String>();
        for (Text val : values){
            means.add(String.valueOf(val.getBytes()));
        }
        result.set(newMean(means));
        context.write(key, result);
    }
}
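As a side note (my observation, separate from the ClassNotFoundException itself): `String.valueOf(val.getBytes())` in the reduce loop does not return the cell's text. `String.valueOf` has no `byte[]` overload, so the call binds to `valueOf(Object)` and yields the array's identity string; with Hadoop's `Text`, `val.toString()` is the usual fix. A minimal plain-Java sketch of the difference (the class name `BytesToStringDemo` is mine):

```java
import java.nio.charset.StandardCharsets;

public class BytesToStringDemo {
    public static void main(String[] args) {
        byte[] bytes = "3.14".getBytes(StandardCharsets.UTF_8);

        // String.valueOf has no byte[] overload, so it binds to
        // valueOf(Object) and returns the array's identity string.
        String wrong = String.valueOf(bytes);
        System.out.println(wrong);  // prints something like [B@6d06d69c

        // Decoding the bytes explicitly recovers the actual contents.
        String right = new String(bytes, StandardCharsets.UTF_8);
        System.out.println(right);  // prints 3.14
    }
}
```

There is a second reason to prefer `val.toString()` over `val.getBytes()` here: `Text.getBytes()` returns the backing buffer, which can be longer than the valid data (`getLength()` bytes), so even a correct byte-to-String decode of the whole array may include stale trailing bytes.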

OK, I think I found the issue! You should use TableMapReduceUtil as follows:

TableMapReduceUtil.initTableMapperJob("tablename",
                                       scan,
                                       mappername.class,
                                       Text.class, Text.class,
                                       job);
Then add the reducer and combiner:

job.setCombinerClass(reducername.class);
job.setReducerClass(reducername.class);
boolean b = job.waitForCompletion(true);

instead of adding the reducer to the table mapper job.
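Putting the answer together, here is a sketch of the corrected driver. It is an illustration under stated assumptions, not the poster's actual code: the table, family, qualifier, mapper, and reducer names are the placeholders from the question, and the `FileOutputFormat` output path taken from `args[0]` is my addition, since the original snippet never shows where the text output goes:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Driverclass {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "mr test");
        job.setJarByClass(Driverclass.class);

        Scan scan = new Scan();
        scan.setCaching(500);       // fetch rows in batches of 500 per RPC
        scan.setCacheBlocks(false); // recommended off for MapReduce scans
        scan.addColumn(Bytes.toBytes("familyname"), Bytes.toBytes("qualifname"));

        // Configure the HBase-backed mapper first...
        TableMapReduceUtil.initTableMapperJob("tablename", scan,
                mappername.class, Text.class, Text.class, job);

        // ...then attach the combiner and reducer.
        job.setCombinerClass(reducername.class);
        job.setReducerClass(reducername.class);

        // Hypothetical output location (not shown in the original snippet).
        FileOutputFormat.setOutputPath(job, new Path(args[0]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```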

Comments:

Which line throws the exception? Can you post the basic structure of the reducer class?

Hi Peter, it's the last line that throws the exception. I've edited the question to include the reducer class. Yes, they are in different Java files, but in the same package.

Have you tried the hint below?