Empty output file generated after running a Hadoop job
I have a MapReduce program, shown below:
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextOutputFormat;
public class Sample {

    public static class SampleMapper extends MapReduceBase implements
            Mapper<Text, Text, Text, Text> {

        private Text word = new Text();

        @Override
        public void map(Text key, Text value,
                OutputCollector<Text, Text> output, Reporter reporter)
                throws IOException {
            // Split the comma-separated value list and emit one (key, token) pair per token
            StringTokenizer itr = new StringTokenizer(value.toString(), ",");
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                output.collect(key, word);
            }
        }
    }

    public static class SampleReducer extends MapReduceBase implements
            Reducer<Text, Text, Text, Text> {

        private Text result = new Text();

        @Override
        public void reduce(Text key, Iterator<Text> values,
                OutputCollector<Text, Text> output, Reporter reporter)
                throws IOException {
            // Join all values for this key with "|", adding the separator only
            // between values so the result has no leading pipe
            StringBuilder aggregation = new StringBuilder();
            while (values.hasNext()) {
                if (aggregation.length() > 0) {
                    aggregation.append("|");
                }
                aggregation.append(values.next().toString());
            }
            result.set(aggregation.toString());
            output.collect(key, result);
        }
    }

    public static void main(String args[]) throws IOException {
        JobConf conf = new JobConf(Sample.class);
        conf.setJobName("Sample");
        conf.setMapperClass(SampleMapper.class);
        conf.setReducerClass(SampleReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);
        // KeyValueTextInputFormat splits each line into key and value at the
        // first tab character by default
        conf.setInputFormat(KeyValueTextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
I packaged this into a jar myself (a sketch of the launch command follows the expected output below), and my input file looks like this:
1 a,b,c
2 e,f
1 x,y,z
2 g
Expected output:
1 a|b|c|x|y|z
2 e|f|g
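For context, a job packaged like this would typically be launched along the lines of `hadoop jar sample.jar Sample /user/hduser/input /user/hduser/output` — the jar name and HDFS paths here are placeholders, not taken from the question.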
Try passing a Configuration object to JobConf. My guess is that your JobConf is not picking up the Hadoop/HDFS configuration:
Configuration configuration = new Configuration();
JobConf jobConf = new JobConf(configuration, Sample.class);
jobConf.setJarByClass(Sample.class);
.......
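A Configuration created this way picks up the cluster's configuration files (core-site.xml and friends) from the classpath, so the job's paths resolve against HDFS rather than the local file system; setJarByClass makes Hadoop locate the jar containing the given class and ship it to the cluster.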
I'm guessing that because you're using KeyValueTextInputFormat as the input format, it can't find the separator byte and is therefore using the entire line as the key (with an empty string "" as the value). That would mean the iteration in your mapper never enters the loop and nothing gets written out. You can change the separator byte through the configuration property mapreduce.input.keyvaluelinerecordreader.key.value.separator.

Two things: try setting the number of reducers to 0 and re-post the output of the job counters — you should see 4 map input records and 10 map output records. Also, shouldn't you be calling job.setJarByClass(...) to configure the job jar? Can you also check the logs to confirm that you are reading from and writing to the directories you think you are?

It shows 4 map input records, but 0 (zero) map output records. I really can't figure out what's wrong... please help.

The problem was in the text input file: instead of a tab after the first word, I was using a space. Hate stupid mistakes. Happy new year, and thanks a lot!
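For reference, here is a minimal sketch of the two debugging suggestions above, written against the old mapred API the question uses; the commented-out property name is the pre-2.x equivalent of the one quoted in the answer:

// Set the key/value separator for KeyValueTextInputFormat explicitly;
// the default is a single tab character ('\t').
conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");
// Pre-2.x releases read the equivalent, now-deprecated property name:
// conf.set("key.value.separator.in.input.line", "\t");

// Run map-only while debugging: with zero reducers the mapper output is
// written straight to the output directory, where it can be inspected.
conf.setNumReduceTasks(0);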