Type mismatch in key from map in Java: key identified as LongWritable
I wrote a MapReduce program to process data from a text file. But when I run it locally (on a Linux VM), it fails with an error saying that the key in map is identified as LongWritable instead of Text, which is what the Mapper class expects.

Log output:
2017-05-21 14:06:46,436 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2017-05-21 14:06:46,454 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
2017-05-21 14:06:46,468 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1072)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:715)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:125)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:169)
Here is my program code.

Mapper:
public static class RepaymentMapper extends Mapper<Text, Text, Text, Text> {
    public void map(Text key, Text values, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        if (values.toString() != null) {
            try {
                String[] data = values.toString().split(DefaultValues.DATA_SEPARATOR);
                String repaymentDate = data[0];
                String accountNo = data[1];
                double repaymentAmount = Double.parseDouble(data[2]);
                double monthInstallment = Double.parseDouble(data[3]);
                double unCollectedAmount = 0;
                if (repaymentAmount == 0)
                    unCollectedAmount = monthInstallment;
                else if (repaymentAmount < monthInstallment)
                    unCollectedAmount = monthInstallment - repaymentAmount;
                output.collect(new Text(accountNo), new Text(repaymentDate + DefaultValues.DATA_SEPARATOR + String.valueOf(unCollectedAmount)));
            } catch (IOException io) {
                throw io;
            }
        }
    }
}
Have you checked your map signature? With the default TextInputFormat, the input key is the line's byte offset, a LongWritable, not Text. On top of that, your method takes OutputCollector and Reporter (the old org.apache.hadoop.mapred API), so it never overrides the new-API Mapper.map(KEYIN key, VALUEIN value, Context context). The inherited identity map runs instead and emits the LongWritable offset as the key, which is exactly the type mismatch in the log.
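A sketch of the corrected Mapper under that assumption: new-API org.apache.hadoop.mapreduce.Mapper, default TextInputFormat (so the input key is a LongWritable byte offset), and your own DefaultValues.DATA_SEPARATOR constant. The @Override annotation is the key safeguard, since it makes the compiler reject a signature that doesn't actually override.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public static class RepaymentMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override  // compiler error here would have caught the wrong signature
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] data = value.toString().split(DefaultValues.DATA_SEPARATOR);
        String repaymentDate = data[0];
        String accountNo = data[1];
        double repaymentAmount = Double.parseDouble(data[2]);
        double monthInstallment = Double.parseDouble(data[3]);
        double unCollectedAmount = 0;
        if (repaymentAmount == 0)
            unCollectedAmount = monthInstallment;
        else if (repaymentAmount < monthInstallment)
            unCollectedAmount = monthInstallment - repaymentAmount;
        // Context.write replaces OutputCollector/Reporter in the new API
        context.write(new Text(accountNo),
                new Text(repaymentDate + DefaultValues.DATA_SEPARATOR + unCollectedAmount));
    }
}
```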
Reducer:
public static class RepaymentReducer extends Reducer<Text, Text, Text, DoubleWritable> {
    public void reduce(Text key, Iterable<Text> values, OutputCollector<Text, DoubleWritable> output, Reporter reporter) throws IOException {
        try {
            for (Text v : values) {
                String[] data = v.toString().split(DefaultValues.DATA_SEPARATOR);
                output.collect(key, new DoubleWritable(Double.parseDouble(data[1])));
            }
        } catch (IOException io) {
            throw io;
        }
    }
}
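The reducer has the same problem: its reduce takes OutputCollector/Reporter, so it never overrides the new-API Reducer.reduce, and the identity reducer runs instead. A sketch of the corrected version, again assuming your DefaultValues.DATA_SEPARATOR constant and the mapper value layout repaymentDate + separator + unCollectedAmount:

```java
import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public static class RepaymentReducer extends Reducer<Text, Text, Text, DoubleWritable> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        for (Text v : values) {
            String[] data = v.toString().split(DefaultValues.DATA_SEPARATOR);
            // data[1] is the unCollectedAmount the mapper appended after the date
            context.write(key, new DoubleWritable(Double.parseDouble(data[1])));
        }
    }
}
```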
public static void main(String[] args) {
    Configuration conf = new Configuration();
    try {
        Job job = Job.getInstance(conf, "Loan Repayment Job");
        job.setJarByClass(RepaymentAnalyticJob.class);
        job.setMapperClass(RepaymentMapper.class);
        job.setCombinerClass(RepaymentReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
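The driver also needs attention once the classes are fixed. Because the mapper emits Text/Text but the final output is Text/DoubleWritable, the intermediate types must be declared separately with setMapOutputKeyClass/setMapOutputValueClass; setReducerClass is currently missing, so the identity reducer would run; and setCombinerClass(RepaymentReducer.class) should be dropped, because a combiner's output types must equal the map output types, and RepaymentReducer emits DoubleWritable values where Text is expected. A hedged sketch of the job setup (class names are the asker's own):

```java
Job job = Job.getInstance(conf, "Loan Repayment Job");
job.setJarByClass(RepaymentAnalyticJob.class);
job.setMapperClass(RepaymentMapper.class);
job.setReducerClass(RepaymentReducer.class);   // was missing: identity reducer ran otherwise
// intermediate (map output) types differ from the job's final output types
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(DoubleWritable.class);
// no setCombinerClass: RepaymentReducer's output value type (DoubleWritable)
// does not match its input value type (Text), so it cannot act as a combiner
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
```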