Making a RecordReader return whole paragraphs in Hadoop MapReduce


I want to write my own RecordReader that returns a whole paragraph as the value, instead of a single line the way TextInputFormat does.

I tried the following function, but it's clearly not working well:

public boolean nextKeyValue() throws IOException, InterruptedException {
    if (key == null) {
        key = new LongWritable();
    }
    key.set(pos);
    if (value == null) {
        value = new Text();
    }
    value.clear();
    final Text endline = new Text("\n");
    int newSize = 0;

    Text v = new Text();
    while (v != endline) {
        value.append(v.getBytes(), 0, v.getLength());
        value.append(endline.getBytes(), 0, endline.getLength());
        if (newSize == 0) {
            break;
        }
        pos += newSize;
        if (newSize < maxLineLength) {
            break;
        }
    }
    if (newSize == 0) {
        key = null;
        value = null;
        return false;
    } else {
        return true;
    }
}
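
For reference, here is a sketch of what the loop above appears to be aiming for. As written, v != endline compares Text object references rather than contents (so it is always true), and nothing ever reads a line into v or updates newSize, so the method appends a single newline and returns false. The sketch assumes the surrounding class keeps LineRecordReader-style fields (a LineReader named in, the byte offset pos, and maxLineLength), all of which the original code already references:

public boolean nextKeyValue() throws IOException, InterruptedException {
    if (key == null) {
        key = new LongWritable();
    }
    key.set(pos);
    if (value == null) {
        value = new Text();
    }
    value.clear();
    final Text endline = new Text("\n");
    Text v = new Text();
    boolean readAnything = false;
    while (true) {
        // Read the next physical line; readLine() returns 0 at end of stream.
        int newSize = in.readLine(v, maxLineLength);
        if (newSize == 0) {
            break;
        }
        pos += newSize;
        // A blank line marks the end of the paragraph.
        if (v.getLength() == 0) {
            break;
        }
        value.append(v.getBytes(), 0, v.getLength());
        value.append(endline.getBytes(), 0, endline.getLength());
        readAnything = true;
    }
    if (!readAnything) {
        key = null;
        value = null;
        return false;
    }
    return true;
}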
Actually, you don't need to go to the trouble of writing your own RecordReader. Instead, just extend TextInputFormat and change the delimiter. Here is essentially the library TextInputFormat with only the delimiter changed:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

import com.google.common.base.Charsets;

public class ParagraphInputFormat extends TextInputFormat {

    // Paragraphs are delimited by a blank line, assuming CRLF line endings.
    private static final String PARAGRAPH_DELIMITER = "\r\n\r\n";

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // Keep each file in a single split so a paragraph never straddles
        // a split boundary.
        return false;
    }

    @Override
    public RecordReader<LongWritable, Text>
    createRecordReader(InputSplit split, TaskAttemptContext context) {
        String delimiter = PARAGRAPH_DELIMITER;
        byte[] recordDelimiterBytes = null;
        if (null != delimiter) {
            recordDelimiterBytes = delimiter.getBytes(Charsets.UTF_8);
        }
        return new LineRecordReader(recordDelimiterBytes);
    }
}
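
Wiring the format into a job is then just a matter of setting the input format class. Below is a minimal driver sketch; ParagraphJobDriver, the job name, and the argument handling are placeholders rather than part of the original answer, and the mapper/reducer setup is omitted:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ParagraphJobDriver {  // hypothetical driver class
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "paragraph input demo");
        job.setJarByClass(ParagraphJobDriver.class);
        // Each map() call now receives one whole paragraph as its value.
        job.setInputFormatClass(ParagraphInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}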

What do you mean by "not working well"? What problems are you running into? You'll also need a way to define paragraph boundaries: are they separated by blank lines, or is the first sentence of a new paragraph indented (with a tab or spaces)?
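
If they are blank-line separated with Unix line endings, the delimiter would be "\n\n" rather than "\r\n\r\n". On Hadoop 0.23+/2.x the subclass can even be skipped, since the stock TextInputFormat reads its record delimiter from the textinputformat.record.delimiter configuration key; a minimal sketch:

// No subclass needed on Hadoop 0.23+/2.x: the stock TextInputFormat honors
// this key when constructing its LineRecordReader.
Configuration conf = new Configuration();
conf.set("textinputformat.record.delimiter", "\n\n");  // blank-line boundaries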