How to parse single & multi-line XML records in a single file using StreamXmlRecordReader

I have an input file (txt) like below, with XML records delimited by "||":

<a><b><c>val1</c></b></a>||<a><b><c>val2</c></b></a>||<a><b>
<c>val3</c></b></a>||<a></b><c>val4-c-1</c><c>val4-c-2</c></b><d>val-d-1</d></a>
If you observe the input closely, the third XML record is split across the two lines.

I want to parse this file using hadoop streaming's StreamXmlRecordReader:

-inputreader "org.apache.hadoop.streaming.StreamXmlRecordReader,begin=<a>,end=</a>,slowmatch=true
-inputreader“org.apache.hadoop.streaming.StreamXmlRecordReader,begin=,end=,slowmatch=true
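
For reference, the full streaming invocation would look roughly like the sketch below; the streaming jar location and the -input path are placeholders, while /poc/testout001 and m1.py are taken from the output and traceback shown further down:

hadoop jar /path/to/hadoop-streaming.jar \
    -input /poc/testin \
    -output /poc/testout001 \
    -mapper m1.py \
    -file /home/rsome/test/code/m1.py \
    -inputreader "org.apache.hadoop.streaming.StreamXmlRecordReader,begin=<a>,end=</a>,slowmatch=true"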
I am not able to parse the third record.

I am getting the below error:

Traceback (most recent call last):
  File "/home/rsome/test/code/m1.py", line 13, in <module>
    root = ET.fromstring(xml_str.getvalue())
  File "/usr/lib64/python2.6/xml/etree/ElementTree.py", line 964, in XML
    return parser.close()
  File "/usr/lib64/python2.6/xml/etree/ElementTree.py", line 1254, in close
    self._parser.Parse("", 1) # end of data
xml.parsers.expat.ExpatError: no element found: line 1, column 18478
I used slowmatch=true as well, but still no luck.
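
The mapper m1.py itself is not shown in the question; judging from the traceback, it reads each record from stdin and parses it with ElementTree, roughly like this hypothetical sketch (the record buffering and the print format are assumptions):

#!/usr/bin/env python
# Hypothetical sketch of m1.py, reconstructed from the traceback; the exact
# record handling and output format are assumptions.
import sys
import StringIO
import xml.etree.ElementTree as ET

rec_no = 0
for line in sys.stdin:
    rec_no += 1
    xml_str = StringIO.StringIO()
    xml_str.write(line.strip())
    # The call below is the one that fails (line 13 in the traceback): a
    # truncated record such as "<a><b>" is not a well-formed element, so
    # expat raises "no element found".
    root = ET.fromstring(xml_str.getvalue())
    print 'rec::%d' % rec_no
    print ET.tostring(root)

This is consistent with the ExpatError: the reader hands the mapper an incomplete fragment for record 3, and ElementTree cannot parse it.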

My output is as below:

$ hdfs dfs -text /poc/testout001/part-*
rec::1::mapper1
<a><b><c>val1</c></b></a>
rec::2::mapper1
<a><b><c>val2</c></b></a>
rec::3::mapper1
<a><b>
rec::4::mapper1
<c>val3</c></b></a>
rec::1::mapper2
<a></b><c>val4-c-1</c><c>val4-c-2</c></b><d>val-d-1</d></a>
My expected output is:

$ hdfs dfs -text /poc/testout001/part-*
rec::1::mapper1
<a><b><c>val1</c></b></a>
rec::2::mapper1
<a><b><c>val2</c></b></a>
rec::3::mapper1
<a><b><c>val3</c></b></a>
rec::1::mapper2
<a></b><c>val4-c-1</c><c>val4-c-2</c></b><d>val-d-1</d></a>

Any help on this would be of great help.

Basically, StreamXmlInputFormat, the default input format for hadoop streaming, extends KeyValueTextInputFormat, which splits lines at the newline characters (\r\n). That fails in my case because my record is split across multiple lines.

So, to overcome this, I implemented my own input format extending FileInputFormat, which keeps scanning for the endTag past the newline characters (\r\n).

Usage:

-libjars /path/to/custom-xml-input-format-1.0.0.jar \
-D xmlinput.start="<a>" \
-D xmlinput.end="</a>" \
-inputformat "my.package.CustomXmlInputFormat"
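
Put together, a complete invocation would look roughly like this sketch (the streaming jar and -input path are again placeholders); note that the generic options -libjars and -D must come before the streaming-specific options:

hadoop jar /path/to/hadoop-streaming.jar \
    -libjars /path/to/custom-xml-input-format-1.0.0.jar \
    -D xmlinput.start="<a>" \
    -D xmlinput.end="</a>" \
    -input /poc/testin \
    -output /poc/testout001 \
    -mapper m1.py \
    -file /home/rsome/test/code/m1.py \
    -inputformat "my.package.CustomXmlInputFormat"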
Here is the code I used:

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

/**
 * An input format that treats everything between the configured start tag
 * (xmlinput.start) and end tag (xmlinput.end) as one record, even when the
 * record spans multiple lines.
 */
public class CustomXmlInputFormat extends FileInputFormat<LongWritable, Text> {

  public static final String START_TAG_KEY = "xmlinput.start";
  public static final String END_TAG_KEY = "xmlinput.end";

  @Override
  public RecordReader<LongWritable, Text> getRecordReader(final InputSplit genericSplit,
                                      JobConf job, Reporter reporter) throws IOException {
      return new XmlRecordReader((FileSplit) genericSplit, job, reporter);
  }

  public static class XmlRecordReader implements RecordReader<LongWritable, Text> {

    private final byte[] endTag;
    private final byte[] startTag;
    private final long start;
    private final long end;
    private final FSDataInputStream fsin;
    private final DataOutputBuffer buffer = new DataOutputBuffer();

    public XmlRecordReader(FileSplit split, JobConf conf, Reporter reporter) throws IOException {
      startTag = conf.get(START_TAG_KEY).getBytes("UTF-8");
      endTag = conf.get(END_TAG_KEY).getBytes("UTF-8");

      start = split.getStart();
      end = start + split.getLength();
      Path file = split.getPath();
      FileSystem fs = file.getFileSystem(conf);
      fsin = fs.open(file);
      fsin.seek(start);
    }

    /** Reads the next record; the emitted key is the byte position after the record. */
    public boolean next(LongWritable key, Text value) throws IOException {
      if (fsin.getPos() < end && readUntilMatch(startTag, false)) {
        try {
          buffer.write(startTag);
          if (readUntilMatch(endTag, true)) {
            key.set(fsin.getPos());
            value.set(buffer.getData(), 0, buffer.getLength());
            return true;
          }
        } finally {
          buffer.reset();
        }
      }
      return false;
    }

    /**
     * Scans forward byte by byte until the match bytes are seen. When
     * withinBlock is true, every byte except '\r' and '\n' is copied into
     * the record buffer, so a record split across lines is stitched back
     * into a single line before it is handed to the mapper.
     */
    public boolean readUntilMatch(byte[] match, boolean withinBlock)
        throws IOException {
      int i = 0;
      while (true) {
        int b = fsin.read();
        if (b == -1) {
          // End of file: no further record.
          return false;
        }

        if (withinBlock && b != (byte) '\r' && b != (byte) '\n') {
          buffer.write(b);
        }

        if (b == match[i]) {
          i++;
          if (i >= match.length) {
            return true;
          }
        } else {
          i = 0;
        }

        // Only look for a start tag within this split.
        if (!withinBlock && i == 0 && fsin.getPos() >= end) {
          return false;
        }
      }
    }

    @Override
    public float getProgress() throws IOException {
      return (fsin.getPos() - start) / (float) (end - start);
    }

    @Override
    public synchronized long getPos() throws IOException {
      return fsin.getPos();
    }

    @Override
    public LongWritable createKey() {
      return new LongWritable();
    }

    @Override
    public Text createValue() {
      return new Text();
    }

    @Override
    public synchronized void close() throws IOException {
      fsin.close();
    }
  }
}
Here is my output (the leading number is the byte offset emitted as the record key):

$ hdfs dfs -text /poc/testout001/part-*
25      <a><b><c>val1</c></b></a>
52      <a><b><c>val2</c></b></a>
80      <a><b><c>val3</c></b></a>
141     <a></b><c>val4-c-1</c><c>val4-c-2</c></b><d>val-d-1</d></a>