Apache Pig: how to avoid an UnsatisfiedLinkError when loading Parquet data into Pig

I am trying to load Parquet data in a Pig script using org.apache.parquet.pig.ParquetLoader() from parquet-pig-bundle-1.8.1.jar, with Pig version 0.15.0.2.4.2.0-258. My script is a very simple load and dump to make sure everything works.

My script is:

register 'parquet-pig-bundle-1.8.1.jar';
dat = LOAD '/project/part-r-00075.parquet'
    USING org.apache.parquet.pig.ParquetLoader();

dat_limited = LIMIT dat 5;

DUMP dat_limited;
However, when I run this, I get:

2016-08-19 12:38:01,536 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2998: Unhandled internal error. org.xerial.snappy.SnappyNative.uncompressedLength(Ljava/nio/ByteBuffer;II)I
Details at logfile: /devel/mrp/pig/ttfs3_examples/pig_1471624672895.log
2016-08-19 12:38:01,581 [main] INFO  org.apache.pig.Main - Pig script completed in 9 seconds and 32 milliseconds (9032 ms)
Aug 19, 2016 12:37:57 PM INFO: org.apache.parquet.hadoop.ParquetInputFormat: Total input paths to process : 1
Aug 19, 2016 12:37:57 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 19, 2016 12:37:57 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 19, 2016 12:37:57 PM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 19, 2016 12:37:58 PM INFO: org.apache.parquet.hadoop.ParquetInputFormat: Total input paths to process : 1
Aug 19, 2016 12:37:59 PM INFO: org.apache.parquet.hadoop.ParquetInputFormat: Total input paths to process : 1
Aug 19, 2016 12:37:59 PM WARNING: org.apache.parquet.hadoop.ParquetRecordReader: Can not initialize counter due to context is not a instance of TaskInputOutputContext, but is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
Aug 19, 2016 12:37:59 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 64797 records.
Aug 19, 2016 12:37:59 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 19, 2016 12:38:01 PM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1244 ms. row count = 63113
2016-08-19 12:38:01,832 [Thread-0] ERROR org.apache.hadoop.hdfs.DFSClient - Failed to close inode 457368033
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/temp-1982281463/tmp1114763885/_temporary/0/_temporary/attempt__0001_m_000001_1/part-m-00001 (inode 457368033): File does not exist. Holder DFSClient_NONMAPREDUCE_-797544746_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3481)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3571)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3538)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:884)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:544)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2206)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2202)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2200)

    at org.apache.hadoop.ipc.Client.call(Client.java:1426)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy12.complete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:464)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
    at com.sun.proxy.$Proxy13.complete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2354)
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2336)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2300)
    at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:951)
    at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:983)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1134)
    at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2744)
    at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2761)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
The log file includes:

Pig Stack Trace
---------------
ERROR 2998: Unhandled internal error. org.xerial.snappy.SnappyNative.uncompressedLength(Ljava/nio/ByteBuffer;II)I

java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.uncompressedLength(Ljava/nio/ByteBuffer;II)I
    at org.xerial.snappy.SnappyNative.uncompressedLength(Native Method)
    at org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:561)
    at org.apache.parquet.hadoop.codec.SnappyDecompressor.decompress(SnappyDecompressor.java:62)
    at org.apache.parquet.hadoop.codec.NonBlockedDecompressorStream.read(NonBlockedDecompressorStream.java:51)
    at java.io.DataInputStream.readFully(DataInputStream.java:195)
    at java.io.DataInputStream.readFully(DataInputStream.java:169)
    at org.apache.parquet.bytes.BytesInput$StreamBytesInput.toByteArray(BytesInput.java:204)
    at org.apache.parquet.column.impl.ColumnReaderImpl.readPageV1(ColumnReaderImpl.java:591)
    at org.apache.parquet.column.impl.ColumnReaderImpl.access$300(ColumnReaderImpl.java:60)
    at org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:540)
    at org.apache.parquet.column.impl.ColumnReaderImpl$3.visit(ColumnReaderImpl.java:537)
    at org.apache.parquet.column.page.DataPageV1.accept(DataPageV1.java:96)
    at org.apache.parquet.column.impl.ColumnReaderImpl.readPage(ColumnReaderImpl.java:537)
    at org.apache.parquet.column.impl.ColumnReaderImpl.checkRead(ColumnReaderImpl.java:529)
    at org.apache.parquet.column.impl.ColumnReaderImpl.consume(ColumnReaderImpl.java:641)
    at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:357)
    at org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:82)
    at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:77)
    at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:270)
    at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:135)
    at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:101)
    at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:154)
    at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:101)
    at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:140)
    at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
    at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
    at org.apache.parquet.pig.ParquetLoader.getNext(ParquetLoader.java:230)
    at org.apache.pig.impl.io.ReadToEndLoader.getNextHelper(ReadToEndLoader.java:251)
    at org.apache.pig.impl.io.ReadToEndLoader.getNext(ReadToEndLoader.java:231)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLoad.getNextTuple(POLoad.java:137)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLimit.getNextTuple(POLimit.java:122)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:307)
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POStore.getNextTuple(POStore.java:159)
    at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.runPipeline(FetchLauncher.java:157)
    at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.launchPig(FetchLauncher.java:81)
    at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:302)
    at org.apache.pig.PigServer.launchPlan(PigServer.java:1431)
    at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1416)
    at org.apache.pig.PigServer.storeEx(PigServer.java:1075)
    at org.apache.pig.PigServer.store(PigServer.java:1038)
    at org.apache.pig.PigServer.openIterator(PigServer.java:951)
    at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:754)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:376)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
    at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:81)
    at org.apache.pig.Main.run(Main.java:631)
    at org.apache.pig.Main.main(Main.java:177)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
================================================================================

I checked the source of ParquetLoader and the method does appear to have a valid no-argument signature. I also tried adding several other dependencies that do not seem to be packaged with parquet-pig-bundle, such as parquet-common and parquet-encoding, but with no success.
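
For reference, those attempts amounted to extra register statements along these lines (a sketch only; the exact jar file names and versions are assumptions chosen to match the bundle version above):

register 'parquet-pig-bundle-1.8.1.jar';
-- Hypothetical extra registrations that were tried; substitute the
-- jar versions actually present on your system.
register 'parquet-common-1.8.1.jar';
register 'parquet-encoding-1.8.1.jar';

dat = LOAD '/project/part-r-00075.parquet'
    USING org.apache.parquet.pig.ParquetLoader();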

The problem here is that Hadoop and Pig disagree on the version of snappy. The older snappy-java that Hadoop puts on the classpath is loaded first, and it does not provide the native uncompressedLength(ByteBuffer, int, int) method that the newer snappy-java expected by Parquet calls, which is what surfaces as the UnsatisfiedLinkError.
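
A common way to resolve this kind of classpath conflict is to make the newer library win. The following is a sketch, not necessarily the exact fix from the original answer; the jar path, snappy-java version, and script name are illustrative assumptions:

# Find the snappy-java copies Hadoop puts on the classpath.
find "$HADOOP_HOME" -name 'snappy-java-*.jar'

# Tell Hadoop to prefer user-supplied jars over its bundled ones,
# and put a newer snappy-java at the front of Pig's classpath.
export HADOOP_USER_CLASSPATH_FIRST=true
export PIG_CLASSPATH=/path/to/snappy-java-1.1.1.7.jar:$PIG_CLASSPATH

# load_parquet.pig stands in for the script shown above.
pig -x mapreduce load_parquet.pig

For jobs that actually launch MapReduce tasks (rather than running in Pig's local fetch mode, as the trace above does), setting the job property mapreduce.job.user.classpath.first=true should have the analogous effect on the task side.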