Java Apache Spark - Parquet/Snappy Compression Error


I have a DataFrame coming from an Oracle table that I am trying to write out in Parquet format with Snappy compression.

It works fine if I save it as CSV, but I get this error when trying to save as Parquet:

java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
The Snappy library is already on my classpath, and it works for other source types (flat files).
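
For reference, a minimal sketch of such a write might look like the following (the JDBC URL, credentials and table name are illustrative placeholders, not the original code; the output path matches the one in the log below):

    import java.util.Properties;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;

    public class ParquetWriteExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("oracle-to-parquet")
                    .master("local[*]")
                    .getOrCreate();

            // Hypothetical Oracle connection details.
            Properties props = new Properties();
            props.setProperty("user", "app_user");
            props.setProperty("password", "secret");

            // Read the source table into a DataFrame.
            Dataset<Row> df = spark.read()
                    .jdbc("jdbc:oracle:thin:@//dbhost:1521/ORCL", "GMS_TEST", props);

            // Writing as CSV works; the equivalent Parquet write below is what
            // triggers the UnsatisfiedLinkError (Snappy is the default Parquet
            // codec, made explicit here).
            df.write()
              .mode(SaveMode.Overwrite)
              .option("compression", "snappy")
              .parquet("C:/Dev/edi_parquet/GMS_TEST");
        }
    }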

What can I do to resolve this?

The stack trace is below:

2017-05-19 08:10:37.398  INFO 7740 --- [rker for task 0] org.apache.hadoop.io.compress.CodecPool  : Got brand-new compressor [.snappy]
2017-05-19 08:11:45.482 ERROR 7740 --- [rker for task 0] org.apache.spark.util.Utils              : Aborting task
java.lang.UnsatisfiedLinkError: org.xerial.snappy.SnappyNative.maxCompressedLength(I)I
    at org.xerial.snappy.SnappyNative.maxCompressedLength(Native Method) ~[snappy-java-1.1.2.6.jar:na]
    at org.xerial.snappy.Snappy.maxCompressedLength(Snappy.java:376) ~[snappy-java-1.1.2.6.jar:na]
    at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:67) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81) ~[hadoop-common-2.2.0.jar:na]
    at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92) ~[hadoop-common-2.2.0.jar:na]
    at org.apache.parquet.hadoop.CodecFactory$BytesCompressor.compress(CodecFactory.java:112) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:89) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:152) ~[parquet-column-1.8.1.jar:1.8.1]
    at org.apache.parquet.column.impl.ColumnWriterV1.accountForValueWritten(ColumnWriterV1.java:113) ~[parquet-column-1.8.1.jar:1.8.1]
    at org.apache.parquet.column.impl.ColumnWriterV1.write(ColumnWriterV1.java:205) ~[parquet-column-1.8.1.jar:1.8.1]
    at org.apache.parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.addBinary(MessageColumnIO.java:347) ~[parquet-column-1.8.1.jar:1.8.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$makeWriter$9.apply(ParquetWriteSupport.scala:169) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$makeWriter$9.apply(ParquetWriteSupport.scala:157) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$writeFields$1.apply$mcV$sp(ParquetWriteSupport.scala:114) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$consumeField(ParquetWriteSupport.scala:422) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.org$apache$spark$sql$execution$datasources$parquet$ParquetWriteSupport$$writeFields(ParquetWriteSupport.scala:113) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport$$anonfun$write$1.apply$mcV$sp(ParquetWriteSupport.scala:104) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.consumeMessage(ParquetWriteSupport.scala:410) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.write(ParquetWriteSupport.scala:103) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetWriteSupport.write(ParquetWriteSupport.scala:51) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42) ~[parquet-hadoop-1.8.1.jar:1.8.1]
    at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.writeInternal(ParquetOutputWriter.scala:42) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$SingleDirectoryWriteTask.execute(FileFormatWriter.scala:245) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:190) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:188) ~[spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1341) ~[spark-core_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:193) [spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:129) [spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$3.apply(FileFormatWriter.scala:128) [spark-sql_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) [spark-core_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.scheduler.Task.run(Task.scala:99) [spark-core_2.11-2.1.1.jar:2.1.1]
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322) [spark-core_2.11-2.1.1.jar:2.1.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_75]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_75]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
2017-05-19 08:11:45.484  INFO 7740 --- [rker for task 0] o.a.p.h.InternalParquetRecordWriter      : Flushing mem columnStore to file. allocated memory: 13,812,677
2017-05-19 08:11:45.499  WARN 7740 --- [rker for task 0] org.apache.hadoop.fs.FileUtil            : Failed to delete file or dir [C:\Dev\edi_parquet\GMS_TEST\_temporary\0\_temporary\attempt_20170519081036_0000_m_000000_0\.part-00000-193f8835-6505-4dac-8cb6-0e8c5f3cff1b.snappy.parquet.crc]: it still exists.
2017-05-19 08:11:45.501  WARN 7740 --- [rker for task 0] org.apache.hadoop.fs.FileUtil            : Failed to delete file or dir [C:\Dev\edi_parquet\GMS_TEST\_temporary\0\_temporary\attempt_20170519081036_0000_m_000000_0\part-00000-193f8835-6505-4dac-8cb6-0e8c5f3cff1b.snappy.parquet]: it still exists.
2017-05-19 08:11:45.501  WARN 7740 --- [rker for task 0] o.a.h.m.lib.output.FileOutputCommitter   : Could not delete file:/C:/Dev/edi_parquet/GMS_TEST/_temporary/0/_temporary/attempt_20170519081036_0000_m_000000_0
2017-05-19 08:11:45.504 ERROR 7740 --- [rker for task 0] o.a.s.s.e.datasources.FileFormatWriter   : Job job_20170519081036_0000 aborted.

This problem is caused by an incompatibility between the snappy-java version that Parquet needs and the one bundled with Spark/Hadoop.

We ran into the same problem with Spark 2.3 on Cloudera.


The solution that worked for us was to download a compatible snappy-java jar and place it in Spark's jars folder, which resolved the issue.

This means replacing the snappy-java jar on every node where Spark is installed.


You can find Spark's jars folder at the following locations:

  • Cloudera: /opt/cloudera/parcels/SPARK2-{Spark2 Cloudera version}/lib/spark2/jars
  • HDP: /usr/hdp/{HDP version}/spark2/jars
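
A quick way to verify which snappy-java jar is actually being picked up, and whether its native binding loads, is a small standalone check like the one below (a minimal sketch; compile it against snappy-java and run it on each node with the Spark jars directory on the classpath):

    import org.xerial.snappy.Snappy;

    public class SnappyCheck {
        public static void main(String[] args) throws Exception {
            // Print which jar the Snappy class was loaded from.
            System.out.println("snappy-java loaded from: "
                    + Snappy.class.getProtectionDomain().getCodeSource().getLocation());

            // Force the native library to load; this is where a broken
            // jar/native binding surfaces as an UnsatisfiedLinkError.
            byte[] compressed = Snappy.compress("hello snappy".getBytes("UTF-8"));
            System.out.println("compressed " + compressed.length + " bytes OK");
        }
    }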

Are you appending the output?
No, just saving the complete DataFrame using the df.write().mode(SaveMode.Overwrite) method. It turned out the problem was local to the Windows machine I was testing on; it runs fine in a Linux environment...