Apache Spark: "Complex types not supported" exception when loading Parquet

Tags: apache-spark, spark-dataframe, parquet

I am trying to load a Parquet file in Spark as a DataFrame:

val df = spark.read.parquet(path)
and I am getting:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 12, 10.250.2.32): java.lang.UnsupportedOperationException: Complex types not supported.
While going through the code, I realized there is a check in Spark's VectorizedParquetRecordReader.java (initializeInternal) that throws this exception.

So I think it is failing on the isRepetition check. Can someone suggest a way around this?
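One way to narrow down which column trips that check is to look at the schema alone; reading just the Parquet footers should not trigger the failing scan. A minimal sketch, assuming a SparkSession named spark (as in spark-shell) and the same path placeholder as above:

// Print the column types read from the Parquet footers, without scanning any data
val schemaOnly = spark.read.parquet(path).schema
schemaOnly.fields.foreach(f => println(s"${f.name}: ${f.dataType.simpleString}"))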

My Parquet data looks like this:

Key1 = value1
Key2 = value1
Key3 = value1
Key4:
.list:
..element:
...key5:
....list:
.....element:
......certificateSerialNumber = dfsdfdsf45345
......issuerName = CN=Microsoft Windows Verification PCA, O=Microsoft Corporation, L=Redmond, S=Washington, C=US
......subjectName = CN=Microsoft Windows, OU=MOPR, O=Microsoft Corporation, L=Redmond, S=Washington, C=US
......thumbprintAlgorithm = Sha1
......thumbprintContent = sfdasf42dsfsdfsdfsd
......validFrom = 2009-12-07 21:57:44.000000
......validTo = 2011-03-07 21:57:44.000000
....list:
.....element:
......certificateSerialNumber = dsafdsafsdf435345
......issuerName = CN=Microsoft Root Certificate Authority, DC=microsoft, DC=com
......subjectName = CN=Microsoft Windows Verification PCA, O=Microsoft Corporation, L=Redmond, S=Washington, C=US
......thumbprintAlgorithm = Sha1
......thumbprintContent = sdfsdfdsf43543
......validFrom = 2005-09-15 21:55:41.000000
......validTo = 2016-03-15 22:05:41.000000
I suspect Key4 could be the cause, because of the nested tree. The input data was JSON, so perhaps Parquet is not able to understand the complex/nested levels of the JSON.

I found a bug reported in Spark, but it points to an issue with Hive complex types. I am not sure whether fixing that would also resolve the Parquet problem.

Update 1: Exploring the Parquet files further, I found the following:

The spark.write step created 5 Parquet files.
Of these Parquet files, one is empty, so a column whose schema should be ArrayType comes out as StringType there, and when I try to read everything as a whole I see the above exception.
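One way to confirm this kind of per-file schema drift is to inspect each part file on its own. A hypothetical sketch (not from the original post), using Dataset.inputFiles to list the underlying files:

// Compare the footer schema of every part file individually
val files = spark.read.parquet(path).inputFiles
files.foreach { f =>
  println(s"=== $f ===")
  spark.read.parquet(f).printSchema()
}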

The Spark code base indicates that "ColumnarBatch supports structs and arrays" as of Spark 2.0.0 (cf.). Also as of Spark 2.0.0, the property spark.sql.parquet.enableVectorizedReader is handled (cf.).

My 2 cents: disable the "vectorized" optimization and see what happens.
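For example, a minimal spark-shell sketch of that suggestion (the config key is the one named above; path is the same placeholder as in the question):

// Turn off the vectorized Parquet reader for the current session, then retry the read
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
val dfNonVectorized = spark.read.parquet(path)
dfNonVectorized.show(5)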

Second shot:
Since the problem has been narrowed down to some empty files that do not show the same schema as the "real" files, my 3 cents: experiment with spark.sql.parquet.mergeSchema to see whether the schema from the real files takes precedence.
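A sketch of that experiment (spark.sql.parquet.mergeSchema is the session config mentioned above; the per-read option "mergeSchema" is an alternative):

// Enable Parquet schema merging for the session, then re-read
spark.conf.set("spark.sql.parquet.mergeSchema", "true")
val merged = spark.read.parquet(path)
merged.printSchema()
// (equivalently: spark.read.option("mergeSchema", "true").parquet(path))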


Other than that, you might try to eliminate the empty files with some kind of repartitioning at write time, e.g. coalesce(1) (OK, 1 is a bit sarcastic, but you get the point).
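A sketch of that write-side workaround (df and outputPath are placeholders for the original DataFrame and target directory):

// Write fewer, larger files so that no empty part files end up in the output
df.coalesce(1).write.mode("overwrite").parquet(outputPath)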

Which tool was used to create the Parquet files in the first place?

The files were written with the Spark DataFrame writer, spark.write.parquet. I am now getting org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file {hdfs parquet file location], so the error is more precise now.