PySpark raises an error while writing a Spark DataFrame to an ORC file


I am trying to write a Spark DataFrame out as an ORC file and it throws the error below. I have a strong feeling …

Log:

Caused by: org.apache.spark.SparkException: Task failed while writing rows
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:270)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:189)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1$$anonfun$apply$mcV$sp$1.apply(FileFormatWriter.scala:188)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more
Caused by: java.lang.IndexOutOfBoundsException: Index: 116, Size: 116
        at java.util.ArrayList.rangeCheck(ArrayList.java:657)
        at java.util.ArrayList.get(ArrayList.java:433)
        at org.apache.hadoop.hive.ql.io.orc.OrcStruct$OrcStructInspector.<init>(OrcStruct.java:196)
        at org.apache.hadoop.hive.ql.io.orc.OrcStruct.createObjectInspector(OrcStruct.java:549)
        at org.apache.hadoop.hive.ql.io.orc.OrcSerde.initialize(OrcSerde.java:109)
        at org.apache.spark.sql.hive.orc.OrcSerializer.<init>(OrcFileFormat.scala:188)
        at org.apache.spark.sql.hive.orc.OrcOutputWriter.<init>(OrcFileFormat.scala:231)
        at org.apache.spark.sql.hive.orc.OrcFileFormat$$anon$1.newInstance(OrcFileFormat.scala:91)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.org$apache$spark$sql$execution$datasources$FileFormatWriter$DynamicPartitionWriteTask$$newOutputWriter(FileFormatWriter.scala:416)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask$$anonfun$execute$2.apply(FileFormatWriter.scala:449)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask$$anonfun$execute$2.apply(FileFormatWriter.scala:438)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at org.apache.spark.sql.catalyst.util.AbstractScalaRowIterator.foreach(AbstractScalaRowIterator.scala:26)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$DynamicPartitionWriteTask.execute(FileFormatWriter.scala:438)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:256)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:254)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1371)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:259)
        ... 8 more
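For context, the DynamicPartitionWriteTask frames in the trace suggest the failing write was a partitioned one. The question does not show the actual code, so the following is only a hypothetical reconstruction of that call pattern; the DataFrame, partition column and output path are placeholders:

# Hypothetical reconstruction of the failing call pattern; names and paths are placeholders.
(df.write
    .format('orc')
    .partitionBy('partition_col')   # DynamicPartitionWriteTask in the trace implies a partitionBy(...) write
    .mode('overwrite')
    .save('/tmp/orc_output'))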

Can you please add more details on how you are trying to write to ORC?

The general practice is: if you are reading data that already has a schema (for example, a Hive table stored as text), you would use the direct API below.

df.write().format('orc').save('/tmp/output')
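In PySpark, a sketch of that direct path might look like the lines below, assuming the data comes from an existing Hive table; the table name and output path are placeholders, and write is accessed as an attribute, as the comment at the end of the thread also points out:

# Hedged sketch: 'db.some_table' and '/tmp/output' are placeholders.
df = spark.table('db.some_table')                             # a metastore table already carries a schema
df.write.format('orc').mode('overwrite').save('/tmp/output')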
If you don't have a schema, which can be the case when reading data directly from HDFS or from a streaming application, you have to define the schema and create the DataFrame yourself:

from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField('colName1', StringType(), False)
])
df = spark.read.csv(path, schema)   # path points at the raw files on HDFS
df.write().format('orc').save('/tmp/output')
Or, if you have an RDD, you have to convert the RDD[Any] into an RDD[Row], define the schema, and convert it to a DataFrame (a conversion sketch follows the code block below):

from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField('colName1', StringType(), False)
])
df = spark.createDataFrame(rdd_of_rows, schema)
df.write().format('orc').save('/tmp/output')
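A minimal sketch of that RDD[Any] to RDD[Row] conversion step; raw_rdd and the single string column are hypothetical:

from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, StringType

# raw_rdd is a hypothetical RDD of plain values; wrap each element in a Row first
rdd_of_rows = raw_rdd.map(lambda v: Row(colName1=str(v)))

schema = StructType([StructField('colName1', StringType(), False)])
df = spark.createDataFrame(rdd_of_rows, schema)
df.write.format('orc').save('/tmp/output')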


Small correction: on the PySpark side it should be df.write.format('orc').save(output_path); write is not a method.
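For completeness, a minimal sketch of the corrected call from that comment; output_path is a placeholder:

output_path = '/tmp/output'                  # placeholder
df.write.format('orc').save(output_path)     # write is a property returning a DataFrameWriter
# equivalent shorthand: df.write.orc(output_path)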