Apache Spark: append mode for partitioned text files fails with SaveMode.Append - IOException: File already exists


Something as simple as writing a partitioned text file fails:

dataDF.write.partitionBy("year", "month", "date").mode(SaveMode.Append).text("s3://data/test2/events/")
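For reference, a self-contained sketch of the same write, with hypothetical stand-in rows for dataDF (the text source takes a single string value column; the year/month/date columns become partition directories). The exception below comes from this write:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

val sc = new SparkContext(new SparkConf().setAppName("partitioned-text-repro"))
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Hypothetical stand-in data for the real dataDF.
val dataDF = Seq(
  ("event-a", "2016", "07", "06"),
  ("event-b", "2016", "07", "07")
).toDF("value", "year", "month", "date")

dataDF.write.partitionBy("year", "month", "date")
  .mode(SaveMode.Append)
  .text("s3://data/test2/events/")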
Exception:

16/07/06 02:15:05 ERROR datasources.DynamicPartitionWriterContainer: Aborting task.
java.io.IOException: File already exists:s3://path/1839dd1ed38a.gz
 at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.create(S3NativeFileSystem.java:614)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:791)
 at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.create(EmrFileSystem.java:177)
 at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:135)
 at org.apache.spark.sql.execution.datasources.text.TextOutputWriter.<init>(DefaultSource.scala:156)
 at org.apache.spark.sql.execution.datasources.text.TextRelation$$anon$1.newInstance(DefaultSource.scala:125)
 at org.apache.spark.sql.execution.datasources.BaseWriterContainer.newOutputWriter(WriterContainer.scala:129)
 at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.newOutputWriter$1(WriterContainer.scala:424)
 at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:356)
 at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
 at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
 at org.apache.spark.scheduler.Task.run(Task.scala:89)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
16/07/06 02:15:05 INFO output.DirectFileOutputCommitter: Nothing to clean up on abort since there are no temporary files written
16/07/06 02:15:05 ERROR datasources.DynamicPartitionWriterContainer: Task attempt attempt_201607060215_0004_m_001709_3 aborted.
16/07/06 02:15:05 ERROR executor.Executor: Exception in task 1709.3 in stage 4.0 (TID 12093)
org.apache.spark.SparkException: Task failed while writing rows.
 at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:414)
 at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
 at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
 at org.apache.spark.scheduler.Task.run(Task.scala:89)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: File already exists:s3://path/a984-1839dd1ed38a.gz
 at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.create(S3NativeFileSystem.java:614)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:791)
 at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.create(EmrFileSystem.java:177)
 at org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.getRecordWriter(TextOutputFormat.java:135)
 at org.apache.spark.sql.execution.datasources.text.TextOutputWriter.<init>(DefaultSource.scala:156)
 at org.apache.spark.sql.execution.datasources.text.TextRelation$$anon$1.newInstance(DefaultSource.scala:125)
 at org.apache.spark.sql.execution.datasources.BaseWriterContainer.newOutputWriter(WriterContainer.scala:129)
 at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.newOutputWriter$1(WriterContainer.scala:424)
 at org.apache.spark.sql.execution.datasources.DynamicPartitionWriterContainer.writeRows(WriterContainer.scala:356)
 ... 8 more

After wasting a lot of man-hours, I'm answering my own question with the solution that worked for me, along with other ways to attack the problem.

TLDR; Set spark.speculation to false, as follows:

conf = new SparkConf().set("spark.speculation", "false")
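Why this helps: with speculative execution on, Spark can launch duplicate attempts of slow tasks, and since EMR's DirectFileOutputCommitter writes straight to S3 with no temporary files (see the "Nothing to clean up on abort" line in the log above), a second attempt that tries to create the same output file dies with "File already exists". A minimal sketch of the fix in context, assuming a Spark 1.6-era SQLContext setup (the app name is illustrative):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SaveMode}

// Disable speculative execution so only one task attempt ever
// creates each output file on S3.
val conf = new SparkConf()
  .setAppName("partitioned-text-append")  // illustrative app name
  .set("spark.speculation", "false")

val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)

// dataDF stands in for the DataFrame from the question.
dataDF.write
  .partitionBy("year", "month", "date")
  .mode(SaveMode.Append)
  .text("s3://data/test2/events/")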

More details and specifics below.

What version of Spark were you on when you hit this problem?