PySpark insertInto write - FileAlreadyExistsException

Tags: pyspark, pyspark-sql, amazon-emr, aws-glue

I am trying to extract data from Redshift and insert it into S3, using a newly created Glue table that points at an S3 location.

Versions:
PySpark - 2.4.0
EMR - emr-5.21.0

My write looks like this:

 date_filtered_df.coalesce(int(args.numpartitions)) \
     .write \
     .mode("overwrite") \
     .format("parquet") \
     .insertInto("{}.{}_stg".format(args.database, args.table))
The table is newly created, and just before the insert it points at a completely empty location.

But the write fails with the error below:

19/12/09 14:37:48 WARN TaskSetManager: Lost task 576.1 in stage 1.0 (TID 3125, ip-172-31-31-203.ec2.internal, executor 401): org.apache.spark.SparkException: Task failed while writing rows.
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:254)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:168)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:121)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.fs.FileAlreadyExistsException: File already exists:s3://bucketname/trusted/databasename/dw/2019-12-08/tbalename/.hive-staging_hive_2019-12-09_12-49-15_136_2979557082816709535-1/-ext-10000/sales_dt=20051130/biz_unit_code=CS/geo_code=AMER/part-00576-84bcde9a-4fc0-402b-8aab-8d71e06f8c43.c000
        at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:249)
        at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:123)
        at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103)
        at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.newOutputWriter(FileFormatDataWriter.scala:236)
        at org.apache.spark.sql.execution.datasources.DynamicPartitionDataWriter.write(FileFormatDataWriter.scala:260)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:239)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:245)
        ... 10 more
Any help is greatly appreciated.

Instead of .mode("overwrite"), try .insertInto(output_table, overwrite=True).
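
For illustration, a minimal sketch of the full corrected chain, reusing the args object from the question (output_table is a name introduced here for readability):

 # Hypothetical staging-table name, mirroring the question's pattern
 output_table = "{}.{}_stg".format(args.database, args.table)

 # Passing overwrite=True to insertInto requests INSERT OVERWRITE semantics,
 # instead of setting .mode("overwrite") on the writer
 date_filtered_df.coalesce(int(args.numpartitions)) \
     .write \
     .insertInto(output_table, overwrite=True)

Since insertInto writes into an already-defined table, it uses that table's stored format, so the .format("parquet") call from the original chain should not be needed.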