Amazon web services PySpark: overwriting data on S3 fails

Tags: Amazon Web Services, Amazon S3, Pyspark, Apache Spark Sql, Pyspark Dataframes

I have a PySpark project on AWS EMR that reads and writes data to AWS S3.

The pipeline runs monthly, so I normally overwrite the output directory like this:

df.write.mode("overwrite").partitionBy('col1').parquet(path)
I'm hitting an error where S3 fails to delete the files already in the directory. If I go to the S3 console, delete the directory, and re-run the job, it completes fine.

Other than manually deleting the directory before starting the job, are there any suggestions for avoiding this? The log and stack trace from the failing write are below.

20/11/16 22:26:14 - INFO - __main__ - write file to s3
Traceback (most recent call last):
  File "deepar_data_prep_for_deployment.py", line 395, in <module>
    train_data.write.mode("Overwrite").partitionBy('col1').parquet(train_path)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1604607821553_0033/container_1604607821553_0033_01_000001/pyspark.zip/pyspark/sql/readwriter.py", line 841, in parquet
  File "/mnt/yarn/usercache/hadoop/appcache/application_1604607821553_0033/container_1604607821553_0033_01_000001/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/mnt/yarn/usercache/hadoop/appcache/application_1604607821553_0033/container_1604607821553_0033_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
  File "/mnt/yarn/usercache/hadoop/appcache/application_1604607821553_0033/container_1604607821553_0033_01_000001/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o1952.parquet.
: java.lang.NullPointerException
    at com.amazon.ws.emr.hadoop.fs.s3.lite.S3Errors.isHttp200WithErrorCode(S3Errors.java:57)
    at com.amazon.ws.emr.hadoop.fs.s3.lite.executor.GlobalS3Executor.execute(GlobalS3Executor.java:100)
    at com.amazon.ws.emr.hadoop.fs.s3.lite.AmazonS3LiteClient.invoke(AmazonS3LiteClient.java:184)
    at com.amazon.ws.emr.hadoop.fs.s3.lite.AmazonS3LiteClient.deleteObjects(AmazonS3LiteClient.java:127)
    at com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.deleteAll(Jets3tNativeFileSystemStore.java:364)
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.doSingleThreadedBatchDelete(S3NativeFileSystem.java:1372)
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.delete(S3NativeFileSystem.java:663)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.delete(EmrFileSystem.java:332)
    at org.apache.spark.internal.io.FileCommitProtocol.deleteWithJob(FileCommitProtocol.scala:124)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.deleteMatchingPartitions(InsertIntoHadoopFsRelationCommand.scala:223)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:122)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
    at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668)
    at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:228)
    at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:557)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
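
For context, one way to avoid the manual step is to clear the output prefix programmatically right before the write, which is effectively what deleting the directory in the S3 console does. The snippet below is only a minimal sketch: the bucket and prefix names are placeholders, it assumes boto3 is available on the driver, and df stands for the DataFrame the pipeline produces.

import boto3

# Hypothetical output location; replace with the real bucket and prefix.
bucket = "my-output-bucket"
prefix = "monthly/train_data/"

# Delete every object under the prefix, mirroring the manual cleanup
# otherwise done in the S3 console before re-running the job.
s3 = boto3.resource("s3")
s3.Bucket(bucket).objects.filter(Prefix=prefix).delete()

# Then run the usual overwrite (df is the DataFrame built earlier in the job).
df.write.mode("overwrite").partitionBy("col1").parquet("s3://{}/{}".format(bucket, prefix))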