Scala: strange error when writing Parquet files to S3

Tags: scala, apache-spark, amazon-s3, apache-spark-sql, amazon-emr

I'm hitting the NullPointerException below when trying to write a DataFrame to S3. Sometimes the job succeeds, and sometimes it fails.

I'm using EMR 5.20 with Spark 2.4.0.

Spark session creation:

val spark = SparkSession.builder
        .config("spark.sql.parquet.binaryAsString", "true")
        .config("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
        .config("spark.sql.parquet.filterPushdown", "true")
        .config("spark.sql.parquet.fs.optimized.committer.optimization-enabled","true")
        .getOrCreate()

spark.sql("myQuery").write.partitionBy("partitionColumn").mode(SaveMode.Overwrite).option("inferSchema","false").parquet("s3a://...filePath")
Can anyone help me figure this out? Thanks in advance.

java.lang.NullPointerException
  at com.amazon.ws.emr.hadoop.fs.s3.lite.S3Errors.isHttp200WithErrorCode(S3Errors.java:57)
  at com.amazon.ws.emr.hadoop.fs.s3.lite.executor.GlobalS3Executor.execute(GlobalS3Executor.java:100)
  at com.amazon.ws.emr.hadoop.fs.s3.lite.AmazonS3LiteClient.invoke(AmazonS3LiteClient.java:184)
  at com.amazon.ws.emr.hadoop.fs.s3.lite.AmazonS3LiteClient.deleteObjects(AmazonS3LiteClient.java:127)
  at com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.deleteAll(Jets3tNativeFileSystemStore.java:364)
  at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.doSingleThreadedBatchDelete(S3NativeFileSystem.java:1372)
  at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.delete(S3NativeFileSystem.java:663)
  at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.delete(EmrFileSystem.java:332)
  at org.apache.spark.internal.io.FileCommitProtocol.deleteWithJob(FileCommitProtocol.scala:124)
  at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.deleteMatchingPartitions(InsertIntoHadoopFsRelationCommand.scala:223)
  at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:122)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
  at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
  at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:668)
  at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
  at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
  at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:668)
  at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:276)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:228)
  at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:557)
  ... 55 elided

Looks like a bug in the AWS code. It's closed source, so you'll have to take it up with AWS.


I do see a hint that this is a bug in code that tries to parse an error response: something upstream probably failed, but the client code that relays that error response is itself flawed. That isn't all that unusual; failure handling rarely gets enough test coverage.

You're using SaveMode.Overwrite, and the failing frame com.amazon.ws.emr.hadoop.fs.s3.lite.AmazonS3LiteClient.deleteObjects(AmazonS3LiteClient.java:127) suggests the problem occurs during the delete step of the overwrite.
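If the failure is indeed tied to that upfront bulk delete, one possible workaround is Spark's dynamic partition overwrite (available since Spark 2.3), which skips deleting all matching partitions and only replaces the partitions present in the incoming data. A minimal sketch, reusing the query and partition column from the question; the output path is a placeholder:

import org.apache.spark.sql.{SaveMode, SparkSession}

// Sketch: with dynamic partition overwrite, Spark does not delete every matching
// partition up front (the deleteMatchingPartitions call in the stack trace);
// it only replaces the partitions that appear in the query result.
val spark = SparkSession.builder
        .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
        .getOrCreate()

spark.sql("myQuery")
        .write
        .partitionBy("partitionColumn")
        .mode(SaveMode.Overwrite)
        .parquet("s3://your-bucket/your-prefix")   // placeholder output path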

I would check that the S3 permissions in the IAM policy of your EMR EC2 instance profile allow the s3:DeleteObject action on that file path when the Parquet write is invoked. It should look something like this:

{
  "Sid": "AllowWriteAccess",
  "Action": [
    "s3:DeleteObject",
    "s3:Get*",
    "s3:List*",
    "s3:PutObject"
  ],
  "Effect": "Allow",
  "Resource": [
    "<arn_for_your_filepath>/*"
  ]
}
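To confirm the permission outside of the full job, a small probe like the one below exercises the same put/delete calls through the Hadoop FileSystem API that the Parquet writer relies on. This is only a sketch; the bucket, prefix, and scratch key are assumptions, not taken from the original post:

import org.apache.hadoop.fs.Path

// Hypothetical scratch object under the same prefix as the Parquet output.
val probe = new Path("s3://your-bucket/your-prefix/_permission_probe")
val fs = probe.getFileSystem(spark.sparkContext.hadoopConfiguration)

fs.create(probe, true).close()   // needs s3:PutObject
fs.delete(probe, false)          // needs s3:DeleteObject, the permission the overwrite's delete step requires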

Are you using different file paths in the Parquet write between job runs? If so, that could explain the intermittent failures.

There are multiple Spark write strategies. Looking at the error, it comes from the S3 side rather than from s3a://... Try the s3 scheme instead, i.e. s3://... it may just be a URI syntax issue... try this:
val spark = SparkSession.builder()
        .config("spark.sql.parquet.binaryAsString", "true")
        .config("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
        .config("spark.sql.parquet.filterPushdown", "true")
        .config("spark.sql.parquet.fs.optimized.committer.optimization-enabled", "true")
        .getOrCreate()
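And the corresponding write call with the s3 scheme this answer suggests (the path stays elided as in the question):

spark.sql("myQuery")
        .write
        .partitionBy("partitionColumn")
        .mode(SaveMode.Overwrite)
        .parquet("s3://...filePath")   // s3:// instead of s3a://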
I tried both s3 and s3a. This only happens when we try to overwrite; when there are no files at the output path the job runs fine. @DineshJ did you ever find a solution? I'm seeing exactly the same behavior. @DineshJ any fix for this? When I write the output as CSV it works perfectly, but not as Parquet. This solved my problem: since I couldn't change the permissions, I wrote to a new S3 path instead.
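A minimal sketch of that last workaround, writing each run to a fresh path so Overwrite never has to delete existing objects; the base path and the run-id format are assumptions for illustration:

import java.time.LocalDateTime
import java.time.format.DateTimeFormatter
import org.apache.spark.sql.SaveMode

// A per-run suffix means there is nothing to delete at the target path.
val runId = LocalDateTime.now.format(DateTimeFormatter.ofPattern("yyyyMMdd_HHmmss"))
val outputPath = s"s3://your-bucket/your-prefix/run=$runId"   // placeholder base path

spark.sql("myQuery")
        .write
        .partitionBy("partitionColumn")
        .mode(SaveMode.ErrorIfExists)   // fresh path, so no overwrite is needed
        .parquet(outputPath)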