Amazon S3: [Amazon](500310) Invalid operation: Assert

I am querying Redshift data with spark-redshift and processing it in PySpark.
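
For reference, the read is wired up roughly like this (a minimal sketch, not my exact job; the JDBC URL, tempdir bucket, and credentials option are placeholders):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-read").getOrCreate()

# spark-redshift runs the query via UNLOAD into tempdir, then reads the
# unloaded files back from S3. All connection values below are placeholders.
df = (spark.read
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://host:5439/db?user=xxx&password=yyy")
      .option("query", "select x, y from table_name where ...")
      .option("tempdir", "s3n://bucket/tempdir")
      .option("forward_spark_s3_credentials", "true")
      .load())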

The query works fine when I run it on Redshift from Workbench etc. But spark-redshift unloads the data to S3 and then retrieves it, and when I run it that way it throws the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o124.save.
: java.sql.SQLException: [Amazon](500310) Invalid operation: Assert
Details: 
 -----------------------------------------------
  error:  Assert
  code:      1000
  context:   !AmLeaderProcess - 
  query:     583860
  location:  scheduler.cpp:642
  process:   padbmaster [pid=31521]
  -----------------------------------------------;
    at com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(ErrorResponse.java:1830)
    at com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(PGMessagingContext.java:822)
    at com.amazon.redshift.client.PGMessagingContext.handleMessage(PGMessagingContext.java:647)
    at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(InboundMessagesPipeline.java:312)
    at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(PGMessagingContext.java:1080)
    at com.amazon.redshift.client.PGMessagingContext.getErrorResponse(PGMessagingContext.java:1048)
    at com.amazon.redshift.client.PGClient.handleErrorsScenario2ForPrepareExecution(PGClient.java:2524)
    at com.amazon.redshift.client.PGClient.handleErrorsPrepareExecute(PGClient.java:2465)
    at com.amazon.redshift.client.PGClient.executePreparedStatement(PGClient.java:1420)
    at com.amazon.redshift.dataengine.PGQueryExecutor.executePreparedStatement(PGQueryExecutor.java:370)
    at com.amazon.redshift.dataengine.PGQueryExecutor.execute(PGQueryExecutor.java:245)
    at com.amazon.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source)
    at com.amazon.jdbc.common.SPreparedStatement.execute(Unknown Source)
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:108)
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:108)
    at com.databricks.spark.redshift.JDBCWrapper$$anonfun$2.apply(RedshiftJDBCWrapper.scala:126)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
    at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused by: com.amazon.support.exceptions.ErrorException: [Amazon](500310) Invalid operation: Assert
The generated query:

UNLOAD ('SELECT "x","y" FROM (select x,y from table_name where
((load_date=20171226 and hour>=16) or (load_date between 20171227 and
20171226) or (load_date=20171227 and hour<=16))) ') TO 's3:s3path' WITH
CREDENTIALS 'aws_access_key_id=xxx;aws_secret_access_key=yyy' ESCAPE
MANIFEST

What is the problem here, and how can I fix it?

Assert errors usually occur when Redshift has trouble interpreting data types, for example in the two halves of a union query where column N is a varchar in one half and an integer or NULL in the other. An Assert error can likewise occur for data coming from different nodes, much as in a union query. Try adding an explicit cast for every column, e.g. x::integer.
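
Applied to the query above, the suggestion could look like this (a sketch reusing the session and connection placeholders from the read example; the target types integer/varchar are assumptions and should match the actual table definition):

# Cast every projected column so the types Redshift writes during UNLOAD
# are unambiguous. x::integer and y::varchar are assumed types; take the
# real ones from the table's DDL. The filter is simplified here.
query = ("select x::integer as x, y::varchar as y "
         "from table_name where load_date = 20171226")

df = (spark.read
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://host:5439/db?user=xxx&password=yyy")
      .option("query", query)
      .option("tempdir", "s3n://bucket/tempdir")
      .option("forward_spark_s3_credentials", "true")
      .load())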

Follow-up comments:

Answerer: Have you tried simplifying the query? You don't need the uppercase wrapper.

Asker: Actually, the query I use is only the inner part. The outer wrapper is generated when the data has to be unloaded to S3; I guess it comes from spark-redshift.

Answerer: What if you run the full generated query in Workbench? Does it return the same error?

Asker: With the UNLOAD statement, yes, it produces the same Assert error. The inner query it executes is fine on its own.

Answerer: Try adding an explicit cast for every column, like x::integer.