Amazon S3 / AWS Glue: unable to infer schema when loading incrementally with job bookmarks


I have an AWS Glue job that consumes partitioned data from S3 (Parquet files) and uses job bookmarks. I am running into problems when trying to do daily incremental loads with the job bookmark feature. This is how I read the data:

// Build the pushdown predicate so only the new daily partitions are scanned
val push: String = s"p_date > '$start' and (attribute == 'x' or attribute == 'y')"
logger.info("Using pushdown predicate: " + push)

val source = glueContext
  .getCatalogSource(
    database = "testbase",
    tableName = "testtable",
    pushDownPredicate = push,
    transformationContext = "source")
  .getDynamicFrame()
This is the Input-files.json that AWS Glue generates; it was created by the job bookmark logic after the initial full load. No new data should be processed, and that seems to be correctly reflected by the empty "files" sections.

However, instead of simply logging the skipped files, the following happens:

After final job bookmarks filter, processing 0.00% of 0 files in partition DynamicFramePartition(com.amazonaws.services.glue.DynamicRecord@7d679e8a,s3://path/to/bucket/attribute=x,1578972694000). 
After final job bookmarks filter, processing 0.00% of 0 files in partition DynamicFramePartition(com.amazonaws.services.glue.DynamicRecord@7d679e8a,s3://path/to/bucket/attribute=y,1578972694000).
My guess is that Glue now tries to create an empty DynamicFrame and fails with the following message:

ERROR ApplicationMaster: User class threw exception: org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
org.apache.spark.sql.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
    at org.apache.spark.sql.wrapper.SparkSqlDecoratorDataSource$$anonfun$3.apply(SparkSqlDecoratorDataSource.scala:38)
    at org.apache.spark.sql.wrapper.SparkSqlDecoratorDataSource$$anonfun$3.apply(SparkSqlDecoratorDataSource.scala:38)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.wrapper.SparkSqlDecoratorDataSource.getOrInferFileFormatSchema(SparkSqlDecoratorDataSource.scala:37)
    at org.apache.spark.sql.wrapper.SparkSqlDecoratorDataSource.resolveRelation(SparkSqlDecoratorDataSource.scala:53)
    at com.amazonaws.services.glue.SparkSQLDataSource$$anonfun$getDynamicFrame$8.apply(DataSource.scala:640)
    at com.amazonaws.services.glue.SparkSQLDataSource$$anonfun$getDynamicFrame$8.apply(DataSource.scala:604)
    at com.amazonaws.services.glue.util.FileSchemeWrapper$$anonfun$executeWithQualifiedScheme$1.apply(FileSchemeWrapper.scala:63)
    at com.amazonaws.services.glue.util.FileSchemeWrapper$$anonfun$executeWithQualifiedScheme$1.apply(FileSchemeWrapper.scala:63)
    at com.amazonaws.services.glue.util.FileSchemeWrapper.executeWith(FileSchemeWrapper.scala:57)
    at com.amazonaws.services.glue.util.FileSchemeWrapper.executeWithQualifiedScheme(FileSchemeWrapper.scala:63)
    at com.amazonaws.services.glue.SparkSQLDataSource.getDynamicFrame(DataSource.scala:603)
Have you seen similar behavior with AWS Glue before? I am considering implementing an "empty check" on the to-be-created DynamicFrame to keep the job from failing. Or do you know of an AWS-native solution that ensures job bookmarks work correctly?
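One way to sketch the "empty check" idea: since the stack trace shows the `AnalysisException` is thrown inside `getDynamicFrame()` itself, the check has to wrap the read call rather than inspect the frame afterwards. This is only a sketch under that assumption; `glueContext`, `push`, and `logger` are the values from the snippet above, and the exception type/message are taken from the trace in this question — adjust them if your Glue version reports the failure differently.

```scala
import scala.util.{Failure, Success, Try}
import org.apache.spark.sql.AnalysisException
import com.amazonaws.services.glue.DynamicFrame

// Wrap the catalog read; an empty bookmark-filtered partition list
// surfaces as "Unable to infer schema for Parquet" (see stack trace).
val maybeSource: Option[DynamicFrame] = Try {
  glueContext
    .getCatalogSource(
      database = "testbase",
      tableName = "testtable",
      pushDownPredicate = push,
      transformationContext = "source")
    .getDynamicFrame()
} match {
  case Success(frame) => Some(frame)
  case Failure(e: AnalysisException) if e.getMessage.contains("Unable to infer schema") =>
    logger.info("No new files since last bookmark; skipping this run.")
    None
  case Failure(other) => throw other  // anything else is a real error
}

maybeSource.foreach { source =>
  // ... normal transformations and sink writes go here ...
}
```

Note that even on a skipped run you would presumably still want `Job.commit()` at the end of the script so the bookmark state stays consistent; whether that is needed for a run that read nothing is worth verifying against the Glue documentation for your version.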
