Apache Spark AvroRuntimeException: Not a union: {"type":"long","logicalType":"timestamp-millis"}

Tags: apache-spark, hdfs, avro


I am trying to save data from a Spark DataFrame to HDFS using an Avro schema stored in a schema registry, but I get the following error while writing the data:

Caused by: org.apache.avro.AvroRuntimeException: Not a union: {"type":"long","logicalType":"timestamp-millis"}
    at org.apache.avro.Schema.getTypes(Schema.java:299)
    at org.apache.spark.sql.avro.AvroSerializer.org$apache$spark$sql$avro$AvroSerializer$$resolveNullableType(AvroSerializer.scala:229)
    at org.apache.spark.sql.avro.AvroSerializer$$anonfun$3.apply(AvroSerializer.scala:209)
    at org.apache.spark.sql.avro.AvroSerializer$$anonfun$3.apply(AvroSerializer.scala:208)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
    at scala.collection.immutable.List.map(List.scala:296)
    at org.apache.spark.sql.avro.AvroSerializer.newStructConverter(AvroSerializer.scala:208)
    at org.apache.spark.sql.avro.AvroSerializer.<init>(AvroSerializer.scala:51)
    at org.apache.spark.sql.avro.AvroOutputWriter.serializer$lzycompute(AvroOutputWriter.scala:42)
    at org.apache.spark.sql.avro.AvroOutputWriter.serializer(AvroOutputWriter.scala:42)
    at org.apache.spark.sql.avro.AvroOutputWriter.write(AvroOutputWriter.scala:64)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.write(FileFormatDataWriter.scala:137)
Here is a sample of the date format:

1900-01-01 00:00:00
The data type of this field in the Spark DataFrame:

|-- CreateDate: timestamp (nullable = true)
This is how I write the data:

dataDF.write
  .mode("append")
  .format("avro")
  .option(
    "avroSchema",
    SchemaRegistry.getSchema(
      schemaRegistryConfig.url,
      schemaRegistryConfig.dataSchemaSubject,
      schemaRegistryConfig.dataSchemaVersion))
  .save(hdfsURL)
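One likely cause is a nullability mismatch: the Spark column is nullable, but the registered Avro schema declares CreateDate as a bare logical type rather than a union with "null". A sketch of what the field in the registered schema could look like to match a nullable column (the record name here is a placeholder, not from the original schema):

```json
{
  "type": "record",
  "name": "Data",
  "fields": [
    {
      "name": "CreateDate",
      "type": ["null", {"type": "long", "logicalType": "timestamp-millis"}],
      "default": null
    }
  ]
}
```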

Comments:

The problem seems to be that CreateDate in your schema is not a union type but a plain long type, which Spark treats as a non-union, non-nullable timestamp-millis logical type; to convert the column to a nullable one, see the linked answer.

@YuvalItzchakov I'm not sure about that, because in the Spark DataFrame it is a timestamp type.

Oh, I overlooked the fact that the timestamp column is already set to nullable in the DF schema. Hmm... could you try debugging the AvroSerializer class and see how it handles this column? Also, does your dataDF have any other TimestampType fields?
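The stack trace points at AvroSerializer.resolveNullableType calling Schema.getTypes, which only works on union schemas. The following is a simplified Python re-implementation of that check (an assumption for illustration, not Spark's actual Scala code): a nullable Spark field requires the Avro type to be a union containing "null", otherwise the serializer fails exactly as above.

```python
# Simplified sketch (NOT Spark's real code) of the nullability check in
# AvroSerializer.resolveNullableType. A union is modeled as a Python list.

def resolve_nullable_type(avro_type, nullable):
    """Return the non-null branch for nullable fields, mimicking Spark's check."""
    if not nullable:
        # Non-nullable columns can use the Avro type as-is.
        return avro_type
    # For a nullable column Spark calls Schema.getTypes(), which throws
    # on anything that is not a union -- this is the reported error.
    if not isinstance(avro_type, list):
        raise RuntimeError("Not a union: %s" % avro_type)
    non_null = [t for t in avro_type if t != "null"]
    return non_null[0]

ts = {"type": "long", "logicalType": "timestamp-millis"}

# Bare logical type + nullable column: fails like the stack trace.
try:
    resolve_nullable_type(ts, nullable=True)
except RuntimeError as e:
    print("raised:", e)

# Union with "null" + nullable column: resolves cleanly.
print(resolve_nullable_type(["null", ts], nullable=True))
```

This is why the fix is usually on the schema side (wrap the field in a union with "null") rather than on the DataFrame side, since the column is already nullable.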