Scala: How to fix the exception "java.math.BigDecimal is not a valid external type for schema of double" when re-applying a schema on a dataframe?


I am trying to move data from the table system_releases from Greenplum to Hive in the following way:

val yearDF = spark.read.format("jdbc")
  .option("url", "urltemplate;MaxNumericScale=30;MaxNumericPrecision=40;")
  .option("dbtable", s"(${execQuery}) as year2016")
  .option("user", "user")
  .option("password", "pwd")
  .option("partitionColumn", "release_number")
  .option("lowerBound", 306)
  .option("upperBound", 500)
  .option("numPartitions", 2)
  .load()
The schema Spark inferred for the dataframe yearDF:

description:string
status_date:timestamp
time_zone:string
table_refresh_delay_min:decimal(38,30)
online_patching_enabled_flag:string
release_number:decimal(38,30)
change_number:decimal(38,30)
interface_queue_enabled_flag:string
rework_enabled_flag:string
smart_transfer_enabled_flag:string
patch_number:decimal(38,30)
threading_enabled_flag:string
drm_gl_source_name:string
reverted_flag:string
table_refresh_delay_min_text:string
release_number_text:string
change_number_text:string
I have the same table on Hive with the following data types:

val hiveCols = "description:string,status_date:timestamp,time_zone:string,table_refresh_delay_min:double,online_patching_enabled_flag:string,release_number:double,change_number:double,interface_queue_enabled_flag:string,rework_enabled_flag:string,smart_transfer_enabled_flag:string,patch_number:double,threading_enabled_flag:string,drm_gl_source_name:string,reverted_flag:string,table_refresh_delay_min_text:string,release_number_text:string,change_number_text:string"
The columns table_refresh_delay_min, release_number, change_number and patch_number come through with far too many decimal places, even though GP does not have that many. So I saved the dataframe as a CSV file to see how Spark reads the data. For example, the maximum release_number on GP is 306.00, but in the CSV file where I saved the dataframe yearDF, the value comes out as 306.000000000000000000.
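A quick way to confirm this without the CSV round-trip is to aggregate the column directly (a sketch, using the release_number column from the schema above):

import org.apache.spark.sql.functions.max

// decimal(38,30) keeps 30 fractional digits, so the value prints padded
// with trailing zeros rather than the 306.00 stored in Greenplum.
yearDF.agg(max("release_number")).show(false)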

I tried taking the Hive table schema, converting it to a StructType, and applying it to yearDF, as shown below:

import org.apache.spark.sql.types._

// Map a Hive column type string to the corresponding Spark SQL DataType.
def convertDatatype(datatype: String): DataType = {
  datatype match {
    case "string"     => StringType
    case "bigint"     => LongType
    case "int"        => IntegerType
    case "double"     => DoubleType
    case "date"       => TimestampType
    case "boolean"    => BooleanType
    case "timestamp"  => TimestampType
  }
}
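As a side note (my observation, not from the original post): this match is non-exhaustive, so any type string it does not list fails at runtime, which matters once decimal types enter the picture:

convertDatatype("double")         // returns DoubleType
convertDatatype("decimal(38,30)") // throws scala.MatchError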

val schemaList        = hiveCols.split(",")
val schemaStructType  = new StructType(schemaList.map(col => col.split(":")).map(e => StructField(e(0), convertDatatype(e(1)), true)))
val newDF = spark.createDataFrame(yearDF.rdd, schemaStructType)
newDF.write.format("csv").save("hdfs/location")
But I got this error:

Caused by: java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of double
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalIfFalseExpr8$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_2$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
    ... 17 more
I tried casting the decimal columns to DoubleType in the following way, but I still face the same exception:

// Find every DecimalType column with a non-zero scale and cast it to Double.
val pattern = """DecimalType\(\d+,(\d+)\)""".r
val df2 = dataDF.dtypes
  .collect { case (dn, dt) if pattern.findFirstMatchIn(dt).map(_.group(1)).getOrElse("0") != "0" => dn }
  .foldLeft(dataDF)((accDF, c) => accDF.withColumn(c, col(c).cast("Double")))

Caused by: java.lang.RuntimeException: java.math.BigDecimal is not a valid external type for schema of double
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.evalIfFalseExpr8$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_2$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
    ... 17 more
After trying both of the above approaches I am out of ideas. Could anyone tell me how to properly cast the dataframe's columns to the required data types?

In this case, when you convert the RDD to a DF, you need to specify exactly the same types that the Spark schema uses.

For example, when you ran printSchema on the yearDF DataFrame, you got the following:
description:string
status_date:timestamp
time_zone:string
table_refresh_delay_min:decimal(38,30)
online_patching_enabled_flag:string
release_number:decimal(38,30)
change_number:decimal(38,30)
interface_queue_enabled_flag:string
rework_enabled_flag:string
smart_transfer_enabled_flag:string
patch_number:decimal(38,30)
threading_enabled_flag:string
drm_gl_source_name:string
reverted_flag:string
table_refresh_delay_min_text:string
release_number_text:string
change_number_text:string
When converting the RDD to a DF, the fields that are decimal(38,30) must be specified as DecimalType(38,30), not the DoubleType you used.

Hope it helps.
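To make that concrete, here is a minimal sketch of the fix: keep the decimal columns as DecimalType(38,30) when re-applying the schema, so the java.math.BigDecimal values in the rows line up with it, and only cast them to double afterwards. The decimalCols set is a helper introduced here for illustration:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types._

// The four columns that Spark inferred as decimal(38,30).
val decimalCols = Set("table_refresh_delay_min", "release_number",
                      "change_number", "patch_number")

// Build the schema with the types the rows actually contain: the external
// values are java.math.BigDecimal, so these fields must be DecimalType,
// not DoubleType.
val schemaStructType = StructType(hiveCols.split(",").map { c =>
  val Array(name, hiveType) = c.split(":")
  val dt = if (decimalCols.contains(name)) DecimalType(38, 30)
           else convertDatatype(hiveType)
  StructField(name, dt, nullable = true)
})

val newDF = spark.createDataFrame(yearDF.rdd, schemaStructType)

// With a schema that matches the rows, the decimals can safely be cast
// down to double to match the Hive table.
val doubleDF = decimalCols.foldLeft(newDF)((df, c) => df.withColumn(c, col(c).cast("double")))
doubleDF.write.format("csv").save("hdfs/location")

Alternatively, the cast-first attempt from the question works on its own, as long as the cast is applied to yearDF itself and no schema containing DoubleType is re-applied to the un-cast rows afterwards.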
