Scala: explode multiple columns of the same type but of different lengths

I have a Spark DataFrame in the following format that needs to be exploded. I have checked other similar solutions; however, in my case, before and after can be arrays of different lengths:

root
 |-- id: string (nullable = true)
 |-- before: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- start_time: string (nullable = true)
 |    |    |-- end_time: string (nullable = true)
 |    |    |-- area: string (nullable = true)
 |-- after: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- start_time: string (nullable = true)
 |    |    |-- end_time: string (nullable = true)
 |    |    |-- area: string (nullable = true)
For example, if the DataFrame has only one row, where before is an array of size 2 and after is an array of size 3, the exploded version should have 5 rows, with the following schema:

root
 |-- id: string (nullable = true)
 |-- type: string (nullable = true)
 |-- start_time: integer (nullable = false)
 |-- end_time: string (nullable = true)
 |-- area: string (nullable = true)
where type is a new column whose value is either "before" or "after".
I can do this with two separate explodes, creating the type column in each, and then a union:
val dfSummary1 = df
  .withColumn("before_exp", explode($"before"))
  .withColumn("type", lit("before"))
  .withColumn("start_time", $"before_exp.start_time")
  .withColumn("end_time", $"before_exp.end_time")
  .withColumn("area", $"before_exp.area")
  .drop("before_exp", "before", "after") // drop "after" too, so both halves share the same schema for the union

val dfSummary2 = df
  .withColumn("after_exp", explode($"after"))
  .withColumn("type", lit("after"))
  .withColumn("start_time", $"after_exp.start_time")
  .withColumn("end_time", $"after_exp.end_time")
  .withColumn("area", $"after_exp.area")
  .drop("after_exp", "after", "before")

val dfResult = dfSummary1.unionAll(dfSummary2)

However, I was wondering whether there is a more elegant way to do this. Thanks.

I think exploding the two columns separately, followed by a union, is a fairly straightforward approach. You can simplify the StructField element selection a bit and factor the repetitive explode logic into a simple method, like below:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.DataFrame

case class Area(start_time: String, end_time: String, area: String)

val df = Seq((
  "1", Seq(Area("01:00", "01:30", "10"), Area("02:00", "02:30", "20")),
  Seq(Area("07:00", "07:30", "70"), Area("08:00", "08:30", "80"), Area("09:00", "09:30", "90"))
)).toDF("id", "before", "after")

// Explode one array column: tag each row with the column name in a
// "type" column, then expand the exploded struct's fields via ".*".
def explodeCol(df: DataFrame, colName: String): DataFrame = {
  val expColName = colName + "_exp"
  df.
    withColumn("type", lit(colName)).
    withColumn(expColName, explode(col(colName))).
    select("id", "type", expColName + ".*")
}

val dfResult = explodeCol(df, "before") union explodeCol(df, "after")

dfResult.show
// +---+------+----------+--------+----+
// | id|  type|start_time|end_time|area|
// +---+------+----------+--------+----+
// |  1|before|     01:00|   01:30|  10|
// |  1|before|     02:00|   02:30|  20|
// |  1| after|     07:00|   07:30|  70|
// |  1| after|     08:00|   08:30|  80|
// |  1| after|     09:00|   09:30|  90|
// +---+------+----------+--------+----+
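
If there were more than two such array columns, the same helper generalizes: map explodeCol over the column names and fold the results with union. A minimal sketch, assuming the df and explodeCol defined above (the arrayCols list itself is hypothetical):

val arrayCols = Seq("before", "after")  // hypothetical: list every array column to explode

// One exploded DataFrame per column, combined pairwise with union.
val dfAllCols = arrayCols.map(explodeCol(df, _)).reduce(_ union _)
dfAllCols.show()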

You can also achieve this without a union. With this data:

case class Area(start_time: String, end_time: String, area: String)

val df = Seq((
  "1", Seq(Area("01:00", "01:30", "10"), Area("02:00", "02:30", "20")),
  Seq(Area("07:00", "07:30", "70"), Area("08:00", "08:30", "80"), Area("09:00", "09:30", "90"))
)).toDF("id", "before", "after")
you can do:

df
  .select($"id",
    explode(
      array(
        struct(lit("before").as("type"), $"before".as("data")),
        struct(lit("after").as("type"), $"after".as("data"))
      )
    ).as("step1")
  )
 .select($"id",$"step1.type", explode($"step1.data").as("step2"))
 .select($"id",$"type", $"step2.*")
 .show()

+---+------+----------+--------+----+
| id|  type|start_time|end_time|area|
+---+------+----------+--------+----+
|  1|before|     01:00|   01:30|  10|
|  1|before|     02:00|   02:30|  20|
|  1| after|     07:00|   07:30|  70|
|  1| after|     08:00|   08:30|  80|
|  1| after|     09:00|   09:30|  90|
+---+------+----------+--------+----+
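
One caveat worth noting (a hedged variant, not part of the original answer): explode drops a row entirely when its array is empty or null, so an id whose before or after list is empty would vanish from the result. Spark's explode_outer keeps such rows, emitting nulls for the struct fields; swapping it into the second select looks like this:

df
  .select($"id",
    explode(
      array(
        struct(lit("before").as("type"), $"before".as("data")),
        struct(lit("after").as("type"), $"after".as("data"))
      )
    ).as("step1")
  )
  // explode_outer keeps ids with an empty/null data array, producing
  // null start_time/end_time/area instead of dropping the row
  .select($"id", $"step1.type", explode_outer($"step1.data").as("step2"))
  .select($"id", $"type", $"step2.*")
  .show()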

What I did was to explode them separately and then union (or join on id).
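
For completeness, a minimal sketch of that idea (assuming the same df as in the answers above), using the SQL inline() generator via selectExpr, which expands an array of structs directly into one column per struct field:

// inline(arr) behaves like explode(arr) followed by selecting the struct's fields
val dfBefore = df.selectExpr("id", "'before' as type", "inline(before)")
val dfAfter  = df.selectExpr("id", "'after' as type", "inline(after)")

val dfCombined = dfBefore union dfAfter
dfCombined.show()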