Scala: how to add a Seq[T] column to a Dataset containing elements of two Datasets?


I have two Datasets, AccountData and CustomerData, with corresponding case classes:

case class AccountData(customerId: String, accountId: String, balance: Long)

+----------+---------+-------+
|customerId|accountId|balance|
+----------+---------+-------+
|   IND0002|  ACC0002|    200|
|   IND0002|  ACC0022|    300|
|   IND0003|  ACC0003|    400|
+----------+---------+-------+


case class CustomerData(customerId: String, forename: String, surname: String)

+----------+-----------+--------+
|customerId|   forename| surname|
+----------+-----------+--------+
|   IND0001|Christopher|   Black|
|   IND0002|  Madeleine|    Kerr|
|   IND0003|      Sarah| Skinner|
+----------+-----------+--------+
How can I derive a Dataset that adds an accounts column holding the Seq[AccountData] for each customerId?

I have tried:

val joinCustomerAndAccount = accountDS
  .joinWith(customerDS, customerDS("customerId") === accountDS("customerId"))
  .drop(col("_2"))
which gives me the following DataFrame:

+---------------------+
|_1                   |
+---------------------+
|[IND0002,ACC0002,200]|
|[IND0002,ACC0022,300]|
|[IND0003,ACC0003,400]|
+---------------------+
If I then do:

val result = customerDS.withColumn("accounts", joinCustomerAndAccount("_1")(0)) 
I get the following exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Field name should be String Literal, but it's 0;
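The exception arises because Column.apply on a struct column expects a field name given as a String literal, not a positional index. A minimal sketch, using the _1 struct that joinWith produces:

// a struct field is selected by name, not by position:
joinCustomerAndAccount("_1")("customerId")   // ok: field name as a String literal
// joinCustomerAndAccount("_1")(0)           // fails: "Field name should be String Literal"

Even with a valid field name, withColumn cannot reference a column that belongs to a different DataFrame, so the accounts column has to be produced by the join itself.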

The accounts can be grouped by customerId and joined back to the customers:

// assumes: import spark.implicits._ and import org.apache.spark.sql.functions._
// data
val accountDS = Seq(
  AccountData("IND0002", "ACC0002", 200),
  AccountData("IND0002", "ACC0022", 300),
  AccountData("IND0003", "ACC0003", 400)
).toDS()

val customerDS = Seq(
  CustomerData("IND0001", "Christopher", "Black"),
  CustomerData("IND0002", "Madeleine", "Kerr"),
  CustomerData("IND0003", "Sarah", "Skinner")
).toDS()

// action
val accountsGroupedDF = accountDS.toDF
  .groupBy("customerId")
  .agg(collect_set(struct("accountId", "balance")).as("accounts"))

val result = customerDS.toDF.alias("c")
  .join(accountsGroupedDF.alias("a"), $"c.customerId" === $"a.customerId", "left")
  .select("c.*", "accounts")

result.show(false)
Output:

+----------+-----------+-------+--------------------------------+
|customerId|forename   |surname|accounts                        |
+----------+-----------+-------+--------------------------------+
|IND0001   |Christopher|Black  |null                            |
|IND0002   |Madeleine  |Kerr   |[[ACC0002, 200], [ACC0022, 300]]|
|IND0003   |Sarah      |Skinner|[[ACC0003, 400]]                |
+----------+-----------+-------+--------------------------------+
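If a strongly typed Seq[AccountData] column is preferred over the array of untyped structs above, the typed Dataset API can do the grouping as well. This is only a sketch: the result case class CustomerWithAccounts is made up for illustration, and it assumes the case classes and Datasets defined above plus import spark.implicits._:

case class CustomerWithAccounts(customerId: String, forename: String,
                                surname: String, accounts: Seq[AccountData])

// collect each customer's accounts into a Seq using the typed API
val accountsByCustomer = accountDS
  .groupByKey(_.customerId)
  .mapGroups((id, accounts) => (id, accounts.toSeq))

val typedResult = customerDS
  .joinWith(accountsByCustomer,
            customerDS("customerId") === accountsByCustomer("_1"),
            "left_outer")
  .map { case (customer, grouped) =>
    // grouped is null for customers without accounts (left outer join)
    val accounts = Option(grouped).map(_._2).getOrElse(Seq.empty[AccountData])
    CustomerWithAccounts(customer.customerId, customer.forename,
                         customer.surname, accounts)
  }

typedResult is a Dataset[CustomerWithAccounts], so the accounts column deserializes directly to Seq[AccountData] instead of an array of structs.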