Scala Spark 2: coalesce multiple columns at once


I'm trying to merge one DataFrame into another:

scala> addressOrigRenamed.show
+--------------+----------------------+-----------+-----------+
|orig_contactid|orig_contactaddresskey|orig_valueA|orig_valueB|
+--------------+----------------------+-----------+-----------+
|             1|                     1|         54|          3|
|             1|                     2|         55|          7|
+--------------+----------------------+-----------+-----------+
scala> dfNew.show
+---------+-----------------+------+------+
|contactId|contactaddresskey|valueA|valueB|
+---------+-----------------+------+------+
|        1|                2|    10|     9|
+---------+-----------------+------+------+
scala> val endDF = addressOrigRenamed.join(dfNew,
           $"orig_contactid" === $"contactid" &&
           $"orig_contactaddresskey" === $"contactaddresskey",
           "fullouter")
         .select(
           coalesce($"contactid", $"orig_contactid").alias("contactid"),
           coalesce($"contactaddresskey", $"orig_contactaddresskey").alias("contactaddresskey"),
           coalesce($"valueA", $"orig_valueA").alias("valueA"),
           coalesce($"valueB", $"orig_valueB").alias("valueB"))
scala> endDF.show
+---------+-----------------+------+------+
|contactid|contactaddresskey|valueA|valueB|
+---------+-----------------+------+------+
|        1|                1|    54|     3|
|        1|                2|    10|     9|
+---------+-----------------+------+------+

As you can see, this works, but the syntax is horrible. And this was only a test: I actually need to coalesce 15-20 columns, so writing coalesce(...).alias(...) 15-20 times by hand is really not an option. How can I write this more concisely?

You can build an array of coalesce expressions:

scala> val joinedDF = addressOrigRenamed.join(dfNew, $"orig_contactid" === $"contactid" && $"orig_contactaddresskey" === $"contactaddresskey", "fullouter")
scala> val arr = dfNew.columns.map(x => {
         // for every column of dfNew, fall back to its "orig_"-prefixed
         // counterpart from the other side of the join
         val y = "orig_" + x
         coalesce(joinedDF.col(x), joinedDF.col(y)).alias(x)
       })
Then you can select with this arr, remembering to splat its elements (the :_* varargs expansion):

scala> joinedDF.select(arr:_*).show 
+---------+-----------------+------+------+
|contactId|contactaddresskey|valueA|valueB|
+---------+-----------------+------+------+
|        1|                1|    54|     3|
|        1|                2|    10|     9|
+---------+-----------------+------+------+
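
If the join keys vary as well, the join condition can be generated the same way instead of being written out by hand. A minimal sketch of that extension, assuming the keys are contactid and contactaddresskey and the same "orig_" prefix convention as above:

import org.apache.spark.sql.functions.{coalesce, col}

// Assumed key columns; everything else is derived from dfNew's schema.
val keys = Seq("contactid", "contactaddresskey")

// Build one equality per key column and AND the pieces together.
val joinCond = keys
  .map(k => col("orig_" + k) === col(k))
  .reduce(_ && _)

val joinedDF = addressOrigRenamed.join(dfNew, joinCond, "fullouter")

// dfNew's values win; fall back to the original value where dfNew has no match.
val merged = joinedDF.select(
  dfNew.columns.map(c => coalesce(col(c), col("orig_" + c)).alias(c)): _*
)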