Apache Spark: how to merge map columns in Spark SQL?
I have two map-type columns in a DataFrame. Is there a way to create a new map column in Spark SQL that merges these two columns, using .withColumn?

Tags: apache-spark, apache-spark-sql
val sampleDF = Seq(
  ("Jeff", Map("key1" -> "val1"), Map("key2" -> "val2"))
).toDF("name", "mapCol1", "mapCol2")

sampleDF.show()

+----+-----------------+-----------------+
|name|          mapCol1|          mapCol2|
+----+-----------------+-----------------+
|Jeff|Map(key1 -> val1)|Map(key2 -> val2)|
+----+-----------------+-----------------+
You can write a UDF and use withColumn to merge the two map columns into one, as follows.
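The original snippet is not reproduced here; a minimal sketch of such a UDF, assuming both columns are Map[String, String] (the helper name mergeMaps is illustrative):

import org.apache.spark.sql.functions.{col, udf}

// Merge two string-keyed maps; on duplicate keys, entries from the second map win
val mergeMaps = udf((map1: Map[String, String], map2: Map[String, String]) =>
  map1 ++ map2
)

sampleDF
  .withColumn("merged", mergeMaps(col("mapCol1"), col("mapCol2")))
  .show(false)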
This should give you:
+----+-----------------+-----------------+-------------------------------+
|name|mapCol1          |mapCol2          |merged                         |
+----+-----------------+-----------------+-------------------------------+
|Jeff|Map(key1 -> val1)|Map(key2 -> val2)|Map(key1 -> val1, key2 -> val2)|
+----+-----------------+-----------------+-------------------------------+
I hope this answer helps.
You can achieve this with struct:
val sampleDF = Seq(
  ("Jeff", Map("key1" -> "val1"), Map("key2" -> "val2"))
).toDF("name", "mapCol1", "mapCol2")

sampleDF.show()

+----+-----------------+-----------------+
|name|          mapCol1|          mapCol2|
+----+-----------------+-----------------+
|Jeff|Map(key1 -> val1)|Map(key2 -> val2)|
+----+-----------------+-----------------+
sampleDF.withColumn("NewColumn",struct(sampleDF("mapCol1"), sampleDF("mapCol2"))).take(2)
res17: Array[org.apache.spark.sql.Row] = Array([Jeff,Map(key1 -> val1),Map(key2 -> val2),[Map(key1 -> val1),Map(key2 -> val2)]])
+----+-----------------+-----------------+--------------------+
|name| mapCol1| mapCol2| NewColumn|
+----+-----------------+-----------------+--------------------+
|Jeff|Map(key1 -> val1)|Map(key2 -> val2)|[Map(key1 -> val1...|
+----+-----------------+-----------------+--------------------+
For performance reasons, use a UDF only if there is no built-in function for your use case.
Spark version 2.4 and above:
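The built-in map_concat function (added in Spark 2.4) does this directly. A minimal sketch against the sample DataFrame above, naming the new column map_concat to match the output below:

import org.apache.spark.sql.functions.{col, map_concat}

// map_concat merges the entries of both maps into a single map column
sampleDF
  .withColumn("map_concat", map_concat(col("mapCol1"), col("mapCol2")))
  .show(false)

Note that map_concat does not deduplicate keys by itself; from Spark 3.0 the spark.sql.mapKeyDedupPolicy setting controls what happens when both maps contain the same key.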
Output:
+----+-----------------+-----------------+-------------------------------+
|name|mapCol1          |mapCol2          |map_concat                     |
+----+-----------------+-----------------+-------------------------------+
|Jeff|Map(key1 -> val1)|Map(key2 -> val2)|Map(key1 -> val1, key2 -> val2)|
+----+-----------------+-----------------+-------------------------------+
Spark versions below 2.4:
Create a UDF following @RameshMaharjan's approach, but with a null check added to avoid an NPE at runtime; without it, a null in either column would eventually fail the job:
import org.apache.spark.sql.functions.{udf, col}

// Null-safe merge: if either map is null, return the other;
// otherwise keys from map2 overwrite duplicates in map1
val map_concat = udf((map1: Map[String, String],
                      map2: Map[String, String]) =>
  if (map1 == null) {
    map2
  } else if (map2 == null) {
    map1
  } else {
    map1 ++ map2
  })

sampleDF.withColumn("map_concat", map_concat(col("mapCol1"), col("mapCol2")))
  .show(false)
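With the sample DataFrame this produces the same merged column as the built-in version above: Map(key1 -> val1, key2 -> val2).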
Thanks! This works, but is there a way to do it without a UDF?
You can use the array or struct built-in functions, but I don't think the result would be what the user wants. @Nats: you can now use map_concat; check my answer to this question.
This doesn't merge the maps; it creates a struct containing two fields that are maps.