Apache Spark: select values from multiple columns of a DataFrame based on a condition


Data schema:

root
 |-- id: string (nullable = true)
 |-- col1: string (nullable = true)
 |-- col2: string (nullable = true)


|id|col1         |col2               |
|1 |["x","y","z"]|[123,"null","null"]|
From the data above, I want to check whether "x" exists in col1 and, if it does, pick up the corresponding value of "x" from col2. (col1 and col2 line up by position: if "x" is at index 2 of col1, its value is at index 2 of col2.)

Expected result (col1 and col2 must remain array-typed):

|id|col1 |col2 |
|1 |["x"]|[123]|

If "x" does not exist in col1, the result should look like this:

|id|col1    |col2    |
|1 |["null"]|["null"]|
I tried:

val df1 = df.withColumn("result", when($"col1".contains("x"), "x").otherwise("null"))

The trick is to convert the data from dumb string columns into a more useful data structure. Once col1 and col2 are rebuilt as arrays (or as a map, as your expected output suggests), you can use Spark's built-in functions rather than the messy UDFs suggested by @baitmbarek.

First, use trim and split to turn col1 and col2 into arrays:

scala> val df = Seq(
     |       ("1", """["x","y","z"]""","""[123,"null","null"]"""),
     |         ("2", """["a","y","z"]""","""[123,"null","null"]""")
     |     ).toDF("id","col1","col2")
df: org.apache.spark.sql.DataFrame = [id: string, col1: string ... 1 more field]

scala> val df_array = df.withColumn("col1", split(trim($"col1", "[\"]"), "\"?,\"?"))
                        .withColumn("col2", split(trim($"col2", "[\"]"), "\"?,\"?"))
df_array: org.apache.spark.sql.DataFrame = [id: string, col1: array<string> ... 1 more field]

scala> df_array.show(false)
+---+---------+-----------------+
|id |col1     |col2             |
+---+---------+-----------------+
|1  |[x, y, z]|[123, null, null]|
|2  |[a, y, z]|[123, null, null]|
+---+---------+-----------------+


scala> df_array.printSchema
root
 |-- id: string (nullable = true)
 |-- col1: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- col2: array (nullable = true)
 |    |-- element: string (containsNull = true)
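
As a side note (my own addition, not part of the original answer): once col1 is a real array, the filtering idea from the question can be expressed directly with built-in predicates such as array_contains:

scala> df_array.filter(array_contains($"col1", "x")).show(false)
+---+---------+-----------------+
|id |col1     |col2             |
+---+---------+-----------------+
|1  |[x, y, z]|[123, null, null]|
+---+---------+-----------------+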


(Comment on the question: which Spark version are you using, and is the number of fields in the arrays fixed?)

Next, zip col1 and col2 together with arrays_zip and turn the resulting key/value pairs into a map:

scala> val df_map = df_array.select(
                        $"id", 
                        map_from_entries(arrays_zip($"col1", $"col2")).as("col_map")
                        )
df_map: org.apache.spark.sql.DataFrame = [id: string, col_map: map<string,string>]

scala> df_map.show(false)
+---+--------------------------------+
|id |col_map                         |
+---+--------------------------------+
|1  |[x -> 123, y -> null, z -> null]|
|2  |[a -> 123, y -> null, z -> null]|
+---+--------------------------------+
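
To make the lookup used in the final step explicit (my own illustration, not part of the original answer): element_at on a map returns the value stored under the given key, or null when the key is absent. The alias x_value below is only for display.

scala> df_map.select($"id", element_at($"col_map", "x").as("x_value")).show
+---+-------+
| id|x_value|
+---+-------+
|  1|    123|
|  2|   null|
+---+-------+

The final projection then just wraps that lookup in when/otherwise to produce the array-typed columns: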
scala> val df_final = df_map.select(
                                $"id",
                                when(isnull(element_at($"col_map", "x")), 
                                    array(lit("null")))
                                .otherwise(
                                    array(lit("x")))
                                .as("col1"),  
                                when(isnull(element_at($"col_map", "x")), 
                                    array(lit("null")))
                                .otherwise(
                                    array(element_at($"col_map", "x")))
                                .as("col2")
                                )
df_final: org.apache.spark.sql.DataFrame = [id: string, col1: array<string> ... 1 more field]

scala> df_final.show
+---+------+------+
| id|  col1|  col2|
+---+------+------+
|  1|   [x]| [123]|
|  2|[null]|[null]|
+---+------+------+
scala> df_final.printSchema
root
 |-- id: string (nullable = true)
 |-- col1: array (nullable = false)
 |    |-- element: string (containsNull = false)
 |-- col2: array (nullable = false)
 |    |-- element: string (containsNull = true)
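
For completeness, here is an alternative sketch of the same idea (my own, not taken from the answer above; it assumes Spark 2.4+, where array_position and element_at are available): skip the intermediate map and index col2 directly with the position of "x" in col1. array_position returns the 1-based position of the element, or 0 when it is absent, and the when guard keeps element_at from ever being evaluated with an index of 0.

// Assumed alternative, starting from the df_array built earlier.
// These imports are already in scope in the spark-shell session above.
import org.apache.spark.sql.functions.{array, array_position, element_at, lit, when}
import spark.implicits._

val pos = array_position($"col1", "x")   // 1-based index of "x", 0 when absent

val df_alt = df_array.select(
  $"id",
  when(pos > 0, array(lit("x"))).otherwise(array(lit("null"))).as("col1"),
  when(pos > 0, array(element_at($"col2", pos.cast("int"))))
      .otherwise(array(lit("null"))).as("col2")
)

This should produce the same rows and schema as df_final above.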