Apache Spark / PySpark: split an array and get the keys

Tags: apache-spark, pyspark, apache-spark-sql, pyspark-dataframes

I have a dataframe containing an array of key-value-pair strings, and I only want the keys from those pairs. The number of key-value pairs per row is dynamic, and the naming conventions differ.

Sample Input

+------+-----+--------------------------------+
|ID    |data | value                          |
+------+-----+--------------------------------+
|e1    |D1   |["K1":"V1","K2":"V2","K3":"V3"] |
|e2    |D2   |["K1":"V1","K3":"V3"]           |
|e3    |D1   |["K1":"V1","K2":"V2"]           |
|e4    |D3   |["K2":"V2","K1":"V1","K3":"V3"] |
+------+-----+--------------------------------+


Expected Result:

+------+-----+------------+
|ID    |data | value      |
+------+-----+------------+
|e1    |D1   |[K1|K2|K3]  |
|e2    |D2   |[K1|K3]     |
|e3    |D1   |[K1|K2]     |
|e4    |D3   |[K2|K1|K3]  |
+------+-----+------------+
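The per-element logic the answers below rely on (take everything before the first `:`, then strip the surrounding double quotes) can be checked in plain Python first. This is only a sketch of the string manipulation, not Spark itself; `extract_key` is a hypothetical helper name:

```python
def extract_key(pair: str) -> str:
    """Take the part before the first ':' (like substring_index(pair, ':', 1))
    and strip surrounding double quotes (like trim(BOTH '"' FROM ...))."""
    return pair.split(":", 1)[0].strip('"')

# One row's array of key-value-pair strings, as in the sample input.
row = ['"K1":"V1"', '"K2":"V2"', '"K3":"V3"']
print([extract_key(p) for p in row])  # ['K1', 'K2', 'K3']
```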
For Spark 2.4+, use the `transform` function.

For each element of the array, take the substring before the first `:` with `substring_index`, and trim the leading and trailing quotes with `trim`:

df.show(truncate=False)
#+---+----+------------------------------------+
#|ID |data|value                               |
#+---+----+------------------------------------+
#|e1 |D1  |["K1":"V1", "K2": "V2", "K3": "V3"] |
#|e2 |D2  |["K1": "V1", "K3": "V3"]            |
#|e3 |D1  |["K1": "V1", "K2": "V2"]            |
#|e4 |D3  |["K2": "V2", "K1": "V1", "K3": "V3"]|
#+---+----+------------------------------------+    

from pyspark.sql.functions import expr

new_value = """transform(value, x -> trim(BOTH '"' FROM substring_index(x, ':', 1)))"""
df.withColumn("value", expr(new_value)).show()

#+---+----+------------+
#|ID |data|value       |
#+---+----+------------+
#|e1 |D1  |[K1, K2, K3]|
#|e2 |D2  |[K1, K3]    |
#|e3 |D1  |[K1, K2]    |
#|e4 |D3  |[K2, K1, K3]|
#+---+----+------------+
If you want the result to be a single string delimited by `|`, you can use `array_join` like this:

from pyspark.sql.functions import array_join

df.withColumn("value", array_join(expr(new_value), "|")).show()
#+---+----+--------+
#|ID |data|value   |
#+---+----+--------+
#|e1 |D1  |K1|K2|K3|
#|e2 |D2  |K1|K3   |
#|e3 |D1  |K1|K2   |
#|e4 |D3  |K2|K1|K3|
#+---+----+--------+
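`array_join(col, sep)` simply concatenates the array's elements with the given separator, so the per-row result matches Python's `str.join`. A plain-Python sketch of that string logic (not Spark itself):

```python
# Keys extracted from one row, joined the way array_join(..., "|") joins them.
keys = ["K1", "K2", "K3"]
print("|".join(keys))  # K1|K2|K3
```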

Alternatively, you can split each element on `:` into its key and value parts and keep only the first part:

from pyspark.sql.functions import expr

df.withColumn("keys", expr('transform(value, keyValue -> trim(split(keyValue, ":")[0]))')).drop("value")
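One caveat worth checking: plain `trim` in Spark SQL strips only surrounding whitespace, not quote characters, so this variant leaves the double quotes around each key. A plain-Python mirror of the per-element logic (a sketch only; `key_of` is a hypothetical helper name):

```python
def key_of(pair: str) -> str:
    """Mimic trim(split(pair, ':')[0]): split on ':', keep the first part,
    and strip whitespace only -- the surrounding quotes remain."""
    return pair.split(":")[0].strip()

print(key_of('"K1":"V1"'))  # "K1"  (quotes are still there)
```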


Right, ArrayType columns have some limitations as iterables. So you can change it to: df.withColumn("keys", expr('transform(value, keyValue -> trim(split(keyValue, ":")[0]))')).drop("value").