Apache Spark: exploding a JSON string column in PySpark


I have a dataframe:

data = [(42, """[{"param_a":9519,"param_b":7,"param_c":64},{"param_a":7483,"param_b":7,"param_c":1},{"param_a":1032,"param_b":7,"param_c":0}]""")]
df = spark.createDataFrame(data, ['key', 'value'])
value
is a string column, but it contains valid JSON. How can I explode this column so that the output dataframe has 3 rows and the following structure:

output_df: [(key, param_a, param_b, param_c)]

There is a JSON string in the "value" column. Try this:

import json

data = [(42,json.loads("""[{"param_a":9519,"param_b":7,"param_c":64},{"param_a":7483,"param_b":7,"param_c":1},{"param_a":1032,"param_b":7,"param_c":0}]""")) ]
hdr=['id','c1']
df = spark.createDataFrame(data, hdr)
df.show(100,truncate=False)

+---+----------------------------------------------------------------------------------------------------------------------------------------------+
|id |c1                                                                                                                                            |
+---+----------------------------------------------------------------------------------------------------------------------------------------------+
|42 |[[param_a -> 9519, param_b -> 7, param_c -> 64], [param_a -> 7483, param_b -> 7, param_c -> 1], [param_a -> 1032, param_b -> 7, param_c -> 0]]|
+---+----------------------------------------------------------------------------------------------------------------------------------------------+

df.printSchema()
df.createOrReplaceTempView("df")

root
 |-- id: long (nullable = true)
 |-- c1: array (nullable = true)
 |    |-- element: map (containsNull = true)
 |    |    |-- key: string
 |    |    |-- value: long (valueContainsNull = true)

spark.sql("""
select id, explode(c1) c2 from df
""").show(100, truncate=False)

+---+----------------------------------------------+
|id |c2                                            |
+---+----------------------------------------------+
|42 |[param_a -> 9519, param_b -> 7, param_c -> 64]|
|42 |[param_a -> 7483, param_b -> 7, param_c -> 1] |
|42 |[param_a -> 1032, param_b -> 7, param_c -> 0] |
+---+----------------------------------------------+

spark.sql("""
select id, c2["param_a"] param_a, c2["param_b"] param_b, c2["param_c"] param_c from (
select id, explode(c1) c2 from df )
""").show(100, truncate=False)

+---+-------+-------+-------+
|id |param_a|param_b|param_c|
+---+-------+-------+-------+
|42 |9519   |7      |64     |
|42 |7483   |7      |1      |
|42 |1032   |7      |0      |
+---+-------+-------+-------+
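
For comparison, the same two steps (explode, then pull out each key of the map) can also be written with the DataFrame API instead of a temp view and SQL. A minimal sketch, assuming the same df with columns id and c1 as above:

from pyspark.sql import functions as F

# one row per map in the array, then extract each key as its own column
(df.select("id", F.explode("c1").alias("c2"))
   .select("id",
           F.col("c2")["param_a"].alias("param_a"),
           F.col("c2")["param_b"].alias("param_b"),
           F.col("c2")["param_c"].alias("param_c"))
   .show(truncate=False))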

To use Spark's built-in JSON support, you can parse the value field with the from_json function and then explode the result to split it into individual rows.

This approach is particularly useful for large amounts of data that cannot be handled on the Spark driver. A small drawback is that the JSON schema has to be specified explicitly.

from pyspark.sql import functions as F

# DDL schema for the JSON array; field names and types taken from the sample data
schema = "array<struct<param_a: int, param_b: int, param_c: int>>"

df.withColumn("parsed", F.from_json(F.col("value"), schema)) \
  .withColumn("exploded", F.explode("parsed")) \
  .select("key", "exploded.*") \
  .show()

prints

+---+-------+-------+-------+
|键|参数a |参数b |参数c|
+---+-------+-------+-------+
| 42|   9519|      7|     64|
| 42|   7483|      7|      1|
| 42|   1032|      7|      0|
+---+-------+-------+-------+
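
If writing the DDL string by hand is inconvenient, the same schema can be built programmatically with ArrayType/StructType. A sketch under the assumption that all rows share the layout shown above:

from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StructType, StructField, IntegerType

# programmatic equivalent of "array<struct<param_a: int, param_b: int, param_c: int>>"
schema = ArrayType(StructType([
    StructField("param_a", IntegerType()),
    StructField("param_b", IntegerType()),
    StructField("param_c", IntegerType()),
]))

df.withColumn("parsed", F.from_json("value", schema)) \
  .withColumn("exploded", F.explode("parsed")) \
  .select("key", "exploded.*") \
  .show()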

Great, thanks. But what should I do when the dataframe contains several different, more complex JSON formats? How would you handle that?