Merging JSON files by key using PySpark


The JSON files have the following format:

**Input-** 

{'key-a' : [{'key1':'value1', 'key2':'value2'},{'key1':'value3', 'key2':'value4'}...], 
'key-b':'value-b', 
'key-c':'value-c'},
{'key-a' : [{'key1':'value5', 'key2':'value6'},{'key1':'value7', 'key2':'value8'}...], 
'key-b':'value-b', 
'key-c':'value-c'}
I need to merge the data so that all values of "key-a" are combined, and return a single JSON object as output:

**Output-** 
{'key-a' : 
[{'key1':'value1', 'key2':'value2'},
{'key1':'value3', 'key2':'value4'},
{'key1':'value5', 'key2':'value6'},
{'key1':'value7', 'key2':'value8'}...], 
'key-b':'value-b', 
'key-c':'value-c'}
The data is loaded into a PySpark DataFrame with the following schema:

**Schema:**

key-a
|-- key1: string (nullable = false)
|-- key2: string (nullable = true)
key-b: string (nullable = true)
key-c: string (nullable = false)
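
(A minimal loading sketch, not part of the original question: assuming the objects are stored as valid, double-quoted JSON, one object per line, in a hypothetical file input.json, a DataFrame with this schema could be produced as follows.)

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("merge-json-by-key").getOrCreate()

# spark.read.json infers the schema above: key-a becomes an array of
# structs (key1, key2), key-b and key-c become plain strings.
# "input.json" is a hypothetical path used for illustration only.
df = spark.read.json("input.json")
df.printSchema()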
I have tried using the groupByKey function, but when I call show() on the output I get the following error: "'GroupedData' object has no attribute 'show'".

How can I achieve the above transformation?


PFA: the error screenshot is attached.

This could be a working solution:

# Create the DataFrame here
from pyspark.sql import SparkSession, functions as F, types as T

spark = SparkSession.builder.getOrCreate()

# Each input record is kept as one raw JSON string in a single "value" column
df_new = spark.createDataFrame(
    [str({"key-a": [{"key1": "value1", "key2": "value2"}, {"key1": "value3", "key2": "value4"}], "key-b": "value-b"}),
     str({"key-a": [{"key1": "value5", "key2": "value6"}, {"key1": "value7", "key2": "value8"}], "key-b": "value-b"})],
    T.StringType())
df_new.show(truncate=False)
+-----------------------------------------------------------------------------------------------------------+
|value                                                                                                      |
+-----------------------------------------------------------------------------------------------------------+
|{'key-a': [{'key1': 'value1', 'key2': 'value2'}, {'key1': 'value3', 'key2': 'value4'}], 'key-b': 'value-b'}|
|{'key-a': [{'key1': 'value5', 'key2': 'value6'}, {'key1': 'value7', 'key2': 'value8'}], 'key-b': 'value-b'}|
+-----------------------------------------------------------------------------------------------------------+
First derive a column using from_json with a suitable schema. The idea here is to pull the JSON keys out into a column and then use groupBy.

# Parse each JSON string into a map<string, string>, then explode it into
# key/value pairs (columns x and y); nested values stay as JSON strings
df = df_new.withColumn('col', F.from_json("value", T.MapType(T.StringType(), T.StringType())))
df = df.select("col", F.explode("col").alias("x", "y"))
df.select("x", "y").show(truncate=False)
+-----+---------------------------------------------------------------------+
|x    |y                                                                    |
+-----+---------------------------------------------------------------------+
|key-a|[{"key1":"value1","key2":"value2"},{"key1":"value3","key2":"value4"}]|
|key-b|value-b                                                              |
|key-a|[{"key1":"value5","key2":"value6"},{"key1":"value7","key2":"value8"}]|
|key-b|value-b                                                              |
+-----+---------------------------------------------------------------------+
The logic here: in order to collapse everything into a single row for grouping, we create a dummy column.

# Collect all values per key, then gather every (key, values) pair into one row
df_grp = df.groupBy("x").agg(F.collect_set("y").alias("y"))
df_grp = df_grp.withColumn("y", F.col("y").cast(T.StringType()))
df_grp = df_grp.withColumn("array", F.array("x", "y"))
df_grp = df_grp.withColumn("dummy_col", F.lit("1"))
df_grp = df_grp.groupBy("dummy_col").agg(F.collect_set("array"))
df_grp.show(truncate=False)

+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|dummy_col|collect_set(array)                                                                                                                                                           |
+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|1        |[[key-a, [[{"key1":"value1","key2":"value2"},{"key1":"value3","key2":"value4"}], [{"key1":"value5","key2":"value6"},{"key1":"value7","key2":"value8"}]]], [key-b, [value-b]]]|
+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
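
Beyond the dummy-column grouping above, one possible follow-up (a hedged sketch, not part of the original answer) is to rebuild a single JSON object on the driver from the exploded key/value rows in df, which is closer to the output format asked for in the question:

import json

# Rebuild one Python dict from the exploded (x, y) rows; for key-a the value
# is a JSON array string, so parse it and concatenate the lists.
merged = {}
for row in df.select("x", "y").collect():
    if row["x"] == "key-a":
        merged.setdefault("key-a", []).extend(json.loads(row["y"]))
    else:
        merged[row["x"]] = row["y"]

print(json.dumps(merged, indent=2))

This collects the rows to the driver, so it is only suitable when the merged result is small enough to fit there.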
Regarding the error you quoted ("'GroupedData' object has no attribute 'show'"): this happens because you are not applying any aggregation function after the groupBy clause. groupBy alone returns a GroupedData object, and only an aggregation turns it back into a DataFrame that supports show().
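
As a small illustration of this point (a hedged sketch, assuming a hypothetical DataFrame df_q that already has the question's schema and identical key-b / key-c values across the records to merge), applying an aggregation first gives back a regular DataFrame, and it also happens to merge key-a directly:

from pyspark.sql import functions as F

# df_q is a hypothetical DataFrame with the question's schema
# (key-a: array of structs, key-b / key-c: plain strings).
# df_q.groupBy("key-b", "key-c").show() would raise the same error,
# because GroupedData has no show(); aggregate first instead.
merged_df = (df_q.groupBy("key-b", "key-c")
                 .agg(F.flatten(F.collect_list("key-a")).alias("key-a")))
merged_df.show(truncate=False)  # show() works on the aggregated DataFrame
# Note: F.flatten requires Spark 2.4+.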

Follow-up comments:

- (Asker) It gives a syntax error on the line df_grp = df.groupBy("x").agg(F.collect_set("y").alias("y")).
- (Answerer) Can you post the error? It shouldn't; this is tested code.
- (Asker) I attached the error screenshot to the question above.
- (Answerer) The syntax looks correct, so it seems to be something else. To debug further I would probably need to see the full code. Meanwhile, if this solution/approach helps you, I would appreciate an accept and an upvote.