PySpark: creating a complex DataFrame in (py)Spark


I'd like to know how to create the following "complex" JSON structure in (py)Spark (2.3.2):

Test dataset:

Code:

My target JSON structure is a dataset like this:

[
  (id) 1, (info) {"key1": "a", "info": [{"number": 1, "key2": "x1"}, {"number": 1, "key2": "x2"}]},
  (id) 2, (info) {"key1": "b", "info": [{"number": 2, "key2": "y1"}, {"number": 2, "key2": "y2"}]},
  (id) 3, (info) {"key1": "c", "info": [{"number": 3, "key2": "z"}]}
]
How can I achieve this? (Is it even possible?) Because I keep running into the following error:

org.apache.spark.sql.AnalysisException:
cannot resolve 'map('key1', `field1`, 'info', collect_list(map('number',
  CAST(`id` AS STRING), 'key2', CAST(`field2` AS STRING))))'
due to data type mismatch: The given values of function map should all be the same type,
  but they are [string, array<map<string,string>>]
What I understand from this error is that field1 is a string while the value of 'info' is not, but that is exactly how I want it. So, can I achieve this some other way?
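The error occurs because Spark's `create_map` produces a `map<string,string>`, which requires all values to share one type, while the target mixes a string ("key1") with an array of objects ("info"). (In Spark itself, a struct built with `F.struct` allows fields of different types, which is one way around this restriction, though the original post does not use it.) A minimal plain-Python sketch of the target shape, hard-coding the rows implied by the tables below:

```python
import json

# Plain-Python sketch of the target structure (not Spark code). "key1" maps
# to a string while "info" maps to an array of objects, so a single Spark
# map<string,string> cannot hold both values -- hence the AnalysisException.
rows = [
    (1, 'a', [(1, 'x1'), (1, 'x2')]),
    (2, 'b', [(2, 'y1'), (2, 'y2')]),
    (3, 'c', [(3, 'z')]),
]

target = [
    {'key1': field1,
     'info': [{'number': n, 'key2': k} for n, k in inner]}
    for _, field1, inner in rows
]

print(json.dumps(target[2]))  # {"key1": "c", "info": [{"number": 3, "key2": "z"}]}
```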

Thanks

I found a (hacky) way of doing it... I don't really like it, but since nobody in the community has posted an answer, I'm starting to think it isn't that easy.

First, I split the "big" aggregation into two parts:

from pyspark.sql import functions as F

out = df.groupBy('id', 'field1').agg(
    # Part 1: the outer map, with a placeholder where the array will go
    F.to_json(F.create_map(
        F.lit('key1'), F.col('field1'),
        F.lit('info'), F.lit('%%replace%%')
    )).alias('first'),
    # Part 2: the collected array of inner maps
    F.to_json(F.collect_list(F.create_map(
        F.lit('number'), F.col('id'),
        F.lit('key2'), F.col('field2')
    ))).alias('second')
)
This produces the following table:

+---+------+---------------------------------+-------------------------------------------------------+
|id |field1|first                            |second                                                 |
+---+------+---------------------------------+-------------------------------------------------------+
|3  |c     |{"key1":"c","info":"%%replace%%"}|[{"number":"3","key2":"z"}]                            |
|2  |b     |{"key1":"b","info":"%%replace%%"}|[{"number":"2","key2":"y1"},{"number":"2","key2":"y2"}]|
|1  |a     |{"key1":"a","info":"%%replace%%"}|[{"number":"1","key2":"x1"},{"number":"1","key2":"x2"}]|
+---+------+---------------------------------+-------------------------------------------------------+
Now splice them together:

df2 = out.withColumn('final', F.expr("REPLACE(first, '\"%%replace%%\"', second)")) \
         .drop('first', 'second')
df2.show(10, False)

+---+------+---------------------------------------------------------------------------+
|id |field1|final                                                                      |
+---+------+---------------------------------------------------------------------------+
|3  |c     |{"key1":"c","info":[{"number":"3","key2":"z"}]}                            |
|2  |b     |{"key1":"b","info":[{"number":"2","key2":"y1"},{"number":"2","key2":"y2"}]}|
|1  |a     |{"key1":"a","info":[{"number":"1","key2":"x1"},{"number":"1","key2":"x2"}]}|
+---+------+---------------------------------------------------------------------------+

A bit unorthodox, but Spark doesn't complain :)
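The splice step above can be sanity-checked in plain Python: replacing the quoted placeholder `"%%replace%%"` (including its surrounding quotes) with the serialized array swaps a JSON string value for a JSON array, leaving valid JSON. A minimal stdlib sketch, with the column values hard-coded from the id=3 row of the table above:

```python
import json

# Values as they appear in the 'first' and 'second' columns for id=3.
first = '{"key1":"c","info":"%%replace%%"}'
second = '[{"number":"3","key2":"z"}]'

# Replace the placeholder *including its surrounding quotes*, so the JSON
# string value is swapped for a JSON array and the result stays valid JSON.
final = first.replace('"%%replace%%"', second)

print(final)                # {"key1":"c","info":[{"number":"3","key2":"z"}]}
parsed = json.loads(final)  # parses cleanly, proving the splice is valid JSON
```

Note that this only works because `%%replace%%` is guaranteed not to occur in the real data; any collision would corrupt the output.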

I believe you could use the trick I used when solving another question, as shown there. In summary, I think this should be possible, but it will take some effort on your part.

Sorry @user238607, that doesn't help with my problem, because I also need to collect them.