
Apache Spark: grouping into an array-type column in pyspark


I have a pyspark dataframe with the following structure:

[{id:1, value:"a"},
 {id: 2, value: "b"},
 {id: 1, value: "c"}
]
I want to transform it into the result below, ideally without using a UDF:

[{id: 1, value: ["a", "c"]},
 {id: 2, value: ["b"]}
]
You can try the following:

# assumes an existing SparkContext `sc` and SQLContext `sqlContext`,
# e.g. as provided in a pyspark shell session
df2 = sqlContext.read.json(sc.parallelize([{'id': 1, 'value': "a"},
                                           {'id': 2, 'value': "b"},
                                           {'id': 1, 'value': "c"}]))

df2.show()
+---+-----+
| id|value|
+---+-----+
|  1|    a|
|  2|    b|
|  1|    c|
+---+-----+
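On newer Spark versions, where a SparkSession is the usual entry point, an equivalent way to build the same sample dataframe is shown below (a minimal sketch, assuming an existing session object named `spark`):

# minimal sketch, assuming an existing SparkSession named `spark`
df2 = spark.createDataFrame(
    [(1, "a"), (2, "b"), (1, "c")],  # sample rows (id, value)
    ["id", "value"],                 # column names
)
df2.show()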
You can then group by id and collect the values into a list:

import pyspark.sql.functions as F
df2.groupBy('id').agg(F.collect_list('value')).show()

+---+-------------------+
| id|collect_list(value)|
+---+-------------------+
|  1|             [a, c]|
|  2|                [b]|
+---+-------------------+
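If you want the collected column to keep the name value, closer to the structure asked for in the question, you can alias the aggregate. This is a small sketch building on the answer above; F.collect_set can be used instead of F.collect_list if duplicate values should be dropped:

import pyspark.sql.functions as F

# same aggregation as above, but the array column is renamed to `value`
result = df2.groupBy('id').agg(F.collect_list('value').alias('value'))
result.show()

The resulting arrays are the same as above ([a, c] for id 1 and [b] for id 2), only the column is now named value.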