PySpark reduceByKey aggregation after collect_list on a Python column


Taking the example below, I want to aggregate over the 'states' collected by collect_list.

Example code: My code: What I want is:

20170901,[('TX', 3), ('CA', 1 )]
20170902,[('TX', 2), ('CA', 2 )]
I think the first step is to flatten the collect_list result. I have tried:

udf(lambda x: list(chain.from_iterable(x)), StringType())
udf(lambda items: list(chain.from_iterable(itertools.repeat(x, 1) if isinstance(x, str) else x for x in items)))
udf(lambda l: [item for sublist in l for item in sublist])


But no luck so far. The next step would be to build key/value pairs and reduce them, and I have been stuck here for a while. Could a Spark expert help with the logic? Thanks for any help.
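A minimal sketch of that flatten-and-reduceByKey idea, assuming the data is a pair RDD of (date, list of states) like the toy data used in the answers below (state_counts is only an illustrative name):

rdd = sc.parallelize([('20170901', ['TX', 'TX', 'CA', 'TX']),
                      ('20170902', ['TX', 'CA', 'CA']),
                      ('20170902', ['TX'])])

state_counts = (rdd
    .flatMap(lambda kv: [((kv[0], s), 1) for s in kv[1]])   # emit ((date, state), 1)
    .reduceByKey(lambda a, b: a + b)                        # ((date, state), count)
    .map(lambda kv: (kv[0][0], (kv[0][1], kv[1])))          # (date, (state, count))
    .groupByKey()
    .mapValues(list))

print(state_counts.collect())
# e.g. [('20170901', [('TX', 3), ('CA', 1)]), ('20170902', [('TX', 2), ('CA', 2)])]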

You can do it with reduce and Counter inside a udf. I tried it my way; hope this helps.

>>> from functools import reduce
>>> from collections import Counter
>>> from pyspark.sql.types import *
>>> from pyspark.sql import functions as F
>>> rdd = sc.parallelize([('20170901',['TX','TX','CA','TX']), ('20170902', ['TX','CA','CA']), ('20170902',['TX']) ])
>>> df = spark.createDataFrame(rdd, ["datatime", "actionlist"])
>>> df = df.groupBy("datatime").agg(F.collect_list("actionlist").alias("actionlist"))
>>> def someudf(row):
        # row is a list of lists (one inner list per original row): flatten, then count
        value = reduce(lambda x,y:x+y,row)
        return Counter(value).most_common()

>>> schema = ArrayType(StructType([
    StructField("char", StringType(), False),
    StructField("count", IntegerType(), False)]))

>>> udf1 = F.udf(someudf,schema)
>>> df.select('datatime',udf1(df.actionlist)).show(2,False)
+--------+-------------------+
|datatime|someudf(actionlist)|
+--------+-------------------+
|20170902|[[TX,2], [CA,2]]   |
|20170901|[[TX,3], [CA,1]]   |
+--------+-------------------+
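For comparison, the same counts can be produced without a Python UDF, using explode and a second groupBy. A sketch, starting from the DataFrame as it is before the groupBy/collect_list step (df_raw and result are only illustrative names, and the order of pairs inside each list is not guaranteed):

from pyspark.sql import functions as F

df_raw = spark.createDataFrame(rdd, ["datatime", "actionlist"])      # datatime, actionlist: array<string>

result = (df_raw
    .withColumn("state", F.explode("actionlist"))                    # one row per state occurrence
    .groupBy("datatime", "state").count()                            # count per (date, state)
    .groupBy("datatime")
    .agg(F.collect_list(F.struct("state", "count")).alias("state_counts")))

result.show(truncate=False)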


Just use combineByKey():


from collections import Counter

count = rdd.combineByKey(lambda v: Counter(v),            # create a Counter from the first list seen for a key
                         lambda c, v: c + Counter(v),     # fold further lists for the same key into it
                         lambda c1, c2: c1 + c2)          # merge partial Counters across partitions
print(count.collect())
# [('20170901', Counter({'TX': 3, 'CA': 1})), ('20170902', Counter({'CA': 2, 'TX': 2}))]

This solution is perfect, thank you! The next step for me is to get an RDD from a DataFrame column so I can handle the actual requirement. Thanks for your help!
Glad it helped! :)
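As for that follow-up about getting an RDD from a DataFrame column: a minimal sketch, assuming the same example data and column names as above (df_raw and pair_rdd are only illustrative names):

from collections import Counter

df_raw = spark.createDataFrame(rdd, ["datatime", "actionlist"])               # same toy data as above
pair_rdd = df_raw.rdd.map(lambda row: (row["datatime"], row["actionlist"]))   # back to (date, [states]) pairs

count = pair_rdd.combineByKey(lambda v: Counter(v),
                              lambda c, v: c + Counter(v),
                              lambda c1, c2: c1 + c2)
print(count.mapValues(lambda c: c.most_common()).collect())
# e.g. [('20170901', [('TX', 3), ('CA', 1)]), ('20170902', [('TX', 2), ('CA', 2)])]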