
Python PySpark: get the max and min non-zero values of a column


I have a dataframe that looks like this:

+-------+--------------------+--------------------+--------------+---------+----------+
|  label|              app_id|               title|download_count|entity_id|risk_score|
+-------+--------------------+--------------------+--------------+---------+----------+
|ANDROID|com.aaron.test.ze...|          Aaron Test|             0|      124|       100|
|ANDROID|com.boulderdailyc...|Boulder Daily Cam...|           100|      122|       100|
|ANDROID|com.communitybank...|     Budgeting Tools|             0|      123|       100|
|ANDROID|com.communitybank...|   PB Mobile Banking|           600|      123|       100|
|ANDROID|com.mendocinobeac...|Mendocino Beacon ...|            10|      122|       100|
|ANDROID|com.profitstars.t...|Johnson City Mobi...|           500|      123|       100|
|ANDROID|com.spreedinc.pro...|Oneida Dispatch f...|          1000|      122|       100|
+-------+--------------------+--------------------+--------------+---------+----------+
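
(If you want to reproduce this locally, a minimal sketch that builds an equivalent sample dataframe follows; the column types are assumed from the table above and the truncated app_id/title values are kept abbreviated.)

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Schema assumed from the table above; app_id and title values are abbreviated.
df = spark.createDataFrame(
    [
        ("ANDROID", "com.aaron.test.ze...", "Aaron Test", 0, 124, 100),
        ("ANDROID", "com.boulderdailyc...", "Boulder Daily Cam...", 100, 122, 100),
        ("ANDROID", "com.communitybank...", "Budgeting Tools", 0, 123, 100),
        ("ANDROID", "com.communitybank...", "PB Mobile Banking", 600, 123, 100),
        ("ANDROID", "com.mendocinobeac...", "Mendocino Beacon ...", 10, 122, 100),
        ("ANDROID", "com.profitstars.t...", "Johnson City Mobi...", 500, 123, 100),
        ("ANDROID", "com.spreedinc.pro...", "Oneida Dispatch f...", 1000, 122, 100),
    ],
    ["label", "app_id", "title", "download_count", "entity_id", "risk_score"],
)
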
I would like to get the non-zero max and min download_count values grouped by entity_id. I am not quite sure how to do this with an aggregation; a plain max and min of course won't work:

from pyspark.sql import functions as F

apps_by_entity = (
    group_by_entity_id(df)  # helper that groups df by entity_id
    .agg(F.min(df.download_count), F.max(df.download_count), F.count("entity_id").alias("app_count"))
    .withColumnRenamed("max(download_count)", "download_max")
    .withColumnRenamed("min(download_count)", "download_min")
)
since the minimum for entities 123 and 124 comes out as 0:

+---------+------------+------------+---------+
|entity_id|download_min|download_max|app_count|
+---------+------------+------------+---------+
|      124|           0|           0|        1|
|      123|           0|         600|        3|
|      122|          10|        1000|        3|
+---------+------------+------------+---------+
The desired output would be something like

+---------+------------+------------+---------+
|entity_id|download_min|download_max|app_count|
+---------+------------+------------+---------+
|      124|           0|           0|        1|
|      123|         500|         600|        3|
|      122|          10|        1000|        3|
+---------+------------+------------+---------+

Is there a way to do this with an aggregation? If not, what would be the best way to get the non-zero values? In the case where max = min = 0, simply returning 0 or null would be fine.

I am not sure whether it is possible to exclude zeros while doing the min/max aggregation without losing the count.
One way to achieve the output is to do the (min, max) aggregation and the count separately and then join them back together:

from pyspark.sql.functions import col
from pyspark.sql import functions as F

# Min/max computed only over the non-zero download counts.
min_max_df = (
    df.filter(col("download_count") != 0)
      .groupBy("entity_id")
      .agg(F.min("download_count").alias("download_min"),
           F.max("download_count").alias("download_max"))
      .withColumnRenamed("entity_id", "entity_id_1")
)

# Count computed over all rows so that apps with zero downloads are not lost.
count_df = df.groupBy("entity_id").agg(F.count("download_count").alias("app_count"))

# Left join keeps entities whose downloads are all zero; fill their min/max with 0.
(count_df.join(min_max_df, count_df.entity_id == min_max_df.entity_id_1, "left")
         .drop("entity_id_1")
         .fillna(0, subset=["download_min", "download_max"])
         .show())

+---------+---------+------------+------------+
|entity_id|app_count|download_min|download_max|
+---------+---------+------------+------------+
|      124|        1|           0|           0|
|      123|        3|         500|         600|
|      122|        3|          10|        1000|
+---------+---------+------------+------------+
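
As a side note (not from the original answer), it is probably also possible to do this in a single aggregation by turning the zeros into nulls with F.when, since min and max ignore nulls; a minimal sketch assuming the same df as above:

from pyspark.sql import functions as F

# Zeros become null inside the aggregation, so min/max skip them
# while the count still sees every row.
nonzero = F.when(F.col("download_count") != 0, F.col("download_count"))

(df.groupBy("entity_id")
   .agg(F.min(nonzero).alias("download_min"),
        F.max(nonzero).alias("download_max"),
        F.count("entity_id").alias("app_count"))
   .fillna(0, subset=["download_min", "download_max"])  # entity 124 shows 0 instead of null
   .show())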

Aha, I was just about to answer my own question, but this is exactly what I ended up doing. Thanks for writing it up!