
How to get the percentage of the total for each count after a groupBy in pyspark?


Given the following dataframe:

import findspark
findspark.init()
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("test").getOrCreate()
df = spark.createDataFrame([['a',1],['b', 2],['a', 3]], ['category', 'value'])
df.show()


+--------+-----+
|category|value|
+--------+-----+
|       a|    1|
|       b|    2|
|       a|    3|
+--------+-----+
I want to count the number of items in each category and, for each count, get its percentage of the total number of items, like this:

+--------+-----+----------+
|category|count|percentage|
+--------+-----+----------+
|       b|    1|     0.333|
|       a|    2|     0.667|
+--------+-----+----------+
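For reference, the same computation expressed in plain Python (no Spark) on the three rows above, just to make the desired numbers concrete:

```python
from collections import Counter

# The three (category, value) rows from the dataframe above.
data = [("a", 1), ("b", 2), ("a", 3)]

counts = Counter(cat for cat, _ in data)          # items per category
total = sum(counts.values())                      # total number of rows

# Percentage of the total for each category, rounded to 3 decimals.
percentages = {cat: round(n / total, 3) for cat, n in counts.items()}
print(percentages)  # {'a': 0.667, 'b': 0.333}
```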

You can get the count and the percentage/ratio of the total as follows:

import pyspark.sql.functions as f
from pyspark.sql.window import Window

df.groupBy('category').count()\
  .withColumn('percentage',
              f.round(f.col('count') / f.sum('count').over(Window.partitionBy()), 3))\
  .show()

+--------+-----+----------+
|category|count|percentage|
+--------+-----+----------+
|       b|    1|     0.333|
|       a|    2|     0.667|
+--------+-----+----------+
The statement above can be broken down into several steps. df.groupBy('category').count() produces the counts:

df.groupBy('category').count().show()

+--------+-----+
|category|count|
+--------+-----+
|       b|    1|
|       a|    2|
+--------+-----+

Then, by applying a window function, we can get the total count on each row:

df.groupBy('category').count().withColumn('total', f.sum('count').over(Window.partitionBy())).show()

+--------+-----+-----+
|category|count|total|
+--------+-----+-----+
|       b|    1|    3|
|       a|    2|    3|
+--------+-----+-----+
where the total column is computed by summing all the counts over a single partition that contains every row (Window.partitionBy() with no arguments).

Once we have the count and total for each row, we can compute the ratio:

df.groupBy('category')\
  .count()\
  .withColumn('total', f.sum('count').over(Window.partitionBy()))\
  .withColumn('percentage',f.col('count')/f.col('total'))\
  .show()

+--------+-----+-----+------------------+
|category|count|total|        percentage|
+--------+-----+-----+------------------+
|       b|    1|    3|0.3333333333333333|
|       a|    2|    3|0.6666666666666666|
+--------+-----+-----+------------------+
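The two window steps can be mirrored in plain Python to see what is happening (the list of dicts below is a hypothetical stand-in for the grouped rows, not the pyspark API):

```python
# Stand-in for the result of df.groupBy('category').count().
rows = [{"category": "b", "count": 1},
        {"category": "a", "count": 2}]

# Step 1: the empty window sums 'count' over all rows,
# so every row gets the same total.
total = sum(r["count"] for r in rows)
for r in rows:
    r["total"] = total

# Step 2: divide each count by that total to get the ratio.
for r in rows:
    r["percentage"] = r["count"] / r["total"]

print(rows)
```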
You can also group and aggregate with agg:

import pyspark.sql.functions as F

df.groupby('category').agg(
    F.count('value') / df.count()
).show()

Output:

+--------+------------------+
|category|(count(value) / 3)|
+--------+------------------+
|       b|0.3333333333333333|
|       a|0.6666666666666666|
+--------+------------------+
To make it nicer, you can use:

df.groupby('category').agg(
    (
        F.round(F.count('value') / df.count(), 2)
    ).alias('ratio')
).show()
Output:

+--------+-----+
|category|ratio|
+--------+-----+
|       b| 0.33|
|       a| 0.67|
+--------+-----+
You can also use SQL:

df.createOrReplaceTempView('df')

spark.sql(
    """
    SELECT category, COUNT(*) / (SELECT COUNT(*) FROM df) AS ratio
    FROM df
    GROUP BY category
    """
).show()
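The same query pattern can be tried outside Spark, for example against SQLite via the standard-library sqlite3 module (note the * 1.0: unlike Spark SQL, SQLite would otherwise perform integer division and truncate the ratio to 0):

```python
import sqlite3

# Build an in-memory table with the same three rows as the dataframe.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE df (category TEXT, value INTEGER)")
con.executemany("INSERT INTO df VALUES (?, ?)",
                [("a", 1), ("b", 2), ("a", 3)])

# Per-category count divided by a scalar subquery for the grand total.
rows = con.execute(
    """
    SELECT category,
           COUNT(*) * 1.0 / (SELECT COUNT(*) FROM df) AS ratio
    FROM df
    GROUP BY category
    """
).fetchall()
print(rows)
```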