PySpark DataFrame: conditional groupBy

Is there a way to group a DataFrame by a column, or by no columns at all, depending on some condition? This is roughly what I am trying to do; the iff(...) call in the last line of the code below is pseudocode for the conditional grouping I have in mind:

from pyspark.sql import Row, functions as F
row = Row("UK_1","UK_2","Date","Cat")
agg = ''     # empty: do not group by any column
agg = 'Cat'  # set: group by the 'Cat' column
tdf = (sc.parallelize([
        row(1,1,'12/10/2016',"A"),
        row(1,2,None,'A'),
        row(2,1,'14/10/2016','B'),
        row(3,3,'!~2016/2/276','B'),
        row(None,1,'26/09/2016','A'),
        row(1,1,'12/10/2016',"A"),
        row(1,2,None,'A'),
        row(2,1,'14/10/2016','B'),
        row(None,None,'!~2016/2/276','B'),
        row(None,1,'26/09/2016','A')
        ]).toDF())
# pseudocode -- there is no iff() in Python or PySpark, so this does not run:
tdf.groupBy(iff(len(agg.strip()) > 0, F.col(agg), )).agg(F.count('*').alias('row_count')).show()

If the condition you are looking for is not met, you can pass an empty list to groupBy, which groups by no columns:

tdf.groupBy(agg if len(agg) > 0 else []).agg(...)
agg = ''
tdf.groupBy(agg if len(agg) > 0 else []).agg(F.count('*').alias('row_count')).show()
+---------+
|row_count|
+---------+
|       10|
+---------+

agg = 'Cat'
tdf.groupBy(agg if len(agg) > 0 else []).agg(F.count('*').alias('row_count')).show()
+---+---------+
|Cat|row_count|
+---+---------+
|  B|        4|
|  A|        6|
+---+---------+
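
The empty-list trick also generalizes to more than one optional grouping column. The helper below is only a sketch under that assumption: the name conditional_count is hypothetical (not from the original post), and it assumes the tdf DataFrame and SparkContext from the question are already available.

from pyspark.sql import functions as F

def conditional_count(df, group_cols):
    # Drop empty/None entries so '' behaves like "no grouping column".
    cols = [c for c in (group_cols or []) if c]
    # With an empty list, groupBy aggregates over the whole DataFrame.
    return df.groupBy(cols).agg(F.count('*').alias('row_count'))

conditional_count(tdf, ['Cat']).show()          # grouped by Cat
conditional_count(tdf, []).show()               # one row with the total count
conditional_count(tdf, ['Cat', 'UK_1']).show()  # grouped by two columns

Note that calling tdf.groupBy() with no arguments behaves the same as passing an empty list: the aggregation runs over the entire DataFrame and returns a single row.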