Counting distinct values in another column for the same ID in a PySpark DataFrame


I have a PySpark DataFrame that looks like this:

How can I show the count of each unique TIME under each ID, ordered by ID? The desired result is shown below.


Try using groupBy with count:

df.show()
#+---+-------------------+
#| ID|               TIME|
#+---+-------------------+
#|  1|07-24-2019,19:47:36|
#|  2|07-24-2019,20:43:39|
#|  1|07-24-2019,20:47:36|
#|  1|07-24-2019,19:47:36|
#+---+-------------------+

from pyspark.sql.functions import col, count

df.groupBy("ID", "TIME") \
  .agg(count(col("ID")).alias("count")) \
  .orderBy("ID", "TIME") \
  .show()

# or, equivalently, counting the TIME column
df.groupBy("ID", "TIME") \
  .agg(count(col("TIME")).alias("count")) \
  .orderBy("ID", "TIME") \
  .show()

#+---+-------------------+-----+
#| ID|               TIME|count|
#+---+-------------------+-----+
#|  1|07-24-2019,19:47:36|    2|
#|  1|07-24-2019,20:47:36|    1|
#|  2|07-24-2019,20:43:39|    1|
#+---+-------------------+-----+
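The grouping logic can be checked without a Spark session with a plain-Python sketch: counting `(ID, TIME)` pairs with `collections.Counter` and sorting mirrors `groupBy("ID", "TIME")` followed by `count` and `orderBy`. The sample rows below are taken from the table above.

```python
from collections import Counter

# Sample rows mirroring the DataFrame shown above
rows = [
    (1, "07-24-2019,19:47:36"),
    (2, "07-24-2019,20:43:39"),
    (1, "07-24-2019,20:47:36"),
    (1, "07-24-2019,19:47:36"),
]

# Count each (ID, TIME) pair, then sort by ID and TIME,
# matching groupBy("ID", "TIME").count().orderBy("ID", "TIME")
counts = Counter(rows)
result = sorted(counts.items())

for (id_, time), n in result:
    print(id_, time, n)
# 1 07-24-2019,19:47:36 2
# 1 07-24-2019,20:47:36 1
# 2 07-24-2019,20:43:39 1
```

This produces the same three groups and counts as the Spark output, which confirms the duplicate row for ID 1 at 19:47:36 is what yields the count of 2.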