Apache Spark: how to aggregate using a window instead of PySpark groupBy

I'm having trouble aggregating per user (in my case, user ids 110 and 222) with a window function instead of groupBy. I need:

1 - a count of rows for each p_uuid
2 - the min and max timestamp for each p_uuid

df = spark.createDataFrame([(1, 110, 'aaa', 'walk', 'work', '2019-09-28 13:40:19-04:00'),
                        (2, 110, 'aaa', 'walk', 'work', '2019-09-28 13:40:19-04:01'),
                        (3, 110, 'aaa', 'walk', 'work', '2019-09-28 13:40:19-04:02'),
                        (4, 110, 'aaa', 'metro', 'work', '2019-09-28 13:41:19-04:00'),
                        (5, 110, 'aaa', 'metro', 'work', '2019-09-28 13:41:19-04:01'),
                        (6, 110, 'aaa', 'walk', 'work', '2019-09-28 13:42:19-04:00'),
                        (7, 110, 'aaa', 'walk', 'work', '2019-09-28 13:42:19-04:01'),
                        (8, 110, 'bbb', 'bike', 'home', '2019-09-17 14:40:19-04:00'),
                        (9, 110, 'bbb', 'bus', 'home', '2019-09-17 14:41:19-04:00'),
                        (10, 110, 'bbb', 'walk', 'home', '2019-09-17 14:43:19-04:00'),
                        (16, 110, 'ooo', None, None, '2019-08-29 16:01:19-04:00'),
                        (17, 110, 'ooo', None, None, '2019-08-29 16:02:19-04:00'),
                        (18, 110, 'ooo', None, None, '2019-08-29 16:02:19-04:00'),
                        (19, 222, 'www', 'car', 'work', '2019-09-28 08:00:19-04:00'),
                        (20, 222, 'www', 'metro', 'work', '2019-09-28 08:01:19-04:00'),
                        (21, 222, 'www', 'walk', 'work', '2019-09-28 08:02:19-04:00'),
                        (22, 222, 'xxx', 'walk', 'friend', '2019-09-17 08:40:19-04:00'),
                        (23, 222, 'xxx', 'bike', 'friend', '2019-09-17 08:42:19-04:00'),
                        (24, 222, 'xxx', 'bus', 'friend', '2019-09-17 08:43:19-04:00'),
                        (30, 222, 'ooo', None, None, '2019-08-29 10:00:19-04:00'),
                        (31, 222, 'ooo', None, None, '2019-08-29 10:01:19-04:00'),
                        (32, 222, 'ooo', None, None, '2019-08-29 10:02:19-04:00')],
                    ['idx', 'u_uuid', 'p_uuid', 'mode', 'place', 'timestamp']
                )
df.show(30, False)

I tried:

from pyspark.sql import Window
from pyspark.sql import functions as F

win = Window.partitionBy("u_uuid", "p_uuid").orderBy("timestamp")
df = (df.withColumn("count_", F.count("p_uuid").over(win))
        .withColumn("max_timestamp", F.max("timestamp").over(win))
        .withColumn("min_timestamp", F.min("timestamp").over(win)))
It doesn't seem to work (for example, when getting the max value).

Note: forget about trip_id, subtrip_id and track_id.

You need to use unboundedPreceding, unboundedFollowing. By default, when an orderBy clause is provided, the window frame only runs from unboundedPreceding to currentRow.

Add .rowsBetween to the window spec and run again:

win = Window.partitionBy("u_uuid", "p_uuid").orderBy("timestamp").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
Example:

df.withColumn("max_timestamp", F.max("timestamp").over(win)).show(10, False)
+---+------+------+-----+------+-------------------------+-------------------------+
|idx|u_uuid|p_uuid|mode |place |timestamp                |max_timestamp            |
+---+------+------+-----+------+-------------------------+-------------------------+
|8  |110   |bbb   |bike |home  |2019-09-17 14:40:19-04:00|2019-09-17 14:43:19-04:00|
|9  |110   |bbb   |bus  |home  |2019-09-17 14:41:19-04:00|2019-09-17 14:43:19-04:00|
|10 |110   |bbb   |walk |home  |2019-09-17 14:43:19-04:00|2019-09-17 14:43:19-04:00|
|16 |110   |ooo   |null |null  |2019-08-29 16:01:19-04:00|2019-08-29 16:02:19-04:00|
|17 |110   |ooo   |null |null  |2019-08-29 16:02:19-04:00|2019-08-29 16:02:19-04:00|
|18 |110   |ooo   |null |null  |2019-08-29 16:02:19-04:00|2019-08-29 16:02:19-04:00|
+---+------+------+-----+------+-------------------------+-------------------------+
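Putting it together, here is a minimal sketch (assuming the same df and the F / Window imports from the question) that computes all three aggregates the question asks for over the corrected window:

win = (Window.partitionBy("u_uuid", "p_uuid")
             .orderBy("timestamp")
             .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing))

# With the frame extended to the whole partition, every row in a
# (u_uuid, p_uuid) group gets the same count, min and max.
result = (df.withColumn("count_", F.count("p_uuid").over(win))
            .withColumn("min_timestamp", F.min("timestamp").over(win))
            .withColumn("max_timestamp", F.max("timestamp").over(win)))

result.show(30, False)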

You have to use rowsBetween to extend the window frame to the whole partition:

win = Window.partitionBy("u_uuid", "p_uuid").orderBy("timestamp").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
Otherwise F.max("timestamp") does not work as expected: for p_uuid='aaa', for example, the max_timestamp column must contain '2019-09-28 13:42:19-04:01'.
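For comparison, a sketch of the groupBy version under the same assumptions about df and F: groupBy collapses each (u_uuid, p_uuid) group into a single row, whereas the window version above keeps every original row and just annotates it with the aggregates.

# One row per (u_uuid, p_uuid) group instead of one aggregate per input row.
agg = (df.groupBy("u_uuid", "p_uuid")
         .agg(F.count("p_uuid").alias("count_"),
              F.min("timestamp").alias("min_timestamp"),
              F.max("timestamp").alias("max_timestamp")))

agg.show(10, False)

# To attach these values to every row without a window function, this
# aggregated result would have to be joined back to df on (u_uuid, p_uuid).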