Python: how to count events over '1 hour' windows cumulatively within a day in PySpark

I have a Spark DataFrame that looks like this:

+---------+--------------------------+
|group_id |event_time                |
+---------+--------------------------+
|XXXX | 2017-10-25 14:47:02.717013|
|XXXX | 2017-10-25 14:47:25.444979|
|XXXX | 2017-10-25 14:49:32.21353|
|YYYY | 2017-10-25 14:50:38.321134|
|YYYY | 2017-10-25 14:51:12.028447|
|ZZZZ | 2017-10-25 14:51:24.810688|
|YYYY | 2017-10-25 14:37:34.241097|
|ZZZZ | 2017-10-25 14:37:24.427836|
|XXXX | 2017-10-25 14:37:24.620864|
|YYYY | 2017-10-25 14:37:24.964614|
+---------+--------------------------+
I want to compute, for each group_id, a rolling count of events per hour accumulated over the day. So for the datetime 25-10 14:00 and a given group_id, I want the count of events for that group_id from 25-10 00:00 up to 25-10 14:00.

Doing something like this:

from pyspark.sql.functions import count, lit, window

df.groupBy('group_id', window('event_time', '1 hour').alias('model_window')) \
    .agg(count(lit(1)).alias('values'))
counts the events per hour, but not the cumulative count over the day.

Any ideas?

Edit: the expected output would look like this:

+---------+---------------------------------------------+-------+
|group_id |model_window                                 |values |
+---------+---------------------------------------------+-------+
|XXXX     |[2017-10-25 00:00:00.0,2017-10-25 01:00:00.0]|     10|
|XXXX     |[2017-10-25 00:00:00.0,2017-10-25 02:00:00.0]|     17|
|XXXX     |[2017-10-25 00:00:00.0,2017-10-25 03:00:00.0]|     22|
|YYYY     |[2017-10-25 00:00:00.0,2017-10-25 01:00:00.0]|      0|
|YYYY     |[2017-10-25 00:00:00.0,2017-10-25 02:00:00.0]|      1|
|YYYY     |[2017-10-25 00:00:00.0,2017-10-25 03:00:00.0]|      9|
+---------+---------------------------------------------+-------+
To compute the rolling count of events per hour within a day for each group_id:

Extract the date and hour:

from pyspark.sql.functions import col, count, hour, sum

extended = (df
  .withColumn("event_time", col("event_time").cast("timestamp"))
  .withColumn("date", col("event_time").cast("date"))
  .withColumn("hour", hour(col("event_time"))))
Compute the aggregates:

aggs = extended.groupBy("group_id", "date", "hour").count()
and use a window function to get the rolling count of events:

from pyspark.sql.window import Window

aggs.withColumn(
    "agg_count", 
    sum("count").over(Window.partitionBy("group_id", "date").orderBy("hour")))
To get 0 for the missing intervals, you have to generate reference data for every date and hour and join it in.
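A minimal sketch of that idea, assuming a SparkSession named spark and the extended and aggs DataFrames from above (hours, reference and filled are just illustrative names):

from pyspark.sql.functions import coalesce, col, lit

# Assumes an active SparkSession named `spark`.
# All 24 hours of a day as a one-column DataFrame.
hours = spark.range(24).select(col("id").cast("int").alias("hour"))

# Every (group_id, date, hour) combination for the dates seen in the data.
reference = (extended
    .select("group_id", "date")
    .distinct()
    .crossJoin(hours))

# Left-join the hourly aggregates and fill the missing hours with 0;
# the same window sum as above then yields a cumulative count without gaps.
filled = (reference
    .join(aggs, ["group_id", "date", "hour"], "left")
    .withColumn("count", coalesce(col("count"), lit(0))))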

With df defined as:
df = sc.parallelize([
    ("XXXX", "2017-10-25 01:47:02.717013"),
    ("XXXX", "2017-10-25 14:47:25.444979"),
    ("XXXX", "2017-10-25 14:49:32.21353"),
    ("YYYY", "2017-10-25 14:50:38.321134"),
    ("YYYY", "2017-10-25 14:51:12.028447"),
    ("ZZZZ", "2017-10-25 14:51:24.810688"),
    ("YYYY", "2017-10-25 14:37:34.241097"),
    ("ZZZZ", "2017-10-25 14:37:24.427836"),
    ("XXXX", "2017-10-25 22:37:24.620864"),
    ("YYYY", "2017-10-25 16:37:24.964614")
]).toDF(["group_id", "event_time"])
the result is:

+--------+----------+----+-----+---------+                                      
|group_id|      date|hour|count|agg_count|
+--------+----------+----+-----+---------+
|    XXXX|2017-10-25|   1|    1|        1|
|    XXXX|2017-10-25|  14|    2|        3|
|    XXXX|2017-10-25|  22|    1|        4|
|    ZZZZ|2017-10-25|  14|    2|        2|
|    YYYY|2017-10-25|  14|    3|        3|
|    YYYY|2017-10-25|  16|    1|        4|
+--------+----------+----+-----+---------+
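
If the output should match the shape of the expected table in the question, the date and hour can be folded back into a window-like struct. A rough sketch along those lines (the start/end field names and the unix-timestamp arithmetic are just one way to build the window):

from pyspark.sql.functions import col, struct, sum, unix_timestamp
from pyspark.sql.window import Window

# Start of the day and end of the covered hour (day start + (hour + 1) * 3600 s).
day_start = col("date").cast("timestamp")
hour_end = (unix_timestamp(day_start) + (col("hour") + 1) * 3600).cast("timestamp")

result = (aggs
    .withColumn(
        "values",
        sum("count").over(Window.partitionBy("group_id", "date").orderBy("hour")))
    .withColumn("model_window", struct(day_start.alias("start"),
                                       hour_end.alias("end")))
    .select("group_id", "model_window", "values"))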