Python Pyspark: window / cumulative sum with a condition

Suppose I have data like this:

+------+-------+-------+---------------------+
| Col1 | Col2  | Col3  | Col4                |
+------+-------+-------+---------------------+
| A    | 0.532 | 0.234 | 2020-01-01 05:00:00 |
| B    | 0.242 | 0.224 | 2020-01-01 06:00:00 |
| A    | 0.152 | 0.753 | 2020-01-01 08:00:00 |
| C    | 0.149 | 0.983 | 2020-01-01 08:00:00 |
| A    | 0.635 | 0.429 | 2020-01-01 09:00:00 |
| A    | 0.938 | 0.365 | 2020-01-01 10:00:00 |
| C    | 0.293 | 0.956 | 2020-01-02 05:00:00 |
| A    | 0.294 | 0.234 | 2020-01-02 06:00:00 |
| E    | 0.294 | 0.394 | 2020-01-02 07:00:00 |
| D    | 0.294 | 0.258 | 2020-01-02 08:00:00 |
| A    | 0.687 | 0.666 | 2020-01-03 05:00:00 |
| C    | 0.232 | 0.494 | 2020-01-03 06:00:00 |
| D    | 0.575 | 0.845 | 2020-01-03 07:00:00 |
+------+-------+-------+---------------------+
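
For reference, a minimal sketch that builds this sample as a DataFrame (the lowercase column names col1..col4 and keeping the timestamp as a string are assumptions, chosen to match the answer below):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Sample rows from the table above; col4 holds the timestamp as a string.
    df = spark.createDataFrame(
        [
            ("A", 0.532, 0.234, "2020-01-01 05:00:00"),
            ("B", 0.242, 0.224, "2020-01-01 06:00:00"),
            ("A", 0.152, 0.753, "2020-01-01 08:00:00"),
            ("C", 0.149, 0.983, "2020-01-01 08:00:00"),
            ("A", 0.635, 0.429, "2020-01-01 09:00:00"),
            ("A", 0.938, 0.365, "2020-01-01 10:00:00"),
            ("C", 0.293, 0.956, "2020-01-02 05:00:00"),
            ("A", 0.294, 0.234, "2020-01-02 06:00:00"),
            ("E", 0.294, 0.394, "2020-01-02 07:00:00"),
            ("D", 0.294, 0.258, "2020-01-02 08:00:00"),
            ("A", 0.687, 0.666, "2020-01-03 05:00:00"),
            ("C", 0.232, 0.494, "2020-01-03 06:00:00"),
            ("D", 0.575, 0.845, "2020-01-03 07:00:00"),
        ],
        ["col1", "col2", "col3", "col4"],
    )
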
I want to create another column that is:

  • the sum of Col2,
  • grouped by Col1,
  • but only over records whose Col4 timestamp is at least 2 hours prior.
In this example, looking at A, the Col2 sums would be:

+------+-------+-------+---------------------+
| Col1 | Col2  | Col3  | Col4                |
+------+-------+-------+---------------------+
| A    | 0.532 | 0.234 | 2020-01-01 05:00:00 | => Will be null, as it is the earliest
| A    | 0.152 | 0.753 | 2020-01-01 08:00:00 | => 0.532, as 05:00:00 is >= 2 hours prior
| A    | 0.635 | 0.429 | 2020-01-01 09:00:00 | => 0.532, as 08:00:00 is within 2 hours of 09:00:00, but 05:00:00 is >= 2 hours prior
| A    | 0.938 | 0.365 | 2020-01-01 10:00:00 | => 0.532 + 0.152, as 09:00:00 is < 2 hours, but 08:00:00 and 05:00:00 are >= 2 hours prior
| A    | 0.294 | 0.234 | 2020-01-01 12:00:00 | => 0.532 + 0.152 + 0.635 + 0.938, as all of the earlier rows on the same day are at least 2 hours prior.
| A    | 0.687 | 0.666 | 2020-01-03 05:00:00 | => Will be null, as it is the earliest this day.
+------+-------+-------+---------------------+
  • I thought about sorting the rows and doing a cumulative sum, but I'm not sure how to exclude the ones that fall within the 2-hour range (see the sketch after this list).

  • I considered grouping and summing with a condition, but I'm not entirely sure how to do that.

  • I also thought about emitting records to fill the gaps so that every hour is populated and then summing everything up to 2 hours before. However, that would require transforming the data, since the timestamps don't naturally fall on the top of each hour; they are arbitrary real timestamps.
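
One way to make the first idea concrete (a sketch, not the approach taken in the answer below; it assumes the timestamp column is named col4) is a range-based window whose frame ends 7200 seconds before the current row, so the earliest row in each group gets a null sum:

    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    # Frame = all rows of the same (col1, day) group whose timestamp is at least
    # 7200 s (2 h) before the current row; a sum over an empty frame is null.
    w_range = Window.partitionBy("col1", F.to_date("col4", "yyyy-MM-dd HH:mm:ss"))\
                    .orderBy(F.unix_timestamp("col4"))\
                    .rangeBetween(Window.unboundedPreceding, -7200)

    df.withColumn("col5", F.sum("col2").over(w_range)).orderBy("col1", "col4").show(truncate=False)
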


    • For Spark 2.4+, try this:

      from pyspark.sql import functions as F
      from pyspark.sql.window import Window

      # Cumulative window per col1 and per calendar day, ordered by timestamp.
      w = Window().partitionBy("col1", F.to_date("col4", "yyyy-MM-dd HH:mm:ss")).orderBy(F.unix_timestamp("col4"))\
                  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

      # Collect the col2 values and their unix timestamps seen so far, zip them,
      # keep only the entries at least 7200 s (2 h) older than the current row,
      # and sum those col2 values; the earliest row of each day stays null.
      df\
        .withColumn("try", F.collect_list("col2").over(w))\
        .withColumn("try2", F.collect_list(F.unix_timestamp("col4")).over(w))\
        .withColumn("col5", F.arrays_zip("try", "try2")).drop("try")\
        .withColumn("try3", F.element_at("try2", -1))\
        .withColumn("col5", F.when(F.size("try2") > 1, F.expr("""aggregate(filter(col5, x -> x.try2 <= (try3-7200)),\
                                                           cast(0 as double), (acc, y) -> acc + y.try)""")).otherwise(None))\
        .drop("try3", "try2").orderBy("col1", "col4").show(truncate=False)
      
      #+----+-----+-----+-------------------+------------------+
      #|col1|col2 |col3 |col4               |col5              |
      #+----+-----+-----+-------------------+------------------+
      #|A   |0.532|0.234|2020-01-01 05:00:00|null              |
      #|A   |0.152|0.753|2020-01-01 08:00:00|0.532             |
      #|A   |0.635|0.429|2020-01-01 09:00:00|0.532             |
      #|A   |0.938|0.365|2020-01-01 10:00:00|0.684             |
      #|A   |0.294|0.234|2020-01-01 12:00:00|2.2569999999999997|
      #|A   |0.687|0.666|2020-01-03 05:00:00|null              |
      #|B   |0.242|0.224|2020-01-01 06:00:00|null              |
      #|C   |0.149|0.983|2020-01-01 08:00:00|null              |
      #|C   |0.293|0.956|2020-01-02 05:00:00|null              |
      #|C   |0.232|0.494|2020-01-03 06:00:00|null              |
      #|D   |0.294|0.258|2020-01-02 08:00:00|null              |
      #|D   |0.575|0.845|2020-01-03 07:00:00|null              |
      #|E   |0.294|0.394|2020-01-02 07:00:00|null              |
      #+----+-----+-----+-------------------+------------------+
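
      To see what the aggregate(...) expression above operates on, it can help to inspect the intermediate zipped column before it is overwritten. A small debugging sketch (not part of the original answer), reusing the same window definition:

      from pyspark.sql import functions as F
      from pyspark.sql.window import Window

      w = Window().partitionBy("col1", F.to_date("col4", "yyyy-MM-dd HH:mm:ss")).orderBy(F.unix_timestamp("col4"))\
                  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

      # Each element of col5 is a struct {try: col2 value, try2: unix timestamp},
      # one per row seen so far within the (col1, day) partition.
      df\
        .withColumn("try", F.collect_list("col2").over(w))\
        .withColumn("try2", F.collect_list(F.unix_timestamp("col4")).over(w))\
        .withColumn("col5", F.arrays_zip("try", "try2"))\
        .select("col1", "col4", "col5").orderBy("col1", "col4").show(truncate=False)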
      
      
      I got an error on the -7200 part, from subtracting an integer from a timestamp: AnalysisException: cannot resolve 'subtracttimestamps(`try3`, 7200)' due to data type mismatch: argument 2 requires timestamp type, however, '7200' is of int type.
      Never mind -- I had missed the unix_timestamp conversion! It works!
      On the same example, how would I get a cumulative count (instead of a sum)?
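
      Regarding the follow-up about a cumulative count: a minimal sketch of one way to do it (an assumption, not from the original answer; the column name cnt is made up), reusing the same window but replacing aggregate(...) with size(filter(...)) over the collected timestamps:

      from pyspark.sql import functions as F
      from pyspark.sql.window import Window

      w = Window().partitionBy("col1", F.to_date("col4", "yyyy-MM-dd HH:mm:ss")).orderBy(F.unix_timestamp("col4"))\
                  .rowsBetween(Window.unboundedPreceding, Window.currentRow)

      # Count the rows whose timestamp is at least 7200 s (2 h) before the current row.
      df\
        .withColumn("try2", F.collect_list(F.unix_timestamp("col4")).over(w))\
        .withColumn("try3", F.element_at("try2", -1))\
        .withColumn("cnt", F.when(F.size("try2") > 1,
                                  F.expr("size(filter(try2, t -> t <= (try3 - 7200)))")).otherwise(None))\
        .drop("try3", "try2").orderBy("col1", "col4").show(truncate=False)

      Without the when/otherwise guard, the earliest row of each day would get 0 (the size of an empty filtered array) rather than null; the guard keeps it null to match the answer's output above.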