Apache Spark PySpark: rolling average over time series data


I have a dataset made up of a timestamp column and a dollars column. I want to find the average number of dollars per week, ending at the timestamp of each row. I initially looked at the pyspark.sql.functions.window function, but it bins the data by week.

Here is an example:

%pyspark
import datetime
from pyspark.sql import functions as F

df1 = sc.parallelize([(17,"2017-03-11T15:27:18+00:00"), (13,"2017-03-11T12:27:18+00:00"), (21,"2017-03-17T11:27:18+00:00")]).toDF(["dollars", "datestring"])
df2 = df1.withColumn('timestampGMT', df1.datestring.cast('timestamp'))

w = df2.groupBy(F.window("timestampGMT", "7 days")).agg(F.avg("dollars").alias('avg'))
w.select(w.window.start.cast("string").alias("start"), w.window.end.cast("string").alias("end"), "avg").collect()
This results in two records:

|        start        |          end         | avg |
|---------------------|----------------------|-----|
|'2017-03-16 00:00:00'| '2017-03-23 00:00:00'| 21.0|
|---------------------|----------------------|-----|
|'2017-03-09 00:00:00'| '2017-03-16 00:00:00'| 15.0|
|---------------------|----------------------|-----|
The window function binned the time series data instead of performing a rolling average.

Is there a way to perform a rolling average where, for each row, a weekly average is returned for the time period ending at that row's timestampGMT?

EDIT:

Zhang's answer below is close to what I want, but not exactly what I want.

Here's a better example to show what I'm trying to get at:

%pyspark
from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00"),
                            (13, "2017-03-15T12:27:18+00:00"),
                            (25, "2017-03-18T11:27:18+00:00")],
                           ["dollars", "timestampGMT"])
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))
df = df.withColumn('rolling_average', F.avg("dollars").over(Window.partitionBy(F.window("timestampGMT", "7 days"))))
This produces the following dataframe:

dollars timestampGMT            rolling_average
25      2017-03-18 11:27:18.0   25
17      2017-03-10 15:27:18.0   15
13      2017-03-15 12:27:18.0   15
I want the average to be over the week preceding the date in the timestampGMT column, which would give:

dollars timestampGMT            rolling_average
17      2017-03-10 15:27:18.0   17
13      2017-03-15 12:27:18.0   15
25      2017-03-18 11:27:18.0   19
In the results above, the rolling_average for 2017-03-10 is 17, since there are no preceding records. The rolling_average for 2017-03-15 is 15, because it averages the 13 from 2017-03-15 and the 17 from 2017-03-10, which falls within the preceding 7-day window. The rolling_average for 2017-03-18 is 19, because it averages the 25 from 2017-03-18 and the 13 from 2017-03-15, which falls within the preceding 7-day window, and it does not include the 17 from 2017-03-10, because that does not fall within the preceding 7-day window.
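
To make those target numbers concrete, here is a small plain-Python sanity check (my own illustration, not Spark code), assuming an inclusive 7-day window ending at each row's timestamp:

from datetime import datetime, timedelta

rows = [(17, "2017-03-10T15:27:18"), (13, "2017-03-15T12:27:18"), (25, "2017-03-18T11:27:18")]
parsed = [(dollars, datetime.fromisoformat(ts)) for dollars, ts in rows]

for dollars, ts in parsed:
    # keep only the dollars whose timestamp falls in the 7 days ending at this row
    in_window = [d for d, t in parsed if ts - timedelta(days=7) <= t <= ts]
    print(ts.date(), sum(in_window) / len(in_window))

# 2017-03-10 17.0
# 2017-03-15 15.0
# 2017-03-18 19.0   (2017-03-10 15:27 is more than 7 days before 2017-03-18 11:27, so it is excluded)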

Is there a way to do this, rather than the binned windows where the weekly windows don't overlap?

Do you mean something like this:

from pyspark.sql import functions as f
from pyspark.sql.window import Window

df = spark.createDataFrame([(17, "2017-03-11T15:27:18+00:00"),
                            (13, "2017-03-11T12:27:18+00:00"),
                            (21, "2017-03-17T11:27:18+00:00")],
                           ["dollars", "timestampGMT"])
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))
df = df.withColumn('rolling_average', f.avg("dollars").over(Window.partitionBy(f.window("timestampGMT", "7 days"))))
Output:

+-------+-------------------+---------------+                                   
|dollars|timestampGMT       |rolling_average|
+-------+-------------------+---------------+
|21     |2017-03-17 19:27:18|21.0           |
|17     |2017-03-11 23:27:18|15.0           |
|13     |2017-03-11 20:27:18|15.0           |
+-------+-------------------+---------------+

I figured out the correct way to calculate a moving/rolling average with the help of Stack Overflow:

The basic idea is to convert your timestamp column to seconds; then you can use the rangeBetween function from the pyspark.sql.Window class to include the correct rows in your window.

Here is the solved example:

%pyspark
from pyspark.sql import functions as F
from pyspark.sql.window import Window


#function to calculate number of seconds from number of days
days = lambda i: i * 86400

df = spark.createDataFrame([(17, "2017-03-10T15:27:18+00:00"),
                        (13, "2017-03-15T12:27:18+00:00"),
                        (25, "2017-03-18T11:27:18+00:00")],
                        ["dollars", "timestampGMT"])
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))

#create window by casting timestamp to long (number of seconds)
w = (Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0))

df = df.withColumn('rolling_average', F.avg("dollars").over(w))
This results in the exact column of rolling averages that I was looking for:

dollars   timestampGMT            rolling_average
17        2017-03-10 15:27:18.0   17.0
13        2017-03-15 12:27:18.0   15.0
25        2017-03-18 11:27:18.0   19.0
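
A note on why this works (my own summary): casting timestampGMT to long turns it into epoch seconds, so the rangeBetween bounds are expressed in the same units as the orderBy column. rangeBetween(-days(7), 0) therefore covers every row whose timestamp falls within the 7 days up to and including the current row's timestamp, rather than a fixed number of rows.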

It's worth noting that if you don't care about the exact dates, but rather about having the average of the last 30 days available, you can use the rowsBetween function as follows:

w = Window.orderBy('timestampGMT').rowsBetween(-7, 0)

df = df.withColumn('rolling_average', F.avg('dollars').over(w))
Since you are ordering by date, it takes the last 7 occurrences.
And you save all the casting.
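
To spell out the difference (a minimal sketch of my own, reusing the df and the days helper defined above): rangeBetween frames are defined by the values of the orderBy expression, while rowsBetween frames are defined by row positions:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

days = lambda i: i * 86400

# value-based frame: rows whose timestamp (in seconds) lies within the last 7 days
w_range = Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0)

# position-based frame: the current row plus the 7 rows before it, however far apart their dates are
w_rows = Window.orderBy('timestampGMT').rowsBetween(-7, 0)

df.select('timestampGMT',
          F.avg('dollars').over(w_range).alias('avg_last_7_days'),
          F.avg('dollars').over(w_rows).alias('avg_last_8_rows')).show(truncate=False)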

I'll add a variation that I personally found very useful. I hope someone else will find it useful as well:

If you want to group by, then the moving average is calculated within the respective groups:

Example dataframe:

from pyspark.sql.window import Window
from pyspark.sql import functions as func


df = spark.createDataFrame([("tshilidzi", 17.00, "2018-03-10T15:27:18+00:00"), 
  ("tshilidzi", 13.00, "2018-03-11T12:27:18+00:00"),   
  ("tshilidzi", 25.00, "2018-03-12T11:27:18+00:00"), 
  ("thabo", 20.00, "2018-03-13T15:27:18+00:00"), 
  ("thabo", 56.00, "2018-03-14T12:27:18+00:00"), 
  ("thabo", 99.00, "2018-03-15T11:27:18+00:00"), 
  ("tshilidzi", 156.00, "2019-03-22T11:27:18+00:00"), 
  ("thabo", 122.00, "2018-03-31T11:27:18+00:00"), 
  ("tshilidzi", 7000.00, "2019-04-15T11:27:18+00:00"),
  ("ash", 9999.00, "2018-04-16T11:27:18+00:00") 
  ],
  ["name", "dollars", "timestampGMT"])

# cast timestampGMT from string to a proper timestamp type
# (it is cast to seconds later, inside the window definition)
df = df.withColumn('timestampGMT', df.timestampGMT.cast('timestamp'))

df.show(10000, False)
Output:

+---------+-------+---------------------+
|name     |dollars|timestampGMT         |
+---------+-------+---------------------+
|tshilidzi|17.0   |2018-03-10 17:27:18.0|
|tshilidzi|13.0   |2018-03-11 14:27:18.0|
|tshilidzi|25.0   |2018-03-12 13:27:18.0|
|thabo    |20.0   |2018-03-13 17:27:18.0|
|thabo    |56.0   |2018-03-14 14:27:18.0|
|thabo    |99.0   |2018-03-15 13:27:18.0|
|tshilidzi|156.0  |2019-03-22 13:27:18.0|
|thabo    |122.0  |2018-03-31 13:27:18.0|
|tshilidzi|7000.0 |2019-04-15 13:27:18.0|
|ash      |9999.0 |2018-04-16 13:27:18.0|
+---------+-------+---------------------+
To calculate the moving average based on name while still keeping all rows:

# create the window by casting timestamp to long (number of seconds),
# partitioned by name so the average is calculated within each group
days = lambda i: i * 86400  # days -> seconds, as in the answer above

w = (Window
     .partitionBy(func.col("name"))
     .orderBy(func.col("timestampGMT").cast('long'))
     .rangeBetween(-days(7), 0))

df2 = df.withColumn('rolling_average', func.avg("dollars").over(w))

df2.show(100, False)
Output:

+---------+-------+---------------------+------------------+
|name     |dollars|timestampGMT         |rolling_average   |
+---------+-------+---------------------+------------------+
|ash      |9999.0 |2018-04-16 13:27:18.0|9999.0            |
|tshilidzi|17.0   |2018-03-10 17:27:18.0|17.0              |
|tshilidzi|13.0   |2018-03-11 14:27:18.0|15.0              |
|tshilidzi|25.0   |2018-03-12 13:27:18.0|18.333333333333332|
|tshilidzi|156.0  |2019-03-22 13:27:18.0|156.0             |
|tshilidzi|7000.0 |2019-04-15 13:27:18.0|7000.0            |
|thabo    |20.0   |2018-03-13 17:27:18.0|20.0              |
|thabo    |56.0   |2018-03-14 14:27:18.0|38.0              |
|thabo    |99.0   |2018-03-15 13:27:18.0|58.333333333333336|
|thabo    |122.0  |2018-03-31 13:27:18.0|122.0             |
+---------+-------+---------------------+------------------+

Thanks Zhang, this is closer to what I want, but it's not quite it. Your code is still calculating the answer by binning on date. I want each average to cover the week ending with the date in that row. It's my fault for not setting up a good example; I'll edit my post with an updated one.

If you have a complete, consecutive date column, you could use rowsBetween(-7, 0).

Using the window function this way forces the dataframe onto a single node; with a very large dataframe you will run into memory problems. Is there a way to take advantage of the distributed computation of Spark dataframes while still using rangeBetween?
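
As a follow-up to that last comment (my own sketch, not from the thread): the memory issue comes from Window.orderBy without a partitionBy, which pulls all rows into a single partition. Adding any partitionBy, such as the per-name window in the grouped answer above, lets Spark compute the frames independently per partition and keeps the work distributed:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

days = lambda i: i * 86400

# not distributed: a global ordering collects every row into one partition
w_global = Window.orderBy(F.col("timestampGMT").cast('long')).rangeBetween(-days(7), 0)

# distributed: each distinct name is processed independently, as in the grouped answer above
w_per_name = (Window.partitionBy('name')
              .orderBy(F.col("timestampGMT").cast('long'))
              .rangeBetween(-days(7), 0))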