PySpark: get the time difference between consecutive timestamps within a column group

I am trying to compute the time difference "time_d" (in seconds) between consecutive timestamps within each "name" group in PySpark:

+-------------------+----+
|      timestamplast|name|
+-------------------+----+
|2019-08-01 00:00:00|   1|
|2019-08-01 00:01:00|   1|
|2019-08-01 00:01:15|   1|
|2019-08-01 03:00:00|   2|
|2019-08-01 04:00:00|   2|
|2019-08-01 00:15:00|   3|
+-------------------+----+
The output should look like this:

+-------------------+----+--------+
|      timestamplast|name| time_d |
+-------------------+----+--------+
|2019-08-01 00:00:00|   1| 0      | 
|2019-08-01 00:01:00|   1| 60     | 
|2019-08-01 00:01:15|   1| 15     |
|2019-08-01 03:00:00|   2| 0      |
|2019-08-01 04:00:00|   2| 3600   |
|2019-08-01 00:15:00|   3| 0      |
+-------------------+----+--------+
In pandas, this would be:

df['time_d'] = df.groupby("name")['timestamplast'].diff().fillna(pd.Timedelta(0)).dt.total_seconds()
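For reference, the pandas one-liner can be run end to end as follows (the sample data from the question is rebuilt inline):

```python
import pandas as pd

df = pd.DataFrame({
    "timestamplast": pd.to_datetime([
        "2019-08-01 00:00:00", "2019-08-01 00:01:00", "2019-08-01 00:01:15",
        "2019-08-01 03:00:00", "2019-08-01 04:00:00", "2019-08-01 00:15:00",
    ]),
    "name": [1, 1, 1, 2, 2, 3],
})

# diff() yields NaT for the first row of each group; fill with a zero
# Timedelta before converting to seconds.
df["time_d"] = (df.groupby("name")["timestamplast"].diff()
                  .fillna(pd.Timedelta(0)).dt.total_seconds())
```

This produces the `time_d` column shown in the expected output above.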

How can this be done in PySpark?

You can use the lag window function (partitioned by name) and then compute the difference with unix_timestamp:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Lag the previous Unix timestamp within each "name" partition, then subtract;
# the first row of each partition has no previous row, so default to 0.
w = Window.partitionBy("name").orderBy(F.col("timestamplast"))
df.withColumn("time_d", F.lag(F.unix_timestamp("timestamplast")).over(w)) \
  .withColumn("time_d", F.when(F.col("time_d").isNotNull(),
                               F.unix_timestamp("timestamplast") - F.col("time_d"))
                         .otherwise(F.lit(0))) \
  .orderBy("name", "timestamplast").show()

#+-------------------+----+------+
#|      timestamplast|name|time_d|
#+-------------------+----+------+
#|2019-08-01 00:00:00|   1|     0|
#|2019-08-01 00:01:00|   1|    60|
#|2019-08-01 00:01:15|   1|    15|
#|2019-08-01 03:00:00|   2|     0|
#|2019-08-01 04:00:00|   2|  3600|
#|2019-08-01 00:15:00|   3|     0|
#+-------------------+----+------+

Nice solution!