Apache Spark: summing the previous row with the current row using a window function


I have a Spark DataFrame in which I want to compute a running total from the current row's amount value plus the sum of the previous rows' amount values, based on group and id. Let me lay out the df:

import findspark
findspark.init()
import pyspark 
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
import pandas as pd


sc = spark.sparkContext
data1 = {'date': {0: '2018-04-03', 1: '2018-04-04', 2: '2018-04-05', 3: '2018-04-06', 4: '2018-04-07'},
         'id': {0: 'id1', 1: 'id2', 2: 'id1', 3: 'id3', 4: 'id2'},
         'group': {0: '1', 1: '1', 2: '1', 3: '2', 4: '1'},
         'amount': {0: 50, 1: 40, 2: 50, 3: 55, 4: 20}}
df1_pd = pd.DataFrame(data1, columns=data1.keys())

df1 = spark.createDataFrame(df1_pd)
df1.show()


+----------+---+-----+------+
|      date| id|group|amount|
+----------+---+-----+------+
|2018-04-03|id1|    1|    50|
|2018-04-04|id2|    1|    40|
|2018-04-05|id1|    1|    50|
|2018-04-06|id3|    2|    55|
|2018-04-07|id2|    1|    20|
+----------+---+-----+------+
What I'm looking for:

+----------+---+-----+------+---+
|      date| id|group|amount|sum|
+----------+---+-----+------+---+
|2018-04-03|id1|    1|    50| 50|
|2018-04-04|id2|    1|    40| 90|
|2018-04-05|id1|    1|    50|140|
|2018-04-06|id3|    2|    55| 55|
|2018-04-07|id2|    1|    20|160|
+----------+---+-----+------+---+
Window definition:

from pyspark.sql.window import Window
from pyspark.sql.functions import sum

w = Window.partitionBy("group").orderBy("date").rowsBetween(
    Window.unboundedPreceding,  # take all rows from the start of the frame
    Window.currentRow           # up to and including the current row
)

The running total:

(df1.withColumn("sum", sum("amount").over(w))
    .orderBy("date")  # sorted for easier inspection; not required
    .show())

Result:

+----------+---+-----+------+---+      
|      date| id|group|amount|sum|
+----------+---+-----+------+---+
|2018-04-03|id1|    1|    50| 50|
|2018-04-04|id2|    1|    40| 90|
|2018-04-05|id1|    1|    50|140|
|2018-04-06|id3|    2|    55| 55|
|2018-04-07|id2|    1|    20|160|
+----------+---+-----+------+---+
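For reference, the same running total can also be written with Spark SQL window syntax. A minimal sketch, assuming df1 is registered as a temporary view named "t" (the backticks are needed because group is a reserved word in SQL):

df1.createOrReplaceTempView("t")
spark.sql("""
    SELECT date, id, `group`, amount,
           SUM(amount) OVER (
               PARTITION BY `group`
               ORDER BY date
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
           ) AS sum
    FROM t
    ORDER BY date
""").show()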

Is there an equivalent of "unboundedPreceding" in Scala? "unboundedPreceding" is only available in Spark 2.1, and I'm using Spark 2.0.1.
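For Spark versions before 2.1, where the Window.unboundedPreceding constant does not exist, a common workaround is to pass a very large negative literal as the frame start: Long.MinValue in Scala, or -sys.maxsize in PySpark. A minimal sketch of the PySpark variant, assuming the same df1 and imports as above:

import sys

# Pre-2.1 workaround: no Window.unboundedPreceding constant, so use a huge
# negative literal as the frame start; 0 stands for the current row.
w_legacy = Window.partitionBy("group").orderBy("date").rowsBetween(-sys.maxsize, 0)

df1.withColumn("sum", sum("amount").over(w_legacy)).orderBy("date").show()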