Python AnalysisException: expression not supported within a window function when computing a column value based on previous rows
I have sample data with four fields: amt1, amt2, amt3, and amt4. I want to compute the value of amt5 as the sum of the fields (amt1, amt2, amt3, amt4) plus the previous row's amt5 value. Suppose the following is the dataset:
+----+----+----+----+---+
|amt1|amt2|amt3|amt4|ids|
+----+----+----+----+---+
|   1|   2|   3|   4|  1|
|   1|   2|   3|   4|  2|
|   1|   2|   3|   4|  3|
|   1|   2|   3|   4|  4|
|   1|   2|   3|   4|  5|
|   1|   2|   3|   4|  6|
+----+----+----+----+---+
Here is the output I expect (for example, row 2's amt5 = 1+2+3+4 plus the previous row's amt5 of 10, i.e. 20):
+----+----+----+----+---+----+
|amt1|amt2|amt3|amt4|ids|amt5|
+----+----+----+----+---+----+
|   1|   2|   3|   4|  1|  10|
|   1|   2|   3|   4|  2|  20|
|   1|   2|   3|   4|  3|  30|
|   1|   2|   3|   4|  4|  40|
+----+----+----+----+---+----+
Below is the code I am executing:
from pyspark.sql import Row
from pyspark.sql.window import Window
import pyspark.sql.functions as func
def sum(*col):
    sum = 0
    for i in col:
        sum = sum + i
    return sum
rdd = sc.parallelize(["1,1,2,3,4", "2,1,2,3,4", "3,1,2,3,4", "4,1,2,3,4", "5,1,2,3,4", "6,1,2,3,4"])
finalRdd = rdd.map(lambda t: t.split(",")).map(lambda t: Row(ids=t[0],amt1=t[1],amt2=t[2],amt3=t[3],amt4=t[4]))
df = spark.createDataFrame(finalRdd)
w = Window.orderBy("ids").rowsBetween(
Window.unboundedPreceding, # Take all rows from the beginning of frame
Window.currentRow) # To current row
df1 = df.withColumn("amt5",sum(df.amt1,df.amt2,df.amt3,df.amt4))
df1.withColumn("amt5",sum(df1.amt5).over(w)).show()
Executing the code above throws the AnalysisException described in the title (the expression is not supported within a window function).
You have a conflict with your sum function: the custom Python helper is not a Spark aggregate, so it cannot be used over a window. The window function should come from the pyspark.sql.functions package, so it should be called like this:
df1.withColumn("amt5",func.sum(df1.amt5).over(w)).show()
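For completeness, here is a minimal end-to-end sketch of the corrected version. It assumes an active SparkSession named spark and the sc SparkContext used in the question; the cast to int and the use of plain Column arithmetic instead of the custom Python sum helper are illustrative choices, not part of the original answer.
from pyspark.sql import Row
from pyspark.sql.window import Window
import pyspark.sql.functions as func

rdd = sc.parallelize(["1,1,2,3,4", "2,1,2,3,4", "3,1,2,3,4",
                      "4,1,2,3,4", "5,1,2,3,4", "6,1,2,3,4"])
finalRdd = rdd.map(lambda t: t.split(",")).map(
    lambda t: Row(ids=t[0], amt1=t[1], amt2=t[2], amt3=t[3], amt4=t[4]))
df = spark.createDataFrame(finalRdd)

# The parsed fields are strings; cast them to int so the sums come out as
# integers (an assumption made for clean output, not in the original code).
for c in ["ids", "amt1", "amt2", "amt3", "amt4"]:
    df = df.withColumn(c, func.col(c).cast("int"))

# Per-row total of amt1..amt4, built with Column arithmetic instead of the
# custom Python sum() that clashed with the window aggregate.
df1 = df.withColumn("amt5", func.col("amt1") + func.col("amt2") +
                            func.col("amt3") + func.col("amt4"))

# Cumulative frame: everything from the first row up to the current row.
w = Window.orderBy("ids").rowsBetween(Window.unboundedPreceding, Window.currentRow)

# Use the aggregate from pyspark.sql.functions, which is valid over a window.
df1.withColumn("amt5", func.sum("amt5").over(w)).show()
With this, amt5 becomes the running total of the per-row sums ordered by ids, which matches the expected output above (10, 20, 30, 40, ...).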