Running a Cumulative/Iterative Custom Method on a Column in Spark Scala


Hi, I am new to Spark/Scala and I have been trying - aka failing - to create a column in a Spark dataframe based on a particular recursive formula:

Here it is in pseudocode:

someDf.col2[0] = 0

for i > 0
someDf.col2[i] = x * someDf.col1[i-1] + (1-x) * someDf.col2[i-1]
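
In plain Scala over an in-memory sequence the same recurrence can be sketched like this (x and the col1 values below are just placeholder numbers for illustration):

val x = 0.5                              // placeholder multiplier
val col1 = Seq(0.0, 1.0, 1.0, 0.0)       // placeholder input column
// scanLeft seeds col2[0] = 0 and carries col2[i-1] forward together with col1[i-1];
// .init drops the extra trailing element so col2 has the same length as col1
val col2 = col1.scanLeft(0.0)((prev, c1) => x * c1 + (1 - x) * prev).init
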
To go into more detail, here is my starting point: this dataframe is the result of aggregation on the level of dates and an individual id.

All further calculations have to happen with respect to that particular id, and have to take into account what happened in the previous week.

To illustrate this, I have simplified the values to 0s and 1s, removed the multipliers x and 1-x, and also initialized col2 to zero.

var someDf = Seq(("2016-01-10 00:00:00.0","385608",0,0), 
         ("2016-01-17 00:00:00.0","385608",0,0),
         ("2016-01-24 00:00:00.0","385608",1,0),
         ("2016-01-31 00:00:00.0","385608",1,0),
         ("2016-02-07 00:00:00.0","385608",1,0),
         ("2016-02-14 00:00:00.0","385608",1,0),
         ("2016-01-17 00:00:00.0","105010",0,0),
         ("2016-01-24 00:00:00.0","105010",1,0),
         ("2016-01-31 00:00:00.0","105010",0,0),
         ("2016-02-07 00:00:00.0","105010",1,0)
        ).toDF("dates", "id", "col1","col2" )

someDf.show()
+--------------------+------+----+----+
|               dates|    id|col1|col2|
+--------------------+------+----+----+
|2016-01-10 00:00:...|385608|   0|   0|
|2016-01-17 00:00:...|385608|   0|   0|
|2016-01-24 00:00:...|385608|   1|   0|
|2016-01-31 00:00:...|385608|   1|   0|
|2016-02-07 00:00:...|385608|   1|   0|
|2016-02-14 00:00:...|385608|   1|   0|
|2016-01-17 00:00:...|105010|   0|   0|
|2016-01-24 00:00:...|105010|   1|   0|
|2016-01-31 00:00:...|105010|   0|   0|
|2016-02-07 00:00:...|105010|   1|   0|
+--------------------+------+----+----+
What I have tried so far vs. what is desired:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val date_id_window = Window.partitionBy("id").orderBy(asc("dates")) 

someDf.withColumn("col2", lag($"col1",1 ).over(date_id_window) + 
lag($"col2",1 ).over(date_id_window) ).show() 
+--------------------+------+----+----+ / +--------------------+
|               dates|    id|col1|col2| / |  what it should be |
+--------------------+------+----+----+ / +--------------------+
|2016-01-17 00:00:...|105010|   0|null| / |                   0|
|2016-01-24 00:00:...|105010|   1|   0| / |                   0|
|2016-01-31 00:00:...|105010|   0|   1| / |                   1|
|2016-02-07 00:00:...|105010|   1|   0| / |                   1|
|2016-01-10 00:00:...|385608|   0|null| / |                   0|
|2016-01-17 00:00:...|385608|   0|   0| / |                   0|
|2016-01-24 00:00:...|385608|   1|   0| / |                   0|
|2016-01-31 00:00:...|385608|   1|   1| / |                   1|
|2016-02-07 00:00:...|385608|   1|   1| / |                   2|
|2016-02-14 00:00:...|385608|   1|   1| / |                   3|
+--------------------+------+----+----+ / +--------------------+
Is there a way to do this with a Spark dataframe? I have seen many cumulative-type computations, but never any that include the same column. I believe the problem is that the newly calculated value of row i-1 is not taken into account; instead the old i-1 is used, which is always 0.


Any help would be much appreciated.

Dataset should work just fine:

val x = 0.1

case class Record(dates: String, id: String, col1: Int)

someDf.drop("col2").as[Record].groupByKey(_.id).flatMapGroups((_,  records) => {
  val sorted = records.toSeq.sortBy(_.dates)
  sorted.scanLeft((null: Record, 0.0)){
    case ((_, col2), record) => (record, x * record.col1 + (1 - x) * col2)
  }.tail
}).select($"_1.*", $"_2".alias("col2"))

You can use the rowsBetween api with the Window function you are already using, and you should get the desired output.

val date_id_window = Window.partitionBy("id").orderBy(asc("dates"))
someDf.withColumn("col2", sum(lag($"col1", 1).over(date_id_window)).over(date_id_window.rowsBetween(Long.MinValue, 0)))
  .withColumn("col2", when($"col2".isNull, lit(0)).otherwise($"col2"))
  .show()
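
As a small variant (not part of the original snippet), the null produced by lag in the first row of each partition can also be handled with coalesce instead of the when/otherwise step:

// same computation as above, with coalesce replacing the when/isNull null handling
someDf.withColumn("col2",
    coalesce(
      sum(lag($"col1", 1).over(date_id_window))
        .over(date_id_window.rowsBetween(Long.MinValue, 0)),
      lit(0)))
  .show()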
Given the input dataframe as

+--------------------+------+----+----+
|               dates|    id|col1|col2|
+--------------------+------+----+----+
|2016-01-10 00:00:...|385608|   0|   0|
|2016-01-17 00:00:...|385608|   0|   0|
|2016-01-24 00:00:...|385608|   1|   0|
|2016-01-31 00:00:...|385608|   1|   0|
|2016-02-07 00:00:...|385608|   1|   0|
|2016-02-14 00:00:...|385608|   1|   0|
|2016-01-17 00:00:...|105010|   0|   0|
|2016-01-24 00:00:...|105010|   1|   0|
|2016-01-31 00:00:...|105010|   0|   0|
|2016-02-07 00:00:...|105010|   1|   0|
+--------------------+------+----+----+
applying the above logic should give you the output dataframe as

+--------------------+------+----+----+
|               dates|    id|col1|col2|
+--------------------+------+----+----+
|2016-01-17 00:00:...|105010|   0|   0|
|2016-01-24 00:00:...|105010|   1|   0|
|2016-01-31 00:00:...|105010|   0|   1|
|2016-02-07 00:00:...|105010|   1|   1|
|2016-01-10 00:00:...|385608|   0|   0|
|2016-01-17 00:00:...|385608|   0|   0|
|2016-01-24 00:00:...|385608|   1|   0|
|2016-01-31 00:00:...|385608|   1|   1|
|2016-02-07 00:00:...|385608|   1|   2|
|2016-02-14 00:00:...|385608|   1|   3|
+--------------------+------+----+----+

I hope the answer is helpful.

You should apply transformations to your dataframe rather than treating it as a var. One way to get what you want is to use the Window's rowsBetween to cumulatively sum the values of col1 over the rows within each window partition up to the previous row (i.e. row -1):

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val window = Window.partitionBy("id").orderBy("dates").rowsBetween(Long.MinValue, -1)

val newDF = someDf.
  withColumn(
    "col2", sum($"col1").over(window)
  ).withColumn(
    "col2", when($"col2".isNull, 0).otherwise($"col2")
  ).orderBy("id", "dates")

newDF.show
+--------------------+------+----+----+
|               dates|    id|col1|col2|
+--------------------+------+----+----+
|2016-01-17 00:00:...|105010|   0|   0|
|2016-01-24 00:00:...|105010|   1|   0|
|2016-01-31 00:00:...|105010|   0|   1|
|2016-02-07 00:00:...|105010|   1|   1|
|2016-01-10 00:00:...|385608|   0|   0|
|2016-01-17 00:00:...|385608|   0|   0|
|2016-01-24 00:00:...|385608|   1|   0|
|2016-01-31 00:00:...|385608|   1|   1|
|2016-02-07 00:00:...|385608|   1|   2|
|2016-02-14 00:00:...|385608|   1|   3|
+--------------------+------+----+----+
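
As a side note, on Spark 2.1 and later the same window can be written with the named boundary constant instead of Long.MinValue; this is purely a readability variant of the window used above:

// Window.unboundedPreceding is the named equivalent of Long.MinValue (Spark 2.1+)
val window = Window.partitionBy("id").orderBy("dates")
  .rowsBetween(Window.unboundedPreceding, -1)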