Spark Scala: moving average over multiple columns


Input:

val customers = sc.parallelize(List(("Alice", "2016-05-01", 50.00,4),
                                ("Alice", "2016-05-03", 45.00,2),
                                ("Alice", "2016-05-04", 55.00,4),
                                ("Bob", "2016-05-01", 25.00,6),
                                ("Bob", "2016-05-04", 29.00,7),
                                ("Bob", "2016-05-06", 27.00,10))).
                           toDF("name", "date", "amountSpent","NumItems")
Program:

 // Import the window functions.
 import org.apache.spark.sql.expressions.Window
 import org.apache.spark.sql.functions._

 // Create a window spec.
 val wSpec1 = Window.partitionBy("name").orderBy("date").rowsBetween(-1, 1)
In this window spec, the data is partitioned by customer, and each customer's rows are ordered by date. The window frame starts at -1 (one row before the current row) and ends at 1 (one row after the current row), so the sliding window spans three rows in total.

The problem is computing this window-based sum over a list of columns. In this example the columns are "amountSpent" and "NumItems", but in practice there could be hundreds of them.
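As a plain-Scala sanity check of the frame arithmetic (no Spark required; `centeredSums` is a hypothetical helper written for this illustration, not part of the Spark API):

```scala
// Mirror of rowsBetween(-1, 1): for each row, sum the previous row,
// the current row, and the next row within one partition. The frame is
// clipped at the partition boundaries, just as Spark clips it.
def centeredSums(values: List[Double]): List[Double] =
  values.indices.toList.map { i =>
    val lo = math.max(i - 1, 0)
    val hi = math.min(i + 1, values.length - 1)
    values.slice(lo, hi + 1).sum
  }

val aliceSpent = List(50.0, 45.0, 55.0) // Alice's amountSpent, ordered by date
println(centeredSums(aliceSpent))       // List(95.0, 150.0, 100.0)
```

These match the sumSpent values for Alice in the output below.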

Below is a solution that computes the window-based sum for each column separately. But how can the sums be computed more efficiently, so that the sliding-window rows are not looked up again for every column?

 // Calculate the sum of spent
 customers.withColumn("sumSpent",sum(customers("amountSpent")).over(wSpec1)).show()

  +-----+----------+-----------+--------+--------+
  | name|      date|amountSpent|NumItems|sumSpent|
  +-----+----------+-----------+--------+--------+
  |Alice|2016-05-01|       50.0|       4|    95.0|
  |Alice|2016-05-03|       45.0|       2|   150.0|
  |Alice|2016-05-04|       55.0|       4|   100.0|
  |  Bob|2016-05-01|       25.0|       6|    54.0|
  |  Bob|2016-05-04|       29.0|       7|    81.0|
  |  Bob|2016-05-06|       27.0|      10|    56.0|
  +-----+----------+-----------+--------+--------+

 // Calculate the sum of items
 customers.withColumn( "sumItems",
                sum(customers("NumItems")).over(wSpec1)  ).show()

  +-----+----------+-----------+--------+--------+
  | name|      date|amountSpent|NumItems|sumItems|
  +-----+----------+-----------+--------+--------+
  |Alice|2016-05-01|       50.0|       4|       6|
  |Alice|2016-05-03|       45.0|       2|      10|
  |Alice|2016-05-04|       55.0|       4|       6|
  |  Bob|2016-05-01|       25.0|       6|      13|
  |  Bob|2016-05-04|       29.0|       7|      23|
  |  Bob|2016-05-06|       27.0|      10|      17|
  +-----+----------+-----------+--------+--------+

I think it is currently not possible to update multiple columns in a single window-function call. You can make it behave as if it all happened at once, as follows:

val customers = sc.parallelize(List(("Alice", "2016-05-01", 50.00,4),
  ("Alice", "2016-05-03", 45.00,2),
  ("Alice", "2016-05-04", 55.00,4),
  ("Bob", "2016-05-01", 25.00,6),
  ("Bob", "2016-05-04", 29.00,7),
  ("Bob", "2016-05-06", 27.00,10))).
  toDF("name", "date", "amountSpent","NumItems")

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// Create a window spec.
val wSpec1 = Window.partitionBy("name").orderBy("date").rowsBetween(-1, 1)
var tempdf = customers
val colNames = List("amountSpent", "NumItems")
for(column <- colNames){
  tempdf = tempdf.withColumn(column+"Sum", sum(tempdf(column)).over(wSpec1))
}
tempdf.show(false)
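The mutable var plus for-loop above can be expressed without mutation using foldLeft, which threads the DataFrame through each withColumn call. A sketch (the Spark lines are shown as comments because they need a SparkSession; the fold idiom itself is demonstrated on a plain accumulator):

```scala
// With Spark on the classpath, the loop above becomes (sketch):
//
//   val result = colNames.foldLeft(customers) { (df, c) =>
//     df.withColumn(c + "Sum", sum(df(c)).over(wSpec1))
//   }
//
// The same threading idiom, shown here building the generated column names:
val colNames = List("amountSpent", "NumItems")
val newCols = colNames.foldLeft(List.empty[String])((acc, c) => acc :+ (c + "Sum"))
println(newCols) // List(amountSpentSum, NumItemsSum)
```

This produces the same plan as the loop; it only avoids the `var`.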

Yes, it is possible to evaluate the window only once (on Spark 2, which lets you use collect_list with struct types). Assuming you have the dataframe and the windowSpec from your code, then:

val colNames = List("amountSpent","NumItems")
val cols = colNames.map(col(_))

// put window-content of all columns in one struct
val df_wc_arr = customers
.withColumn("window_content_arr",collect_list(struct(cols:_*)).over(wSpec1))

// calculate sum of window-content for each column
// aggregation expression used later
val aggExpr = colNames.map(n => sum(col("window_content."+n)).as(n+"Sum"))

df_wc_arr
.withColumn("window_content",explode($"window_content_arr"))
.drop($"window_content_arr")
.groupBy(($"name" :: $"date" :: cols):_*)
.agg(aggExpr.head,aggExpr.tail:_*)
.orderBy($"name",$"date")
.show
giving:


+-----+----------+-----------+--------+--------------+-----------+
| name|      date|amountSpent|NumItems|amountSpentSum|NumItemsSum|
+-----+----------+-----------+--------+--------------+-----------+
|Alice|2016-05-01|       50.0|       4|          95.0|          6|
|Alice|2016-05-03|       45.0|       2|         150.0|         10|
|Alice|2016-05-04|       55.0|       4|         100.0|          6|
|  Bob|2016-05-01|       25.0|       6|          54.0|         13|
|  Bob|2016-05-04|       29.0|       7|          81.0|         23|
|  Bob|2016-05-06|       27.0|      10|          56.0|         17|
+-----+----------+-----------+--------+--------------+-----------+
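The collect_list/explode/groupBy pipeline can be mirrored in plain Scala to check that it reproduces the direct windowed sums (Alice's partition only; pure illustration, no Spark required):

```scala
// Alice's rows ordered by date: (date, amountSpent, NumItems)
val rows = List(("2016-05-01", 50.0, 4), ("2016-05-03", 45.0, 2), ("2016-05-04", 55.0, 4))

// collect_list(struct(cols:_*)).over(wSpec1): each row gathers the structs
// of its 3-row frame, clipped at the partition boundaries
val frames = rows.indices.map { i =>
  rows.slice(math.max(i - 1, 0), math.min(i + 1, rows.length - 1) + 1)
}

// explode + groupBy + agg(sum(...)): sum each field over the gathered structs
val sums = frames.map(f => (f.map(_._2).sum, f.map(_._3).sum)).toList
println(sums) // List((95.0,6), (150.0,10), (100.0,6))
```

These pairs match the (amountSpentSum, NumItemsSum) columns for Alice in the output above, so the single collect_list window carries enough information to recover every per-column sum.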