Apache Spark: Spark DataFrame window lag function based on multiple columns


The following works fine:

import org.apache.spark.sql.functions.lag

val df = sc.parallelize(Seq((201601, 100.5),
  (201602, 120.6),
  (201603, 450.2),
  (201604, 200.7),
  (201605, 121.4))).toDF("date", "volume")

val w = org.apache.spark.sql.expressions.Window.orderBy("date")    
val leadDf = df.withColumn("new_col", lag("volume", 1, 0).over(w))
leadDf.show()

+------+------+-------+
|  date|volume|new_col|
+------+------+-------+
|201601| 100.5|    0.0|
|201602| 120.6|  100.5|
|201603| 450.2|  120.6|
|201604| 200.7|  450.2|
|201605| 121.4|  200.7|
+------+------+-------+
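Here lag("volume", 1, 0) returns the volume from one row earlier in the date ordering; the third argument is the default value, so the first row, which has no predecessor, gets 0.0.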

But what if I also have one more column, as shown below?

val df = sc.parallelize(Seq((201601, "ter1", 10.1),
  (201601, "ter2", 10.6),
  (201602, "ter1", 10.7),
  (201603, "ter3", 10.8),
  (201603, "ter4", 10.8),
  (201603, "ter3", 10.8),
  (201604, "ter4", 10.9))).toDF("date", "territory", "volume")

My requirement is: for the same territory, I want to look up the previous month's volume if it exists; if it does not, just assign the value 0.0.

How can I do this? I tried the following, simply including territory in the orderBy clause, but it does not give the correct result:

val w = org.apache.spark.sql.expressions.Window.orderBy("date", "territory")
val leadDf = df.withColumn("new_col", lag("volume", 1, 0).over(w))

I am using Spark 1.6.2 and Scala 2.10.

If I understand correctly, you need the volume from the previous date for the same territory.

If so, just add partitionBy, i.e. redefine the window specification as follows:
val w = org.apache.spark.sql.expressions.Window.partitionBy("territory").orderBy("date")
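For completeness, here is a minimal end-to-end sketch of that partitioned window applied to the territory DataFrame from the question (all names come from the question itself; the expected values in the comments are simply what lag produces under this window):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lag

val df = sc.parallelize(Seq(
  (201601, "ter1", 10.1),
  (201601, "ter2", 10.6),
  (201602, "ter1", 10.7),
  (201603, "ter3", 10.8),
  (201603, "ter4", 10.8),
  (201603, "ter3", 10.8),
  (201604, "ter4", 10.9))).toDF("date", "territory", "volume")

// Partition by territory so that lag only looks at rows of the same
// territory, ordered by date within each partition.
val w = Window.partitionBy("territory").orderBy("date")

// Previous date's volume within the same territory, or 0.0 when the
// row is the first one for its territory.
val leadDf = df.withColumn("new_col", lag("volume", 1, 0).over(w))
leadDf.show()
// (201602, ter1) gets 10.1, (201604, ter4) gets 10.8, and the first
// row of each territory gets 0.0. Note that the two (201603, ter3)
// rows share the same date, so which of them is treated as "previous"
// is not deterministic; adding a secondary ordering column would fix that.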