Scala: comparing two rows on different Spark columns inside a window function
On Spark 2.1.0, I have a typical use case for window functions. Below is a working example:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._
val df = Seq(
// Below is the REJOIN case: the next record's start date is on/after the previous record's end date
(1,173777,"7777","2018-12-18","2019-01-04"),
(1,173777,"7778","2019-01-05","2019-01-10"),
(1,173777,"7779","2019-02-01",null),
(2,173788,"6666","2004-09-16","2006-03-18"),
(2,173788,"6668","2018-12-18",null),
(3,173799,"1111","2002-09-16","2003-03-18"),
(3,173799,"1112","2007-09-16","2008-03-18"),
(4,173711,"9566","2009-09-16","2011-03-18"),
(4,173711,"9555","2007-09-16","2008-03-18"),
// Below is the UPDATED case: either all records in the window have a null end_date, or no next record's start date is after the previous end date (unlike above)
(5,1737111,"1111","2016-09-26",null),
(5,1737111,"1112","2017-09-26",null),
(5,1737111,"1113","2018-09-26",null),
(6,1737444,"3334","2004-09-16","2019-02-15"),
(6,1737444,"3333","2018-12-18","2019-01-31"),
(7,1737555,"5555","2009-01-01","2011-02-18"),
(7,1737555,"5556","2008-12-18","2019-01-31"),
(7,1737555,"5557","2019-01-01","2019-02-15"),
(7,1737555,"5558","2019-02-14","2020-02-18"),
(7,1737555,"5559","2010-01-01","2026-02-18"))
.toDF("id","user_id", "record_modify_ts", "start_time", "end_time")
val partitionWindowR = Window.partitionBy($"id", $"user_id").orderBy($"record_modify_ts".asc)
I computed the next value of the start date as follows:
val nextStart = lead($"start_time", 1).over(partitionWindowR)
Then I tried to compute the status as another column with this code:
df.withColumn("next_start", nextStart)
.withColumn("status", when(to_date($"next_start") >= to_date($"end_time"), lit("REJOIN")).otherwise(lit("UPDATE"))).show
The problem seems to be that next_start comes back null for the last record of each window, which breaks the status assignment. I just need one extra column containing the status.
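The null behavior can be reproduced outside Spark. Spark SQL comparisons use three-valued logic: `null >= x` evaluates to null, and `when()` treats a null predicate as false, so the row falls through to `otherwise()`. A minimal plain-Scala model of that rule (the object and helper names below are illustrative, not Spark API):

```scala
object NullComparisonSketch {
  // Model a nullable date column as Option[String];
  // ISO-8601 date strings compare correctly lexicographically.
  // A comparison with a null operand yields null (None here), and
  // when() then treats that null predicate as false -> otherwise().
  def ge(next: Option[String], end: Option[String]): Option[Boolean] =
    for (n <- next; e <- end) yield n >= e

  def status(next: Option[String], end: Option[String]): String =
    if (ge(next, end).getOrElse(false)) "REJOIN" else "UPDATE"

  def main(args: Array[String]): Unit = {
    println(status(Some("2019-01-05"), Some("2019-01-04"))) // REJOIN
    println(status(None, Some("2019-01-04")))               // UPDATE: last row of the window
  }
}
```

This is why the last row of every partition, REJOIN or not, ends up labeled UPDATE.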
Here is the expected output:
/*
+---+-------+----------------+----------+----------+----------+-------+
| id|user_id|record_modify_ts|start_time| end_time|next_start| status|
+---+-------+----------------+----------+----------+----------+-------+
| 4| 173711| 9555|2007-09-16|2008-03-18|2009-09-16| REJOIN|
| 4| 173711| 9566|2009-09-16|2011-03-18| null| REJOIN| <--
| 3| 173799| 1111|2002-09-16|2003-03-18|2007-09-16| REJOIN|
| 3| 173799| 1112|2007-09-16|2008-03-18| null| REJOIN| <--
| 15|1737111| 1111|2016-09-26| null|2017-09-26|UPDATED|
| 15|1737111| 1112|2017-09-26| null|2018-09-26|UPDATED|
| 15|1737111| 1113|2018-09-26| null| null|UPDATED|
| 16|1737444| 3333|2018-12-18|2019-01-31|2004-09-16|UPDATED|
| 16|1737444| 3334|2004-09-16|2019-02-15| null|UPDATED|
| 17|1737555| 5555|2009-01-01|2011-02-18|2008-12-18|UPDATED|
| 17|1737555| 5556|2008-12-18|2019-01-31|2019-01-01|UPDATED|
| 17|1737555| 5557|2019-01-01|2019-02-15|2019-02-14|UPDATED|
| 17|1737555| 5558|2019-02-14|2020-02-18|2010-01-01|UPDATED|
| 17|1737555| 5559|2010-01-01|2026-02-18| null|UPDATED|
| 1| 173777| 7777|2018-12-18|2019-01-04|2019-01-05| REJOIN|
| 1| 173777| 7778|2019-01-05|2019-01-10|2019-02-01| REJOIN|
| 1| 173777| 7779|2019-02-01| null| null| REJOIN| <--
| 2| 173788| 6666|2004-09-16|2006-03-18|2018-12-18| REJOIN|
| 2| 173788| 6668|2018-12-18| null| null| REJOIN| <--
+---+-------+----------------+----------+----------+----------+-------+
*/
This should solve your problem. Just add a null check, because for the null case otherwise() will always be the branch taken:
val df = Seq(
(1,173777,"7777","2018-12-18","2019-01-04"),
(1,173777,"7778","2019-01-05","2019-01-10"),
(1,173777,"7779","2019-02-01",null),
(2,173788,"6666","2004-09-16","2006-03-18"),
(2,173788,"6668","2018-12-18",null),
(3,173799,"1111","2002-09-16","2003-03-18"),
(3,173799,"1112","2007-09-16","2008-03-18"),
(4,173711,"9566","2009-09-16","2011-03-18"),
(4,173711,"9555","2007-09-16","2008-03-18"),
(5,1737111,"1111","2016-09-26",null),
(5,1737111,"1112","2017-09-26",null),
(5,1737111,"1113","2018-09-26",null),
(6,1737444,"3334","2004-09-16","2019-02-15"),
(6,1737444,"3333","2018-12-18","2019-01-31"),
(7,1737555,"5555","2009-01-01","2011-02-18"),
(7,1737555,"5556","2008-12-18","2019-01-31"),
(7,1737555,"5557","2019-01-01","2019-02-15"),
(7,1737555,"5558","2019-02-14","2020-02-18"),
(7,1737555,"5559","2010-01-01","2026-02-18"))
.toDF("id","user_id", "record_modify_ts", "start_time", "end_time")
val partitionWindowR = Window.partitionBy($"id", $"user_id").orderBy($"record_modify_ts".asc)
val nextStart = lead($"start_time", 1).over(partitionWindowR)
// Note: don't chain .show() into the assignment, or df_t would be Unit
val df_t = df.withColumn("next_start", nextStart)
  .withColumn("status", when(to_date($"next_start") >= to_date($"end_time"), lit("REJOIN")).otherwise(lit("UPDATE")))
df_t.show()
Output:
+---+-------+----------------+----------+----------+----------+------+
| id|user_id|record_modify_ts|start_time| end_time|next_start|status|
+---+-------+----------------+----------+----------+----------+------+
| 4| 173711| 9555|2007-09-16|2008-03-18|2009-09-16|REJOIN|
| 4| 173711| 9566|2009-09-16|2011-03-18| null|UPDATE|
| 6|1737444| 3333|2018-12-18|2019-01-31|2004-09-16|UPDATE|
| 6|1737444| 3334|2004-09-16|2019-02-15| null|UPDATE|
| 3| 173799| 1111|2002-09-16|2003-03-18|2007-09-16|REJOIN|
| 3| 173799| 1112|2007-09-16|2008-03-18| null|UPDATE|
| 5|1737111| 1111|2016-09-26| null|2017-09-26|UPDATE|
| 5|1737111| 1112|2017-09-26| null|2018-09-26|UPDATE|
| 5|1737111| 1113|2018-09-26| null| null|UPDATE|
| 1| 173777| 7777|2018-12-18|2019-01-04|2019-01-05|REJOIN|
| 1| 173777| 7778|2019-01-05|2019-01-10|2019-02-01|REJOIN|
| 1| 173777| 7779|2019-02-01| null| null|UPDATE|
| 7|1737555| 5555|2009-01-01|2011-02-18|2008-12-18|UPDATE|
| 7|1737555| 5556|2008-12-18|2019-01-31|2019-01-01|UPDATE|
| 7|1737555| 5557|2019-01-01|2019-02-15|2019-02-14|UPDATE|
| 7|1737555| 5558|2019-02-14|2020-02-18|2010-01-01|UPDATE|
| 7|1737555| 5559|2010-01-01|2026-02-18| null|UPDATE|
| 2| 173788| 6666|2004-09-16|2006-03-18|2018-12-18|REJOIN|
| 2| 173788| 6668|2018-12-18| null| null|UPDATE|
+---+-------+----------------+----------+----------+----------+------+
df_t.withColumn("status_updated",first($"status").over(partitionWindowR)).show()
Fixed output:
+---+-------+----------------+----------+----------+----------+------+--------------+
| id|user_id|record_modify_ts|start_time| end_time|next_start|status|status_updated|
+---+-------+----------------+----------+----------+----------+------+--------------+
| 4| 173711| 9555|2007-09-16|2008-03-18|2009-09-16|REJOIN| REJOIN|
| 4| 173711| 9566|2009-09-16|2011-03-18| null|UPDATE| REJOIN|
| 6|1737444| 3333|2018-12-18|2019-01-31|2004-09-16|UPDATE| UPDATE|
| 6|1737444| 3334|2004-09-16|2019-02-15| null|UPDATE| UPDATE|
| 3| 173799| 1111|2002-09-16|2003-03-18|2007-09-16|REJOIN| REJOIN|
| 3| 173799| 1112|2007-09-16|2008-03-18| null|UPDATE| REJOIN|
| 5|1737111| 1111|2016-09-26| null|2017-09-26|UPDATE| UPDATE|
| 5|1737111| 1112|2017-09-26| null|2018-09-26|UPDATE| UPDATE|
| 5|1737111| 1113|2018-09-26| null| null|UPDATE| UPDATE|
| 1| 173777| 7777|2018-12-18|2019-01-04|2019-01-05|REJOIN| REJOIN|
| 1| 173777| 7778|2019-01-05|2019-01-10|2019-02-01|REJOIN| REJOIN|
| 1| 173777| 7779|2019-02-01| null| null|UPDATE| REJOIN|
| 7|1737555| 5555|2009-01-01|2011-02-18|2008-12-18|UPDATE| UPDATE|
| 7|1737555| 5556|2008-12-18|2019-01-31|2019-01-01|UPDATE| UPDATE|
| 7|1737555| 5557|2019-01-01|2019-02-15|2019-02-14|UPDATE| UPDATE|
| 7|1737555| 5558|2019-02-14|2020-02-18|2010-01-01|UPDATE| UPDATE|
| 7|1737555| 5559|2010-01-01|2026-02-18| null|UPDATE| UPDATE|
| 2| 173788| 6666|2004-09-16|2006-03-18|2018-12-18|REJOIN| REJOIN|
| 2| 173788| 6668|2018-12-18| null| null|UPDATE| REJOIN|
+---+-------+----------------+----------+----------+----------+------+--------------+
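As a variant that does not depend on `first()` happening to pick up the right row, the status can be decided once per partition: a partition is REJOIN when any consecutive pair of rows satisfies the rule. The sketch below models that per-partition rule in plain Scala (the case class and function names are made up for illustration); in Spark the same idea could be expressed as a max over a 0/1 flag on an unbounded window.

```scala
object PartitionStatusSketch {
  // Illustrative record: ISO-8601 date strings, nullable end_time as Option.
  case class Rec(recordModifyTs: String, startTime: String, endTime: Option[String])

  // REJOIN when any row's successor (ordered by record_modify_ts)
  // starts on/after that row's end date; otherwise UPDATE.
  def partitionStatus(rows: Seq[Rec]): String = {
    val sorted = rows.sortBy(_.recordModifyTs)
    val rejoin = sorted.zip(sorted.drop(1)).exists { case (cur, nxt) =>
      cur.endTime.exists(end => nxt.startTime >= end) // lexicographic works for ISO dates
    }
    if (rejoin) "REJOIN" else "UPDATE"
  }

  def main(args: Array[String]): Unit = {
    // user_id 173777 from the question's data: a REJOIN case
    println(partitionStatus(Seq(
      Rec("7777", "2018-12-18", Some("2019-01-04")),
      Rec("7778", "2019-01-05", Some("2019-01-10")),
      Rec("7779", "2019-02-01", None)))) // REJOIN

    // user_id 1737111: all end dates null, so no pair can qualify
    println(partitionStatus(Seq(
      Rec("1111", "2016-09-26", None),
      Rec("1112", "2017-09-26", None),
      Rec("1113", "2018-09-26", None)))) // UPDATE
  }
}
```

Because the flag is aggregated over the whole partition, the null next_start on the last row can no longer flip the result either way.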
Can you please provide details of the issue, or the output you are getting now? @saurabhshashank

Neither of the two worked; I marked them with <--. Could you try it in an IDE or console?

Looks like we are close. The problem seems to be that once we fix one category, the other breaks. What if end_time is null, or we arbitrarily add a "9999-01-01" value just to distinguish a null end_time from a null next_start?

In all the data I gave you, the single-digit ids (1-5) are the REJOIN cases, and the two-digit ids are the UPDATED ones. If I apply your logic to my data, it still needs fixing: the rows |15|1737111|1113|2018-09-26|null|null|REJOIN|, |16|1737444|3334|2004-09-16|2019-02-15|null|REJOIN| and |17|1737555|5559|2010-01-01|2026-02-18|null|REJOIN| are wrong; they should be UPDATED but show REJOIN. By the way, this is what I got earlier as well; what these rows show is the null-as-next_start problem.

Could you make a small change? Simply put, if I fix the REJOIN case where next_start is null, then the last record of the UPDATED case comes out wrong, and vice versa.

Please define REJOIN and UPDATED for a user_id, and I will fix it.