Scala: how to group by columns across two dataframes and then apply an aggregated difference function between rows?

I have two dataframes as follows:

+--------+----------+------+-------------------+
|readerId|locationId|userId|          timestamp|
+--------+----------+------+-------------------+
|      R2|        l1|    u2|2018-04-12 05:00:00|
|      R1|        l1|    u1|2018-04-12 05:00:00|
|      R3|        l3|    u3|2018-04-12 05:00:00|
+--------+----------+------+-------------------+

+--------+----------+------+-------------------+
|readerId|locationId|userId|          timestamp|
+--------+----------+------+-------------------+
|      R1|        l1|    u1|2018-04-12 07:00:00|
|      R2|        l1|    u2|2018-04-12 10:00:00|
|      R3|        l3|    u3|2018-04-12 07:00:00|
+--------+----------+------+-------------------+
I want to group by readerId and locationId and then find the timestamp difference between the grouped values. For example: for readerId R1 at locationId l1, the timestamp difference is 2 hours.

I implemented it by joining the two dataframes and using withColumn:

import org.apache.spark.sql.functions._
import spark.implicits._ // for the $"..." column syntax

// Alias both sides so the qualified column names below resolve
val joinedDf = asKuduDf.as("kdf").join(
  asOutToInDf.as("outInDf"),
  col("kdf.locationId") <=> col("outInDf.locationId") &&
    col("kdf.readerId") <=> col("outInDf.readerId"),
  "inner")

// Time-logged-in calculation: difference between the two timestamps, in minutes
val timestampDf = joinedDf.withColumn(
  "totalTime",
  ((unix_timestamp($"outInDf.timestamp") -
    unix_timestamp($"kdf.timestamp")) / 60).cast("long")
)
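
For the sample data above this works out to 120, 300 and 120 minutes per pair; a quick check (reusing the kdf alias from the snippet above):

timestampDf.select($"kdf.readerId", $"kdf.locationId", $"totalTime").show()
// +--------+----------+---------+
// |readerId|locationId|totalTime|
// +--------+----------+---------+
// |      R2|        l1|      300|
// |      R1|        l1|      120|
// |      R3|        l3|      120|
// +--------+----------+---------+
// (row order may vary)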

The problem with the above approach, however, is that there is no built-in "difference" aggregate function.

join is the correct solution here. In general, groupBy with an aggregation is not an option, particularly when (readerId, locationId) is not a unique identifier.

You could do:

unionDf
  .groupBy($"readerId", $"locationId")
  .agg(((max($"timestamp").cast("long") - min($"timestamp").cast("long")) / 60).alias("diff"))

but this is a highly artificial solution with no advantage over a join, and it is also sensitive to some subtle data issues.

You can merge the two dataframes with union, and then compute the difference however you need inside the aggregation:

val mergedDF = asKuduDf.union(asOutToInDf)
  .groupBy($"readerId", $"locationId")
  .agg(collect_list($"timestamp").as("time"))

mergedDF.withColumn("dif",
  abs(unix_timestamp($"time"(0)) - unix_timestamp($"time"(1))) / 60
).show(false)
Output:

+--------+----------+------------------------------------------+-----+
|readerId|locationId|time                                      |dif  |
+--------+----------+------------------------------------------+-----+
|R3      |l3        |[2018-04-12 05:00:00, 2018-04-12 07:00:00]|120.0|
|R2      |l1        |[2018-04-12 05:00:00, 2018-04-12 10:00:00]|300.0|
|R1      |l1        |[2018-04-12 05:00:00, 2018-04-12 07:00:00]|120.0|
+--------+----------+------------------------------------------+-----+
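
One caveat: collect_list gives no ordering guarantee after a shuffle, so $"time"(0) is not necessarily the earlier reading. A sketch using the built-in sort_array to make the indexing deterministic (which also makes the abs unnecessary):

val sortedDF = asKuduDf.union(asOutToInDf)
  .groupBy($"readerId", $"locationId")
  .agg(sort_array(collect_list($"timestamp")).as("time"))

// time(0) is now guaranteed to be the earlier timestamp
sortedDF.withColumn("dif",
  (unix_timestamp($"time"(1)) - unix_timestamp($"time"(0))) / 60
)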

Hope this helps.

While you are right that join is the correct solution, I think you have to assume that the (readerId, locationId) pairs are unique; otherwise the whole problem is ill-posed whether you use an aggregation or a join: what would the difference even mean when there are multiple values on each side?
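
A hypothetical illustration of that ambiguity (the rows are made up): with two readings for (R1, l1) on each side, an inner join yields all four pairings, and no single one of them is "the" difference.

// Hypothetical duplicates: two readings for (R1, l1) on each side
val left = Seq(
  ("R1", "l1", "2018-04-12 05:00:00"),
  ("R1", "l1", "2018-04-12 20:00:00")
).toDF("readerId", "locationId", "timestamp")

val right = Seq(
  ("R1", "l1", "2018-04-12 07:00:00"),
  ("R1", "l1", "2018-04-12 21:00:00")
).toDF("readerId", "locationId", "timestamp")

// The inner join on the pair produces 4 combinations, not 2 matched visits
left.join(right, Seq("readerId", "locationId")).count() // 4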