Scala Spark - join on first row matching condition


I'm looking for a way to join the following two Spark datasets:

# city_visits:
person_id         city                timestamp
-----------------------------------------------
        1        Paris      2017-01-01 00:00:00
        1    Amsterdam      2017-01-03 00:00:00
        1     Brussels      2017-01-04 00:00:00
        1       London      2017-01-06 00:00:00
        2       Berlin      2017-01-01 00:00:00
        2     Brussels      2017-01-02 00:00:00
        2       Berlin      2017-01-06 00:00:00
        2      Hamburg      2017-01-07 00:00:00

# ice_cream_events:
person_id      flavour                timestamp
-----------------------------------------------
        1      Vanilla      2017-01-02 00:12:00
        1    Chocolate      2017-01-05 00:18:00
        2   Strawberry      2017-01-03 00:09:00
        2      Caramel      2017-01-05 00:15:00
So, for each row in city_visits, the row from ice_cream_events with the same person_id and the next timestamp value gets joined on, producing this output:

person_id       city            timestamp  ic_flavour          ic_timestamp
---------------------------------------------------------------------------
        1      Paris  2017-01-01 00:00:00     Vanilla   2017-01-02 00:12:00
        1  Amsterdam  2017-01-03 00:00:00   Chocolate   2017-01-05 00:18:00
        1   Brussels  2017-01-04 00:00:00   Chocolate   2017-01-05 00:18:00
        1     London  2017-01-06 00:00:00        null                  null
        2     Berlin  2017-01-01 00:00:00  Strawberry   2017-01-03 00:09:00
        2   Brussels  2017-01-02 00:00:00  Strawberry   2017-01-03 00:09:00
        2     Berlin  2017-01-06 00:00:00        null                  null
        2    Hamburg  2017-01-07 00:00:00        null                  null
The closest solution I've got so far is the following, but it obviously joins every row in ice_cream_events that matches the conditions, not just the first one:

val cv = city_visits.orderBy("person_id", "timestamp")
val ic = ice_cream_events.orderBy("person_id", "timestamp")
val result = cv.join(ic, ic("person_id") === cv("person_id")
                         && ic("timestamp") > cv("timestamp"))

Is there a (preferably efficient) way to specify that the join is wanted only on the first matching ice_cream_events row, rather than on all of them?

Request: please include the sc.parallelize code in the question. It makes this easier to answer.

val city_visits = sc.parallelize(Seq(
    (1, "Paris",     "2017-01-01 00:00:00"),
    (1, "Amsterdam", "2017-01-03 00:00:00"),
    (1, "Brussels",  "2017-01-04 00:00:00"),
    (1, "London",    "2017-01-06 00:00:00"),
    (2, "Berlin",    "2017-01-01 00:00:00"),
    (2, "Brussels",  "2017-01-02 00:00:00"),
    (2, "Berlin",    "2017-01-06 00:00:00"),
    (2, "Hamburg",   "2017-01-07 00:00:00")
  )).toDF("person_id", "city", "timestamp")

val ice_cream_events = sc.parallelize(Seq(
    (1, "Vanilla",    "2017-01-02 00:12:00"),
    (1, "Chocolate",  "2017-01-05 00:18:00"),
    (2, "Strawberry", "2017-01-03 00:09:00"),
    (2, "Caramel",    "2017-01-05 00:15:00")
  )).toDF("person_id", "flavour", "timestamp")
Solution 1: As suggested in the comments, you can do the join first, which will create all possible combinations of rows:

val joinedRes = city_visits.as("C").
    join(ice_cream_events.as("I")
      , joinType = "LEFT_OUTER"
      , joinExprs =
        $"C.person_id" === $"I.person_id" &&
        $"C.timestamp"  <  $"I.timestamp"
    ).select($"C.person_id", $"C.city", $"C.timestamp", $"I.flavour".as("ic_flavour"), $"I.timestamp".as("ic_timestamp"))
joinedRes.orderBy($"person_id", $"timestamp").show

The first matching row per visit can then be picked by grouping on the city_visits columns and taking the first event:

import org.apache.spark.sql.functions.first

val firstMatchRes = joinedRes.
    groupBy($"person_id", $"city", $"timestamp").
    agg(first($"ic_flavour").as("ic_flavour"), first($"ic_timestamp").as("ic_timestamp"))

Now comes the trickier part, as I found out myself: the join above creates a huge amount of data while it runs, and Spark has to wait until the join has finished before it can run the groupBy, which caused memory problems.
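For comparison, and not from the original answer: a minimal sketch that picks the same first match with a window function instead of the groupBy (the names byVisit and firstMatchWindowed are made up here). It still has to materialize the full join, so it shares the memory problem described above:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Rank each visit's candidate events by time; rn = 1 is the earliest match.
// Visits without any match keep their single all-null row (nulls sort last).
val byVisit = Window.
    partitionBy($"person_id", $"city", $"timestamp").
    orderBy($"ic_timestamp".asc_nulls_last)

val firstMatchWindowed = joinedRes.
    withColumn("rn", row_number().over(byVisit)).
    filter($"rn" === 1).
    drop("rn")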

Solution 2 (probabilistic approach): use a stateful join. For that we maintain state within each executor, as local state in a Bloom filter, and emit only one row per key:

import org.apache.spark.sql.functions._

// One Bloom filter per executor, sized for the number of visits.
var bloomFilter = breeze.util.BloomFilter.optimallySized[String](city_visits.count(), falsePositiveRate = 0.0000001)

val isFirstOfItsName = udf((uniqueKey: String, joinExprs: Boolean) =>
  if (joinExprs) {
    // Only update the Bloom filter if all other join expressions evaluated to true.
    // The evaluation order of join clauses is not guaranteed, so we enforce it here.
    val res = bloomFilter.contains(uniqueKey)
    bloomFilter += uniqueKey
    !res
  } else false)

val joinedRes = city_visits.as("C").
    join(ice_cream_events.as("I")
      , joinType = "LEFT_OUTER"
      , joinExprs = isFirstOfItsName(
          concat($"C.person_id", $"C.city", $"C.timestamp"), // Unique key to identify first of its kind.
          $"C.person_id" === $"I.person_id" && $"C.timestamp"  <  $"I.timestamp")// All the other join conditions here.
    ).select($"C.person_id", $"C.city", $"C.timestamp", $"I.flavour".as("ic_flavour"), $"I.timestamp".as("ic_timestamp"))
joinedRes.orderBy($"person_id", $"timestamp").show

Joining like this is probably the best you can do out of the box. If ice_cream_events is large, but still small enough to fit in memory, you could create a UDF and search over an optimized structure (a binary tree, or binary search over a sorted list), but I wouldn't bother unless your performance degrades significantly.
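To illustrate that last idea, a minimal sketch (not from the answer; eventsByPerson, firstAfter, and firstEventUdf are names made up here): collect ice_cream_events to the driver, broadcast it as per-person arrays sorted by timestamp, and binary-search for the first event after each visit. This relies on the timestamps being fixed-width "yyyy-MM-dd HH:mm:ss" strings, so that lexicographic order matches chronological order:

import org.apache.spark.sql.functions.udf

// (timestamp, flavour) pairs per person, sorted by timestamp.
val eventsByPerson: Map[Int, Array[(String, String)]] =
  ice_cream_events.collect().
    map(r => (r.getInt(0), (r.getString(2), r.getString(1)))).
    groupBy(_._1).
    map { case (pid, rows) => pid -> rows.map(_._2).sortBy(_._1) }

val eventsBc = sc.broadcast(eventsByPerson)

// Index of the first event strictly after ts (upper-bound binary search).
def firstAfter(events: Array[(String, String)], ts: String): Int = {
  var lo = 0
  var hi = events.length
  while (lo < hi) {
    val mid = (lo + hi) / 2
    if (events(mid)._1 <= ts) lo = mid + 1 else hi = mid
  }
  lo
}

val firstEventUdf = udf { (personId: Int, ts: String) =>
  eventsBc.value.get(personId).flatMap { ev =>
    val i = firstAfter(ev, ts)
    if (i < ev.length) Some(ev(i)) else None // (ic_timestamp, ic_flavour)
  }
}

val resultInMem = city_visits.
    withColumn("ic", firstEventUdf($"person_id", $"timestamp")).
    select($"person_id", $"city", $"timestamp",
           $"ic._2".as("ic_flavour"), $"ic._1".as("ic_timestamp"))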
@zero323 Thanks, that helps. Though I was hoping for something like SQL's JOIN LATERAL, Spark doesn't seem to have an equivalent.