Scala: How do I perform an UPSERT operation in Apache Spark?


I am trying to use Apache Spark to update an old DataFrame and insert new records into it, based on the unique column "ID".

To update the DataFrame, you can do a "left anti" join on the unique columns and then union the result with the DataFrame that contains the new records:

import org.apache.spark.sql.Dataset
import org.apache.spark.sql.functions.{col, lit}

// Keep the old rows that have no match in newDS (left anti join on the key
// columns), then append the new rows aligned to oldDS's schema.
def refreshUnion(oldDS: Dataset[_], newDS: Dataset[_], usingColumns: Seq[String]): Dataset[_] = {
    val filteredNewDS = selectAndCastColumns(newDS, oldDS)
    oldDS.join(
      filteredNewDS,
      usingColumns,
      "left_anti")
      .select(oldDS.columns.map(columnName => col(columnName)): _*)
      .union(filteredNewDS.toDF)
  }

  // Reorder and cast ds's columns to match the reference schema; columns
  // missing from ds are filled with typed nulls.
  def selectAndCastColumns(ds: Dataset[_], refDS: Dataset[_]): Dataset[_] = {
    val columns = ds.columns.toSet
    ds.select(refDS.columns.map(c => {
      if (!columns.contains(c)) {
        lit(null).cast(refDS.schema(c).dataType) as c
      } else {
        ds(c).cast(refDS.schema(c).dataType) as c
      }
    }): _*)
  }

val df = refreshUnion(oldDS, newDS, Seq("ID"))
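
For illustration, a minimal sketch of what that call produces with made-up data (the column names and values below are assumptions, not taken from the question):

// Hypothetical input frames, assuming a SparkSession `spark` is in scope
import spark.implicits._

val oldExample = Seq((1, "a", 10), (2, "b", 20)).toDF("ID", "name", "value")
val newExample = Seq((2, "b2", 25), (3, "c", 30)).toDF("ID", "name", "value")

refreshUnion(oldExample, newExample, Seq("ID")).show()
// ID 1 is kept from the old frame, ID 2 is replaced by the new row,
// and ID 3 is inserted (row order may vary):
// +---+----+-----+
// | ID|name|value|
// +---+----+-----+
// |  1|   a|   10|
// |  2|  b2|   25|
// |  3|   c|   30|
// +---+----+-----+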

Spark DataFrames are immutable structures, so you cannot do any in-place update based on the ID.

The way to "update" a DataFrame is to merge the older DataFrame with the newer one and save the merged DataFrame on HDFS. To update rows for existing IDs, you need a deduplication key (a timestamp, for example).

I am adding sample code for this in Scala. You need to call the merge function with the uniqueId and timestamp column names. The timestamp column should be of type Long.

import org.apache.spark.sql.{DataFrame, Dataset}

// Row shape used during deduplication: the unique id plus a Long timestamp.
case class DedupableDF(unique_id: String, ts: Long)

// Union the snapshot with the delta, then keep only the latest row per unique id.
def merge(snapshot: DataFrame)(
      delta: DataFrame)(uniqueId: String, timeStampStr: String): DataFrame = {
    val mergedDf = snapshot.union(delta)
    dedupeData(mergedDf)(uniqueId, timeStampStr)
  }

// Deduplicate on uniqueId, keeping for every id the row with the largest timestamp.
def dedupeData(dataFrameToDedupe: DataFrame)(
      uniqueId: String,
      timeStampStr: String): DataFrame = {
    import sqlContext.implicits._

    // Reduce the (id, timestamp) pairs down to one (id, max timestamp) pair per id.
    def removeDuplicates(
        duplicatedDataFrame: DataFrame): Dataset[DedupableDF] = {
      val dedupableDF = duplicatedDataFrame.map(a =>
        DedupableDF(a(0).asInstanceOf[String], a(1).asInstanceOf[Long]))
      val mappedPairRdd =
        dedupableDF.map(row => (row.unique_id, (row.unique_id, row.ts))).rdd
      val reduceByKeyRDD = mappedPairRdd
        .reduceByKey((row1, row2) => {
          if (row1._2 > row2._2) {
            row1
          } else {
            row2
          }
        })
        .values
      reduceByKeyRDD.toDF.map(a =>
        DedupableDF(a(0).asInstanceOf[String], a(1).asInstanceOf[Long]))
    }

    /** get distinct unique_id, timestamp combinations **/
    val filteredData =
      dataFrameToDedupe.select(uniqueId, timeStampStr).distinct

    val dedupedData = removeDuplicates(filteredData)

    dataFrameToDedupe.createOrReplaceTempView("duplicatedDataFrame")
    dedupedData.createOrReplaceTempView("dedupedDataFrame")

    // Join back to the full rows so only the latest (id, timestamp) combinations survive.
    val dedupedDataFrame =
      sqlContext.sql(s""" select distinct duplicatedDataFrame.*
                  from duplicatedDataFrame
                  join dedupedDataFrame on
                  (duplicatedDataFrame.${uniqueId} = dedupedDataFrame.unique_id
                  and duplicatedDataFrame.${timeStampStr} = dedupedDataFrame.ts)""")
    dedupedDataFrame
  }


Just trying to understand what the selectAndCastColumns function does. It would be great if you could walk through it.

selectAndCastColumns reorders and casts the columns of the new Dataset to match refDS, and fills any columns it is missing with nulls.
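
As a small illustration of that behaviour (the frames below are made up, not from the original post): refDS drives the column order and types, and any column that ds lacks comes back as a typed null.

// Hypothetical frames, assuming spark.implicits._ is imported
val refExample = Seq((1, "a", 1.5)).toDF("ID", "name", "value")
val dsExample  = Seq(("b", "2")).toDF("name", "ID")   // different order, ID as String, no "value"

selectAndCastColumns(dsExample, refExample).printSchema()
// root
//  |-- ID: integer (nullable = true)     <- reordered and cast from String to Int
//  |-- name: string (nullable = true)
//  |-- value: double (nullable = true)   <- missing in ds, filled with null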