Apache Spark SCD-2 using Delta in Databricks


I am trying to build an SCD-2 transformation, but I am not able to get it working with Delta in Databricks.

For example:

// Base table
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StringType
import spark.implicits._

val employeeDf = Seq((1,"John","CT"),
                     (2,"Mathew","MA"),
                     (3,"Peter","CA"),
                     (4,"Joel","NY"))
                    .toDF("ID","NAME","ADDRESS")

val empBaseDf = employeeDf.withColumn("IS_ACTIVE",lit(1))
  .withColumn("EFFECTIVE_DATE",current_date())
  .withColumn("TERMINATION_DATE",lit(null).cast(StringType))

empBaseDf.write.format("delta").mode("overwrite").saveAsTable("empBase")
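The batch table `empBatch` is read later in the code, but its creation is never shown in the question. A plausible definition (an assumption, reverse-engineered from the output table further down: John moves to NH, while Adam and Philip are new employees) would be:

```scala
// Hypothetical batch -- the actual batch used in the question is not shown.
// One address change for an existing employee plus two new employees.
val empBatchDf = Seq((1,"John","NH"),
                     (5,"Adam","NJ"),
                     (6,"Philip","CT"))
                    .toDF("ID","NAME","ADDRESS")

empBatchDf.write.format("delta").mode("overwrite").saveAsTable("empBatch")
```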



I followed the documentation Databricks provides for SCD-2 transformations, but it did not work for me.


Any suggestions would be helpful.

When you insert a new entry for an update received on an employee record, you have to add the predicate `emp.IS_ACTIVE = true` so that the incoming update is validated only against the employee's latest (active) entry in the table. This avoids the duplicates.

// Rows to INSERT new addresses of existing customers
val newAddressesToInsert = empBatch
  .as("batch")
  .join(empbaseTable.toDF.as("emp"), "ID")
  .where("emp.IS_ACTIVE = true and batch.ADDRESS <> emp.ADDRESS").selectExpr("batch.*")
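With the `IS_ACTIVE` filter in place, re-running the pipeline with the same batch yields an empty `newAddressesToInsert`, so the merge becomes idempotent. A quick sanity check after a second run (a sketch, assuming the schema above with `IS_ACTIVE` stored as 0/1):

```scala
// Sketch: verify idempotency after re-running the corrected merge.
// Every employee should have at most one active row.
val dupActive = spark.table("empBase")
  .where("IS_ACTIVE = 1")
  .groupBy("ID")
  .count()
  .where("`count` > 1")

assert(dupActive.count() == 0, "found employees with more than one active row")
```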
import io.delta.tables._
val empbaseTable: DeltaTable = DeltaTable.forName("empBase")
val empBatch = spark.table("empBatch")

// Rows to INSERT new addresses of existing customers
val newAddressesToInsert = empBatch
  .as("batch")
  .join(empbaseTable.toDF.as("emp"), "ID")
  .where("batch.ADDRESS <> emp.ADDRESS").selectExpr("batch.*")

newAddressesToInsert.show()
val processRec = newAddressesToInsert
  .selectExpr("NULL as mergeKey", "*")
  .union(empBatch.selectExpr("ID as mergeKey", "*")  )                 
processRec.show()
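The union above is the standard Delta Lake pattern of feeding the merge two copies of each changed row: a copy with a NULL `mergeKey` that can never satisfy `base.ID = mergeKey` (so it always falls into `whenNotMatched` and is inserted as the new version), and a copy keyed by `ID` that matches the current row (so it falls into `whenMatched` and closes the old version). Assuming a batch of John-NH plus the two new employees shown in the output table, `processRec` would look roughly like:

```scala
// Illustration only (assumed batch contents, not from the question):
//
//   mergeKey  ID  NAME    ADDRESS
//   null      1   John    NH       <- never matches base.ID = mergeKey:
//                                     whenNotMatched inserts the new version
//   1         1   John    NH       <- matches the existing row:
//                                     whenMatched closes the old CT row
//   5         5   Adam    NJ       <- no base row with ID 5: plain insert
//   6         6   Philip  CT       <- no base row with ID 6: plain insert
```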
empbaseTable
  .as("base")
  .merge(processRec.as("batch1"),"base.ID = mergeKey")
  .whenMatched("base.IS_ACTIVE = true AND base.address <> batch1.address")
  .updateExpr(Map(                                      
    "IS_ACTIVE" -> "false",
    "TERMINATION_DATE" -> "current_date()"))
  .whenNotMatched()  
  .insertExpr(Map("ID" -> "batch1.ID",
              "NAME" -> "batch1.NAME",
              "ADDRESS" -> "batch1.ADDRESS",
              "IS_ACTIVE" -> "true",
              "EFFECTIVE_DATE" -> "current_date()",
              "TERMINATION_DATE" -> "null"))
  .execute()

// Running the code above multiple times inserts duplicate records. I need to prevent duplicate entries in the Delta table.
ID  NAME    ADDRESS IS_ACTIVE   EFFECTIVE_DATE  TERMINATION_DATE
1   John    NH  1   2020-06-25  null
1   John    CT  0   2020-06-25  2020-06-25
1   John    NH  1   2020-06-25  null
2   Mathew  MA  1   2020-06-25  null
3   Peter   CA  1   2020-06-25  null
4   Joel    NY  1   2020-06-25  null
5   Adam    NJ  1   2020-06-25  null
6   Philip  CT  1   2020-06-25  null
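The duplicated `John NH` rows come from the join at the top of the pipeline. On the second run, the base table already holds both the active NH row and the closed CT row; because the join does not filter on `IS_ACTIVE`, the batch's NH record still pairs with the historical CT row (its `ADDRESS` differs), producing a fresh NULL-mergeKey insert row on every run. A minimal model of the two joins, with plain Scala collections standing in for the DataFrames (assumed data, per the output above):

```scala
// Second-run state of empBase for employee 1, per the output table above.
case class Emp(id: Int, address: String, isActive: Boolean)

val base  = Seq(Emp(1, "NH", isActive = true),   // current version
                Emp(1, "CT", isActive = false))  // closed historical version
val batch = Seq((1, "NH"))                       // same batch, run again

// Question's join: no IS_ACTIVE filter -- the closed CT row still matches.
val dup = for { (id, a) <- batch; e <- base if e.id == id && e.address != a }
          yield (id, a)
// dup == List((1,"NH")): John NH gets re-inserted on every run.

// Answer's join: only the active row is compared -- nothing matches.
val fixed = for { (id, a) <- batch
                  e <- base if e.id == id && e.isActive && e.address != a }
            yield (id, a)
// fixed == List(): the pipeline is now idempotent.
```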