Java: How to define the join condition in a stream-static join?

Tags: java, apache-spark, apache-spark-sql, spark-structured-streaming

I am using spark-sql 2.4.1v with Java 1.8, and the Kafka connector spark-sql-kafka-0-10_2.11_2.4.3.

I am trying to join a static DataFrame (i.e. metadata) with a streaming DataFrame, as shown below:

Dataset<Row> streamingDs = // read from Kafka topic
Dataset<Row> staticDf =    // read from Oracle metadata table

// Attempt 1: join condition passed as a String
Dataset<Row> joinDf = streamingDs.as("c").join(staticDf.as("i"),
                      "c.code = i.industry_code"
                      );

// Attempt 2: join condition String plus an explicit join type
Dataset<Row> joinDf = streamingDs.as("c").join(staticDf.as("i"),
                      "c.code = i.industry_code",
                      "inner"
                      );
This produces the following compile error:

The method join(Dataset<?>, String) in the type Dataset<Row> is not applicable for the arguments (Dataset<Row>, String, String)

tl;dr c.code = i.industry_code is taken as the name of a single column to join on (a USING column), not as a join expression.
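For reference, the join overloads on Dataset that matter here look roughly like this in the Java API (paraphrased from the Spark 2.4 docs); note there is no (Dataset, String, String) variant:

Dataset<Row> join(Dataset<?> right, String usingColumn);                 // equi-join USING one shared column name
Dataset<Row> join(Dataset<?> right, Column joinExprs);                   // arbitrary join expression, inner join
Dataset<Row> join(Dataset<?> right, Column joinExprs, String joinType);  // join expression plus explicit join type

Passing "c.code = i.industry_code" as a String therefore hits the first overload, which expects a column name present on both sides, while the three-argument (Dataset, String, String) call does not compile at all.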

Change the code to something like the following:

streamingDs.as("c").join(staticDf.as("i")) // INNER JOIN is the default
  .where("c.code = i.industry_code")

As another approach, the code below re-reads the latest updated dimension data on every micro-batch; keep in mind that in my case the new dimension (country) data had to arrive in a new file:

package com.capone.streaming.BraodcastJoin

import com.capone.streaming.BroadCastStreamJoin.getClass
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.{DataFrame, ForeachWriter, Row, SparkSession}
import org.apache.spark.sql.functions.{broadcast, expr}
import org.apache.spark.sql.types.{StringType, StructField, StructType, TimestampType}

object BroadCastStreamJoin2 {

  def main(args: Array[String]) = {

    @transient lazy val logger: Logger = Logger.getLogger(getClass.getName)

    Logger.getLogger("akka").setLevel(Level.WARN)
    Logger.getLogger("org").setLevel(Level.ERROR)
    Logger.getLogger("com.amazonaws").setLevel(Level.ERROR)
    Logger.getLogger("com.amazon.ws").setLevel(Level.ERROR)
    Logger.getLogger("io.netty").setLevel(Level.ERROR)

    val spark = SparkSession
      .builder()
      .master("local")
      .getOrCreate()

    // Schema of the streaming fact (customer) CSV files
    val schemaUntyped1 = StructType(
      Array(
        StructField("id", StringType),
        StructField("customrid", StringType),
        StructField("customername", StringType),
        StructField("countrycode", StringType),
        StructField("timestamp_column_fin_1", TimestampType)
      ))

    // Schema of the static dimension (country) CSV files
    val schemaUntyped2 = StructType(
      Array(
        StructField("id", StringType),
        StructField("countrycode", StringType),
        StructField("countryname", StringType),
        StructField("timestamp_column_fin_2", TimestampType)
      ))

    import org.apache.spark.sql.streaming.Trigger
    // Streaming fact source: picks up new CSV files as they arrive in the directory
    val factDf1 = spark.readStream
      .schema(schemaUntyped1)
      .option("header", "true")
      //.option("maxFilesPerTrigger", 1)
      .csv("src/main/resources/broadcasttest/fact")

    // Holds the most recently loaded dimension snapshot
    var countrDf: Option[DataFrame] = None

    // Re-read the dimension data from disk so the current micro-batch joins against the latest snapshot
    def readDim() = {
      val dimDf2 = spark.read
        .schema(schemaUntyped2)
        .option("header", "true")
        .csv("src/main/resources/broadcasttest/dimension")

      // Release the previous snapshot, if any, before swapping in the new one
      if (countrDf.isDefined) {
        countrDf.get.unpersist()
      }

      // Rename the dimension columns so they do not clash with the fact columns in the join
      countrDf = Some(
        dimDf2
          .withColumnRenamed("id", "id_2")
          .withColumnRenamed("countrycode", "countrycode_2"))

      countrDf.get.show()
    }

    factDf1.writeStream
      .outputMode("append")
      .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
        batchDF.show(10)
        // Refresh the dimension snapshot for this micro-batch
        readDim()

        // Left-outer join the micro-batch (fact) rows against the refreshed dimension data
        batchDF
          .join(
            countrDf.get,
            expr(
              """
      countrycode_2 = countrycode 
      """
            ),
            "leftOuter"
          )
          .show

      }
      .start()
      .awaitTermination()

  }

}
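
Coming back to the original Java setup (Kafka source joined with an Oracle metadata table), a minimal end-to-end sketch could look like the following. The bootstrap servers, topic name, JDBC URL, table name and credentials are placeholders, and parsing the Kafka value into columns such as code (e.g. with from_json) is omitted:

import static org.apache.spark.sql.functions.expr;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class StreamStaticJoinExample {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .appName("stream-static-join")
        .getOrCreate();

    // Streaming side: raw Kafka records (key/value are binary; parsing into a "code" column is omitted)
    Dataset<Row> streamingDs = spark.readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "host1:9092") // placeholder
        .option("subscribe", "my_topic")                 // placeholder
        .load();

    // Static side: the Oracle metadata table, read once via JDBC
    Dataset<Row> staticDf = spark.read()
        .format("jdbc")
        .option("url", "jdbc:oracle:thin:@//db-host:1521/SERVICE") // placeholder
        .option("dbtable", "industry_metadata")                    // placeholder
        .option("user", "user")                                    // placeholder
        .option("password", "password")                            // placeholder
        .load();

    // Stream-static join: condition as a Column expression, inner join
    Dataset<Row> joinDf = streamingDs.as("c").join(
        staticDf.as("i"),
        expr("c.code = i.industry_code"),
        "inner");

    StreamingQuery query = joinDf.writeStream()
        .format("console")
        .outputMode("append")
        .start();
    query.awaitTermination();
  }
}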