Apache Spark: writing multiple streams sequentially in Spark Structured Streaming

I am consuming data from Kafka with Spark Structured Streaming and trying to write it to three different sinks. I want the streams to execute sequentially, because the logic in stream2 (query2, inside its writer) depends on stream1 (query1). What happens instead is that query2 executes before query1, and my logic breaks. (A possible workaround I'm considering is sketched after the code.)

import org.apache.spark.sql.functions.{max, min}
import org.apache.spark.sql.streaming.Trigger

// Read the assigned Kafka partition as a streaming DataFrame
val inputDf = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", brokers)
  .option("assign", "{\"" + topic + "\":[0]}")
  .load()

// Stream 1: track the min/max offset seen in each trigger
val df1 = inputDf.selectExpr(
  "CAST(partition AS INT)",
  "CAST(offset AS INT)",
  "CAST(timestamp AS STRING)")

val query1 = df1.agg(min("offset"), max("offset"))
  .writeStream
  .foreach(writer)
  .outputMode("complete")
  .trigger(Trigger.ProcessingTime("2 minutes"))
  .option("checkpointLocation", checkpoint_loc1)
  .start()


// `result` is a streaming DataFrame derived from some processing over `inputDf`

// Stream 2: write the distinct event dates via writer1
val distDates = result.select(result("eventdate")).distinct

val query2 = distDates.writeStream
  .foreach(writer1)
  .trigger(Trigger.ProcessingTime("2 minutes"))
  .option("checkpointLocation", checkpoint_loc2)
  .start()


// Stream 3: persist the full result as date-partitioned ORC files
val query3 = result.writeStream
  .outputMode("append")
  .format("orc")
  .partitionBy("eventdate")
  .option("path", "/warehouse/test_duplicate/download/data1")
  .option("checkpointLocation", checkpoint_loc)
  .option("maxRecordsPerFile", 999999999)
  .trigger(Trigger.ProcessingTime("2 minutes"))
  .start()

// Block until any of the three queries terminates
spark.streams.awaitAnyTermination()
result.checkpoint() // only reached after a query terminates; checkpoint() is not supported on a streaming DataFrame
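
Each start() above launches an independent StreamingQuery, and Spark makes no ordering guarantee between separate queries, which is why query2 can fire before query1. One workaround I am considering is to drive all three writes from a single query with foreachBatch (available since Spark 2.4), so the sinks run in order within each micro-batch. This is only a sketch reusing the names above; saveOffsets and saveDates are hypothetical stand-ins for the side effects performed by writer and writer1, and it assumes `result` still carries the columns each write needs:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{max, min}
import org.apache.spark.sql.streaming.Trigger

// One query, three sinks: the writes below run sequentially for every micro-batch
val orderedQuery = result.writeStream
  .trigger(Trigger.ProcessingTime("2 minutes"))
  .option("checkpointLocation", checkpoint_loc)
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    batch.persist() // the batch is reused by all three writes

    // 1) the offset bookkeeping that `writer` did (hypothetical helper,
    //    assuming the Kafka `offset` column is still present)
    // saveOffsets(batch.agg(min("offset"), max("offset")).collect())

    // 2) the distinct dates that `writer1` wrote (hypothetical helper)
    // saveDates(batch.select("eventdate").distinct.collect())

    // 3) the ORC output, same path as query3 above
    batch.write
      .mode("append")
      .partitionBy("eventdate")
      .option("maxRecordsPerFile", 999999999)
      .orc("/warehouse/test_duplicate/download/data1")

    batch.unpersist()
  }
  .start()

orderedQuery.awaitTermination()

Note that Spark documents foreachBatch as providing at-least-once guarantees, so the helpers would need to be idempotent (the batchId can be used for deduplication).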

You might want cricket_007 to weigh in on this.