Python PySpark Kafka Structured Streaming empty output for groupBy aggregation

I am trying to get a Structured Streaming aggregation/groupBy working on Kafka data. I am running (Py)Spark 2.4.6, but I get empty output no matter which output mode I use.

The Python pseudocode is:

from pyspark.sql.functions import explode, split

tmpDF = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", broker)
    .option("subscribe", topic)
    .option('includeTimestamp', 'true')
    .option("startingOffsets", "latest")
    .load()
    .selectExpr("CAST(value AS STRING)"))

raw_data = tmpDF.select(
        # explode turns each item in an array into a separate row
        explode(
            split(tmpDF.value, '\r\n')
        ).alias('device')
    )
rawDF = raw_data.na.drop()
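# orig_time is assumed to be parsed out of each device record earlier;
# that parsing step is omitted from this pseudocode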
rawDF = rawDF.withColumn("epoch", rawDF.orig_time.cast("long"))

df_result = (rawDF.groupBy("epoch").count())
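# a streaming aggregation without a watermark only supports the
# complete and update output modes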
query = df_result \
        .writeStream.outputMode("complete") \
        .format('console') \
        .option('truncate', 'false') \
        .start()
query.awaitTermination()  # block so the stream keeps running and printing batches
I get the following output:

-------------------------------------------
Batch: 0
-------------------------------------------
+-----+-----+
|epoch|count|
+-----+-----+
+-----+-----+

-------------------------------------------
Batch: 1
-------------------------------------------
+-----+-----+
|epoch|count|
+-----+-----+
+-----+-----+

-------------------------------------------
Batch: 2
-------------------------------------------
+-----+-----+
|epoch|count|
+-----+-----+
+-----+-----+
Not sure what I'm missing here.
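For reference, a minimal sanity check (a hypothetical debugging snippet, reusing the same broker and topic variables from above) is to print the raw stream before any parsing or aggregation. startingOffsets is set to earliest here so already-published messages replay even if nothing new is produced while testing:

debugDF = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", broker)
    .option("subscribe", topic)
    .option("startingOffsets", "earliest")
    .load()
    .selectExpr("CAST(value AS STRING)"))  # raw message payload as a string

debug_query = (debugDF.writeStream
    .outputMode("append")  # no aggregation, so append mode is fine
    .format("console")
    .option("truncate", "false")
    .start())
debug_query.awaitTermination()

If this prints rows while the aggregated query stays empty, the problem is somewhere in the parsing/cast steps rather than in the Kafka connection itself.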