Scala streaming query shows no progress in Spark
I am getting status messages of the following form from a Spark Structured Streaming application:
18/02/12 16:38:54 INFO StreamExecution: Streaming query made progress: {
  "id" : "a6c37f0b-51f4-47c5-a487-8bd269b80142",
  "runId" : "061e41b4-f488-4483-a290-403f1f7eff03",
  "name" : null,
  "timestamp" : "2018-02-12T11:08:54.323Z",
  "numInputRows" : 0,
  "processedRowsPerSecond" : 0.0,
  "durationMs" : {
    "getOffset" : 30,
    "triggerExecution" : 46
  },
  "eventTime" : {
    "watermark" : "1970-01-01T00:00:00.000Z"
  },
  "stateOperators" : [ ],
  "sources" : [ {
    "description" : "FileStreamSource[file:/home/chiralcarbon/IdeaProjects/spark_structured_streaming/args[0]]",
    "startOffset" : null,
    "endOffset" : null,
    "numInputRows" : 0,
    "processedRowsPerSecond" : 0.0
  } ],
  "sink" : {
    "description" : "org.apache.spark.sql.execution.streaming.ConsoleSink@bcc171"
  }
}
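The same progress information shown in these log messages can also be inspected programmatically, which makes it easier to detect a source that is producing no rows. A minimal sketch, assuming `query` is an already-started `StreamingQuery` like the one in the code below:

```scala
// Assumes `query` is a started org.apache.spark.sql.streaming.StreamingQuery.
// lastProgress returns the most recent StreamingQueryProgress,
// or null before the first trigger has completed.
val progress = query.lastProgress
if (progress != null && progress.numInputRows == 0) {
  // The same JSON that appears in the logs is available as a string:
  println(progress.json)
}
```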
Every message reports numInputRows with a value of 0.

The program streams data from Parquet files and writes the output to the console. Here is the code:
def main(args: Array[String]): Unit = {
  val spark: SparkSession = SparkSession.builder
    .master("local")
    .appName("sparkSession")
    .getOrCreate()

  val schema = ..

  val in = spark.readStream
    .schema(schema)
    .parquet("args[0]")

  val query = in.writeStream
    .format("console")
    .outputMode("append")
    .start()

  query.awaitTermination()
}
}
What is the cause, and how can I fix it?

You have a mistake in the readStream call:
val in = spark.readStream
  .schema(schema)
  .parquet("args[0]")
You probably want to read from the directory supplied as the first command-line argument. In that case, pass the argument directly, or use string interpolation:
val in = spark.readStream
  .schema(schema)
  .parquet(args(0))
or, for the last line, if the expression is longer or concatenated with something else:
.parquet(s"${args(0)}")
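The root of the problem is that in Scala, "args[0]" in double quotes is just a seven-character string literal, while args(0) actually indexes the array. A minimal, Spark-free sketch of the difference (the ArgsDemo object and the sample path are made up for illustration):

```scala
object ArgsDemo {
  // args(0) indexes into the array; "args[0]" is only a string literal.
  def firstArg(args: Array[String]): String = args(0)

  def main(args: Array[String]): Unit = {
    val fake = Array("/data/input") // stand-in for real command-line arguments
    println(firstArg(fake)) // prints the argument itself: /data/input
    println("args[0]")      // prints the literal text: args[0]
  }
}
```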
Currently your code tries to read from a directory literally named args[0], which does not exist, so no files are ever read. After the change, the directory is supplied correctly and Spark will start reading the files.

I have a similar problem, but in my case I am reading from Kafka and writing to Parquet; the query progress shows 0 input rows even though the offsets are being committed correctly. Any ideas? I have opened it as a separate question:
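Returning to the accepted fix for the original question: since a wrong path only shows up as an endless stream of numInputRows = 0 messages, it can help to validate the input directory before starting the stream. A sketch using plain java.nio.file (this guard is my own addition, not part of the original answer; the InputCheck object is made up for illustration):

```scala
import java.nio.file.{Files, Paths}

object InputCheck {
  // Verify the input directory exists before wiring up the stream,
  // so a wrong path fails immediately instead of producing an
  // empty stream that reports numInputRows = 0 forever.
  def validatedInputDir(args: Array[String]): String = {
    val dir = Paths.get(args(0))
    require(Files.isDirectory(dir), s"Input directory not found: $dir")
    dir.toString
  }
}
```

The returned string can then be passed to `spark.readStream ... .parquet(...)` in place of `args(0)`.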