
Scala Spark Structured Streaming: recovering from query exceptions


Is it possible to automatically recover from an exception thrown during query execution?

Context: I am developing a Spark application that reads data from a Kafka topic, processes it, and writes the output to S3. However, after running in production for a few days, the Spark application hits occasional network failures against S3, which throw an exception and stop the application. It is also worth mentioning that the application runs on Kubernetes.

From what I have seen so far, these exceptions are minor and simply restarting the application fixes the problem. Can we handle these exceptions and automatically restart the structured streaming query?
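For reference, the kind of job described above looks roughly like the sketch below. This is only an illustration of the setup; the broker address, topic name, S3 paths, and output format are assumptions, not details taken from the original post.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("kafka-to-s3").getOrCreate()

// read the stream from a Kafka topic (placeholder broker and topic)
val input = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092")
  .option("subscribe", "some-topic")
  .load()

// ... processing of the data would happen here ...

// write the result to S3 as Parquet, with a checkpoint location for progress tracking
val query = input.writeStream
  .format("parquet")
  .option("path", "s3a://bucket/view/v1")
  .option("checkpointLocation", "s3a://bucket/checkpoints/view-v1")
  .start()

query.awaitTermination()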

Here is an example of such an exception being thrown:

    Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Job aborted.
    === Streaming Query ===
    Identifier: ...
    Current Committed Offsets: ...
    Current Available Offsets: ...

    Current State: ACTIVE
    Thread State: RUNNABLE

    Logical Plan: ...

        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:297)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
    Caused by: org.apache.spark.SparkException: Job aborted.
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
        at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
        at io.blahblahView$$anonfun$11$$anonfun$apply$2.apply(View.scala:90)
        at io.blahblahView$$anonfun$11$$anonfun$apply$2.apply(View.scala:82)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at io.blahblahView$$anonfun$11.apply(View.scala:82)
        at io.blahblahView$$anonfun$11.apply(View.scala:79)
        at org.apache.spark.sql.execution.streaming.sources.ForeachBatchSink.addBatch(ForeachBatchSink.scala:35)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:537)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:535)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:534)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:281)
        ... 1 more
    Caused by: java.io.FileNotFoundException: No such file or directory: s3a://.../view/v1/_temporary/0
        at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:993)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.listStatus(S3AFileSystem.java:734)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517)
        at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.getAllCommittedTaskPaths(FileOutputCommitter.java:291)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:361)
        at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:334)
        at org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:48)
        at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:166)
        at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:187)
        ... 47 more

What is the simplest way to handle this kind of problem automatically?

No, there is no reliable way to do that. By the way, "no" is also an answer.

  • Logic that checks for exceptions is usually implemented with a try/catch running on the driver.

  • Unexpected failures at the executor level are already handled by the Spark framework itself as standard for Structured Streaming; if the error is unrecoverable, the application/job simply crashes after signalling the error back to the driver, unless you code a try/catch inside the various foreachXXX constructs (see the sketch after this list).

    • That said, with the foreachXXX constructs it is unclear, as far as I can see, whether the micro-batch is recoverable with that approach; quite possibly parts of the micro-batch would be lost. It is hard to test, though.
  • Given that Spark standardly caters for things you cannot hook into, why would it be possible to insert a loop or try/catch into the program source? Likewise, broadcast variables are an issue, although some say they have techniques around that. But that is not in the spirit of the framework.
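As a rough illustration of the try/catch-inside-foreachBatch idea mentioned in the list above, here is a minimal sketch. The source, the S3 path, and the choice to simply log and skip a failed batch are assumptions for illustration; as noted, skipping means that micro-batch may effectively be lost.

import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().getOrCreate()

// stand-in source; in the question this would be the Kafka stream
val events: DataFrame = spark.readStream.format("rate").load()

val query = events.writeStream
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    try {
      // the per-batch write that can fail with a transient S3 error
      batch.write.mode("append").parquet("s3a://bucket/view/v1")
    } catch {
      case e: Exception =>
        // swallowing the error keeps the query alive, but this micro-batch may be lost
        println(s"Batch $batchId failed: ${e.getMessage}")
    }
  }
  .option("checkpointLocation", "s3a://bucket/checkpoints/view-v1")
  .start()

query.awaitTermination()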


So, this is a very good question, since I have wondered about this (in the past) as well.

After spending way too much time trying to find an elegant solution and coming up with nothing, here is what I came up with.

Some may say it is a hack, but it is simple, it works, and it solves a complex problem. I have tested it in production, and it solves the problem of automatically recovering from failures caused by occasional minor exceptions.

I call it a query watchdog. Here is the simplest version, in which the watchdog retries running the query indefinitely:

val writer = df.writeStream...

// watchdog loop: whenever the query dies with an exception, start it again
while (true) {
  val query = writer.start()

  try {
    // blocks until the query stops; throws if it stopped because of an error
    query.awaitTermination()
  } catch {
    case e: StreamingQueryException =>
      println("Streaming Query Exception caught!: " + e)
  }
}
Some may want to replace the while (true) with some kind of counter to limit the number of retries. One could also extend this code to send a notification through Slack or email whenever a retry happens, while others could simply collect the number of retries in Prometheus. A bounded-retry variant along those lines is sketched below.
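For example, a bounded-retry variant of the watchdog, reusing the writer from the snippet above, could look like the following sketch; the maxRetries value and the plain println standing in for a real Slack/email notification or Prometheus counter are assumptions for illustration.

import org.apache.spark.sql.streaming.StreamingQueryException

val maxRetries = 5      // assumed limit, tune as needed
var attempt = 0
var finished = false

while (!finished && attempt < maxRetries) {
  val query = writer.start()
  try {
    query.awaitTermination()
    finished = true     // the query stopped without an error, so stop retrying
  } catch {
    case e: StreamingQueryException =>
      attempt += 1
      // a Slack/email notification or a Prometheus counter increment could go here
      println(s"Streaming query failed (attempt $attempt of $maxRetries): ${e.getMessage}")
  }
}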

Hope this helps.

Cheers

No. There is nothing in the Databricks manuals either, only material about node recovery, worker checkpointing, and so on. If the driver fails, it's over.

What do you mean by "occasional minor exceptions"? Honestly, I don't buy it.

"Occasional minor exceptions" means errors that go away when the query is restarted, network failures such as a transient 404 from S3.

Then why is it not in the Databricks guides?

Good question. I assume they do not want to recommend a workaround in the official documentation.