Apache Flink throws "Partition already finished" exception


We are running Apache Flink 1.9 in Kubernetes. We have several jobs that consume Kafka events and aggregate counts per minute. These jobs had been working fine, but recently they suddenly started throwing many errors:

java.lang.RuntimeException: Partition already finished.
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:110)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollectWithTimestamp(StreamSourceContexts.java:310)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collectWithTimestamp(StreamSourceContexts.java:409)
The code that triggers the error is in a listener that receives events and emits watermarks:

    // We use an underlying API lib to get a source Context from Flink, sorry not to have source code here
    import org.apache.flink.streaming.api.functions.source.SourceFunction
    protected var context: SourceFunction.SourceContext[T] = ...

    validEventsSorted.foreach { event =>
      try {
        context.collectWithTimestamp(event, event.occurredAt.toEpochMilli)
        context.emitWatermark(new Watermark(event.occurredAt.minusSeconds(30).toEpochMilli))
      } catch {
        case e: Throwable =>
          logger.error(
            s"Failed to add to context. Event EID: ${event.nakadiMetaData.eid}." +
              s" Event: $event",
            e
          )
      }
    }
Restarting the Flink JobManager and TaskManagers makes the errors stop, but the problem may reappear later.

As far as I understand and can guess, "Partition already finished" is thrown when an operator tries to deliver events to the next operator (partition) that has already finished, but I don't understand how this can happen.
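One mitigation consistent with this explanation is to stop emitting as soon as the source is cancelled. The sketch below is hypothetical (the `GuardedEmitter` wrapper and `isCancelled` callback are our own names, not from the code above) and only narrows the race window; it does not replace a proper shutdown:

```scala
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.watermark.Watermark

// Hypothetical sketch: skip emission once the source has been cancelled,
// so no records are pushed into a partition that has already finished.
class GuardedEmitter[T](ctx: SourceFunction.SourceContext[T],
                        isCancelled: () => Boolean) {
  def emit(event: T, timestampMillis: Long): Unit = {
    if (!isCancelled()) {
      // Hold the checkpoint lock while emitting, as the SourceFunction
      // contract asks sources to do.
      ctx.getCheckpointLock.synchronized {
        ctx.collectWithTimestamp(event, timestampMillis)
        ctx.emitWatermark(new Watermark(timestampMillis - 30000L))
      }
    }
  }
}
```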

Here is our source code:

import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction

class SourceImpl[T: ClassTag](
    listener: KafkaListener[T]
)
extends RichParallelSourceFunction[T] {

  @volatile private var isCancelled: Boolean = false

  @volatile private var consumerFuture: java.util.concurrent.Future[_] = _

  override def run(ctx: SourceFunction.SourceContext[T]): Unit = {
    while (!isCancelled) {
      try {
        val runnable = KafkaClient
          .stream(subscription)
          .withStreamParameters(streamParameters)
          .runnable(classTag[T].runtimeClass.asInstanceOf[Class[T]], listener)

        val executorService = Executors.newSingleThreadExecutor()
        consumerFuture = executorService.submit(runnable)
        consumerFuture.get() // This call blocks until the runnable completes
      } catch {
        case e: Throwable =>
          logger.warn(s"Unknown error consuming events", e)
      }
    }
  }

  override def cancel(): Unit = {
    isCancelled = true
    consumerFuture.cancel(true)
  }
}

Does anyone know why this happens and how to fix it? Many thanks.

It turns out there was a bug in our SourceImpl. When the JobManager cancels the job, the cancel method is invoked but may fail: the executorService is never shut down, so the runnable keeps running inside the TaskManager, consuming events and emitting watermarks. Because the job has already been marked as cancelled on both the JobManager and the TaskManager, emitting a watermark then triggers the "Partition already finished" exception.

So we fixed it by explicitly shutting down the executorService:

    // Shutdown executorService
    if (executorService != null && !executorService.isShutdown) {
      executorService.shutdownNow()
    }
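As a refinement (our assumption, not part of the original fix), `shutdownNow()` can be paired with `awaitTermination` to verify that the consumer thread actually exited, since a runnable that swallows interrupts would otherwise keep running unnoticed:

```scala
import java.util.concurrent.{Executors, TimeUnit}

val executorService = Executors.newSingleThreadExecutor()
// ... submit the consumer runnable here ...

executorService.shutdownNow() // interrupts the running task
// Wait a bounded time for the consumer thread to exit
if (!executorService.awaitTermination(10, TimeUnit.SECONDS)) {
  System.err.println("Consumer thread did not terminate within 10s")
}
```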
The full code is below:

import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction

class SourceImpl[T: ClassTag](
    listener: KafkaListener[T]
)
extends RichParallelSourceFunction[T] {

  @volatile private var isCancelled: Boolean = false

  @volatile private var consumerFuture: java.util.concurrent.Future[_] = _

  override def run(ctx: SourceFunction.SourceContext[T]): Unit = {

    val executorService = Executors.newSingleThreadExecutor()

    while (!isCancelled) {
      try {
        val runnable = KafkaClient
          .stream(subscription)
          .withStreamParameters(streamParameters)
          .runnable(classTag[T].runtimeClass.asInstanceOf[Class[T]], listener)

        consumerFuture = executorService.submit(runnable)
        consumerFuture.get() // This call blocks until the runnable completes
      } catch {
        case e: Throwable =>
          logger.warn(s"Unknown error consuming events", e)
      }
    }

    // Shutdown executorService so no consumer thread outlives the source
    if (executorService != null && !executorService.isShutdown) {
      executorService.shutdownNow()
    }
  }

  override def cancel(): Unit = {
    isCancelled = true
    // consumerFuture may still be null if run() has not submitted a task yet
    if (consumerFuture != null) {
      consumerFuture.cancel(true)
    }
  }
}

By the way, the reason we use a separate
ExecutorService
is to run the listener in its own thread pool so that it does not interfere with Flink's thread pools. If you think this is not the right approach, please leave a comment here. Thanks.
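If you keep a dedicated pool, one refinement (our assumption, not part of the original code) is to build it from a thread factory that produces named daemon threads, so a leaked consumer thread is easy to spot in a thread dump and cannot keep the TaskManager JVM alive on its own:

```scala
import java.util.concurrent.{Executors, ThreadFactory}
import java.util.concurrent.atomic.AtomicInteger

// Factory producing named daemon threads for the Kafka consumer pool
val threadFactory = new ThreadFactory {
  private val counter = new AtomicInteger(0)
  override def newThread(r: Runnable): Thread = {
    val t = new Thread(r, s"kafka-listener-${counter.incrementAndGet()}")
    t.setDaemon(true)
    t
  }
}

val executorService = Executors.newSingleThreadExecutor(threadFactory)
```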

Could you share with us the complete user code and the logs of a run in which the exception occurred? – TillRohrmann

@TillRohrmann Sorry for my late response. I have updated the question, and I think I may have found the cause, so I posted an answer below. If you have any questions or comments about my answer, please comment there. Many thanks!