Apache Flink: adding rebalance to a stream causes the job to fail when the StreamExecutionEnvironment is set with TimeCharacteristic.IngestionTime


I am trying to run a streaming job that consumes messages from Kafka, transforms them, and sinks them to Cassandra.

The current snippet fails:

val env: StreamExecutionEnvironment = getExecutionEnv("dev")
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime)
...

val source = env.addSource(kafkaConsumer)
  .uid("kafkaSource")
  .rebalance

val transformedObjects = source.process(new EnrichEventWithIngestionTimestamp)
  .setParallelism(dataSinkParallelism)
sinker.apply(transformedObjects, dataSinkParallelism)


class EnrichEventWithIngestionTimestamp extends ProcessFunction[RawData, TransforemedObjects] {
  override def processElement(rawData: RawData,
                              context: ProcessFunction[RawData, TransforemedObjects]#Context,
                              collector: Collector[TransforemedObjects]): Unit = {
    val currentTimestamp = context.timerService().currentProcessingTime()
    context.timerService().registerProcessingTimeTimer(currentTimestamp)
    collector.collect(TransforemedObjects.fromRawData(rawData, currentTimestamp))
  }
}
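With IngestionTime, the source itself stamps each record with the wall-clock time at which it entered the pipeline, so downstream operators never need to assign timestamps themselves. A plain-Scala sketch of that idea (no Flink dependencies; `StampedRecord` and `IngestionTimeSource` are illustrative names, not Flink classes):

```scala
// Plain-Scala sketch of what IngestionTime means: the source wraps each
// incoming element with the current processing time, mimicking what
// Flink's source context does automatically in ingestion-time mode.
final case class StampedRecord[T](value: T, ingestionTimestamp: Long)

class IngestionTimeSource(clock: () => Long) {
  // Stamp an element with the wall-clock time at which it entered the job.
  def emit[T](element: T): StampedRecord[T] =
    StampedRecord(element, clock())
}

object IngestionTimeDemo extends App {
  var now = 1000L
  val source = new IngestionTimeSource(() => now)
  val a = source.emit("event-1")
  now += 500L
  val b = source.emit("event-2")
  // Each record carries the time it was ingested, not an event-time field.
  println(s"${a.ingestionTimestamp}, ${b.ingestionTimestamp}")
}
```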
However, if rebalance is commented out, or the job is changed to use TimeCharacteristic.EventTime with watermark assignment, as in the snippet below, then it works:

val env: StreamExecutionEnvironment = getExecutionEnv("dev")
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
...

val source = env.addSource(kafkaConsumer)
  .uid("kafkaSource")
  .rebalance
  .assignTimestampsAndWatermarks(new BoundedOutOfOrdernessRawDataTimestampExtractor[RawData](Time.seconds(1)))

val transformedObjects = source.map(rawData => TransforemedObjects.fromRawData(rawData))
  .setParallelism(dataSinkParallelism)
sinker.apply(transformedObjects, dataSinkParallelism)
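The `BoundedOutOfOrdernessRawDataTimestampExtractor` above is a user-defined class that is not shown; its name suggests it follows Flink's `BoundedOutOfOrdernessTimestampExtractor`. The core logic of that pattern, sketched in plain Scala without Flink dependencies (class and method names here are illustrative):

```scala
// Sketch of bounded-out-of-orderness watermarking: the watermark trails
// the highest event timestamp seen so far by a fixed allowed lateness.
class BoundedOutOfOrdernessWatermarker(maxOutOfOrdernessMillis: Long) {
  private var maxSeenTimestamp = Long.MinValue

  // Record the element's timestamp, tracking the highest one seen so far.
  def extractTimestamp(eventTimestamp: Long): Long = {
    if (eventTimestamp > maxSeenTimestamp) maxSeenTimestamp = eventTimestamp
    eventTimestamp
  }

  // "No event with a timestamp at or below this value should still arrive."
  def currentWatermark: Long = maxSeenTimestamp - maxOutOfOrdernessMillis
}

object WatermarkDemo extends App {
  val wm = new BoundedOutOfOrdernessWatermarker(maxOutOfOrdernessMillis = 1000L)
  // 6500 arrives after 7000 (out of order), but within the 1s bound.
  Seq(5000L, 7000L, 6500L).foreach(wm.extractTimestamp)
  println(wm.currentWatermark) // 7000 - 1000 = 6000
}
```

This is why a 1-second bound (`Time.seconds(1)`) tolerates events that arrive up to one second later than the newest event already seen.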

The stack trace is:

java.lang.Exception: java.lang.RuntimeException: 1
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.checkThrowSourceExecutionException(SourceStreamTask.java:217)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.processInput(SourceStreamTask.java:133)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:301)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:406)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: 1
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:110)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:89)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:45)
    at org.apache.flink.streaming.api.collector.selector.DirectedOutput.collect(DirectedOutput.java:143)
    at org.apache.flink.streaming.api.collector.selector.DirectedOutput.collect(DirectedOutput.java:45)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:727)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:705)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$AutomaticWatermarkContext.processAndCollect(StreamSourceContexts.java:176)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$AutomaticWatermarkContext.processAndCollectWithTimestamp(StreamSourceContexts.java:194)
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collectWithTimestamp(StreamSourceContexts.java:409)
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
    at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:91)
    at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:156)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:715)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:203)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.getBufferBuilder(RecordWriter.java:246)
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.copyFromSerializerToTargetChannel(RecordWriter.java:169)
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:154)
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:120)
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:107)
    ... 16 more
Am I doing something wrong? Or is there a limitation on using the rebalance function when the TimeCharacteristic is set to IngestionTime?

Thanks in advance…

Could you provide the Flink version you are using?

It looks like your problem is related to this Jira ticket.

Did you use rebalance only once in your job? The recordWriters may share the same channelSelector, which decides where records are forwarded. The stack trace shows that it is trying to select an out-of-bounds channel.
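The failure mode described above can be sketched in plain Scala (this is an illustration of the idea, not Flink's actual RecordWriter/ChannelSelector code): a round-robin selector that is shared between writers, but configured with one writer's channel count, can hand the other writer an index outside its channel array.

```scala
// Sketch of a shared round-robin channel selector. If two record writers
// with different channel counts share one selector instance, the second
// setup() overwrites the first, and the selector can return indices that
// are out of bounds for the first writer's channel array.
class RoundRobinSelector {
  private var numberOfChannels = 0
  private var next = -1

  def setup(channels: Int): Unit = numberOfChannels = channels

  // Cycle 0, 1, ..., numberOfChannels - 1, 0, ...
  def selectChannel(): Int = {
    next = (next + 1) % numberOfChannels
    next
  }
}

object SharedSelectorDemo extends App {
  val shared = new RoundRobinSelector

  shared.setup(channels = 2) // writer A: 2 output channels
  shared.setup(channels = 5) // writer B's setup overwrites A's channel count

  // Writer A asks the shared selector for channels, but the selector now
  // cycles over 5 channels while A only has 2: indices 2..4 would trigger
  // an ArrayIndexOutOfBoundsException in A's channel array.
  val picks = (1 to 5).map(_ => shared.selectChannel())
  println(picks.filter(_ >= 2)) // indices that are invalid for writer A
}
```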


Thanks for the quick reply. I am using Flink 1.9.1, and I only do the rebalance once, after consuming the data from Kafka.

I am using the same Flink version and it works for me. Which Kafka consumer are you using? And what is your job's parallelism?

Kafka 0.10.2.1, parallelism = 5.