Apache Kafka / Reactor Kafka: asynchronous in-order message consumption fails


The Reactor Kafka documentation outlines sample code for consuming messages from Kafka partitions in order, but in that sample the processing method is synchronous. Based on some local testing, ordered processing and backpressure both work well in this particular sample:

public Flux<?> flux() {
    Scheduler scheduler = Schedulers.newBoundedElastic(60, Integer.MAX_VALUE, "sample", 60, true);
    return KafkaReceiver.create(receiverOptions(Collections.singleton(topic)).commitInterval(Duration.ZERO))
            .receive()
            .groupBy(m -> m.receiverOffset().topicPartition())
            .flatMap(partitionFlux -> partitionFlux.publishOn(scheduler)
                    .map(r -> processRecord(partitionFlux.key(), r))
                    .sample(Duration.ofMillis(5000))
                    .concatMap(offset -> offset.commit()))
            .doOnCancel(() -> close());
}

public ReceiverOffset processRecord(TopicPartition topicPartition, ReceiverRecord<Integer, Person> message) {
    log.info("Processing record {} from partition {} in thread {}",
            message.value().id(), topicPartition, Thread.currentThread().getName());
    return message.receiverOffset();
}
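As a side note, the reason the synchronous sample backpressures is the Reactive Streams demand contract that Reactor follows, which the JDK's own `java.util.concurrent.Flow` API also implements. A minimal, Kafka-free sketch of that contract (the class name `BackpressureDemo` and the one-item-at-a-time request policy are our own illustration, not part of the Reactor Kafka sample):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Illustrates the Reactive Streams demand contract: the subscriber asks
// for one item at a time, so the producer can never run far ahead of
// processing, and items are delivered in order.
public class BackpressureDemo {

    static List<Integer> consumeInOrder(int count) throws InterruptedException {
        List<Integer> processed = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);              // initial demand: one item
                }

                @Override public void onNext(Integer item) {
                    processed.add(item);       // synchronous "processing"
                    subscription.request(1);   // replenish demand only afterwards
                }

                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            for (int i = 0; i < count; i++) {
                publisher.submit(i);           // blocks once the buffer is full
            }
        } // close() triggers onComplete after all items are delivered
        done.await();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeInOrder(10)); // prints [0, 1, 2, ..., 9], in order
    }
}
```

With an asynchronous `processRecord`, that contract is the piece that gets lost: demand is replenished before the work actually finishes.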
Our use case treats the processing logic as an asynchronous function, as shown below, so in our case `processRecord` returns a `Mono`. The `processRecord` method takes roughly 3-4 seconds to complete, and in this scenario the flux is no longer backpressured: more and more messages are fetched while earlier ones have not yet been processed. The system gradually becomes unstable and eventually fails with an OutOfMemory error. Ordering is respected, but backpressure is not applied.

public Mono<ReceiverOffset> processRecord(TopicPartition topicPartition, ReceiverRecord<Integer, Person> message) {
    log.info("Processing record {} from partition {} in thread {}",
            message.value().id(), topicPartition, Thread.currentThread().getName());
    ....
}
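For context, a common adaptation when `processRecord` returns a `Mono` (our assumption; this variant is not from the Reactor Kafka docs) is to replace `map` with `concatMap` inside each partition flux. `concatMap` subscribes to one inner `Mono` at a time and only requests the next record from upstream once the previous one completes, so both per-partition ordering and backpressure toward the receiver are preserved. A sketch, reusing the names from the sample above (not runnable standalone; it needs a live broker and the surrounding class):

```java
public Flux<?> flux() {
    Scheduler scheduler = Schedulers.newBoundedElastic(60, Integer.MAX_VALUE, "sample", 60, true);
    return KafkaReceiver.create(receiverOptions(Collections.singleton(topic)).commitInterval(Duration.ZERO))
            .receive()
            .groupBy(m -> m.receiverOffset().topicPartition())
            .flatMap(partitionFlux -> partitionFlux.publishOn(scheduler)
                    // concatMap subscribes to one Mono at a time per partition:
                    // the next record is requested only after the previous
                    // processRecord Mono completes, so ordering is kept and
                    // demand (backpressure) propagates back to the receiver.
                    // prefetch = 1 keeps at most one record buffered ahead.
                    .concatMap(r -> processRecord(partitionFlux.key(), r), 1)
                    .sample(Duration.ofMillis(5000))
                    .concatMap(offset -> offset.commit()))
            .doOnCancel(() -> close());
}
```

If some overlap of processing within a partition is acceptable as long as offsets are emitted in order, `flatMapSequential` with a small concurrency is a related option; `concatMap` with prefetch 1 is the strictest, fully sequential variant.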
Is there a sample for consuming messages from Kafka asynchronously, in order, using Reactor?