Backpressure towards Kafka when Akka sends to sharded actors


I have written an Akka application that takes its input from Kafka, processes the data with sharded actors, and writes the output back to Kafka.

But in some cases the shard region cannot keep up with the load, and I get:

    You should probably implement flow control to avoid flooding the remote connection

How can I implement backpressure in this chain/stream?

Kafka consumer -> sharded actor -> Kafka producer

Some snippets from the code:

ReactiveKafka kafka = new ReactiveKafka();

// Producer side: a Reactive Streams Subscriber backed by the producer properties `pp`.
Subscriber subscriber = kafka.publish(pp, system);

// Writer actor feeding the Kafka producer. Note that OverflowStrategy.dropHead()
// silently drops the oldest buffered element once 10000 messages are queued.
ActorRef kafkaWriterActor = (ActorRef) Source.actorRef(10000, OverflowStrategy.dropHead())
                .map(ix -> KeyValueProducerMessage.apply(Integer.toString(ix.hashCode()), ix))
                .to(Sink.fromSubscriber(subscriber))
                .run(materializer);

// Consumer side.
ConsumerProperties cp = new PropertiesBuilder.Consumer(brokerList, intopic, consumergroup,
                        new ByteArrayDeserializer(), new NgMsgDecoder())
                .build()
                .consumerTimeoutMs(5000)
                .commitInterval(Duration.create(60, TimeUnit.SECONDS))
                .readFromEndOfStream();

Publisher<ConsumerRecord<byte[], StreamEvent>> publisher = kafka.consume(cp, system);

// Sharded actor that does the processing.
ActorRef streamActor = ClusterSharding.get(system).start("StreamActor",
                Props.create(StreamActor.class, synctime),
                ClusterShardingSettings.create(system), messageExtractor);

shardRegionTypenames.add("StreamActor");

// Fire-and-forget: tell() gives the stream no signal to slow down,
// so nothing backpressures the Kafka source.
Source.fromPublisher(publisher)
                .runWith(Sink.foreach(msg -> {
                    streamActor.tell(msg.value(), ActorRef.noSender());
                }), materializer);

Maybe you could consider parallelizing the topic into partitions (if that is applicable) and create the consumer with ConsumerWithPerPartitionBackpressure to integrate it with your actors. Alternatively, you can use Akka Streams' own backpressure techniques to handle this.
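The flow control the warning asks for is, at bottom, the bounded-buffer idea: a producer must block (or slow down) when the consumer falls behind, instead of dropping messages or buffering without limit. A minimal stdlib-only illustration of that principle (no Akka involved; all names here are invented for the demo):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackpressureDemo {
    public static void main(String[] args) throws InterruptedException {
        // Bounded buffer: put() blocks once 10 items are in flight,
        // which is exactly the flow control the Akka warning asks for.
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10);

        // Slow "shard region": drains the buffer at its own pace.
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    buffer.take();
                    Thread.sleep(1); // simulate per-message processing cost
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Fast "Kafka consumer": put() blocks instead of flooding the buffer.
        for (int i = 0; i < 100; i++) {
            buffer.put(i);
        }
        consumer.join();
        System.out.println("processed all 100 messages without dropping any");
    }
}
```

In the Akka Streams setup above, `mapAsync` + `ask` (or a per-partition consumer) plays the role of this bounded buffer: demand only flows upstream to Kafka when the downstream actor has capacity.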