
Java: How to get RetryAdvice working for a KafkaProducerMessageHandler

Tags: java, spring-boot, spring-integration, spring-kafka

I am trying to write a `RetryAdvice` for a Kafka handler, falling back to saving the message to MongoDB as the `RecoveryCallback`:

@Bean(name = "kafkaSuccessChannel")
public ExecutorChannel kafkaSuccessChannel() {
    return MessageChannels.executor("kafkaSuccessChannel", asyncExecutor()).get();
}

@Bean(name = "kafkaErrorChannel")
public ExecutorChannel kafkaErrorChannel() {
    return MessageChannels.executor("kafkaErrorChannel", asyncExecutor()).get();
}

@Bean
@ServiceActivator(inputChannel = "kafkaPublishChannel")
public KafkaProducerMessageHandler<String, String> kafkaProducerMessageHandler(
        @Autowired ExecutorChannel kafkaSuccessChannel,
        @Autowired RequestHandlerRetryAdvice retryAdvice) {
    KafkaProducerMessageHandler<String, String> handler = new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setHeaderMapper(mapper());
    handler.setLoggingEnabled(true);
    handler.setTopicExpression(
            new SpelExpressionParser()
                    .parseExpression(
                            "headers['" + upstreamTypeHeader + "'] + '_' + headers['" + upstreamInstanceHeader + "']"));
    handler.setSendSuccessChannel(kafkaSuccessChannel);
    handler.setAdviceChain(Arrays.asList(retryAdvice));
    // sync true implies that this Kafka handler will wait for results of kafka operations; to be used only for testing purposes.
    handler.setSync(testMode);
    return handler;
}
Finally, I have a Mongo handler that saves failed messages to a collection:

@Bean
@ServiceActivator(inputChannel = "kafkaErrorChannel")
public MongoDbStoringMessageHandler kafkaFailureHandler(@Autowired MongoDatabaseFactory mongoDbFactory,
                                                        @Autowired MongoConverter mongoConverter) {
    String collectionExpressionString = "headers['" + upstreamTypeHeader + "'] + '_'+ headers['" + upstreamInstanceHeader + "']+ '_FAIL'";
    return getMongoDbStoringMessageHandler(mongoDbFactory, mongoConverter, collectionExpressionString);
}
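To make the intent of this wiring explicit: retry the send a few times, and if every attempt fails, hand the failure to a recovery callback that persists the message. Here is a plain-JDK sketch of that retry-then-recover shape (this is not Spring's actual `RequestHandlerRetryAdvice` implementation; all names here are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class RetrySketch {

    // Retry op up to maxAttempts; if every attempt fails, hand the last
    // exception to the recoverer (the stand-in for the Mongo dead-letter save).
    static <T> T withRetry(int maxAttempts, Supplier<T> op, Consumer<RuntimeException> recoverer) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e;
            }
        }
        recoverer.accept(last);
        return null;
    }

    public static void main(String[] args) {
        List<String> deadLetter = new ArrayList<>(); // stand-in for the FAIL collection
        // A send that always fails, as when no broker is reachable
        withRetry(3, () -> { throw new RuntimeException("send failed"); },
                e -> deadLetter.add(e.getMessage()));
        System.out.println(deadLetter); // the failed message ends up in the dead-letter store
    }
}
```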
I'm having trouble figuring out whether I have wired all of this up correctly, because the test never seems to work. In the test class I don't set up any embedded Kafka or connect to Kafka, so publishing the message fails; I expect that failure to trigger the retry advice and eventually save the message to a dead-letter collection in Mongo:

@Test
void testFailedKafkaPublish() {

    //Dummy message
    Map<String, String> map = new HashMap<>();
    map.put("key", "value");
    // Publish Message
    Message<Map<String, String>> message = MessageBuilder.withPayload(map)
            .setHeader("X-UPSTREAM-TYPE", "alm")
            .setHeader("X-INSTANCE-HEADER", "jira")
            .build();

    kafkaGateway.publish(message);

    //assert successful message is saved in FAIL collection
    assertThat(mongoTemplate.findAll(DBObject.class, "alm_jira_FAIL"))
            .extracting("key")
            .containsOnly("value");
}
The Kafka handler's test mode is enabled for the test above via the `@TestPropertySource` annotation:

@TestPropertySource(properties = {
        "spring.main.banner-mode=off",
        "spring.data.mongodb.database=swiftalk_db",
        "spring.data.mongodb.port=29019",
        "spring.data.mongodb.host=localhost",
        "digite.swiftalk.kafka.test-mode=true",

})
I still don't see any logging of the retry advice executing, nor any failed message saved in Mongo. Another idea was to use Awaitility, but I'm not sure what condition to put in the `until()` method to make it work.
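The `until()` condition can simply poll the FAIL collection until the recoverer has written the document. A pure-JDK sketch of that polling shape (Awaitility itself and the `mongoTemplate` query are replaced by hypothetical stand-ins here):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BooleanSupplier;

public class AwaitSketch {

    // Hypothetical stand-in for mongoTemplate.findAll(...) on the FAIL collection
    static final List<String> failCollection = new CopyOnWriteArrayList<>();

    // Poll a condition until it holds or the timeout elapses, the same
    // shape as Awaitility's await().atMost(...).until(...)
    static boolean until(Duration timeout, BooleanSupplier condition) throws InterruptedException {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(50);
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate the recoverer writing the failed message a moment later
        new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            failCollection.add("value");
        }).start();

        boolean saved = until(Duration.ofSeconds(5), () -> !failCollection.isEmpty());
        System.out.println(saved ? "message recovered" : "timed out");
    }
}
```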

Update

After adding debug logging for Kafka, I noticed the producer goes into a loop, trying to reconnect to Kafka on a separate thread:

2021-03-25 10:56:02.640 DEBUG 66997 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Initiating connection to node localhost:9999 (id: -1 rack: null) using address localhost/127.0.0.1
2021-03-25 10:56:02.641 DEBUG 66997 --- [dPoolExecutor-1] o.a.k.clients.producer.KafkaProducer     : [Producer clientId=producer-1] Kafka producer started
2021-03-25 10:56:02.666 DEBUG 66997 --- [ad | producer-1] o.apache.kafka.common.network.Selector   : [Producer clientId=producer-1] Connection with localhost/127.0.0.1 disconnected

java.net.ConnectException: Connection refused
    at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[na:na]
    at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:660) ~[na:na]
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:875) ~[na:na]
    at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) ~[kafka-clients-2.6.0.jar:na]
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:219) ~[kafka-clients-2.6.0.jar:na]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:530) ~[kafka-clients-2.6.0.jar:na]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:485) ~[kafka-clients-2.6.0.jar:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:544) ~[kafka-clients-2.6.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325) ~[kafka-clients-2.6.0.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240) ~[kafka-clients-2.6.0.jar:na]
    at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
Meanwhile, the test reaches the assertion and therefore fails:

    //assert successful message is saved in FAIL collection
    assertThat(mongoTemplate.findAll(DBObject.class, "alm_jira_FAIL"))
            .extracting("key")
            .containsOnly("value");
So it seems the retry advice never takes over after these failures.

Update 2

I updated the configuration class to add this property:

@Value("${spring.kafka.producer.properties.max.block.ms:1000}")
private Integer productMaxBlockDurationMs;
and added the following line to the `kafkaTemplate` configuration method:

props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, productMaxBlockDurationMs);

That fixed the problem.

Update 3

As Gary points out, we can do away with adding all these props entirely; I removed the following method from my class:

@Bean
KafkaTemplate<String, String> kafkaTemplate() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, productMaxBlockDurationMs);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
}

`KafkaProducer`s block for 60 seconds by default before failing.

Try reducing the `max.block.ms` producer property.
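The reason this matters for the retry advice: the send blocks on the calling thread for up to `max.block.ms` waiting for topic metadata, and only when that wait times out does an exception surface where the advice can see it. A pure-JDK sketch of that blocking-with-timeout shape (the Kafka calls are stand-ins, not the real client API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class MaxBlockSketch {

    // Stand-in for a producer send: the calling thread waits up to
    // maxBlockMs for the I/O thread to deliver metadata, then gives up.
    static String sendWithMaxBlock(long maxBlockMs) throws Exception {
        ExecutorService io = Executors.newSingleThreadExecutor();
        Future<String> metadata = io.submit(() -> {
            Thread.sleep(60_000); // broker unreachable: metadata never arrives
            return "metadata";
        });
        try {
            return metadata.get(maxBlockMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // This surfaces on the calling thread, where a retry advice can act on it
            return "TimeoutException after " + maxBlockMs + " ms";
        } finally {
            io.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendWithMaxBlock(500));
    }
}
```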

EDIT

Here is an example:

@SpringBootApplication
public class So66768745Application {

    public static void main(String[] args) {
        SpringApplication.run(So66768745Application.class, args);
    }

    @Bean
    IntegrationFlow flow(KafkaTemplate<String, String> template, RequestHandlerRetryAdvice retryAdvice) {
        return IntegrationFlows.from(Gate.class)
                .handle(Kafka.outboundChannelAdapter(template)
                        .topic("testTopic"), e -> e
                                .advice(retryAdvice))
                .get();
    }

    @Bean
    RequestHandlerRetryAdvice retryAdvice(QueueChannel channel) {
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        advice.setRecoveryCallback(new ErrorMessageSendingRecoverer(channel));
        return advice;
    }

    @Bean
    QueueChannel channel() {
        return new QueueChannel();
    }

}

interface Gate {

    void sendToKafka(String out);

}

@SpringBootTest
@TestPropertySource(properties = {
        "spring.kafka.bootstrap-servers:localhost:9999",
        "spring.kafka.producer.properties.max.block.ms:500" })
class So66768745ApplicationTests {

    @Autowired
    Gate gate;

    @Autowired
    QueueChannel channel;

    @Test
    void test() {
        this.gate.sendToKafka("test");
        Message<?> em = this.channel.receive(60_000);
        assertThat(em).isNotNull();
        System.out.println(em);
    }

}

I added a working example. Note that for this particular error (failure to fetch metadata), `sync` is not needed, because the `TimeoutException` is thrown on the calling thread. For other (asynchronous) errors, you would need `sync`.

Thanks @Gary Russell; I tried your changes, and also realized I could write flows to keep things clear and concise :-) But I still can't reproduce the behavior from your answer; could you help point out where my code goes wrong?

Hard to tell from the snippets here; try setting the `org.apache.kafka` log level to DEBUG to see if it gives any clues, and make sure `max.block.ms` is actually applied in the `ProducerConfig` INFO log. If you still can't figure it out, strip it down to a bare minimum (like mine) and post the complete project somewhere so I can run it locally. I changed my example to use an executor channel and it still works.

Right, but that is the producer I/O thread; the `main` thread should time out as in my example. Make sure `max.block.ms` is configured properly (check the `ProducerConfig` INFO log). If you still can't figure it out, post a minimal project (like mine) somewhere and I'll take a look at what's wrong.

`application.yml`/properties are only applied to Boot's auto-configured beans (in this case, the producer factory). If you define your own infrastructure beans, you have to configure them fully yourself. There is usually no need to define your own producer factory bean; why not just use Boot's auto-configured factory?
@Value("${spring.kafka.producer.properties.max.block.ms:1000}")
private Integer productMaxBlockDurationMs;
@Bean
KafkaTemplate<String, String> kafkaTemplate() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, productMaxBlockDurationMs);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
}
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.producer.properties.max.block.ms=1000
spring.kafka.producer.properties.enable.idempotence=true
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
2021-03-23 15:16:13.908 ERROR 2668 --- [           main] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='test' to topic testTopic:

org.apache.kafka.common.errors.TimeoutException: Topic testTopic not present in metadata after 500 ms.

2021-03-23 15:16:14.343  WARN 2668 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Connection to node -1 (localhost/127.0.0.1:9999) could not be established. Broker may not be available.
2021-03-23 15:16:14.343  WARN 2668 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Bootstrap broker localhost:9999 (id: -1 rack: null) disconnected
2021-03-23 15:16:14.415 ERROR 2668 --- [           main] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='test' to topic testTopic:

org.apache.kafka.common.errors.TimeoutException: Topic testTopic not present in metadata after 500 ms.

2021-03-23 15:16:14.921 ERROR 2668 --- [           main] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='test' to topic testTopic:

org.apache.kafka.common.errors.TimeoutException: Topic testTopic not present in metadata after 500 ms.

ErrorMessage [payload=org.springframework.messaging.MessagingException: Failed to handle; nested exception is org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic testTopic not present in metadata after 500 ms., failedMessage=GenericMessage [payload=test, headers={replyChannel=nullChannel, errorChannel=, id=d8ce277a-3d9a-b0bc-c14b-80d63ca13858, timestamp=1616526973218}], headers={id=1a6c29d2-f8d8-adf0-7569-db7610b020ef, timestamp=1616526974921}]