Spring Kafka 2.6.x ErrorHandler and DeadLetterPublishingRecoverer with ConcurrentKafkaListenerContainerFactory


We are trying to use the DLT (dead-letter topic) feature in Spring Kafka 2.6.x. Here is the configuration yml:

  kafka:
    bootstrap-servers: localhost:9092
    auto-offset-reset: earliest
    consumer:
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      enable-auto-commit: false
      properties:
        isolation.level: read_committed
        fetch.max.wait.ms: 100
        spring.json.value.default.type: 'com.sample.entity.Event'
        spring.json.trusted.packages: 'com.sample.entity.*'
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      bootstrap-servers: localhost:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
Here is the Kafka config class:

@EnableKafka
@Configuration
@Log4j2
public class KafkaConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Event>
    kafkaListenerContainerFactory(ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
                                  ConsumerFactory<Object, Object> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, Event> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }

    @Bean
    public SeekToCurrentErrorHandler errorHandler(DeadLetterPublishingRecoverer deadLetterPublishingRecoverer) {
        SeekToCurrentErrorHandler handler = new SeekToCurrentErrorHandler(deadLetterPublishingRecoverer);
        handler.addNotRetryableExceptions(UnprocessableException.class);
        return handler;
    }

    @Bean
    public DeadLetterPublishingRecoverer publisher(KafkaOperations kafkaOperations) {
        return new DeadLetterPublishingRecoverer(kafkaOperations);
    }
}
It also works without the ConcurrentKafkaListenerContainerFactory, but since we want to scale the number of instances up or down, we want to use the ConcurrentKafkaListenerContainer.

What is the correct way to do this?


Also, I found that in the case of a deserialization exception the message sent to the .DLT topic is not correct (not valid JSON), whereas in the case of the UnprocessableException (a custom exception we throw in the listener) the message in the .DLT topic is valid JSON.

Answer: Since you are wiring your own consumer factory, you must set the error handler on it.
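A minimal sketch of that fix, reusing the bean names from the question: inject the SeekToCurrentErrorHandler bean into the factory bean and register it via factory.setErrorHandler(...) (the Spring Kafka 2.6 API; later versions use setCommonErrorHandler).

```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Event> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        SeekToCurrentErrorHandler errorHandler) {

    ConcurrentKafkaListenerContainerFactory<String, Event> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Without this call the hand-wired factory never sees the error handler
    // bean, so failed records are never published to the DLT.
    factory.setErrorHandler(errorHandler);
    return factory;
}
```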

But, since we want to scale the number of instances up or down, we want to use the ConcurrentKafkaListenerContainer

Boot's auto-configuration wires up a concurrent container with concurrency=1 (when there is no …listener.concurrency property), so you can just use Boot's factory.
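For example (assuming Spring Boot's standard `spring.kafka.listener.concurrency` property), the concurrency can then be set purely through configuration, with no hand-wired factory bean at all:

```yaml
spring:
  kafka:
    listener:
      concurrency: 3   # Boot's auto-configured factory creates 3 consumer threads
```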

For deserialization exceptions (all exceptions, in fact), the value sent to the DLT is the original record.value(), the raw payload that failed. If that is not what you are seeing, please provide an example of the original record and of what ends up in the DLT.
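To verify what actually lands on the DLT, one option (a hypothetical debugging listener, not part of the original post; the topic name "events.DLT" is an assumption based on the default "<topic>.DLT" naming) is to consume the dead-letter topic as plain strings and print the exception headers that DeadLetterPublishingRecoverer adds:

```java
// Hypothetical debugging listener: reads the dead-letter topic with plain
// String deserializers so the raw (possibly non-JSON) payload is visible.
@KafkaListener(topics = "events.DLT", properties = {
        "key.deserializer=org.apache.kafka.common.serialization.StringDeserializer",
        "value.deserializer=org.apache.kafka.common.serialization.StringDeserializer"
})
public void dltListener(ConsumerRecord<String, String> record,
        @Header(value = KafkaHeaders.DLT_EXCEPTION_FQCN, required = false) byte[] exceptionType,
        @Header(value = KafkaHeaders.DLT_EXCEPTION_MESSAGE, required = false) byte[] exceptionMessage) {
    // record.value() is the original failed value, byte-for-byte.
    System.out.println("DLT payload: " + record.value());
    // The recoverer stores these headers as raw bytes.
    System.out.println("Failed with: "
            + (exceptionType == null ? "n/a" : new String(exceptionType, StandardCharsets.UTF_8))
            + ": "
            + (exceptionMessage == null ? "n/a" : new String(exceptionMessage, StandardCharsets.UTF_8)));
}
```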