Spring Boot auto-commit of offsets fails & retry is also not working as expected

Tags: spring-boot, spring-kafka

I am using Spring Boot 2.1.9 with Spring Kafka 2.2.9.

I am getting some warnings in my log file saying that the commit failed, and I am using a SeekToCurrentErrorHandler to capture the error while retrying, but sometimes when the commit fails it keeps iterating over the same record.

Here is my configuration class:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.kafka.transaction.ChainedKafkaTransactionManager;
import org.springframework.retry.RetryPolicy;
import org.springframework.retry.backoff.BackOffPolicy;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

// MessageProducer is the asker's own component (import not shown in the question).

@Configuration
@EnableKafka
public class KafkaReceiverConfig {

    // Kafka Server Configuration
    @Value("${kafka.servers}")
    private String kafkaServers;

    // Group Identifier
    @Value("${kafka.groupId}")
    private String groupId;

    // Kafka Max Retry Attempts
    @Value("${kafka.retry.maxAttempts:5}")
    private Integer retryMaxAttempts;

    // Kafka Max Retry Interval
    @Value("${kafka.retry.interval:180000}")
    private Long retryInterval;

    // Kafka Concurrency
    @Value("${kafka.concurrency:10}")
    private Integer concurrency;

    // Kafka Poll Timeout
    @Value("${kafka.poll.timeout:100}")
    private Integer pollTimeout;

    // Kafka Consumer Offset Reset
    @Value("${kafka.consumer.auto-offset-reset:earliest}")
    private String offset;

    // Logger
    private static final Logger log = LoggerFactory.getLogger(KafkaReceiverConfig.class);

    /**
     * Defines the Max Number of Retry Attempts
     * 
     * @return Return the Retry Policy @see {@link RetryPolicy}
     */
    @Bean
    public RetryPolicy retryPolicy() {
        SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
        simpleRetryPolicy.setMaxAttempts(retryMaxAttempts);
        return simpleRetryPolicy;
    }

    /**
     * Time before the next Retry can happen, the Time used is in Milliseconds
     * 
     * @return Return the BackOff Policy @see {@link BackOffPolicy}
     */
    @Bean
    public BackOffPolicy backOffPolicy() {
        FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
        backOffPolicy.setBackOffPeriod(retryInterval);
        return backOffPolicy;
    }

    /**
     * Get Retry Template
     * 
     * @return Return the Retry Template @see {@link RetryTemplate}
     */
    @Bean
    public RetryTemplate retryTemplate() {
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(retryPolicy());
        retryTemplate.setBackOffPolicy(backOffPolicy());
        return retryTemplate;
    }

    /**
     * String Kafka Listener Container Factory
     * 
     * @return @see {@link KafkaListenerContainerFactory}
     */
    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory(
            ChainedKafkaTransactionManager<String, String> chainedTM, MessageProducer messageProducer) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(concurrency);
        factory.getContainerProperties().setPollTimeout(pollTimeout);
        factory.getContainerProperties().setSyncCommits(true);
        factory.setRetryTemplate(retryTemplate());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setTransactionManager(chainedTM);
        factory.setStatefulRetry(true);
        // NOTE: maxFailures must be retryMaxAttempts + 1 due to a Spring Kafka bug
        SeekToCurrentErrorHandler errorHandler = new SeekToCurrentErrorHandler((record, exception) -> {
            log.warn("failed to process kafka message (retries are exhausted). topic name: " + record.topic() + " value: " + record.value());
            messageProducer.saveFailedMessage(record, exception);
        }, retryMaxAttempts + 1);

        factory.setErrorHandler(errorHandler);
        log.debug("Kafka Receiver Config kafkaListenerContainerFactory created");
        return factory;
    }

    /**
     * String Consumer Factory
     * 
     * @return @see {@link ConsumerFactory}
     */
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        log.debug("Kafka Receiver Config consumerFactory created");
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    /**
     * Consumer Configurations
     * 
     * @return @see {@link Map}
     */
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new ConcurrentHashMap<String, Object>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Disable the Auto Commit if required for testing
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, offset);
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        log.debug("Kafka Receiver Config consumerConfigs created");
        return props;
    }

}
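
For context, a hypothetical listener that would run on this factory (it is not part of the original question; the class, topic name, and business logic are placeholders):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class SampleReceiver {

    // Hypothetical listener; "sample-topic" is a placeholder topic name.
    @KafkaListener(topics = "sample-topic", containerFactory = "kafkaListenerContainerFactory")
    public void receive(String message) {
        // Any exception thrown here triggers the stateful retry configured above and,
        // once the retries are exhausted, the SeekToCurrentErrorHandler recoverer.
        process(message);
    }

    private void process(String message) {
        // placeholder for the real business logic
    }
}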

  • Is there any problem with my configuration?
  • How do I set the max poll and session timeout properties? (Please give me an example.)
  • How do I configure the SeekToCurrentErrorHandler in Spring Kafka 2.2.9 so that it works well? (I cannot upgrade Spring Kafka because of some other dependencies.)

  • Processing the records returned by poll() is taking too long.

    You need to reduce max.poll.records (ConsumerConfig.MAX_POLL_RECORDS_CONFIG) and/or increase max.poll.interval.ms.

    Once this error occurs you can no longer perform a seek; you have lost the partitions.
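
    As an illustration (this sketch is not from the original answer, and the values are placeholders to be tuned against the real per-record processing time), these properties could be added to the consumerConfigs() bean shown above:

        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);          // default 500
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);  // default 300000 (5 minutes)
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 10000);     // client default 10000 in this version
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3000);   // default 3000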

    The default max.poll.records is 500 and the default max.poll.interval.ms is 300000, so I will change them to max.poll.records=50 and max.poll.interval.ms=50000. Is that okay, or do I need to change any other properties as well? Is the setPollTimeout() method the same as max.poll.interval.ms?

    No, not at all. The poll timeout is a container property; the poll interval is a Kafka consumer property.
    This is the warning from the log file:

    2019-10-30 15:48:05.907  WARN [xxxxx-component-workflow-starter,,,] 11 --- [nt_create-2-C-1] o.a.k.c.c.internals.ConsumerCoordinator  : [Consumer clientId=consumer-4, groupId=fulfillment_create] Synchronous auto-commit of offsets {fulfillment_create-4=OffsetAndMetadata{offset=32, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
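
    To make the distinction concrete, a sketch (not from the original thread) of where each of the two settings lives:

        // Container property: how long a single poll() call blocks waiting for records.
        factory.getContainerProperties().setPollTimeout(100);

        // Kafka consumer property: the maximum time allowed BETWEEN two poll() calls
        // before the broker evicts the consumer from the group and rebalances.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 300000);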