Java Kafka consumer exceptions and offset commits


I have been doing some POC work with Spring Kafka. Specifically, I wanted to explore what the best practices are for handling errors while consuming messages from Kafka.

I was wondering whether anyone could help with the following:

  • Share best practices around what a Kafka consumer should do when a failure occurs
  • Help me understand how AckMode RECORD works, and how to prevent the offset being committed to Kafka when an exception is thrown in the listener method. A code example for point 2 is given below:

    Given that AckMode is set to RECORD, which according to the documentation means:

    commit the offset when the listener returns after processing the record

    I would have thought the offset would not be incremented if the listener method threw an exception. However, this was not the case when I tested it with the code/config/command combination below. The offset still gets updated, and the next message continues to be processed.
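    The listener code itself was not reproduced in the post; a minimal sketch of the kind of listener that triggers this behaviour (the topic name and exception are placeholders, not from the original question) would be:

```java
import org.springframework.kafka.annotation.KafkaListener;

public class FailingListener {

    // Always throws, to simulate a processing failure; with AckMode.RECORD and
    // the default ackOnError=true, the record's offset is still committed.
    @KafkaListener(topics = "test")
    public void listen(String message) {
        throw new RuntimeException("simulated failure for: " + message);
    }
}
```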

    My configuration:

        private Map<String, Object> producerConfigs() {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
            props.put(ProducerConfig.RETRIES_CONFIG, 0);
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
            props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            return props;
        }

        @Bean
        ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
            factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
            return factory;
        }
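    The consumerConfigs() method referenced by the factory is not shown in the question; a plausible sketch (the broker address and group id are assumptions mirroring the producer settings) would be:

```java
private Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
    // ackOnError and AckMode.RECORD only take effect when the Kafka client's
    // own auto-commit is disabled.
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return props;
}
```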
    
    I am using kafka_2.12-0.10.2.0 and org.springframework.kafka:spring-kafka:1.1.3.RELEASE.

    The container (via ContainerProperties) has a property, ackOnError, which is true by default:

    /**
     * Set whether or not the container should commit offsets (ack messages) where the
     * listener throws exceptions. This works in conjunction with {@link #ackMode} and is
     * effective only when the kafka property {@code enable.auto.commit} is {@code false};
     * it is not applicable to manual ack modes. When this property is set to {@code true}
     * (the default), all messages handled will have their offset committed. When set to
     * {@code false}, offsets will be committed only for successfully handled messages.
     * Manual acks will always be applied. Bear in mind that, if the next message is
     * successfully handled, its offset will be committed, effectively committing the
     * offset of the failed message anyway, so this option has limited applicability.
     * Perhaps useful for a component that starts throwing exceptions consistently;
     * allowing it to resume when restarted from the last successfully processed message.
     * @param ackOnError whether the container should acknowledge messages that throw
     * exceptions.
     */
    public void setAckOnError(boolean ackOnError) {
        this.ackOnError = ackOnError;
    }
    
    However, bear in mind that if the next message is processed successfully, its offset will be committed, which effectively commits the offset of the failed message as well.
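    Accordingly, to stop the container committing the offset of a record whose listener threw an exception, ackOnError can be switched off on the container factory from the question; a sketch:

```java
@Bean
ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
    // Do not commit offsets for records whose listener threw an exception.
    // This has limited effect on its own: a later successful record commits
    // past the failure, so for strict ordering also stop the container.
    factory.getContainerProperties().setAckOnError(false);
    return factory;
}
```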

    EDIT


    Starting with version 2.3, ackOnError now defaults to false.

    Thanks for the tip, @Gary. Do you know of any best practices for how to handle errors in a Kafka consumer? Out of the box, it seems buggy that a message whose processing fails is just logged and swallowed. Also, I noticed the code comment seems contradictory: "effective only when enable.auto.commit is false; it is not applicable to manual ack modes." I am guessing the first part should say true rather than false?

    Probably the best practice for failed deliveries is to save the failed message somewhere (perhaps to another, dead-letter, topic). If strict message ordering is needed, you probably need to not commit the offset (ackOnError=false) and stop the container.

    Hi @Gary, I noticed that when there is an error deserializing a message, Spring Kafka gets stuck in a loop, repeatedly trying to read the same non-deserializable message and never moving on to the next offset. That sounds a bit inconsistent: if a message can be deserialized but then throws an error, Spring Kafka moves on to the next offset, yet if it cannot be deserialized, Spring Kafka gets stuck. Is that the behaviour as you understand it? Do you have any suggestions for how to handle deserialization errors? Thanks very much!

    Unfortunately, Kafka deserialization happens before Spring Kafka sees the data, so there is nothing we can do about it there. You need a smarter deserializer that catches the exception and perhaps returns some value that conveys the deserialization error to the application layer.
    The command used to check the committed offsets:

        bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group test-group
    