
Java Kafka CommitFailedException when running two consumers on different topics

Tags: java, apache-kafka, kafka-consumer-api

I am trying to run two consumers subscribed to two different topics. Both consumer programs work fine when run one at a time, but when run simultaneously, one consumer always shows this exception:

org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
As suggested, I set max.poll.records to 2, session.timeout.ms to 30000, and heartbeat.interval.ms to 1000.

Below is my consumer function. The function is identical in both files, except that the topic name is changed to Test2. I run the two functions simultaneously in two different classes.

    public void consume()
    {
        // Kafka consumer configuration settings
        List<String> topicNames = new ArrayList<String>();
        topicNames.add("Test1");
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "false");
        props.put("session.timeout.ms", "30000");
        props.put("heartbeat.interval.ms", "1000");
        props.put("max.poll.records", "2");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(topicNames);
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Record: " + record.value());
                    String responseString = "successful";
                    if (responseString.equals("successful")) {
                        consumer.commitSync();
                    }
                }
            }
        }
        catch (Exception e) {
            LOG.error("Exception: ", e);
        }
        finally {
            consumer.close();
        }
    }

Because of this error, the records are not committed to the Kafka topic. How can I overcome this error?
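As an aside, the two remedies suggested in the exception message reduce to one arithmetic constraint: the worst-case time spent processing a batch (at most max.poll.records multiplied by the per-record processing time) must stay below the timeout between consecutive poll() calls. A minimal sketch of that check in plain Java, using hypothetical timing figures rather than measured ones:

```java
public class PollBudgetCheck {
    public static void main(String[] args) {
        // Hypothetical figures; substitute your own measurements.
        int maxPollRecords = 2;         // max.poll.records
        long perRecordMs = 5_000;       // assumed worst-case processing time per record
        long sessionTimeoutMs = 30_000; // session.timeout.ms

        // Worst-case gap between two consecutive poll() calls.
        long worstCaseBatchMs = (long) maxPollRecords * perRecordMs;

        // If a batch can outlast the timeout, the group may rebalance
        // and the next commitSync() fails with CommitFailedException.
        boolean safe = worstCaseBatchMs < sessionTimeoutMs;
        System.out.println("worst-case batch: " + worstCaseBatchMs + " ms, safe=" + safe);
    }
}
```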

In your case, you need to assign a different group ID to each consumer. You are creating two consumers with the same group ID (which is allowed), but when the second consumer joins that group, Kafka rebalances and reassigns the partitions, which is why the first consumer's commit fails.

That is also why you can run one consumer at a time without problems: with a single member in the group, no rebalance ever occurs.


Let me know if you need any further help. Happy to help.
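Since the two consumers read different topics and share no work, the fix above amounts to giving each one its own group.id. A minimal sketch of the differing configuration, showing only the property setup (the group names test-group-1 and test-group-2 are made up for illustration; broker address, topics, and the poll loop stay as in the question):

```java
import java.util.Properties;

public class GroupIdConfig {
    // Build the part of the consumer config shared by both consumers.
    static Properties baseProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // Each consumer gets its own group, so one consumer joining its
        // group never triggers a rebalance in the other group.
        Properties propsTest1 = baseProps();
        propsTest1.put("group.id", "test-group-1"); // consumer subscribing to Test1

        Properties propsTest2 = baseProps();
        propsTest2.put("group.id", "test-group-2"); // consumer subscribing to Test2

        System.out.println(propsTest1.getProperty("group.id"));
        System.out.println(propsTest2.getProperty("group.id"));
    }
}
```

Each Properties instance would then be passed to its own KafkaConsumer exactly as in the question's consume() method.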

Comments:

How is your Kafka deployment running? When you start/restart it (i.e., the broker), does it still hold group metadata from the previous session? Try running the Kafka server and each consumer in a separate terminal.

I am trying different group IDs.