
Apache Kafka: consumer waits indefinitely when the server or port details are wrong


I have set up my Kafka consumer with the following properties:

    Properties consumerProperties = new Properties();
    consumerProperties.put("bootstrap.servers", server);
    consumerProperties.put("group.id", groupId);
    consumerProperties.put("security.protocol", "SASL_PLAINTEXT");
    consumerProperties.put("sasl.mechanism", "PLAIN");
    consumerProperties.put("enable.auto.commit", "false");
    consumerProperties.put("acks", "all");       // producer-only config; ignored by the consumer
    consumerProperties.put("request.timeout.ms", 12000);
    consumerProperties.put("max.block.ms", 500); // producer-only config; ignored by the consumer
    consumerProperties.put("session.timeout.ms", 11000);
    consumerProperties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    consumerProperties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    // Consumer created from the properties above
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties);
    // I have try/catch blocks, but no exceptions are being thrown.

Please suggest what changes I need to make to break out of the infinite poll when the server details are incorrect.

I don't think there is much you can do at the moment other than monitoring from the outside how long the first poll has been running and interrupting it after a while. The best way is probably to use an ExecutorService, as discussed previously.
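The ExecutorService idea can be sketched as follows. The blocking `consumer.poll(...)` call is stood in by a placeholder sleep (`blockingPoll`, a name I made up) so the sketch runs without a broker; in real code the Callable body would be the poll call.

```java
import java.util.concurrent.*;

// Sketch: run the (potentially never-returning) first poll on a worker
// thread and abandon it after a deadline via Future.get(timeout).
public class BoundedPoll {

    // Placeholder for consumer.poll(...): blocks for blockMillis.
    static String blockingPoll(long blockMillis) throws InterruptedException {
        Thread.sleep(blockMillis);
        return "records";
    }

    // Returns the poll result, or null if it did not finish in time.
    static String pollWithDeadline(long blockMillis, long deadlineMillis) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(() -> blockingPoll(blockMillis));
        try {
            return future.get(deadlineMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the stuck worker thread
            return null;
        } catch (InterruptedException | ExecutionException e) {
            return null;
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(pollWithDeadline(10, 500));    // completes in time
        System.out.println(pollWithDeadline(5_000, 200)); // abandoned: null
    }
}
```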


There are a few tickets about this in the Kafka JIRA; you can check them ( , ) for any progress, but there has not been much recent discussion on the topic.

This is a known issue. See the JIRA ticket:


The only way to break out of the infinite loop is to call `consumer.wakeup()` from another thread.
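`KafkaConsumer.wakeup()` is safe to call from another thread and causes a blocked `poll()` to throw `WakeupException`. The runnable sketch below shows the same watchdog pattern with a placeholder interruptible wait standing in for the consumer, so it runs without a broker; with a real consumer the watchdog would schedule `consumer::wakeup` and the catch block would handle `WakeupException`.

```java
import java.util.concurrent.*;

// Watchdog pattern: a second thread breaks a blocked call after a deadline.
// Here the blocked "poll" is a placeholder sleep and the wakeup is a thread
// interrupt; with a real KafkaConsumer, schedule consumer::wakeup instead
// and catch org.apache.kafka.common.errors.WakeupException around poll().
public class PollWatchdog {

    // Placeholder for a poll() that never returns.
    static String pollForever() {
        try {
            Thread.sleep(Long.MAX_VALUE);
            return "records";
        } catch (InterruptedException e) { // stands in for WakeupException
            return "woken up";
        }
    }

    static String pollWithWatchdog(long deadlineMillis) {
        Thread poller = Thread.currentThread();
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        // Real code: watchdog.schedule(consumer::wakeup, deadlineMillis, TimeUnit.MILLISECONDS);
        watchdog.schedule(poller::interrupt, deadlineMillis, TimeUnit.MILLISECONDS);
        try {
            return pollForever();
        } finally {
            watchdog.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(pollWithWatchdog(100)); // "woken up" after ~100 ms
    }
}
```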

As others have noted, Kafka's internal ConsumerCoordinator currently has a built-in timeout of 9223372036854775807 ms (Long.MAX_VALUE) while it tries to ensure the coordinator is ready.


If you just want to make sure the host/port details are correct before trying to poll the consumer, simply call
consumer.listTopics()
first. If it cannot connect, it will throw an
org.apache.kafka.common.errors.TimeoutException
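As an even lighter-weight pre-flight check (my own suggestion, not part of the Kafka client API), you can probe the bootstrap host/port with a plain TCP connect before creating the consumer. This only proves something is listening on the port, not that it speaks the Kafka protocol:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Fail fast on a bad bootstrap.servers host/port with a bounded TCP connect.
public class BrokerProbe {

    static boolean reachable(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false; // refused, unreachable, or timed out
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo against a local listener; with Kafka, probe the host/port
        // parsed from bootstrap.servers instead.
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(reachable("localhost", server.getLocalPort(), 500));
        }
        System.out.println(reachable("localhost", 1, 500)); // normally nothing listens on port 1
    }
}
```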

Restarting ZooKeeper and Kafka worked for me.

Do you by any chance know how to handle "WARN 19508 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : Error while fetching metadata with correlation id 22 : {dummytic=LEADER_NOT_AVAILABLE}"? It just logs a warning to the console instead of throwing an exception.
            try {
                LOGGER.info("Subscribing to topic.");
                consumer.subscribe(Arrays.asList(topic));
                LOGGER.info("Subscribed to topic successfully.");
                LOGGER.info("Start of polling records for consumer.");
                records = consumer.poll(100);
                // CODE GETS STUCK ON THE LINE ABOVE INDEFINITELY AND NEVER RETURNS
                LOGGER.info("Returning records to microservice.");
            }
            catch (InterruptException interruptException) {
                LOGGER.error("Interrupt exception: " + interruptException);
            }
            catch (TimeoutException timeoutException) {
                LOGGER.error("Timeout exception: " + timeoutException);
            }
            catch (KafkaException kafkaException) {
                LOGGER.error("Kafka exception occurred while consuming records. Message: " + kafkaException.getMessage());
            }
            catch (Exception exception) {
                LOGGER.error("Exception occurred while polling with the consumer: " + exception);
            }