Java Kafka error: org.apache.kafka.common.errors.TimeoutException

Tags: java, docker, spring-boot, apache-kafka

I have a simple application that sends messages to a topic. Here is my code:

Sender.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class Sender {

  private static final Logger LOGGER = LoggerFactory.getLogger(Sender.class);

  @Autowired
  private KafkaTemplate<String, String> kafkaTemplate;

  public void send(String topic, String payload) {
    LOGGER.info("sending payload='{}' to topic='{}'", payload, topic);
    kafkaTemplate.send(topic, payload);
  }
}
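
The producer-side configuration is not shown in the question. For context, here is a minimal sketch of what a matching SenderConfig could look like with Spring Kafka; the class name, the kafka.bootstrap-servers property key, and the String serializers are assumptions for illustration, not the asker's actual file.

SenderConfig.java (hypothetical sketch)

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class SenderConfig {

  @Value("${kafka.bootstrap-servers}")
  private String bootstrapServers;

  @Bean
  public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    // the producer uses this address for its initial connection; it must point
    // at a listener the broker actually advertises
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return props;
  }

  @Bean
  public ProducerFactory<String, String> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
  }

  @Bean
  public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
  }

  @Bean
  public Sender sender() {
    return new Sender();
  }
}

Whatever the real producer configuration looks like, its bootstrap-servers value is the address that must be reachable from the application and must match what the broker advertises, which is where the Docker setup discussed below comes in.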
ReceiverConfig.java

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

@EnableKafka
@Configuration
public class ReceiverConfig {

  @Value("${kafka.bootstrap-servers}")
  private String bootstrapServers;

  @Bean
  public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    // list of host:port pairs used for establishing the initial connections to the Kafka cluster
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // allows a pool of processes to divide the work of consuming and processing records
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "helloworld");
    // automatically reset the offset to the earliest offset
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return props;
  }

  @Bean
  public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
  }

  @Bean
  public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
  }

  @Bean
  public Receiver receiver() {
    return new Receiver();
  }
}
The code above works fine when I publish to a topic that does not yet exist; the topic gets created. For example, with the application.yml shown below, if the topic named "myTopic" does not exist, the code still works and the message is consumed. But when I run Kafka in Docker via a docker-compose yml, I create some topic names there, and if I try to publish to those topics I get the timeout exception below:

Kafka error: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for TOPICNAME-0: 30010 ms has passed since batch creation plus linger time

I went through some posts on Stack Overflow, and they suggest increasing the timeout, but in my case I am only publishing a single message, so I am not sure why I am getting this error.
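
For reference, the timeouts those posts refer to are producer-side settings. Building on the hypothetical producerConfigs() sketch above, they could be raised as follows; the keys are real ProducerConfig constants, but the values are arbitrary examples and, as the resolution below shows, this was not the fix here:

// inside the hypothetical producerConfigs() above; values are examples only
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60000);    // per-request timeout (ms)
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60000);          // max time send() may block waiting for metadata (ms)
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000);  // overall send deadline (kafka-clients >= 2.1)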

I am guessing Docker assigns an IP to the Kafka server, so localhost may no longer be valid; try changing application.yml to point at the right IP for that server.

Can you post your Docker config? Also, how are you starting Kafka in Docker, i.e. docker run, compose, or stack deploy?

I figured it out. It was a version mismatch: the Kafka version in my Spring Boot application was the latest, while the one used in the Docker setup was older. After I updated Docker and docker-compose on Ubuntu and ran it again with the latest Kafka version, it worked. Thanks to both of you for your help.
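
As the comments suggest, it is also worth confirming that the address in kafka.bootstrap-servers actually reaches the broker Docker advertises. Below is a minimal sketch of such a check using Kafka's AdminClient; the localhost:9092 address and the 10-second timeout are assumptions for illustration.

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class BrokerCheck {

  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    // assumption: same address as kafka.bootstrap-servers in application.yml
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    try (AdminClient admin = AdminClient.create(props)) {
      // times out if the app cannot reach the bootstrap address or the advertised listener
      System.out.println("Cluster id: "
          + admin.describeCluster().clusterId().get(10, TimeUnit.SECONDS));
      System.out.println("Topics: "
          + admin.listTopics().names().get(10, TimeUnit.SECONDS));
    }
  }
}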
Receiver.java

import java.util.concurrent.CountDownLatch;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class Receiver {

  private static final Logger LOGGER = LoggerFactory.getLogger(Receiver.class);

  @Autowired
  private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

  private CountDownLatch latch = new CountDownLatch(1);

  public CountDownLatch getLatch() {
    return latch;
  }

  @KafkaListener(topics = "${kafka.topic.helloworld}")
  public void receive(String payload) {
    LOGGER.info("received payload='{}'", payload);
    latch.countDown();
  }
}
application.yml

kafka:
  bootstrap-servers: localhost:9092
  topic:
    name: myTopic
server:
  port: 9191