Apache Kafka: Unable to write to Kafka in appender [Kafka] (java.util.concurrent.TimeoutException) with Log4j2

I am trying to stream logs from Log4j2 to a Kafka topic. My ZooKeeper and Kafka servers are running, and I have created a topic for this. I get the following error:

 Unable to write to Kafka in appender [Kafka] java.util.concurrent.TimeoutException: Timeout after waiting for 30000 ms.
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:76)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
    at org.apache.logging.log4j.core.appender.mom.kafka.KafkaManager.send(KafkaManager.java:116)
    at org.apache.logging.log4j.core.appender.mom.kafka.KafkaAppender.tryAppend(KafkaAppender.java:169)
    at org.apache.logging.log4j.core.appender.mom.kafka.KafkaAppender.append(KafkaAppender.java:150)
    at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)
    at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
    at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:448)
    at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:433)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
    at org.apache.logging.log4j.core.config.LoggerConfig.logParent(LoggerConfig.java:439)
    at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:434)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:417)
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:403)
    at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
    at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)
    at org.apache.logging.slf4j.Log4jLogger.log(Log4jLogger.java:376)
My ZooKeeper and Kafka logs are as follows.

ZooKeeper:

[2018-07-03 18:38:05,866] INFO Client attempting to establish new session at 
/127.0.0.1:57207 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-07-03 18:38:05,867] INFO Creating new log file: log.cb 
(org.apache.zookeeper.server.persistence.FileTxnLog)
[2018-07-03 18:38:05,899] INFO Established session 0x1646041bc4d0000 with 
negotiated timeout 6000 for client /127.0.0.1:57207 
(org.apache.zookeeper.server.ZooKeeperServer)
Kafka:
[2018-07-03 18:38:05,807] INFO [ZooKeeperClient] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient)
[2018-07-03 18:38:05,807] INFO Opening socket connection to server 
127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL 
(unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-07-03 18:38:05,810] INFO Socket connection established to 
127.0.0.1/127.0.0.1:2181, initiating session 
(org.apache.zookeeper.ClientCnxn)
[2018-07-03 18:38:05,901] INFO Session establishment complete on server 
127.0.0.1/127.0.0.1:2181, sessionid = 0x1646041bc4d0000, negotiated timeout = 
6000 (org.apache.zookeeper.ClientCnxn)
[2018-07-03 18:38:05,905] INFO [ZooKeeperClient] Connected. 
(kafka.zookeeper.ZooKeeperClient)
And the consumer code is as follows:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties consumerConfig = new Properties();
consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "172.19.102.93:9092");
consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
consumerConfig.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");

// StringDeserializer yields String keys and values, so the consumer must be
// typed <String, String>, not <byte[], byte[]>.
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerConfig);
TestConsumerRebalanceListener rebalanceListener = new TestConsumerRebalanceListener();
consumer.subscribe(Collections.singletonList("TestKafkaTopic"), rebalanceListener);

while (true) {
    // Poll with a timeout in milliseconds; a 1 ms timeout busy-spins the loop.
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("Received Message topic =%s, partition =%s, offset = %d, key = %s, value = %s%n",
                record.topic(), record.partition(), record.offset(), record.key(), record.value());
    }
    consumer.commitSync();
}
Log4j2 configuration:

 <Kafka name="Kafka" topic="TestKafkaTopic">
     <PatternLayout pattern="|%-5p|%d{yyyy-MM-dd|HH:mm:ss,SSS}|%X{InterfaceId}|%X{SeqNo}|%X{Ouid} %X{srch1} %X{BussRef}|${sys:hostname}|${sys:ApplicationComponent}|%X{ExternalRefSend}|%m||%C{6}:%L|%t%n"/>
     <Property name="metadata.broker.list">****:9092</Property>
     <Property name="serializer.class">kafka.serializer.StringEncoder</Property>
     <Property name="bootstrap.servers">****:9092</Property>
 </Kafka>
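As an aside on the configuration above: `metadata.broker.list` and `serializer.class` are settings for the legacy Scala producer; the Log4j2 KafkaAppender hands its `Property` elements to the modern `KafkaProducer`, for which `bootstrap.servers` is the essential setting. A trimmed sketch (the broker address is a placeholder, and the `syncSend` attribute is shown only as an example of an appender option):

```xml
<!-- Minimal KafkaAppender sketch; broker address is a placeholder. -->
<Kafka name="Kafka" topic="TestKafkaTopic" syncSend="true">
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %m%n"/>
    <!-- bootstrap.servers is the only required producer property. -->
    <Property name="bootstrap.servers">host:9092</Property>
</Kafka>
```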


Any idea what I am doing wrong? When I test by sending messages to my topic with a standalone producer, they reach the consumer, so the topic itself works; only writing through the Log4j2 appender fails.

I ran into the same issue with the Log4j2 Kafka appender. It failed because the root log level was set to trace. I raised the logging level for `org.apache.kafka` to INFO or higher while leaving everything else at the desired level (e.g. trace, debug). After that, it started working.
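The fix above can be sketched in the Log4j2 configuration: keep the Kafka client's own loggers at INFO or higher so that the producer's trace/debug events are never routed back through the Kafka appender (which would recurse and eventually time out). A sketch, assuming the appender is named "Kafka" as in the question:

```xml
<Loggers>
    <!-- Kafka client internals stay at INFO so their trace/debug
         events are never fed back into the Kafka appender. -->
    <Logger name="org.apache.kafka" level="INFO"/>
    <Root level="trace">
        <AppenderRef ref="Kafka"/>
    </Root>
</Loggers>
```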


Reference:
