
Java kafka-log4j-appender 0.9 not working


I added a log4j Kafka appender to my log4j.properties, but it does not work as I expected.

Before posting this question, I double-checked my log4j.properties against a similar question about version 0.8. However, I had no luck.

Here is my log4j.properties:

log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=my-topic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

log4j.logger.io.vertx=WARN
log4j.logger.io.netty=WARN

log4j.rootLogger=DEBUG, Console, Kafka
When I start my application, I can see that the Kafka producer has started:

[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.
But the appender does not work, and an exception is raised:

[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
I also checked my Kafka + Zookeeper environment, and it matches what is configured in my log4j.properties. At this point I have no idea what is wrong; I hope someone can give me a hand. Here is the whole output:

[main] INFO  org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
        compression.type = none
        metric.reporters = []
        metadata.max.age.ms = 300000
        metadata.fetch.timeout.ms = 60000
        reconnect.backoff.ms = 50
        sasl.kerberos.ticket.renew.window.factor = 0.8
        bootstrap.servers = [localhost:9092]
        retry.backoff.ms = 100
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        buffer.memory = 33554432
        timeout.ms = 30000
        key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        ssl.keystore.type = JKS
        ssl.trustmanager.algorithm = PKIX
        block.on.buffer.full = false
        ssl.key.password = null
        max.block.ms = 60000
        sasl.kerberos.min.time.before.relogin = 60000
        connections.max.idle.ms = 540000
        ssl.truststore.password = null
        max.in.flight.requests.per.connection = 5
        metrics.num.samples = 2
        client.id =
        ssl.endpoint.identification.algorithm = null
        ssl.protocol = TLS
        request.timeout.ms = 30000
        ssl.provider = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        acks = 1
        batch.size = 16384
        ssl.keystore.location = null
        receive.buffer.bytes = 32768
        ssl.cipher.suites = null
        ssl.truststore.type = JKS
        security.protocol = PLAINTEXT
        retries = 0
        max.request.size = 1048576
        value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
        ssl.truststore.location = null
        ssl.keystore.password = null
        ssl.keymanager.algorithm = SunX509
        metrics.sample.window.ms = 30000
        partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
        send.buffer.bytes = 131072
        linger.ms = 0

[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
[main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = [])
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-size
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name compression-rate
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name queue-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name request-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-per-request
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-retries
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name errors
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-size-max
[main] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.0
[main] INFO  org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fc7243c2af4b2b4a
[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.
...
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
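
One way to rule out the broker itself, independent of log4j, is a bare KafkaProducer that blocks on the send acknowledgement; it hits the same metadata timeout if the broker is genuinely unreachable. A minimal sketch only (the class name is made up; the topic and serializer settings are assumptions based on the configuration above):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BrokerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            // send() is asynchronous; get() blocks until the broker acknowledges
            // the record, so the same "Failed to update metadata" timeout would
            // surface right here if the broker were actually unreachable.
            System.out.println(producer.send(new ProducerRecord<>("my-topic", "ping")).get());
        } finally {
            producer.close();
        }
    }
}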

Thanks.

Finally, I fixed it. Here is my new log4j.properties:

log4j.rootLogger=DEBUG, Console

log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

log4j.category.foxgem=DEBUG, Kafka
log4j.additivity.foxgem=false

log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=logTopic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

log4j.logger.io.vertx=WARN
log4j.logger.io.netty=WARN
I also created an example that shows how to use this appender.
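
For illustration, a minimal sketch of what such an example can look like (the class name is made up; the package name foxgem matches the log4j.category line above):

package foxgem;

import org.apache.log4j.Logger;

public class KafkaAppenderDemo {
    private static final Logger logger = Logger.getLogger(KafkaAppenderDemo.class);

    public static void main(String[] args) {
        // log4j.category.foxgem=DEBUG, Kafka routes loggers under the foxgem
        // package to the Kafka appender, and additivity=false keeps the event
        // from also reaching the root logger's Console appender, so this
        // message ends up only in the logTopic topic.
        logger.info("hello from the kafka appender");
    }
}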

The changes I made are:

  • Remove the Kafka appender from the root logger. In my previous log4j.properties I had:

    log4j.rootLogger=DEBUG, Console, Kafka
    
  • Add a log category for the packages whose log output should go to Kafka:

    log4j.category.foxgem=DEBUG, Kafka
    log4j.additivity.foxgem=false
    

  • I think the reason is: with the old rootLogger, the log output of Kafka itself was also sent to Kafka, and that is what caused the timeout.

    I ran into a similar problem in both log4j and logback.

    When the appender level was INFO everything worked fine, but when I changed the level to DEBUG, after a while I got this error: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

    The problem was that KafkaProducer itself emits trace and debug log output and was trying to append those records to Kafka, so it was stuck in a loop.

    Changing the org.apache.kafka package log level to INFO, or pointing its appender at a file or stdout, solved the problem.
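
    For example, a sketch of that workaround in log4j 1.x properties syntax (the appender name Console is taken from the configuration above):

    # Keep Kafka client logs at INFO and route them to the console instead of
    # the Kafka appender, so the producer can no longer log into itself.
    log4j.logger.org.apache.kafka=INFO, Console
    log4j.additivity.org.apache.kafka=false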