
Scala Kafka error: SLF4J: Failed toString() invocation on an object of type [org.apache.kafka.common.Cluster]

I'm trying to use Gatling together with Kafka, but I keep getting this error:

01:32:53.933 [kafka-producer-network-thread | producer-1] DEBUG o.apache.kafka.clients.NetworkClient - Sending metadata request ClientRequest(expectResponse=true, payload=null, request=RequestSend(header={api_key=3,api_version=0,correlation_id=12,client_id=producer-1}, body={topics=[test]})) to node 1011
SLF4J: Failed toString() invocation on an object of type [org.apache.kafka.common.Cluster]
java.lang.NullPointerException
    at org.apache.kafka.common.PartitionInfo.toString(PartitionInfo.java:72)
    at java.lang.String.valueOf(String.java:2994)
    at java.lang.StringBuilder.append(StringBuilder.java:131)
    at java.util.AbstractCollection.toString(AbstractCollection.java:462)
    at java.lang.String.valueOf(String.java:2994)
    at java.lang.StringBuilder.append(StringBuilder.java:131)
    at org.apache.kafka.common.Cluster.toString(Cluster.java:151)
    at org.slf4j.helpers.MessageFormatter.safeObjectAppend(MessageFormatter.java:305)
    at org.slf4j.helpers.MessageFormatter.deeplyAppendParameter(MessageFormatter.java:277)
    at org.slf4j.helpers.MessageFormatter.arrayFormat(MessageFormatter.java:231)
    at ch.qos.logback.classic.spi.LoggingEvent.getFormattedMessage(LoggingEvent.java:298)
    at ch.qos.logback.classic.spi.LoggingEvent.prepareForDeferredProcessing(LoggingEvent.java:208)
    at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:212)
    at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103)
    at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
    at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
    at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:273)
    at ch.qos.logback.classic.Logger.callAppenders(Logger.java:260)
    at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:442)
    at ch.qos.logback.classic.Logger.filterAndLog_2(Logger.java:433)
    at ch.qos.logback.classic.Logger.debug(Logger.java:511)
    at org.apache.kafka.clients.producer.internals.Metadata.update(Metadata.java:133)
    at org.apache.kafka.clients.NetworkClient.handleMetadataResponse(NetworkClient.java:313)
    at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:298)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:199)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
    at java.lang.Thread.run(Thread.java:745)
01:32:53.937 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.producer.internals.Metadata - Updated cluster metadata version 14 to [FAILED toString()]
I'm not sure whether the error is related to my code, but here is my BasicSimulation.scala:

import scala.concurrent.duration._

import io.gatling.core.Predef._
import org.apache.kafka.clients.producer.ProducerConfig
// Kafka DSL (kafka, topic, send) comes from the gatling-kafka plugin;
// adjust this import to the plugin you use (com.github.mnogu.gatling.kafka assumed here).
import com.github.mnogu.gatling.kafka.Predef._

class BasicSimulation extends Simulation {

  val kafkaConf = kafka
    .topic("test")
    .properties(
      Map(
        ProducerConfig.ACKS_CONFIG -> "1",
        ProducerConfig.BOOTSTRAP_SERVERS_CONFIG -> "kafka:9092",
        ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG ->
          "org.apache.kafka.common.serialization.ByteArraySerializer",
        ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG ->
          "org.apache.kafka.common.serialization.ByteArraySerializer"))

  val scn = scenario("Kafka Test")
    .feed(csv("data.csv").circular)
    .exec(kafka("request")
      .send("${data}".getBytes: Array[Byte]))

  setUp(
    scn
      .inject(constantUsersPerSec(10) during (10 seconds)))
    .protocols(kafkaConf)
}
Here is the Kafka-related part of my docker-compose.yml:

  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_LOG_CLEANER_ENABLE: 'true'
    volumes:
      - /tmp/docker.sock:/var/run/docker.sock

Stopping and removing my Docker containers seems to have fixed it:

docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)

In my case it was a version problem: I was using a Kafka build and a kafka-clients library that were incompatible with each other.

The problem went away when I switched to Kafka 2.11-0.10.2.1 and kafka-clients 0.10.2.0.
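
For reference, a minimal sketch of what that pin looks like in an sbt build (assuming an sbt-based Gatling project; use whatever coordinates match your broker build):

// build.sbt -- minimal sketch, assuming an sbt project.
// The broker build above is kafka_2.11-0.10.2.1, so the client is pinned to a matching 0.10.2.x release.
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.10.2.0"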

This is a Kafka dependency version issue: the installed Kafka version and the kafka-clients dependency version do not match. First verify the installed Kafka version, then update the kafka-clients version in your POM accordingly. I ran into the same problem and resolved it by changing the kafka-clients version.

Installed Kafka version: 0.11

Maven dependency:
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.11.0.0</version>
        </dependency>
It works fine for me now.
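
If you are unsure which kafka-clients version actually ends up on the classpath, you can print it and compare it with the installed broker build. A minimal sketch, assuming the AppInfoParser utility shipped with kafka-clients (the object name here is just for illustration):

import org.apache.kafka.common.utils.AppInfoParser

// Prints the kafka-clients version resolved on the classpath so it can be
// compared against the installed broker version (0.11 in this answer).
object KafkaClientVersionCheck extends App {
  println(s"kafka-clients version: ${AppInfoParser.getVersion}")
}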