
Apache Kafka: Kafka consumer error when writing data from an input topic to an output topic with Kafka Streams


Using a Kafka connector I am writing data in Avro format to a Kafka topic, and then with Kafka Streams I am mapping some values and writing the output to another topic with the following call:

Stream.to("output_topic");
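For context, a map-and-forward topology of the kind described above looks roughly like the sketch below. The topic names, application id, broker address, serdes, and the mapping itself are placeholders, not the asker's actual code (Avro data would need an Avro serde rather than the String serde used here):

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class CopyTopology {
    public static void main(String[] args) {
        // Application id and broker address are placeholders.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "copy-topic-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.domain:9092");
        // String serdes keep the sketch self-contained; real Avro payloads
        // would use an Avro serde (e.g. from the schema registry) instead.
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input_topic");
        stream.mapValues(value -> value.toUpperCase())  // placeholder mapping
              .to("output_topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Close the topology cleanly on shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}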
My data does get written to the output topic, but I am facing an offset problem. If my input topic has 25 records, all 25 records are written to my output topic, but then an error like the one below is thrown:

Here is the full error:

[2018-06-25 12:42:50,243] ERROR [ConsumerFetcher consumerId=console-consumer-3500_kafka-connector-1529910768088-712e7106, leaderId=0, fetcherId=0] Error due to (kafka.consumer.ConsumerFetcherThread)
kafka.common.KafkaException: Error processing data for partition Stream-0 offset 25
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:204)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:169)
        at scala.Option.foreach(Option.scala:257)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:169)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:166)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:166)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
        at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:164)
        at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
Caused by: java.lang.IllegalArgumentException: Illegal batch type class org.apache.kafka.common.record.DefaultRecordBatch. The older message format classes only support conversion from class org.apache.kafka.common.record.AbstractLegacyRecordBatch, which is used for magic v0 and v1
        at kafka.message.MessageAndOffset$.fromRecordBatch(MessageAndOffset.scala:29)
        at kafka.message.ByteBufferMessageSet$$anonfun$internalIterator$1.apply(ByteBufferMessageSet.scala:169)
        at kafka.message.ByteBufferMessageSet$$anonfun$internalIterator$1.apply(ByteBufferMessageSet.scala:169)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
        at scala.collection.Iterator$class.toStream(Iterator.scala:1320)
        at scala.collection.AbstractIterator.toStream(Iterator.scala:1334)
        at scala.collection.TraversableOnce$class.toSeq(TraversableOnce.scala:298)
        at scala.collection.AbstractIterator.toSeq(Iterator.scala:1334)
        at kafka.consumer.PartitionTopicInfo.enqueue(PartitionTopicInfo.scala:59)
        at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:87)
        at kafka.consumer.ConsumerFetcherThread.processPartitionData(ConsumerFetcherThread.scala:37)
        at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:183)
        ... 15 more

I ran into the same error when using kafka-console-consumer.sh.

The problem is the --zookeeper option. If you supply --zookeeper, the old consumer is started by default, and the magic value defaults to v0 or v1 (the current Kafka version, 1.1, uses v2). That is where the version mismatch comes from.

You can get around this error by using the --bootstrap-server option instead of --zookeeper (which means running the new consumer).

When you supply the --bootstrap-server option, you have to give the brokers' domains (or IPs) and port numbers, e.g. --bootstrap-server kafka.domain:9092,kafka2.domain:9092


The broker (Kafka server) listens on port 9092 by default; you can change the port in Kafka/config/server.properties.
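Programmatically, the equivalent of the --bootstrap-server fix is the new Java consumer, which talks to the brokers directly instead of going through ZooKeeper. A minimal sketch, with the broker address, group id, and topic name as placeholders, reading the Avro payload as raw bytes rather than decoding it:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NewConsumerExample {
    public static void main(String[] args) {
        // Broker address, group id and topic name are placeholders.
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.domain:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // The payload is Avro; ByteArrayDeserializer just hands back the raw bytes,
        // decoding them would need the appropriate Avro deserializer.
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("output_topic"));
            while (true) {
                // poll(long) matches the 1.x client versions discussed in this thread.
                ConsumerRecords<String, byte[]> records = consumer.poll(500);
                for (ConsumerRecord<String, byte[]> record : records) {
                    System.out.printf("offset=%d, value=%d bytes%n",
                            record.offset(), record.value().length);
                }
            }
        }
    }
}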

Sounds like a version mismatch... What are your broker, message format, Connect, and Kafka Streams versions?

broker: Confluent 4.1.0, message format: Avro, Connect: Kafka Connect JDBC, Kafka Streams: 1.0.0-cp1

By "message format" I don't mean your data type (which appears to be Avro) but the Kafka message format: there are the configs message.format.version (per topic) and log.message.format.version (per broker). Also, did you upgrade your broker at some point and keep using a topic that was created before the upgrade?
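If it is unclear which message format a topic is currently on, one way to check is the Java AdminClient (available since Kafka 0.11). A minimal sketch, with the broker address and topic name as placeholders:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckMessageFormat {
    public static void main(String[] args) throws Exception {
        // Broker address and topic name are placeholders.
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.domain:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "output_topic");
            Config config = admin.describeConfigs(Collections.singleton(topic))
                                 .all().get().get(topic);
            // message.format.version is the per-topic override of the broker's
            // log.message.format.version; a 0.10.x-or-older value here means the
            // topic is still on the magic v0/v1 record format.
            System.out.println("message.format.version = "
                    + config.get("message.format.version").value());
        }
    }
}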