Apache Kafka missing Avro schema: error retrieving Avro schema for id X

Tags: apache-kafka, kafka-consumer-api, confluent-platform, confluent-schema-registry

I am using the Confluent Schema Registry and have a problem in production with Kafka messages that cannot be recovered. The cause seems to be that the schema used by the message producer cannot be found. Here is the stack trace:

2021-04-02 13:18:04.947+02:00  ERROR org.apache.spark.executor.Executor:91 - Exception in task 0.3 in stage 0.0 (TID 3)
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition pos_transaction-0 at offset 889990. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 281
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (number, String, array, object, 'true', 'false' or 'null')
 at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:202)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:229)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:409)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:402)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:119)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:192)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getById(CachedSchemaRegistryClient.java:168)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:121)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:93)
    at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55)
    at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:65)
    at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:55)
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1009)
    at org.apache.kafka.clients.consumer.internals.Fetcher.access$3400(Fetcher.java:96)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1186)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1500(Fetcher.java:1035)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:544)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:505)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1259)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
    at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.poll(CachedKafkaConsumer.scala:136)
    at org.apache.spark.streaming.kafka010.CachedKafkaConsumer.get(CachedKafkaConsumer.scala:68)
    at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:271)
    at org.apache.spark.streaming.kafka010.KafkaRDDIterator.next(KafkaRDD.scala:231)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:462)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
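The nested RestClientException is worth reading closely: the deserializer asked the registry for schema id 281 and got back a body starting with '<', i.e. HTML rather than JSON, which is typically an error page from the registry host or from a proxy or load balancer in front of it. A minimal sketch to inspect the raw response for that id, assuming a hypothetical registry address of http://schema-registry:8081 (substitute the URL your consumer is actually configured with):

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RegistryProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical registry address - replace with the value of
            // schema.registry.url from your consumer configuration.
            URL url = new URL("http://schema-registry:8081/schemas/ids/281");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/vnd.schemaregistry.v1+json");
            int status = conn.getResponseCode();
            System.out.println("HTTP " + status);
            // On failures, read the error stream so we see the exact page
            // the Avro deserializer failed to parse as JSON.
            InputStream body = status < 400 ? conn.getInputStream() : conn.getErrorStream();
            if (body == null) {
                System.out.println("(no response body)");
                return;
            }
            try (BufferedReader in = new BufferedReader(new InputStreamReader(body))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // a body starting with '<' is HTML, not JSON
                }
            }
        }
    }

If this prints the schema JSON, the registry itself is healthy and the failure was transient or specific to the consumer's environment; if it prints HTML, the consumer is not reaching a real schema registry endpoint at that URL.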
Comment: How do you know that the record at offset 889990 of pos_transaction-0 is actually Avro-serialized? That id is just the int representation of byte slice [1-5]… Which ids are available in the registry for pos_transaction-key and/or pos_transaction-value? Have you tried consuming the _schemas topic to see whether that id exists?
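For context on that comment: Confluent's Avro wire format prefixes every record with a magic byte 0x0 followed by the 4-byte big-endian schema id, so the "281" in the error is simply the int read from bytes 1-4 of the message. A minimal sketch of that check, assuming you re-consume the failing record as raw bytes (ByteArrayDeserializer, then seek the pos_transaction-0 partition to offset 889990):

    import java.nio.ByteBuffer;

    public class WireFormatCheck {
        // Returns the schema id embedded in a Confluent-Avro record,
        // or throws if the bytes do not follow the wire format.
        static int schemaIdOf(byte[] raw) {
            if (raw == null || raw.length < 5) {
                throw new IllegalArgumentException("too short for magic byte + schema id");
            }
            ByteBuffer buf = ByteBuffer.wrap(raw);
            byte magic = buf.get(); // byte 0: must be 0x0 in the Confluent wire format
            if (magic != 0x0) {
                throw new IllegalArgumentException("magic byte is " + magic
                        + ": record was probably not written by the Confluent Avro serializer");
            }
            return buf.getInt(); // bytes 1-4: the schema id, big-endian
        }
    }

If schemaIdOf really returns 281, the producer did register that schema and the registry lookup is what needs fixing; if the magic-byte check fails, something other than the Confluent Avro serializer wrote to the topic. The same raw-bytes consumer can also seek past offset 889990 to skip the poison record, as the exception message suggests.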