Java Spring Boot Kafka consumer with multiple types of crashes

My understanding is that if I have topic1=ClassA and topic2=ClassB in the same process, I need two container factories.

My configuration class:

@Bean
public ConsumerFactory<String, MessageADto> xxxConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), new JsonDeserializer<>(MessageADto.class));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, MessageADto> xxxListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, MessageADto> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(xxxConsumerFactory());
    return factory;
}

@Bean
public ConsumerFactory<String, MessageBDto> xxx2ConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), new JsonDeserializer<>(MessageBDto.class));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, MessageBDto> xxx2ListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, MessageBDto> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(xxx2ConsumerFactory());
    return factory;
}

private Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, getKafkaHost());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "xxx");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    return props;
}
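
For reference, a minimal sketch (not from the original post; the topic names and listener methods are assumptions) of how the two container factories above would typically be referenced from listeners:

@KafkaListener(topics = "topicA", containerFactory = "xxxListenerContainerFactory")
public void listenA(MessageADto message) {
    // handle MessageADto records from topicA
}

@KafkaListener(topics = "topicB", containerFactory = "xxx2ListenerContainerFactory")
public void listenB(MessageBDto message) {
    // handle MessageBDto records from topicB
}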
From the REST controller method I have to send both MessageADto and MessageBDto:

        MessageBDto messageBDto = new MessageBDto() {{ setMessage(message.getMessage()); setCount(17); }};
        this.kafkaService.sendMessageB(messageBDto);
        return convertToDto(this.kafkaService.sendMessageA(message).get().getRecordMetadata());
This produces an exception:

org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition dtms2-0 at offset 0. If needed, please seek past the record to continue consumption.
Caused by: java.lang.IllegalArgumentException: The class 'org.xxx.xxx.controllers.KafkaController$1' is not in the trusted packages: [java.util, java.lang, org.xxx.xxx.dtos]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
    at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:125) ~[spring-kafka-2.4.1.RELEASE.jar:2.4.1.RELEASE]
    at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:99) ~[spring-kafka-2.4.1.RELEASE.jar:2.4.1.RELEASE]
    at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:425) ~[spring-kafka-2.4.1.RELEASE.jar:2.4.1.RELEASE]
    at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1268) ~[kafka-clients-2.3.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.Fetcher.access$3600(Fetcher.java:124) ~[kafka-clients-2.3.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1492) ~[kafka-clients-2.3.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1600(Fetcher.java:1332) ~[kafka-clients-2.3.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:645) ~[kafka-clients-2.3.1.jar:na]
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:606) ~[kafka-clients-2.3.1.jar:na]
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1263) ~[kafka-clients-2.3.1.jar:na]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1225) ~[kafka-clients-2.3.1.jar:na]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1201) ~[kafka-clients-2.3.1.jar:na]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1012) ~[spring-kafka-2.4.1.RELEASE.jar:2.4.1.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:968) ~[spring-kafka-2.4.1.RELEASE.jar:2.4.1.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:905) ~[spring-kafka-2.4.1.RELEASE.jar:2.4.1.RELEASE]
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
    at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

I don't understand this exception. What does it have to do with my controller class?

You need to add the controller class to the list of trusted packages.

Try adding this to your properties file:

spring.kafka.consumer.properties.spring.json.trusted.packages=*


If that works, you can then figure out which specific packages you actually need and replace * with a comma-separated list of package names.
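
Alternatively, a minimal sketch (an assumption, not from the original post) of setting the trusted packages programmatically on the JsonDeserializer inside one of the consumer factory beans:

@Bean
public ConsumerFactory<String, MessageADto> xxxConsumerFactory() {
    JsonDeserializer<MessageADto> valueDeserializer = new JsonDeserializer<>(MessageADto.class);
    // Trust only the DTO package (package name taken from the error message; adjust as needed).
    valueDeserializer.addTrustedPackages("org.xxx.xxx.dtos");
    return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), valueDeserializer);
}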

Does this answer your question?

@Lakshman No... I tried trusting "*"... Just to clarify, it seems to be trying to deserialize the controller, or some anonymous type... very strange.

No, it doesn't work. You just picked a few keywords out of the error :)... If you look at the error closely: the class 'org.xxx.xxx.controllers.KafkaController$1' is not in the trusted packages: [java.util, java.lang, org.xxx.xxx.dtos]... Why is it trying to deserialize the controller? It should be serializing MessageBDto. After hours of searching I did find another example that uses StringDeserializer instead of JsonDeserializer, and that one works as expected... It is very strange that the JsonDeserializer tries to deserialize the controller...

I've run into this trusted-packages error before, and adding * almost always fixes it. Have you tried the Avro deserializer, KafkaAvroDeserializer? It is very good at mapping the payload to a Java object, and the Confluent plugin will generate the objects from the schema for you.

You are using a JsonSerializer on the producer side, presumably?

Yes, my producer is String/Object with a JsonSerializer. The example I found that works uses StringDeserializer on the consumer side for both key and value; then in the factory it calls factory.setMessageConverter(new StringJsonMessageConverter()); and it then magically works with the @KafkaListener :). Still no idea why the JsonSerializer tried to do something with the controller.
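
For reference, a minimal sketch of the workaround described in that last comment (the bean name is an assumption): StringDeserializer for both key and value on the consumer side, with a StringJsonMessageConverter on the listener container factory so the JSON payload is converted to the listener's parameter type:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> stringListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    // Consume plain strings; the converter maps the JSON to the @KafkaListener method's parameter type.
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs(),
            new StringDeserializer(), new StringDeserializer()));
    factory.setMessageConverter(new StringJsonMessageConverter());
    return factory;
}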