java.lang.OutOfMemoryError: Direct buffer memory while listening to a Kafka topic with a Spring Kafka consumer


Below are the properties used to create the consumer:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import io.confluent.kafka.serializers.KafkaAvroSerializerConfig;
import io.confluent.kafka.serializers.subject.TopicRecordNameStrategy;

Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// Values are Avro records deserialized via the Confluent Schema Registry
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
props.put(KafkaAvroSerializerConfig.VALUE_SUBJECT_NAME_STRATEGY,
        TopicRecordNameStrategy.class.getName());
props.put("schema.registry.url", schemaRegistryUrl);
props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, true);
props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        "org.apache.kafka.clients.consumer.RoundRobinAssignor");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
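For context, a minimal sketch of how a property map like this is typically wired into a Spring Kafka listener container factory; the class name, bean layout, and concurrency value below are assumptions for illustration and are not taken from the original post:

import java.util.HashMap;
import java.util.Map;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    // Builds the property map shown above (entries omitted here for brevity)
    private Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        // ... same entries as in the question ...
        return props;
    }

    @Bean
    public ConsumerFactory<String, Object> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProps());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // Each concurrent container runs its own KafkaConsumer with its own fetch buffers
        factory.setConcurrency(3); // illustrative value only
        return factory;
    }
}

Note that the log below shows several container threads (consumer-5-C-1, consumer-10-C-1), each of which holds a separate KafkaConsumer instance and therefore its own network read buffers.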
The listener containers then stop with the following errors:
> java.lang.OutOfMemoryError: Java heap space
>
> 2020-11-05 12:07:31,002 ERROR [consumer-10-C-1] [] org.springframework.core.log.LogAccessor: Stopping container due to an Error java.lang.OutOfMemoryError: Java heap space
>
> 2020-11-05 12:07:30,999 ERROR [consumer-5-C-1] [] org.springframework.core.log.LogAccessor: Stopping container due to an Error
> java.lang.OutOfMemoryError: Java heap space
>     at java.base/java.nio.HeapByteBuffer.<init>(Unknown Source)
>     at java.base/java.nio.ByteBuffer.allocate(Unknown Source)
>     at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
>     at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:113)
>     at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:447)
>     at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:397)
>     at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
>     at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
>     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:550)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:236)
>     at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:469)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1274)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1238)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1173)
>     at brave.kafka.clients.TracingConsumer.poll(TracingConsumer.java:89)
>     at brave.kafka.clients.TracingConsumer.poll(TracingConsumer.java:83)
>     at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1109)
>     at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1065)
>     at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:990)
>     at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>     at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
>     at java.base/java.lang.Thread.run(Unknown Source)
>
> 2020-11-05 12:07:31,276 ERROR [consumer-5-C-1] [] org.springframework.core.log.LogAccessor: Stopping container due to an Error
> java.lang.OutOfMemoryError: Direct buffer memory
>     at java.base/java.nio.Bits.reserveMemory(Unknown Source)
>     at java.base/java.nio.DirectByteBuffer.<init>(Unknown Source)
>     at java.base/java.nio.ByteBuffer.allocateDirect(Unknown Source)
>     at java.base/sun.nio.ch.Util.getTemporaryDirectBuffer(Unknown Source)
>     at java.base/sun.nio.ch.IOUtil.read(Unknown Source)
>     at java.base/sun.nio.ch.IOUtil.read(Unknown Source)
>     at java.base/sun.nio.ch.SocketChannelImpl.read(Unknown Source)
>     at org.apache.kafka.common.network.PlaintextTransportLayer.read(PlaintextTransportLayer.java:103)
>     at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:118)
>     at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:447)
>     at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:397)
>     at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
>     at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
>     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:550)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
>     at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:236)
>     at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:469)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1274)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1238)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1173)
>     at brave.kafka.clients.TracingConsumer.poll(TracingConsumer.java:89)
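For reference when reading the traces above: the failing frames (MemoryPool$1.tryAllocate, NetworkReceive.readFrom, sun.nio.ch.IOUtil.read) are where the consumer allocates buffers to read fetch responses off the socket; heap buffers are bounded by -Xmx, and direct buffers by -XX:MaxDirectMemorySize, which by default is roughly the maximum heap size. The sketch below lists the standard consumer settings that cap how much data a single fetch can pull into those buffers; the values are illustrative only and are not part of the original configuration.

// Illustrative fetch-sizing settings (not from the original config) that bound
// how much data each KafkaConsumer buffers per poll/fetch.
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);               // default 500
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 1048576);  // 1 MB per partition (default)
props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 10 * 1024 * 1024);   // 10 MB per fetch response (default 50 MB)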