Java heap space - OutOfMemoryError - Kafka broker with SASL_SSL

When I run the "/usr/bin/kafka-delete-records" command below against the Kafka broker's plaintext port 9092, it works fine, but when I use the SASL_SSL port 9094 the command throws the error below. Does anyone know a solution for using the Kafka broker's SASL_SSL port 9094?

$ ssh **** ****@<IP address> /usr/bin/kafka-delete-records --bootstrap-server localhost:9094 --offset-json-file /kafka/records.json

[2019-10-14 04:15:49,891] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1': (org.apache.kafka.common.utils.KafkaThread)

java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:390)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:351)
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
    at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1125)
    at java.lang.Thread.run(Thread.java:748)
Executing records delete operation
Records delete operation completed:


Most likely, the OOM exception is just a red herring; see the JIRA. The real problem is the SASL_SSL connection, which your client fails to establish correctly: a client speaking plaintext to a TLS port misreads the first bytes of the TLS handshake as a message length prefix and tries to allocate a huge buffer, which blows the heap. Enable SSL debugging on the client side and try again:

$ export KAFKA_OPTS="-Djavax.net.debug=handshake"
$ /usr/bin/kafka-delete-records ...
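Besides the debug flag, the CLI tool also has to be told to use SASL_SSL at all; otherwise it speaks plaintext to port 9094. A minimal sketch of a client properties file, passed via the tool's --command-config option (the mechanism, credentials, and truststore paths below are placeholders, not values from the original post):

    security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="client" \
        password="client-secret";
    ssl.truststore.location=/path/to/client.truststore.jks
    ssl.truststore.password=changeit

$ /usr/bin/kafka-delete-records --bootstrap-server localhost:9094 --command-config client.properties --offset-json-file /kafka/records.json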

This error message means the JVM running the application ran out of memory. You should try searching for it.

8 GB of memory is allocated to the JVM, and it also works with port 9092 as mentioned above. Did you increase -Xmx in KAFKA_HEAP_OPTS?

@ASR any idea why it works on port 9092? -Xmx is set to 8 GB, and the server has 16 GB of total memory as well.

In our case this usually happens when we forget to add credentials, or use the wrong ones.

That's a good point. I assumed the OP logs in with a Kerberos credential cache, but maybe he just omitted the jaas.conf to avoid overloading the question.