
Kubernetes Flink application Kafka producer sink is throwing a java heap space error (OutOfMemoryError)

Tags: kubernetes, apache-kafka, jvm, heap, apache-flink

I have created a Flink application that takes a DataStream of strings and sinks it to Kafka. The DataStream of strings consists of a few simple strings from a collection:

List<String> listofstring = new ArrayList<>();
listofstring.add("testkafka1");
listofstring.add("testkafka2");
listofstring.add("testkafka3");
listofstring.add("testkafka4");
DataStream<String> testStringStream = env.fromCollection(listofstring);
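
For context, here is a minimal, self-contained sketch of the job described above, assuming the universal FlinkKafkaProducer connector; the topic name, bootstrap servers, and class name are placeholders, not taken from the question:

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class KafkaSinkJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1); // the job runs with parallelism 1, as described

        List<String> listofstring = new ArrayList<>();
        listofstring.add("testkafka1");
        listofstring.add("testkafka2");
        listofstring.add("testkafka3");
        listofstring.add("testkafka4");
        DataStream<String> testStringStream = env.fromCollection(listofstring);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "broker:9092"); // placeholder

        // Sink the stream to Kafka; "test-topic" is a placeholder topic name.
        testStringStream.addSink(
                new FlinkKafkaProducer<>("test-topic", new SimpleStringSchema(), props));

        env.execute("kafka-sink-test");
    }
}
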
Flink runs on Kubernetes with parallelism 1 and one task manager. The Flink job fails as soon as it starts, with the following error:

ERROR org.apache.kafka.common.utils.KafkaThread - Uncaught exception in thread 'kafka-producer-network-thread | producer-1':
java.lang.OutOfMemoryError: Java heap space
    at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:97)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:203)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:167)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:381)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:433)
    at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
    at org.apache.kafka.clients.producer.internals.Sender.awaitLeastLoadedNodeReady(Sender.java:409)
    at org.apache.kafka.clients.producer.internals.Sender.maybeSendTransactionalRequest(Sender.java:337)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:204)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
    at java.lang.Thread.run(Thread.java:748)
The taskmanager configuration I have is (taken from the taskmanager logs):

Starting Task Manager
config file:

jobmanager.rpc.address: component-app-adb71002-tm-5c6f4d58bd-rtblz
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 2
parallelism.default: 1
jobmanager.execution.failover-strategy: region
blob.server.port: 6124
query.server.port: 6125
blob.server.port: 6125
fs.s3a.aws.credentials.provider: org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.DefaultAWSCredentialsProviderChain
jobmanager.heap.size: 524288k
jobmanager.rpc.port: 6123
jobmanager.web.port: 8081
metrics.internal.query-service.port: 50101
metrics.reporter.dghttp.apikey: f52362263f032f2ebc3622cafc0171cd
metrics.reporter.dghttp.class: org.apache.flink.metrics.datadog.DatadogHttpReporter
metrics.reporter.dghttp.tags: componentingestion,dev
query.server.port: 6124
taskmanager.heap.size: 1048576k
taskmanager.numberOfTaskSlots: 1
web.upload.dir: /opt/flink
jobmanager.rpc.address: component-app-adb71002
taskmanager.host: 10.42.6.6
Starting taskexecutor as a console application on host component-app-adb71002-tm-5c6f4d58bd-rtblz.
2020-02-11 15:19:20,519 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       - --------------------------------------------------------------------------------
2020-02-11 15:19:20,520 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  Starting TaskManager (Version: 1.9.2, Rev:c9d2c90, Date:24.01.2020 @ 08:44:30 CST)
2020-02-11 15:19:20,520 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  OS current user: flink
2020-02-11 15:19:20,520 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  Current Hadoop/Kerberos user: <no hadoop dependency found>
2020-02-11 15:19:20,520 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.242-b08
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  Maximum heap size: 922 MiBytes
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  JAVA_HOME: /usr/local/openjdk-8
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  No Hadoop Dependency available
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  JVM Options:
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -     -XX:+UseG1GC
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -     -Xms922M
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -     -Xmx922M
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -     -XX:MaxDirectMemorySize=8388607T
2020-02-11 15:19:20,521 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -     -Dlog4j.configuration=file:/opt/flink/conf/log4j-console.properties
2020-02-11 15:19:20,522 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -     -Dlogback.configurationFile=file:/opt/flink/conf/logback-console.xml
2020-02-11 15:19:20,522 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  Program Arguments:
2020-02-11 15:19:20,522 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -     --configDir
2020-02-11 15:19:20,522 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -     /opt/flink/conf
2020-02-11 15:19:20,522 INFO  org.apache.flink.runtime.taskexecutor.TaskManagerRunner       -  Classpath: /opt/flink/lib/flink-metrics-datadog-1.9.2.jar:/opt/flink/lib/flink-table-blink_2.12-1.9.2.jar:/opt/flink/lib/flink-table_2.12-1.9.2.jar:/opt/flink/lib/log4j-1.2.17.jar:/opt/flink/lib/slf4j-log4j12-1.7.15.jar:/opt/flink/lib/flink-dist_2.12-1.9.2.jar:::

Most of the producer configs are the defaults. Am I missing something here, or is something wrong in the configuration?

As Dominik suggested, the issue is not related to the heap.

This exception is thrown when the broker is set up with SSL authentication but the client is not configured for SSL.
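
The usual fix is to give the producer client matching SSL settings. A hedged sketch of the change to the producer Properties from the sketch above, assuming an SSL listener on port 9093 and a JKS truststore; the port, paths, and password are placeholders:

Properties props = new Properties();
props.setProperty("bootstrap.servers", "broker:9093");                       // SSL listener (placeholder)
props.setProperty("security.protocol", "SSL");                               // was PLAINTEXT in the failing config
props.setProperty("ssl.truststore.location", "/etc/kafka/truststore.jks");   // placeholder path
props.setProperty("ssl.truststore.password", "changeit");                    // placeholder
// For mutual TLS, also set ssl.keystore.location / ssl.keystore.password.

// Then build the sink exactly as before:
testStringStream.addSink(
        new FlinkKafkaProducer<>("test-topic", new SimpleStringSchema(), props));
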

This is a bug in Kafka: the protocol mismatch surfaces as an OutOfMemoryError rather than a clear authentication failure.
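
Why it OOMs: the Kafka protocol prefixes every response with a 4-byte size, and NetworkReceive allocates a buffer of that size (the readFromReadableChannel → ByteBuffer.allocate frames in the stack trace above). When a PLAINTEXT client reads a TLS record instead, those bytes decode into an enormous "size". An illustrative sketch of the arithmetic, not Kafka's actual code:

import java.nio.ByteBuffer;

public class SizePrefixDemo {
    public static void main(String[] args) {
        // First four bytes of a TLS handshake record: content type 0x16,
        // version 0x0303 (TLS 1.2), then the high byte of the record length.
        byte[] tlsBytes = {0x16, 0x03, 0x03, 0x00};
        int bogusSize = ByteBuffer.wrap(tlsBytes).getInt();
        // Prints 369296128 (~350 MiB) -- the client then effectively does
        // ByteBuffer.allocate(bogusSize), which is where the heap blows up.
        System.out.println("Interpreted 'message size': " + bogusSize + " bytes");
    }
}
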


Does this solve the problem? — That solved it. Thanks so much, Dominik! I had been banging my head against this.

For reference, the full producer config from the logs (note security.protocol = PLAINTEXT):
acks = 1
batch.size = 16384
bootstrap.servers = [XXXXXXXXXXXXXXXX] (masked intentionally)
buffer.memory = 33554432
client.id = 
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 3
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = Source: Collection Source -> Sink: Unnamed-eb99017e0f9125fa6648bf56123bdcf7-19
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer