
Apache Kafka: consumer rebalance takes too long


I have a Kafka Streams application that consumes data from several topics and merges the data into another topic.

Kafka configuration:

5 Kafka brokers
Kafka topics - 15 partitions, replication factor of 3.

Note: I am running the Kafka Streams application on the same machines that run the Kafka brokers.

Millions of records are consumed/produced every hour. Whenever I take a Kafka broker down, the group goes into the rebalancing phase, and rebalancing takes around 30 minutes, sometimes even longer.

Does anyone know how to solve this consumer rebalancing problem? Also, it throws exceptions many times while rebalancing.

This is blocking us from going to production with this setup. Any help would be appreciated.

Caused by: org.apache.kafka.clients.consumer.CommitFailedException:
Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:725)

at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:604)
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1173)
at org.apache.kafka.streams.processor.internals.StreamTask.commitOffsets(StreamTask.java:307)
at org.apache.kafka.streams.processor.internals.StreamTask.access$000(StreamTask.java:49)
at org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:268)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:187)
at org.apache.kafka.streams.processor.internals.StreamTask.commitImpl(StreamTask.java:259)
at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:362)
at org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:346)
at org.apache.kafka.streams.processor.internals.StreamThread$3.apply(StreamThread.java:1118)
at org.apache.kafka.streams.processor.internals.StreamThread.performOnStreamTasks(StreamThread.java:1448)
at org.apache.kafka.streams.processor.internals.StreamThread.suspendTasksAndState(StreamThread.java:1110)
Kafka Streams configuration:

bootstrap.servers=kafka-1:9092,kafka-2:9092,kafka-3:9092,kafka-4:9092,kafka-5:9092
max.poll.records = 100
request.timeout.ms=40000
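The CommitFailedException above points at the trade-off between max.poll.interval.ms and max.poll.records that its message describes. As a hedged sketch (the property keys are standard Kafka consumer configs; the concrete values are illustrative, not recommendations for this workload), the two knobs can be set on the properties passed to the Streams application like this:

```java
import java.util.Properties;

public class PollTuning {
    public static Properties tunedProps() {
        Properties props = new Properties();
        // Fewer records per poll() means each iteration of the poll loop
        // finishes faster, keeping the consumer within max.poll.interval.ms.
        props.setProperty("max.poll.records", "100");
        // Upper bound on the time between two poll() calls before the group
        // coordinator considers the consumer dead and triggers a rebalance.
        props.setProperty("max.poll.interval.ms", "300000"); // 5 minutes, illustrative
        return props;
    }
}
```

Note that in the ConsumerConfig dump below, max.poll.interval.ms is Integer.MAX_VALUE (the Streams default at the time), so in that setup the commit failure is driven by group membership changes rather than a slow poll loop.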
The ConsumerConfig it creates internally is:

    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [kafka-1:9092, kafka-2:9092, kafka-3:9092, kafka-4:9092, kafka-5:9092]
    check.crcs = true
    client.id = conversion-live-StreamThread-1-restore-consumer
    connections.max.idle.ms = 540000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = 
    heartbeat.interval.ms = 3000
    interceptor.classes = null
    internal.leave.group.on.close = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 2147483647
    max.poll.records = 100
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 40000
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

I would recommend configuring StandbyTasks via the parameter num.standby.replicas=1 (the default is 0). This should help to reduce the rebalance time significantly.
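A minimal sketch of setting this, using plain java.util.Properties (in a real application you would likely use the StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG constant, which resolves to the same "num.standby.replicas" key; the application.id and broker list here are assumptions based on the config dump above):

```java
import java.util.Properties;

public class StandbyConfig {
    public static Properties withStandby() {
        Properties props = new Properties();
        props.setProperty("application.id", "conversion-live"); // assumed from the client.id above
        props.setProperty("bootstrap.servers", "kafka-1:9092"); // shortened; list all brokers
        // Keep one warm replica of each state store on another instance, so a
        // migrated task can resume from a near-current copy instead of
        // replaying the full changelog, which shortens rebalances considerably.
        props.setProperty("num.standby.replicas", "1"); // default is 0
        return props;
    }
}
```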

Furthermore, I would recommend upgrading your application to Kafka 0.11. Note that the Streams API in 0.11 is backward compatible with 0.10.1 and 0.10.2 brokers, so you do not need to upgrade the brokers for this. Rebalance behavior was greatly improved in 0.11 and will be improved further in the upcoming 1.0 release, so upgrading your application to the latest version is always an improvement with regard to rebalancing.

In my experience, first, max.poll.records is too small given your workload: millions of records consumed/produced per hour.

So if max.poll.records is too small, say 1, then rebalancing takes a very long time. I don't know the reason.

Second, make sure the partition counts of your streams application's input topics are consistent. E.g., if APP-1 has two input topics A and B, and A has 4 partitions while B has 2, then rebalancing takes a very long time. However, if A and B both have 4 partitions, then even with some partitions sitting idle, the rebalance time is fine.

Hope this helps.
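The partition-count consistency check described above can be sketched as a small helper (a hypothetical method, not part of any Kafka API; in practice the per-topic partition counts could be obtained via AdminClient#describeTopics):

```java
import java.util.Map;

public class PartitionCheck {
    // Returns true if every input topic has the same number of partitions
    // (vacuously true for an empty map).
    public static boolean partitionsConsistent(Map<String, Integer> partitionsByTopic) {
        return partitionsByTopic.values().stream().distinct().count() <= 1;
    }
}
```

For example, topics A with 4 partitions and B with 2 would fail the check, while A and B both at 4 would pass.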

I have both the Kafka brokers and the streams application on 0.11. I am running the Kafka Streams application on the same machines as the Kafka brokers; would that affect performance in any way?

A performance penalty is to be expected. Running your application on the brokers is not recommended.

@MatthiasJ.Sax: My application is facing a similar issue, and it is highly stateful. Please advise. We are using Kafka 2.2.0.

sse, do you have any reference for this claim?