Apache Kafka: changing in-sync replicas


We are running open-source Kafka (Confluent 5.2.1), using Avro to encode/decode messages. When we built a new cluster and published schemas to it, our __consumer_offsets topic had the following configuration:

$shell> kafka-topics --zookeeper localhost:2181/apps/kafka_cluster --describe --topic __consumer_offsets
Topic:__consumer_offsets        PartitionCount:50       ReplicationFactor:1     Configs:compression.type=producer,cleanup.policy=compact,segment.bytes=104857600
        Topic: __consumer_offsets       Partition: 0    Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 1    Leader: 102     Replicas: 102   Isr: 102
        Topic: __consumer_offsets       Partition: 2    Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 3    Leader: 102     Replicas: 102   Isr: 102
...
        Topic: __consumer_offsets       Partition: 48   Leader: 101     Replicas: 101   Isr: 101
        Topic: __consumer_offsets       Partition: 49   Leader: 102     Replicas: 102   Isr: 102
I then reassign the partitions using the following JSON file:

{"version":1, "partitions":[
  {"topic":"__consumer_offsets","partition":0,"replicas":[101,102,103]},
  {"topic":"__consumer_offsets","partition":1,"replicas":[102,103,101]},
  {"topic":"__consumer_offsets","partition":2,"replicas":[103,101,102]},
...
  {"topic":"__consumer_offsets","partition":45,"replicas":[101,102,103]},
  {"topic":"__consumer_offsets","partition":46,"replicas":[102,103,101]},
  {"topic":"__consumer_offsets","partition":47,"replicas":[103,101,102]},
  {"topic":"__consumer_offsets","partition":48,"replicas":[101,102,103]},
  {"topic":"__consumer_offsets","partition":49,"replicas":[102,103,101]}
]}
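The JSON above rotates the same three brokers round-robin so leadership is spread evenly. The full 50-partition file can be generated with a short script (my own sketch, not part of the original post; the broker IDs and partition count are taken from the describe output above):

```python
import json

# Broker IDs and partition count as shown in the cluster output above.
brokers = [101, 102, 103]
num_partitions = 50

def replica_list(partition, brokers):
    """Rotate the broker list so each partition gets a different leader."""
    start = partition % len(brokers)
    return brokers[start:] + brokers[:start]

plan = {
    "version": 1,
    "partitions": [
        {"topic": "__consumer_offsets",
         "partition": p,
         "replicas": replica_list(p, brokers)}
        for p in range(num_partitions)
    ],
}

# Write this out and feed it to kafka-reassign-partitions --execute.
print(json.dumps(plan, indent=2))
```

Partition 0 gets [101,102,103], partition 1 gets [102,103,101], and so on, matching the hand-written file above.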
The end result is that the replicas change, but the in-sync replicas sometimes change and sometimes do not:

$shell> kafka-topics --zookeeper bigdevmq02c:2181/apps/kafka_cluster --describe --topic __consumer_offsets
Topic:__consumer_offsets        PartitionCount:50       ReplicationFactor:3     Configs:compression.type=producer,cleanup.policy=compact,segment.bytes=104857600
        Topic: __consumer_offsets       Partition: 0    Leader: 101     Replicas: 101,102,103   Isr: 101
        Topic: __consumer_offsets       Partition: 1    Leader: 102     Replicas: 102,103,101   Isr: 102,103,101
        Topic: __consumer_offsets       Partition: 2    Leader: 101     Replicas: 103,101,102   Isr: 101
        Topic: __consumer_offsets       Partition: 3    Leader: 102     Replicas: 101,102,103   Isr: 102,103,101
...
        Topic: __consumer_offsets       Partition: 48   Leader: 101     Replicas: 101,102,103   Isr: 101
        Topic: __consumer_offsets       Partition: 49   Leader: 102     Replicas: 102,103,101   Isr: 102,103,101
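A quick way to spot which partitions are lagging is to compare the Replicas and Isr columns of the describe output. A small helper (my own sketch, not part of the Kafka tooling; the sample input mirrors the output above) along these lines:

```python
# Sketch: flag partitions whose ISR is missing assigned replicas,
# given the text output of `kafka-topics --describe`.

SAMPLE = """\
        Topic: __consumer_offsets       Partition: 0    Leader: 101     Replicas: 101,102,103   Isr: 101
        Topic: __consumer_offsets       Partition: 1    Leader: 102     Replicas: 102,103,101   Isr: 102,103,101
"""

def parse_line(line):
    # Lines look like: "Topic: X  Partition: 0  Leader: 101  Replicas: a,b,c  Isr: a"
    # so pairing up whitespace-separated tokens yields a key/value dict.
    tokens = line.split()
    return {k.rstrip(":"): v for k, v in zip(tokens[::2], tokens[1::2])}

def under_replicated(describe_output):
    """Return (partition, missing-brokers) pairs where Isr lacks replicas."""
    bad = []
    for line in describe_output.splitlines():
        if "Partition:" not in line or "Isr:" not in line:
            continue  # skip the topic-summary header line
        info = parse_line(line)
        missing = set(info["Replicas"].split(",")) - set(info["Isr"].split(","))
        if missing:
            bad.append((int(info["Partition"]), sorted(missing, key=int)))
    return bad

print(under_replicated(SAMPLE))
```

Run against the output above, this flags partition 0 (brokers 102 and 103 never joined the ISR) while partition 1 is clean.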
I expect the in-sync replicas to match the replicas, and then to run a preferred leader election based on the first member of each replica list, along the lines of:

$shell> kafka-preferred-replica-election --bootstrap-server localhost:9092

But at the moment this fails. What am I doing wrong, and how can I fix it?

Thanks very much for any help.

Update

I ran a verify, and 2.5 hours later, on an empty cluster, it still shows as incomplete:

$shell> kafka-reassign-partitions --zookeeper localhost:2181/apps/kafka_cluster --reassignment-json-file dev.json --verify
Status of partition reassignment:
Reassignment of partition __consumer_offsets-22 is still in progress
Reassignment of partition __consumer_offsets-30 is still in progress
Reassignment of partition __consumer_offsets-8 is still in progress
Reassignment of partition __consumer_offsets-21 completed successfully
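To keep track of progress across repeated --verify runs, the output can be summarized with a short script (my own sketch, using the output above as sample input):

```python
# Sketch: split `kafka-reassign-partitions --verify` output into
# stuck and finished partitions so the stragglers stand out.

VERIFY_OUTPUT = """\
Status of partition reassignment:
Reassignment of partition __consumer_offsets-22 is still in progress
Reassignment of partition __consumer_offsets-30 is still in progress
Reassignment of partition __consumer_offsets-8 is still in progress
Reassignment of partition __consumer_offsets-21 completed successfully
"""

def reassignment_status(verify_output):
    """Return ([stuck partitions], [completed partitions])."""
    stuck, done = [], []
    for line in verify_output.splitlines():
        # "Reassignment of partition <name> ..." -> token 3 is the partition.
        if line.endswith("is still in progress"):
            stuck.append(line.split()[3])
        elif line.endswith("completed successfully"):
            done.append(line.split()[3])
    return stuck, done

stuck, done = reassignment_status(VERIFY_OUTPUT)
print(f"{len(stuck)} stuck, {len(done)} done; stuck: {stuck}")
```

On a healthy cluster the stuck list should drain to empty within minutes for a topic this small; here it stayed populated for hours.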

After looking around for a while, we noticed that one node in the cluster was running Kafka 5.2.1 while the rest were running 5.3.1. Upgrading that node to 5.3.1 resolved the issue.

Does the reassignment --verify command show it completed?
@cricket_007 Thanks for the suggestion, I didn't know that option existed. See the update: it shows the reassignment is still in progress. But the cluster has only one topic, so why is it taking more than a few hours?
On an empty cluster, I'm not sure. Are there any errors in the broker logs suggesting that replication isn't working?
@cricket_007 Just the one mistake: the misbehaving node was running an older version of Kafka. I don't know why it wasn't truly backward compatible, but I've learned my lesson. Thanks very much for your help.
Probably because the message format changed between versions.