Java: How to make Kafka broker failover work on the consumer side?


Getting replicated brokers to work on the consumer side seems surprisingly hard: when certain brokers are stopped, some consumers stop working, and when that particular broker comes back up, the consumers that were not working receive all the "missed" messages.

I am using a 2-broker setup. I created a replicated topic like this:

  $KAFKA_HOME/bin/kafka-topics.sh --create \
  --zookeeper localhost:2181 \
  --replication-factor 2 \
  --partitions 3 \
  --topic replicated_topic
An excerpt from the server configuration looks like this (note that apart from the port, the broker id and the log directory, the two servers are identical):
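The excerpt itself is not reproduced in this copy; as a rough sketch of what the two server.properties files would contain (ports and directories are assumptions based on the standard quickstart layout, not taken from the question):

  # server-0.properties (broker 1)
  broker.id=0
  listeners=PLAINTEXT://:9092
  log.dirs=/tmp/kafka-logs-0
  zookeeper.connect=localhost:2181

  # server-1.properties (broker 2)
  broker.id=1
  listeners=PLAINTEXT://:9093
  log.dirs=/tmp/kafka-logs-1
  zookeeper.connect=localhost:2181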

Let's describe my topic while both brokers are up:

Topic:replicated_topic  PartitionCount:3    ReplicationFactor:2 Configs:
    Topic: replicated_topic Partition: 0    Leader: 1   Replicas: 1,0   Isr: 1,0
    Topic: replicated_topic Partition: 1    Leader: 0   Replicas: 0,1   Isr: 1,0
    Topic: replicated_topic Partition: 2    Leader: 1   Replicas: 1,0   Isr: 1,0
Let's look at the consumer code (the consumer implements Callable):
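The listing itself is missing from this copy. Going by the log lines below, a minimal sketch of what such a Callable consumer might look like (the group-id handling, broker ports, deserializers and poll timeout are assumptions, and poll(Duration) assumes kafka-clients 2.0+):

  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;
  import java.util.concurrent.Callable;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;

  public class ConsumerCallable implements Callable<Void> {

      private final String topic;
      private final String groupId;

      public ConsumerCallable(String topic, String groupId) {
          this.topic = topic;
          this.groupId = groupId;
      }

      @Override
      public Void call() {
          Properties props = new Properties();
          // Listing both brokers lets the client bootstrap from whichever one is alive.
          props.put("bootstrap.servers", "localhost:9092,localhost:9093");
          props.put("group.id", groupId);
          props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
          props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

          try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
              consumer.subscribe(Collections.singletonList(topic));
              while (!Thread.currentThread().isInterrupted()) {
                  ConsumerRecords<String, String> records =
                          consumer.poll(Duration.ofMillis(1000));
                  for (ConsumerRecord<String, String> record : records) {
                      // The original code logs this at DEBUG level; printf keeps the sketch dependency-free.
                      System.out.printf("%s consumed from topic %s, partition %d pair (%s,%s)%n",
                              this, record.topic(), record.partition(),
                              record.key(), record.value());
                  }
              }
          }
          return null;
      }
  }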

Program output:

12:52:30.460 DEBUG Main - Please enter 'k v' on the command line to send to Kafka or 'quit' to exit
1 u
12:52:35.555 DEBUG ConsumerCallable - ConsumerCallable@1241910294 consumed from topic replicated_topic, partition 0 pair (1,u)
12:52:35.559 DEBUG ConsumerCallable - ConsumerCallable@1361430455 consumed from topic replicated_topic, partition 0 pair (1,u)
12:52:35.559 DEBUG ConsumerCallable - ConsumerCallable@186743616 consumed from topic replicated_topic, partition 0 pair (1,u)
2 d
12:52:38.096 DEBUG ConsumerCallable - ConsumerCallable@186743616 consumed from topic replicated_topic, partition 2 pair (2,d)
12:52:38.098 DEBUG ConsumerCallable - ConsumerCallable@1361430455 consumed from topic replicated_topic, partition 2 pair (2,d)
12:52:38.100 DEBUG ConsumerCallable - ConsumerCallable@1241910294 consumed from topic replicated_topic, partition 2 pair (2,d)
Since the consumers belong to different groups, every message is broadcast to all of them, so everything works fine.
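The Main side of the test is not shown either; judging from the prompt in the log ("Please enter 'k v' ..."), it starts three such callables, each with its own group id, and then produces whatever is typed. A hypothetical sketch of that wiring (class and group names are made up):

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;

  public class Main {
      public static void main(String[] args) {
          // Each ConsumerCallable gets its own group.id, so every message is
          // delivered to all three consumers - hence the "broadcast" behaviour above.
          ExecutorService pool = Executors.newFixedThreadPool(3);
          pool.submit(new ConsumerCallable("replicated_topic", "group-1"));
          pool.submit(new ConsumerCallable("replicated_topic", "group-2"));
          pool.submit(new ConsumerCallable("replicated_topic", "group-3"));
          // ... the original Main then reads "k v" pairs from stdin and produces them to the topic.
      }
  }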

Bring down broker 2:

Describe the topic:

Topic:replicated_topic  PartitionCount:3    ReplicationFactor:2 Configs:
    Topic: replicated_topic Partition: 0    Leader: 0   Replicas: 1,0   Isr: 0
    Topic: replicated_topic Partition: 1    Leader: 0   Replicas: 0,1   Isr: 0
    Topic: replicated_topic Partition: 2    Leader: 0   Replicas: 1,0   Isr: 0
Program output:

3 t
12:57:03.898 DEBUG ConsumerCallable - ConsumerCallable@186743616 consumed from topic replicated_topic, partition 1 pair (3,t)
4 p
12:57:06.058 DEBUG ConsumerCallable - ConsumerCallable@186743616 consumed from topic replicated_topic, partition 1 pair (4,p)
Now only 1 consumer receives the data. Let's bring broker 2 back up: now the other 2 consumers receive the missed data:

12:57:50.863 DEBUG ConsumerCallable - ConsumerCallable@1241910294 consumed from topic replicated_topic, partition 1 pair (3,t)
12:57:50.863 DEBUG ConsumerCallable - ConsumerCallable@1241910294 consumed from topic replicated_topic, partition 1 pair (4,p)
12:57:50.870 DEBUG ConsumerCallable - ConsumerCallable@1361430455 consumed from topic replicated_topic, partition 1 pair (3,t)
12:57:50.870 DEBUG ConsumerCallable - ConsumerCallable@1361430455 consumed from topic replicated_topic, partition 1 pair (4,p)
  • Bring down broker 1:
  • Now only 2 consumers receive the data:

    5 c
    12:59:13.718 DEBUG ConsumerCallable - ConsumerCallable@1361430455 consumed from topic replicated_topic, partition 2 pair (5,c)
    12:59:13.737 DEBUG ConsumerCallable - ConsumerCallable@1241910294 consumed from topic replicated_topic, partition 2 pair (5,c)
    6 s
    12:59:16.437 DEBUG ConsumerCallable - ConsumerCallable@1361430455 consumed from topic replicated_topic, partition 2 pair (6,s)
    12:59:16.438 DEBUG ConsumerCallable - ConsumerCallable@1241910294 consumed from topic replicated_topic, partition 2 pair (6,s)
    
If I bring it back up, the remaining consumer also receives the missed data.

My point is (sorry for the long post, I am just trying to give the full context): how can I make sure that no matter which broker I stop, the consumers keep working correctly, i.e. receive all messages without interruption?


PS: I tried setting offsets.topic.replication.factor to 2 and to 3, but it had no effect.

Messages sent to the broker are not dropped if the number of active brokers is smaller than the configured number of replicas. Whenever a new Kafka broker joins the cluster, the data gets replicated to that node.

So when your broker 2 went down, messages were still pushed to the other active broker, because there was 1 active broker while the replication factor was 2. Since your other 2 consumers were subscribed through broker 2 (which was down), they could not consume.


When your broker 2 comes up again, the data is replicated to this node, so the consumers connected to it receive the messages (the ones you call the "missed" messages).

Make sure you have changed the property called offsets.topic.replication.factor to at least 3.

This property is used to manage offsets and consumer interaction. When a Kafka server starts, it automatically creates a topic named __consumer_offsets. So if replicas are not created for this topic, a consumer cannot know for sure whether something has been pushed to the topic it was listening to.
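In line with the CLI commands used above, one way to check how that internal topic was actually created is to describe it and look at its ReplicationFactor and Isr columns (a sketch; the --zookeeper form matches the older tooling used in the question):

  $KAFKA_HOME/bin/kafka-topics.sh --describe \
  --zookeeper localhost:2181 \
  --topic __consumer_offsets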


Link to the details of this property:

But basically, how do I get the consumers that had subscribed through the dead node to subscribe through the new node again? Do I have to do something in the code, or should the driver take care of it?

Try increasing the number of replicas for your 2-broker scenario. Then, going by your data, when broker 2 goes down there will be more replicas in sync with the leader replica, so your consumers can consume from them. Maybe!!!

Kafka can tolerate RF-1 failures, so in my case the system should be able to keep working with 2-1 = 1 node. Since I have 2 nodes, that is why I set RF to 2. What do you mean by increasing the number of replicas? I only have 2 nodes, so 2 is the maximum. Or am I missing something?

However, when a broker is shut down uncleanly (e.g. kill -9), the observed unavailability could be proportional to the number of partitions... Have a read.

@adragomir did you manage to find a solution?
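One detail the comments circle around ("do I have to do something in the code?") is how the client finds a live broker at all. The thread does not answer this, but as a general point about the Java client: bootstrap.servers may list several brokers, and naming both of them lets the initial metadata request succeed even when one broker is down. A minimal sketch (hypothetical helper, ports are assumptions):

  import java.util.Properties;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class FailoverAwareConfig {
      // Hypothetical helper, not from the question: build a consumer whose
      // bootstrap list names both brokers, so it can reach whichever one is still up.
      static KafkaConsumer<String, String> newConsumer(String groupId) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092,localhost:9093");
          props.put("group.id", groupId);
          props.put("key.deserializer", StringDeserializer.class.getName());
          props.put("value.deserializer", StringDeserializer.class.getName());
          return new KafkaConsumer<>(props);
      }
  }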