Apache Kafka: Kafka producer unable to download/refresh metadata after brokers are restarted in the cluster

Tags: apache-kafka, kafka-producer-api

We have a Kafka cluster with 5 nodes and 3 ZooKeepers, and the replication factor for all topics is 3. We are currently using Kafka and Kafka clients (2.2.0) and ZooKeeper version (5.2.1).

When two brokers went down, the producer failed to send messages with the following error:

org.apache.kafka.common.errors.TimeoutException: Topic testTopic not present in metadata after 120000 ms.

The client seems to skip the metadata update after comparing it against its latest cached data.
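For context, this is roughly where the exception surfaces for us (a minimal sketch with a placeholder bootstrap address, not our real code): KafkaProducer.send() first blocks for up to max.block.ms waiting for the topic's metadata and throws the TimeoutException when no reachable broker can serve it.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class MetadataTimeoutDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            props.put("max.block.ms", "120000"); // matches the 120000 ms in the error

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send() blocks for up to max.block.ms waiting for testTopic's metadata;
                // if no broker can serve it, the TimeoutException is thrown right here.
                producer.send(new ProducerRecord<>("testTopic", "key", "value"));
            } catch (TimeoutException e) {
                // "Topic testTopic not present in metadata after 120000 ms."
                System.err.println("Metadata not available: " + e.getMessage());
            }
        }
    }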

Cluster configuration:

--override num.network.threads=3
--override num.io.threads=8
--override default.replication.factor=3
--override auto.create.topics.enable=true
--override delete.topic.enable=true
--override socket.send.buffer.bytes=102400
--override socket.receive.buffer.bytes=102400
--override socket.request.max.bytes=104857600
--override num.partitions=30
--override num.recovery.threads.per.data.dir=1
--override offsets.topic.replication.factor=3
--override transaction.state.log.replication.factor=3
--override transaction.state.log.min.isr=1
--override log.retention.hours=48
--override log.segment.bytes=1073741824
--override log.retention.check.interval.ms=300000
--override zookeeper.connection.timeout.ms=6000
--override confluent.support.metrics.enable=true
--override group.initial.rebalance.delay.ms=0
--override confluent.support.customer.id=anonymous
Producer configuration:

    acks = 1
    batch.size = 8192
    bootstrap.servers = []
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = C02Z93MPLVCH
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 120000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 20000
    reconnect.backoff.ms = 20000
    request.timeout.ms = 300000
    retries = 2
    retry.backoff.ms = 500
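For reference, a sketch of how the timeout and retry settings in this dump map onto ProducerConfig constants (the ProducerFactory class name and the bootstrap parameter are ours, not part of the question):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public final class ProducerFactory {
        // Builds a producer matching the configuration dump above.
        static KafkaProducer<String, String> create(String bootstrapServers) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.ACKS_CONFIG, "1");
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 8192);
            props.put(ProducerConfig.RETRIES_CONFIG, 2);
            props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 500);
            props.put(ProducerConfig.RECONNECT_BACKOFF_MS_CONFIG, 20000);
            props.put(ProducerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG, 20000);
            props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 300000);
            props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000);
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 120000);
            // Cached cluster metadata is force-refreshed after at most 5 minutes.
            props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, 300000);
            return new KafkaProducer<>(props);
        }
    }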
Has anyone faced the same issue? We would normally expect the Kafka client to download the metadata again after a few retries when brokers go down. Instead, we have to wait a few hours and then restart our server to initialize the connections again.
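A lighter-weight variant of that full server restart (purely a sketch; RecreatingSender is a made-up name and it reuses the hypothetical ProducerFactory above) would be to close and rebuild the producer when the timeout persists, since a new producer re-bootstraps from bootstrap.servers instead of relying on stale cached metadata:

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.TimeoutException;

    public final class RecreatingSender {
        private static final int MAX_ATTEMPTS = 3; // hypothetical retry budget

        private final String bootstrapServers;
        private KafkaProducer<String, String> producer;

        RecreatingSender(String bootstrapServers) {
            this.bootstrapServers = bootstrapServers;
            this.producer = ProducerFactory.create(bootstrapServers); // from the sketch above
        }

        // Sends a record; if metadata cannot be fetched in time, rebuilds the
        // producer, which forces a fresh bootstrap and metadata fetch.
        void send(String topic, String key, String value) {
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try {
                    producer.send(new ProducerRecord<>(topic, key, value));
                    return;
                } catch (TimeoutException e) {
                    producer.close();
                    producer = ProducerFactory.create(bootstrapServers);
                }
            }
            throw new IllegalStateException("metadata still unavailable after " + MAX_ATTEMPTS + " attempts");
        }
    }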

Is this the expected behavior?