Kafka: change the number of partitions of a specific topic using Java
I am new to Kafka, working with the new KafkaProducer and KafkaConsumer, version 0.9.0.1. Is there any way in Java to change/update the number of partitions of a topic after it has been created? I did not create the topic through ZooKeeper; my KafkaProducer creates the topic automatically when a publish request arrives. I can provide more details if these are not enough.

Yes, it is possible. You have to use the AdminUtils Scala class from kafka_2.11-0.9.0.1.jar to add partitions. Note that AdminUtils only supports increasing the number of partitions of a topic, never decreasing it. You will likely need kafka_2.11-0.9.0.1.jar, zkclient-0.8.jar, scala-library-2.11.8.jar and scala-parser-combinators_2.11-1.0.4.jar on the classpath.

The following code is borrowed from / inspired by the Kafka Cloudera examples:
package org.apache.kafka.examples;

import java.io.Closeable;

import org.I0Itec.zkclient.ZkClient;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import kafka.admin.AdminOperationException;
import kafka.admin.AdminUtils;
import kafka.admin.RackAwareMode.Enforced$;
import kafka.utils.ZKStringSerializer$;
import kafka.utils.ZkUtils;

public class Test {

    static final Logger logger = LogManager.getLogger();

    public static void addPartitions(String zkServers, String topic, int partitions) {
        try (AutoZkClient zkClient = new AutoZkClient(zkServers)) {
            ZkUtils zkUtils = ZkUtils.apply(zkClient, false);
            if (AdminUtils.topicExists(zkUtils, topic)) {
                logger.info("Altering topic {}", topic);
                try {
                    // Empty replica assignment string: let Kafka assign replicas
                    // for the newly added partitions
                    AdminUtils.addPartitions(zkUtils, topic, partitions, "", true, Enforced$.MODULE$);
                    logger.info("Topic {} altered with partitions: {}", topic, partitions);
                } catch (AdminOperationException aoe) {
                    logger.error("Error while altering partitions for topic: {}", topic, aoe);
                }
            } else {
                logger.warn("Topic {} doesn't exist", topic);
            }
        }
    }

    // Exists only so that ZkClient can be used in a try-with-resources block
    private static final class AutoZkClient extends ZkClient implements Closeable {
        static final int SESSION_TIMEOUT = 30_000;
        static final int CONNECTION_TIMEOUT = 6_000;

        AutoZkClient(String zkServers) {
            super(zkServers, SESSION_TIMEOUT, CONNECTION_TIMEOUT, ZKStringSerializer$.MODULE$);
        }
    }

    public static void main(String[] args) {
        addPartitions("localhost:2181", "hello", 20);
    }
}
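A side note on why partitions can only be increased, and why even increasing them is not free: with key-based partitioning, a key is assigned to hash(key) mod numPartitions, so changing the partition count remaps existing keys to different partitions and breaks per-key ordering guarantees for old data. The sketch below illustrates the remapping with String.hashCode as a simplified stand-in for Kafka's actual murmur2 hash (the class and method names here are illustrative, not part of any Kafka API):

```java
public class PartitionMappingDemo {

    // Simplified stand-in for the default partitioner's key -> partition mapping
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode()) % numPartitions;
    }

    public static void main(String[] args) {
        String key = "a";
        // The same key lands on a different partition once the count changes,
        // so records produced before and after the change end up split
        System.out.println("with 10 partitions: " + partitionFor(key, 10));
        System.out.println("with 20 partitions: " + partitionFor(key, 20));
    }
}
```

This is why increasing partitions is safe for throughput but should be done with care when consumers rely on all records for a key being in one partition.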
Thanks. This is what I was looking for.