
Custom partitioning with the Java Kafka client


I was able to write a sample Kafka application in Java. It has three topics, and publish/subscribe works fine, but I am not able to assign these topics to different partitions.

My consumer:

public class Consumers extends Thread {
    private static final List<String> TOPIC_LIST = Arrays.asList("topic1", "topic2", "topic3");
    private static final List<TopicPartition> PARTITION_LIST =
            Arrays.asList(new TopicPartition(TOPIC_LIST.get(0), 1), new TopicPartition(TOPIC_LIST.get(1), 2));

    private void message() {
        Properties consumerProperties = KafkaProperties.getConsumerProperties();
        org.apache.kafka.clients.consumer.KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties);
        consumer.assign(PARTITION_LIST);
        Logger.debug("Kafka IP : " + consumerProperties.getProperty("bootstrap.servers"));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    process(record.topic(), record.value());
                }
            }
        } catch (Exception e) {
            Logger.error("error while consuming : " + e.getMessage());
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }

    private void process(String topic, String value) {
        KafkaProcessor.process(topic, value);
    }

    @Override
    public void run() {
        message();
    }
}
public class CustomPartitioner implements Partitioner {
    private static Map<String, Integer> partitionMap;

    @Override
    public void configure(Map<String, ?> configs) {
        System.out.println("Inside CustomPartitioner.configure " + configs);
        partitionMap = new HashMap<>();
        for (Map.Entry<String, ?> entry : configs.entrySet()) {
            if (entry.getKey().startsWith("partitions.")) {
                String keyName = entry.getKey();
                String value = (String) entry.getValue();
                int partitionId = Integer.parseInt(keyName.substring(11));
                partitionMap.put(value, partitionId);
            }
        }
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        List partitions = cluster.availablePartitionsForTopic(topic);
        String valueStr = (String) value;
        String name = ((String) value).split(":")[0];
        if (partitionMap.containsKey(name)) {
            // If the country is mapped to a particular partition, return it
            return partitionMap.get(name);
        } else {
            // If no country is mapped to a particular partition, distribute among the remaining partitions
            int noOfPartitions = cluster.topics().size();
            return value.hashCode() % noOfPartitions + partitionMap.size();
        }
    }

    @Override
    public void close() {
    }
}
public void producer(String topic, String message) {
    Producer<String, String> producer = new KafkaProducer<>(KafkaProperties.getProducerProperties());
    try {
        ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topic, null, message);
        producer.send(producerRecord);
        producer.close();
    } catch (Exception e) {
        Logger.error("kafka message publish error: ", e);
    }
}
My producer properties are as follows:

 properties.put("bootstrap.servers", "127.0.0.1:9092");
 properties.put("acks", "all");
 properties.put("retries", 0);
 properties.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, CustomPartitioner.class.getCanonicalName());
 properties.put("partitions.1", "partition1");
 properties.put("partitions.2", "partition2");
 properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
 properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

With these properties and code I am unable to send or receive messages. How can I fix this?

Creating partitions is configured through topic configuration, not producer configuration. For an existing topic:

bin/kafka-topics.sh --zookeeper <ZK_HOST> --alter --topic <TOPIC_NAME> --partitions <NUM_PARTITIONS>
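
The same change can also be made from Java. Below is a minimal sketch using the AdminClient; the topic name, target partition count, and bootstrap address are assumptions for illustration, not values from the original thread.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class IncreasePartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092"); // assumed broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // Raise "topic1" to 3 partitions; a partition count can only grow, never shrink.
            admin.createPartitions(Collections.singletonMap("topic1", NewPartitions.increaseTo(3)))
                 .all()
                 .get();
        }
    }
}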

Then, as long as records are keyed by country (as in the ProducerRecord at the end), you are guaranteed that all data from the same country will go to the same partition, and you can delete the whole
CustomPartitioner
class. Also remove
consumer.assign(PARTITION_LIST); likewise, Kafka manages partition assignment for you.
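
For reference, a minimal sketch of what a subscribe-based consumer could look like after that change. The group id, deserializers, and bootstrap address are assumptions; the question's KafkaProperties helper is not shown.

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SubscribingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092");     // assumed broker address
        props.put("group.id", "demo-group");                  // group.id is required when using subscribe()
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Let the group coordinator assign partitions instead of calling assign() manually.
            consumer.subscribe(Arrays.asList("topic1", "topic2", "topic3"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s [partition %d] %s%n", record.topic(), record.partition(), record.value());
                }
            }
        }
    }
}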

Comments:

cluster.topics().size() will not give you the partitions. However, if one country occurs far more frequently than the others, you do need to account for "hot" partitions, or some brokers receiving all of the data. That is a case where a custom partitioner is needed: for example, you could round-robin that particular key across several partitions and distribute the rest (a sketch of this idea appears at the end). Of course, in this particular case it seems we are a few steps away from needing to tune performance. Walk before you run, and so on.

@cricket_007 Should I edit my answer to include what you just said, or does your comment stand on its own?

I think it is fine as a comment, but you are welcome to clarify; I was just providing an example of when a custom partitioner could be used.

Thanks for the support. I am now able to send and receive messages correctly, using the record below.
ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topic, message.split(":")[0], message);
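
For completeness, here is a hedged sketch of the kind of custom partitioner described in the comment above: route one known hot key round-robin across a reserved range of partitions and hash everything else across the rest. The hot key name and the partition split are assumptions for illustration, not part of the original thread.

import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class HotKeyPartitioner implements Partitioner {
    private static final String HOT_KEY = "countryA"; // assumed hot key, for illustration only
    private static final int HOT_PARTITIONS = 2;      // partitions reserved for the hot key

    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public void configure(Map<String, ?> configs) {
        // No configuration needed for this sketch.
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Assumes the topic has more than HOT_PARTITIONS partitions.
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            // No key: spread round-robin over all partitions.
            return Utils.toPositive(counter.getAndIncrement()) % numPartitions;
        }
        if (HOT_KEY.equals(key)) {
            // Spread the hot key round-robin over the first HOT_PARTITIONS partitions.
            return Utils.toPositive(counter.getAndIncrement()) % HOT_PARTITIONS;
        }
        // Hash all other keys over the remaining partitions.
        return HOT_PARTITIONS + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - HOT_PARTITIONS);
    }

    @Override
    public void close() {
    }
}

Such a class would be registered the same way as in the question, via ProducerConfig.PARTITIONER_CLASS_CONFIG in the producer properties.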