
Java: simple Kafka consumer example not working


I have a simple class that consumes messages from a kafka server. Most of the code was copied from the comments in org.apache.kafka.clients.consumer.KafkaConsumer.java:

public class Demo {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "192.168.144.10:29092");
        props.put("group.id", "test");
        props.put("session.timeout.ms", "1000");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "10000");
        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<byte[], byte[]>(props);
        consumer.subscribe("voltdbexportAUDIT", "voltdbexportTEST");
        boolean isRunning = true;
        while (isRunning) {
            Map<String, ConsumerRecords<byte[], byte[]>> records = consumer.poll(100);
            process(records);
        }
        consumer.close();
    }

    private static Map<TopicPartition, Long> process(Map<String, ConsumerRecords<byte[], byte[]>> records) {
        Map<TopicPartition, Long> processedOffsets = new HashMap<>();
        for (Map.Entry<String, ConsumerRecords<byte[], byte[]>> recordMetadata : records.entrySet()) {
            List<ConsumerRecord<byte[], byte[]>> recordsPerTopic = recordMetadata.getValue().records();
            for (int i = 0; i < recordsPerTopic.size(); i++) {
                ConsumerRecord<byte[], byte[]> record = recordsPerTopic.get(i);
                // process record
                try {
                    processedOffsets.put(record.topicAndPartition(), record.offset());
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        return processedOffsets;
    }
}
I am using 'org.apache.kafka:kafka-clients:0.8.2.0'. It throws the exception:

Exception in thread "main" org.apache.kafka.common.config.ConfigException: Missing required configuration "key.deserializer" which has no default value.
    at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)
    at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:48)
    at org.apache.kafka.clients.consumer.ConsumerConfig.<init>(ConsumerConfig.java:194)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:430)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:413)
    at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:400)
    at kafka.integration.Demo.main(Demo.java:26)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

How should I configure key.deserializer?

You need to set the properties:

props.put("serializer.class","my.own.serializer.StringSupport");
props.put("key.serializer.class","my.own.serializer.LongSupport");

in the main method so that they are passed to the producer's constructor. You must, of course, specify the right encoders: the serializer class converts the message into a byte array, and the key.serializer class converts the key object into a byte array. Typically you would also have them reverse the process.
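For reference, if you do write such classes yourself, with the Java client's own org.apache.kafka.common.serialization API they only need to implement Serializer and Deserializer (note the legacy serializer.class setting shown above belongs to the older Scala producer, which uses a different interface). A minimal sketch assuming UTF-8 strings; the class names are made up to mirror the hypothetical my.own.serializer.StringSupport:

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical custom serializer: converts a String message to bytes.
public class StringSupport implements Serializer<String> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }
    @Override public byte[] serialize(String topic, String data) {
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }
    @Override public void close() { }
}

// The matching deserializer reverses the process on the consumer side.
class StringSupportDeserializer implements Deserializer<String> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }
    @Override public String deserialize(String topic, byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }
    @Override public void close() { }
}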

Here is an out-of-the-box approach that doesn't require implementing your own serializers:

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer","org.apache.kafka.common.serialization.StringDeserializer");  
props.put("value.deserializer","org.apache.kafka.common.serialization.StringDeserializer");
props.put("partition.assignment.strategy", "range");

You are working with byte arrays for both the key and value parameters, so you need the byte-array serializer and deserializer.

You can add the properties

for the key deserializer:

props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

and for the value deserializer:

props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

Make sure you pass the string value of the deserializer class, not the Class object itself (that was my mistake).

When you forget the .getName(), you get the same exception, which is quite misleading in this case.
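For illustration, the two variants side by side. The explanation of why the Class object can surface as a "missing" config is an assumption about java.util.Properties semantics: accessors such as getProperty() and stringPropertyNames() only see String values, so a Class-valued entry can be silently dropped if the config is copied that way:

import org.apache.kafka.common.serialization.StringDeserializer;

// Safe: pass the fully qualified class name as a String.
props.put("key.deserializer", StringDeserializer.class.getName());

// Error-prone: pass the Class object itself. If the Properties object is
// later read through getProperty()-style accessors, this entry is invisible
// and Kafka reports the same "Missing required configuration" error as above.
props.put("key.deserializer", StringDeserializer.class);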

For the key, use one of the following:

String key

properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

JSON key

properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);

Avro key

properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroDeserializer.class);

ByteArray key

properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

Similarly, use one of the following for the value deserializer:

String value

properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

JSON value

properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);

Avro value

properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, io.confluent.kafka.serializers.KafkaAvroDeserializer.class);

ByteArray value

properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);

Note that for the Avro deserializer you will need the following dependencies:

<dependency> 
    <groupId>io.confluent</groupId> 
    <artifactId>kafka-avro-serializer</artifactId> 
    <version>${confluent.version}</version> 
</dependency> 

<dependency> 
    <groupId>org.apache.avro</groupId> 
    <artifactId>avro</artifactId> 
    <version>${avro.version}</version> 
</dependency>


Take a close look at the example you copied from; it's right there. It would be more helpful if you could point out the exact place.