Apache Kafka: unable to publish an Avro message to a Kafka topic


I started Kafka with the following command:

docker run -p 2181:2181 -p 9092:9092 -p 8081:8081 --env ADVERTISED_HOST=`docker-machine ip \`docker-machine active\`` --env ADVERTISED_PORT=9092 spotify/kafka
Now I wrote a simple program that publishes a string to a Kafka topic. It works without any problem:

props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.99.100:9092")
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
val producer = new KafkaProducer[String, String](props)
val inputRecord = new ProducerRecord[String, String]("test", "key2", "Hello World")
producer.send(inputRecord)
producer.close()
So I modified this program and tried to send an Avro message to the Kafka topic:

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.99.100:9092")
props.put("schema.registry.url", "http://192.168.99.100:8081")
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "io.confluent.kafka.serializers.KafkaAvroSerializer")
val producer = new KafkaProducer[String, Object](props)
val inputRecord = createAvroRecord(schemaStr, "test1", "test1")
val producerAvroRecord = new ProducerRecord[String, Object]("test", "key1", inputRecord)
producer.send(producerAvroRecord)
producer.close()
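The code above calls a helper, createAvroRecord(schemaStr, "test1", "test1"), whose implementation is not shown. A minimal sketch of what such a helper might look like, using Avro's GenericRecord API (the field names "f1" and "f2", and the assumption of a two-string-field record schema, are guesses, since the actual schema string is not given):

```scala
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericData, GenericRecord}

// Hypothetical stand-in for the helper used in the question.
// Assumes schemaStr describes a record with two string fields named "f1" and "f2".
def createAvroRecord(schemaStr: String, v1: String, v2: String): GenericRecord = {
  val schema = new Schema.Parser().parse(schemaStr)
  val record = new GenericData.Record(schema)
  record.put("f1", v1)
  record.put("f2", v2)
  record
}
```

The KafkaAvroSerializer accepts GenericRecord values directly; on send() it first registers the record's schema with the Schema Registry over HTTP, which is exactly the call that fails in the stack trace.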
But I get an error:

[error] (run-main-0) org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
org.apache.kafka.common.errors.SerializationException: Error serializing Avro message
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
    at sun.net.www.http.HttpClient.New(HttpClient.java:308)
    at sun.net.www.http.HttpClient.New(HttpClient.java:326)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:997)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:933)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:851)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1092)
    at io.confluent.kafka.schemaregistry.client.rest.utils.RestUtils.httpRequest(RestUtils.java:128)
    at io.confluent.kafka.schemaregistry.client.rest.utils.RestUtils.registerSchema(RestUtils.java:174)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.registerAndGetId(CachedSchemaRegistryClient.java:51)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.register(CachedSchemaRegistryClient.java:89)
    at io.confluent.kafka.serializers.AbstractKafkaAvroSerializer.serializeImpl(AbstractKafkaAvroSerializer.java:49)
    at io.confluent.kafka.serializers.KafkaAvroSerializer.serialize(KafkaAvroSerializer.java:67)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:424)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:339)
    at KafkaPublisher$.SendAvroMessage(KafkaPublisher.scala:35)
    at KafkaPublisher$.main(KafkaPublisher.scala:20)
    at KafkaPublisher.main(KafkaPublisher.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
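The trace shows the failure inside RestUtils.registerSchema: on send(), the KafkaAvroSerializer tries to register the schema over HTTP at schema.registry.url, and the TCP connection is refused, i.e. nothing is listening on 192.168.99.100:8081. A quick connectivity check (address taken from the question's config):

```shell
# Lists registered subjects (e.g. []) if a Schema Registry is listening;
# fails with "Connection refused" otherwise.
curl http://192.168.99.100:8081/subjects
```

Note that, as far as I know, the spotify/kafka image bundles only Kafka and ZooKeeper, so publishing port 8081 does not by itself start a Schema Registry.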

Change key.serializer and value.serializer to the Avro serializer, as shown below:

props.put("key.serializer", io.confluent.kafka.serializers.KafkaAvroSerializer.class);
props.put("value.serializer", io.confluent.kafka.serializers.KafkaAvroSerializer.class);
If you are using a Schema Registry, also set its URL:

props.put("schema.registry.url", "http://localhost:8081");
Note: point schema.registry.url at the server where the Schema Registry is actually running; the serializer registers the value schema under a subject derived from the topic name (e.g. "test-value").
If you are not using a Schema Registry, you can drop this property.


I assume you are working on Windows, or at least with docker-machine, and that the IP address you set for
bootstrap.servers and
schema.registry.url
is the docker-machine IP address.

Try removing the two options
ADVERTISED_HOST and
ADVERTISED_PORT;
the port bindings are usually sufficient in this case.
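With those two variables removed, the docker run command from the question reduces to (same port mappings, hostname advertising left to the image's defaults):

```shell
docker run -p 2181:2181 -p 9092:9092 -p 8081:8081 spotify/kafka
```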
