Docker: Failed to deserialize data for topic
I'm using the Confluent cp-all-in-one project configuration. I'm POSTing to http://localhost:8082/topics/zuum-positions with the following Avro body:
{
  "key_schema": "{\"type\":\"string\"}",
  "value_schema": "{\"type\":\"record\",\"name\":\"Position\",\"fields\":[{\"name\":\"loadId\",\"type\":\"double\"},{\"name\":\"lat\",\"type\":\"double\"},{\"name\":\"lon\",\"type\":\"double\"}]}",
  "records": [
    {
      "key": "22",
      "value": {
        "lat": 43.33,
        "lon": 43.33,
        "loadId": 22
      }
    }
  ]
}
I've added the following headers to the POST request:
Content-Type: application/vnd.kafka.avro.v2+json
Accept: application/vnd.kafka.v2+json
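The request body above can also be assembled programmatically. A minimal stdlib-only sketch (the topic name and port are taken from the question; the actual HTTP call is left as a comment) that builds the same payload and verifies the embedded schema strings are valid JSON before sending:

```python
import json

# Build the REST Proxy payload from the question. Note that "key_schema" and
# "value_schema" must be JSON-encoded *strings*, not nested objects -- a
# common source of 422 errors from the REST Proxy.
value_schema = {
    "type": "record",
    "name": "Position",
    "fields": [
        {"name": "loadId", "type": "double"},
        {"name": "lat", "type": "double"},
        {"name": "lon", "type": "double"},
    ],
}

payload = {
    "key_schema": json.dumps({"type": "string"}),
    "value_schema": json.dumps(value_schema),
    "records": [
        {"key": "22", "value": {"lat": 43.33, "lon": 43.33, "loadId": 22}},
    ],
}

body = json.dumps(payload)
# POST `body` to http://localhost:8082/topics/zuum-positions with the header
# Content-Type: application/vnd.kafka.avro.v2+json (e.g. via urllib or curl).
```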
When I execute this request, I see the following exception in the Docker logs:
Error encountered in task zuum-sink-positions-0. Executing stage 'VALUE_CONVERTER' with class 'io.confluent.connect.avro.AvroConverter', where consumed record is {topic='zuum-positions', partition=0, offset=25, timestamp=1563480487456, timestampType=CreateTime}. org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic zuum-positions to Avro:
connect | at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:107)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:487)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:487)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)
connect | Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 61
connect | Caused by: java.net.ConnectException: Connection refused (Connection refused)
connect | at java.net.PlainSocketImpl.socketConnect(Native Method)
connect | at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
connect | at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
connect | at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
connect | at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
connect | at java.net.Socket.connect(Socket.java:589)
connect | at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
connect | at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
connect | at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
connect | at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
connect | at sun.net.www.http.HttpClient.New(HttpClient.java:339)
connect | at sun.net.www.http.HttpClient.New(HttpClient.java:357)
connect | at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
connect | at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
connect | at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
connect | at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
connect | at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564)
connect | at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
connect | at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
connect | at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:208)
connect | at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:252)
connect | at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:482)
connect | at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:475)
connect | at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:153)
connect | at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:232)
connect | at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getById(CachedSchemaRegistryClient.java:211)
connect | at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:116)
connect | at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaAvroDeserializer.java:215)
connect | at io.confluent.connect.avro.AvroConverter$Deserializer.deserialize(AvroConverter.java:145)
connect | at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:90)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:487)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
connect | at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:487)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
connect | at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
connect | at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
connect | at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
connect | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
connect | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
connect | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
connect | at java.lang.Thread.run(Thread.java:748)
I've spent hours on this and can't find the cause. Usually this error occurs when Connect can't reach the Schema Registry, but I kept the stock configuration:
Can you help?

The problem has been fixed. The Kafka message was being persisted to the topic successfully, but when my JDBC sink connector tried to parse it and copy it into the MySQL DB, it couldn't connect to the Schema Registry URL.
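The "Error deserializing Avro message for id 61" line in the log shows why the registry lookup happens at all: Confluent's Avro wire format prefixes every message with a magic byte and a 4-byte big-endian schema ID, and the consumer must fetch that schema from the registry to decode the body. A small sketch of parsing that header (the frame bytes here are illustrative, not taken from the actual topic):

```python
import struct

def parse_confluent_header(payload: bytes) -> int:
    """Return the schema ID from a Confluent-framed Avro message.

    Wire format: 1 magic byte (0x00) + 4-byte big-endian schema ID,
    followed by the Avro-encoded record body.
    """
    if len(payload) < 5 or payload[0] != 0:
        raise ValueError("not a Confluent-framed Avro message")
    (schema_id,) = struct.unpack(">I", payload[1:5])
    return schema_id

# Example frame carrying schema ID 61, matching the ID in the log above
# (the byte after the header is a placeholder, not real Avro data).
frame = b"\x00" + struct.pack(">I", 61) + b"\x00"
```

This is why a sink connector that cannot reach the registry fails at the `VALUE_CONVERTER` stage even though the message was produced successfully.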
Previous connector config:
{
  "name": "zuum-sink-positions",
  "key.converter.schema.registry.url": "http://localhost:8081",
  "value.converter.schema.registry.url": "http://localhost:8081",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "key.converter.schemas.enable": "false",
  "value.converter.schemas.enable": "true",
  "config.action.reload": "restart",
  "errors.log.enable": "true",
  "errors.log.include.messages": "true",
  "print.key": "true",
  "errors.tolerance": "all",
  "topics": "zuum-positions",
  "connection.url": "jdbc:mysql://ip:3306/zuum_tracking",
  "connection.user": "user",
  "connection.password": "password",
  "auto.create": "true"
}
Updated the Schema Registry URL with the correct host:
{
  "name": "zuum-sink-positions",
  "key.converter.schema.registry.url": "http://schema-registry:8081",
  "value.converter.schema.registry.url": "http://schema-registry:8081",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "key.converter.schemas.enable": "false",
  "value.converter.schemas.enable": "true",
  "config.action.reload": "restart",
  "errors.log.enable": "true",
  "errors.log.include.messages": "true",
  "print.key": "true",
  "errors.tolerance": "all",
  "topics": "zuum-positions",
  "connection.url": "jdbc:mysql://ip:3306/zuum_tracking",
  "connection.user": "user",
  "connection.password": "password",
  "auto.create": "true"
}
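Applying the corrected config can be scripted against the Kafka Connect REST API. A sketch assuming Connect listens on localhost:8083 (the default port), using `PUT /connectors/<name>/config`, which creates the connector or updates it in place; the actual HTTP call is left commented out so nothing is sent accidentally:

```python
import json
import urllib.request

config = {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    # The fix: a hostname resolvable from *inside* the connect container,
    # not localhost (which resolves to the connect container itself).
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "topics": "zuum-positions",
    "connection.url": "jdbc:mysql://ip:3306/zuum_tracking",
    "connection.user": "user",
    "connection.password": "password",
    "auto.create": "true",
}

req = urllib.request.Request(
    "http://localhost:8083/connectors/zuum-sink-positions/config",
    data=json.dumps(config).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(req)  # uncomment to apply against a running cluster
```

Note that `PUT .../config` takes the bare config map without the `"name"` wrapper used by `POST /connectors`.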
If the problem had been with the message key instead, an alternative solution would be to change the Connect container env vars to:
CONNECT_KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
Otherwise, if the problem is only with the value, you only need to specify converter settings in the connector JSON when you want to override the defaults set in the Compose file / connect-distributed.properties. In other words, you could remove the localhost values entirely.