Apache Kafka: Kafka Connect exporting multiple event types from the same topic

Tags: apache-kafka, apache-kafka-connect, confluent-schema-registry

I'm trying to use a new feature () to store two different types of events on the same topic. Specifically, I'm running Confluent Platform 4.1.0 and setting the properties below to make this work:

properties.put(KafkaAvroSerializerConfig.VALUE_SUBJECT_NAME_STRATEGY, TopicRecordNameStrategy.class.getName());
properties.put("value.multi.type", true);
Data is written to the topic without problems, and a Kafka Streams application can read it as generic Avro records. Also, on the Kafka Schema Registry two new entries are created, one for each event type on that particular topic.
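
For context, a minimal producer sketch that writes two event types to one topic. The broker address, record names (demo.EventA, demo.EventB), and their fields are hypothetical placeholders; the strategy setting is the one shown above:

import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import io.confluent.kafka.serializers.KafkaAvroSerializerConfig;
import io.confluent.kafka.serializers.subject.TopicRecordNameStrategy;

public class MultiTypeProducer {

    // Two placeholder event schemas; the record's full name drives the subject name.
    private static final Schema EVENT_A = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"EventA\",\"namespace\":\"demo\","
            + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}");
    private static final Schema EVENT_B = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"EventB\",\"namespace\":\"demo\","
            + "\"fields\":[{\"name\":\"amount\",\"type\":\"double\"}]}");

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", KafkaAvroSerializer.class.getName());
        props.put(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://kafka-schema-registry:8081");
        // One subject per record type: source-topic-demo.EventA and source-topic-demo.EventB,
        // instead of a single source-topic-value subject.
        props.put(KafkaAvroSerializerConfig.VALUE_SUBJECT_NAME_STRATEGY, TopicRecordNameStrategy.class.getName());

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            GenericRecord a = new GenericData.Record(EVENT_A);
            a.put("id", "order-1");
            GenericRecord b = new GenericData.Record(EVENT_B);
            b.put("amount", 9.99);
            // Both record types land on the same topic.
            producer.send(new ProducerRecord<>("source-topic", "key-1", a));
            producer.send(new ProducerRecord<>("source-topic", "key-2", b));
        }
    }
}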

The problem I'm facing is that I cannot export this data from the topic with Kafka Connect. In the simplest case, I use a file sink connector configured as follows:

{
  "name": "sink-connector",
  "config": {
      "topics": "source-topic",
      "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
      "tasks.max": 1,
      "key.converter": "org.apache.kafka.connect.storage.StringConverter",
      "key.converter.schema.registry.url":"http://kafka-schema-registry:8081",
      "value.converter":"io.confluent.connect.avro.AvroConverter",
      "value.converter.schema.registry.url":"http://kafka-schema-registry:8081",
      "value.subject.name.strategy":"io.confluent.kafka.serializers.subject.TopicRecordNameStrategy",
      "file": "/tmp/sink-file.txt"
    }
}
The connector gives me an error that looks like an AvroConverter deserialization failure, similar to the one shown here:

org.apache.kafka.connect.errors.DataException: source-topic
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:95)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:468)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:301)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro schema for id 2
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Subject not found.; error code: 40401
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:202)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:229)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:296)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:284)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getVersionFromRegistry(CachedSchemaRegistryClient.java:125)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getVersion(CachedSchemaRegistryClient.java:236)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:152)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaAvroDeserializer.java:194)
    at io.confluent.connect.avro.AvroConverter$Deserializer.deserialize(AvroConverter.java:120)
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:83)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:468)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:301)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Note that the schema registry holds one Avro schema with id 2 and another with id 3, describing the two event types hosted on that same topic. The same problem occurs when using the JDBC connector.
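
For reference, the subjects behind those ids can be inspected programmatically; a minimal sketch, assuming the 4.x registry client API (the client class already appears in the stack trace, and the URL is the one from the connector config):

import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public class RegistryCheck {
    public static void main(String[] args) throws Exception {
        SchemaRegistryClient client =
                new CachedSchemaRegistryClient("http://kafka-schema-registry:8081", 100);
        // With TopicRecordNameStrategy the value subjects should be named
        // source-topic-<record full name> rather than source-topic-value.
        System.out.println(client.getAllSubjects());
        // The writer schemas the error message refers to (ids 2 and 3):
        System.out.println(client.getById(2));
        System.out.println(client.getById(3));
    }
}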

So, how should I handle this situation in order to export data from my Kafka cluster to an external system? Is something missing from my configuration? Is it possible to have a topic containing multiple event types and export it through Kafka Connect?

Found the solution. My code was passing the key as a string and the value as Avro. On read, the Hive sink tried to look up an Avro schema for the key and could not find it. Adding the properties

key.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schema.registry.url=

solved the problem.
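
Applied to a sink configuration, the converter section then looks roughly like this (the file sink above already carries these lines; the registry URL is the one used throughout this question):

"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"key.converter.schema.registry.url": "http://kafka-schema-registry:8081",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url": "http://kafka-schema-registry:8081"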

Out of interest, what is the schema of the Avro files that Connect creates? A union [typeA, typeB]? Or a record wrapper { {null,typeA} a, {null,typeB} b }?
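
For concreteness, the two shapes described above, written out as Avro schema JSON (record names and fields are hypothetical placeholders). A top-level union of the two types:

[
  {"type": "record", "name": "typeA", "fields": [{"name": "id", "type": "string"}]},
  {"type": "record", "name": "typeB", "fields": [{"name": "amount", "type": "double"}]}
]

versus a wrapper record with one nullable field per type:

{
  "type": "record",
  "name": "wrapper",
  "fields": [
    {"name": "a", "type": ["null", {"type": "record", "name": "typeA",
        "fields": [{"name": "id", "type": "string"}]}]},
    {"name": "b", "type": ["null", {"type": "record", "name": "typeB",
        "fields": [{"name": "amount", "type": "double"}]}]}
  ]
}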