Kafka JDBC sink connector not working as expected


I am trying to use the JDBC sink connector to put data into Postgres, but I do not see any data created in my database. This is the connector configuration I am using:

{
  "name": "Test-Insert",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:postgres://<server_name>;databaseName=postgres;",
    "connection.user" : "username",
    "connection.password" : "password",
    "topics": "test",
    "name": "Test-Insert",
    "key.serializer":"org.apache.kafka.common.serialization.StringSerializer",
    "key.converter":"org.apache.kafka.connect.storage.StringConverter",
    "auto.create":"true"
  }
}
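One thing worth double-checking in the config above: `jdbc:postgres://<server_name>;databaseName=postgres;` mixes URL styles — the `;databaseName=...` form belongs to the SQL Server JDBC driver, while the PostgreSQL driver expects `jdbc:postgresql://host:port/database`. Also, `key.serializer` is a producer property rather than a connector property, and `name` appears twice. A sketch of the same config with those points adjusted (the host, port 5432, and database name are placeholders, not values confirmed by the question):

```json
{
  "name": "Test-Insert",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:postgresql://<server_name>:5432/postgres",
    "connection.user": "username",
    "connection.password": "password",
    "topics": "test",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "auto.create": "true"
  }
}
```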
I created this topic in KSQL, and I can see data in it.

These are my logs, but I cannot see any indication of a problem:

[2018-07-25 14:16:55,169] INFO Starting task (io.confluent.connect.jdbc.sink.JdbcSinkTask:43)
[2018-07-25 14:16:55,172] INFO JdbcSinkConfig values:
        auto.create = true
        auto.evolve = false
        batch.size = 3000
        connection.password = [hidden]
        connection.url = jdbc:postgres://<ip>;databaseName=<db>;
        connection.user = <username>
        fields.whitelist = []
        insert.mode = insert
        max.retries = 10
        pk.fields = []
        pk.mode = none
        retry.backoff.ms = 3000
        table.name.format = ${topic}
 (io.confluent.connect.jdbc.sink.JdbcSinkConfig:279)
[2018-07-25 14:16:55,172] INFO Initializing writer using SQL dialect: GenericDialect (io.confluent.connect.jdbc.sink.JdbcSinkTask:52)
[2018-07-25 14:16:55,172] INFO WorkerSinkTask{id=Test-Insert-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:282)
[2018-07-25 14:16:55,168] INFO EnrichedConnectorConfig values:
        connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
        header.converter = null
        key.converter = class org.apache.kafka.connect.storage.StringConverter
        name = Test-Insert
        tasks.max = 1
        topics = [test]
        topics.regex =
        transforms = []
        value.converter = null
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:279)
[2018-07-25 14:16:55,176] INFO Setting task configurations for 1 workers. (io.confluent.connect.jdbc.JdbcSinkConnector:45)
[2018-07-25 14:16:55,170] INFO Loading template 'schema.namespace.format' (io.confluent.connect.cdc.SchemaGenerator:129)
[2018-07-25 14:16:55,176] INFO Kafka version : 1.1.1-cp1 (org.apache.kafka.common.utils.AppInfoParser:109)
[2018-07-25 14:16:55,176] INFO Kafka commitId : 0a5db4d59ee15a47 (org.apache.kafka.common.utils.AppInfoParser:110)
[2018-07-25 14:16:55,176] INFO Loading template 'schema.key.name.format' (io.confluent.connect.cdc.SchemaGenerator:129)
[2018-07-25 14:16:55,176] INFO Loading template 'schema.value.name.format' (io.confluent.connect.cdc.SchemaGenerator:129)
[2018-07-25 14:16:55,177] INFO Loading template 'topicFormat.format' (io.confluent.connect.cdc.SchemaGenerator:129)
[2018-07-25 14:16:55,177] INFO Starting Services (io.confluent.connect.cdc.BaseServiceTask:44)
[2018-07-25 14:16:55,177] INFO Cluster ID: gi8ubA8UTEa4vzN5T6QDJw (org.apache.kafka.clients.Metadata:265)
[2018-07-25 14:16:55,178] INFO [Consumer clientId=consumer-9, groupId=connect-Test-Insert] Discovered group coordinator eu-west-2.compute.internal:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:605)
[2018-07-25 14:16:55,179] INFO [Consumer clientId=consumer-9, groupId=connect-Test-Insert] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:411)
[2018-07-25 14:16:55,179] INFO [Consumer clientId=consumer-9, groupId=connect-Test-Insert] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:442)
[2018-07-25 14:16:55,182] INFO [Consumer clientId=consumer-9, groupId=connect-Test-Insert] Successfully joined group with generation 11 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:409)
[2018-07-25 14:16:55,182] INFO [Consumer clientId=consumer-9, groupId=connect-Test-Insert] Setting newly assigned partitions [test-0] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:256)

Has anyone else run into this issue, or can anyone spot what I am doing wrong?

Are you sure the test topic has data in it? Have you configured this connector before? Perhaps it has already read to the end of the topic and is waiting for new data? Changing the connector's name would ensure it starts again from the beginning of the topic.

Hi @ChrisMatta, yes, I'm sure there is data there. I have run my producer several times, and I can see the data appear when I run PRINT 'test'.

Was that the first time you configured the connector since you produced to this topic? Are you sure the connector is in a good state? You can check with

curl <connect-worker>:8083/connectors/Test-Insert/status
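The status check suggested in the comment above queries the Kafka Connect worker's REST API; a sketch of how it might look (the worker hostname is a placeholder, and 8083 is the default Connect REST port — your deployment may differ):

```shell
# Query the connector's overall state and per-task state.
curl -s http://<connect-worker>:8083/connectors/Test-Insert/status

# If jq is installed, show only tasks that are not RUNNING; a FAILED
# task's entry includes a "trace" field with the stack trace.
curl -s http://<connect-worker>:8083/connectors/Test-Insert/status \
  | jq '.tasks[] | select(.state != "RUNNING")'
```

A task can sit in a FAILED state without writing anything further to the worker log, so this endpoint is often the quickest way to see why no rows are reaching the database.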