Apache Kafka Debezium MySQL connector to Kinesis - service not starting


I am trying to get Debezium to send my CDC events to Kinesis. The service appears to start, but it then errors out and does not seem to send anything to Kinesis. I am following this installation guide on the Debezium website:

This is my configuration file:

debezium.sink.type=kinesis
debezium.sink.kinesis.region=us-east-1
debezium.source.connector.class=io.debezium.connector.mysql.MySqlConnector
debezium.source.offset.storage.file.filename=data/offsets.dat
debezium.source.offset.flush.interval.ms=0
debezium.source.database.hostname=127.0.0.1
debezium.source.database.id=12345
debezium.source.database.port=3306
debezium.source.database.user=*****
debezium.source.database.password=******
debezium.source.database.dbname=inventory
debezium.source.database.server.name=127.0.0.1
debezium.source.schema.whitelist=inventory
When I run the server using /run.sh, it appears to start up and then fails with this error:

2020-07-17 19:11:22,380 INFO  [io.deb.ser.BaseChangeConsumer] (main) Using 'io.debezium.server.BaseChangeConsumer$$Lambda$81/694452085@2abf4075' stream name mapper
2020-07-17 19:11:22,754 INFO  [io.deb.ser.kin.KinesisChangeConsumer] (main) Using default KinesisClient 'software.amazon.awssdk.services.kinesis.DefaultKinesisClient@5d5f10b2'
2020-07-17 19:11:22,755 INFO  [io.deb.ser.DebeziumServer] (main) Consumer 'io.debezium.server.kinesis.KinesisChangeConsumer' instantiated
2020-07-17 19:11:22,926 INFO  [org.apa.kaf.con.jso.JsonConverterConfig] (main) JsonConverterConfig values:
    converter.type = key
    decimal.format = BASE64
    schemas.cache.size = 1000
    schemas.enable = true

2020-07-17 19:11:22,928 INFO  [org.apa.kaf.con.jso.JsonConverterConfig] (main) JsonConverterConfig values:
    converter.type = value
    decimal.format = BASE64
    schemas.cache.size = 1000
    schemas.enable = false

2020-07-17 19:11:22,936 INFO  [io.deb.emb.EmbeddedEngine$EmbeddedConfig] (main) EmbeddedConfig values:
    access.control.allow.methods =
    access.control.allow.origin =
    admin.listeners = null
    bootstrap.servers = [localhost:9092]
    client.dns.lookup = default
    config.providers = []
    connector.client.config.override.policy = None
    header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
    internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
    internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
    key.converter = class org.apache.kafka.connect.json.JsonConverter
    listeners = null
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    offset.flush.interval.ms = 0
    offset.flush.timeout.ms = 5000
    offset.storage.file.filename = data/offsets.dat
    offset.storage.partitions = null
    offset.storage.replication.factor = null
    offset.storage.topic =
    plugin.path = null
    rest.advertised.host.name = null
    rest.advertised.listener = null
    rest.advertised.port = null
    rest.extension.classes = []
    rest.host.name = null
    rest.port = 8083
    ssl.client.auth = none
    task.shutdown.graceful.timeout.ms = 5000
    topic.tracking.allow.reset = true
    topic.tracking.enable = true
    value.converter = class org.apache.kafka.connect.json.JsonConverter

2020-07-17 19:11:22,937 INFO  [org.apa.kaf.con.run.WorkerConfig] (main) Worker configuration property 'internal.key.converter' is deprecated and may be removed in an upcoming release. The specified value 'org.apache.kafka.connect.json.JsonConverter' matches the default, so this property can be safely removed from the worker configuration.
2020-07-17 19:11:22,937 INFO  [org.apa.kaf.con.run.WorkerConfig] (main) Worker configuration property 'internal.value.converter' is deprecated and may be removed in an upcoming release. The specified value 'org.apache.kafka.connect.json.JsonConverter' matches the default, so this property can be safely removed from the worker configuration.
2020-07-17 19:11:22,939 INFO  [org.apa.kaf.con.jso.JsonConverterConfig] (main) JsonConverterConfig values:
    converter.type = key
    decimal.format = BASE64
    schemas.cache.size = 1000
    schemas.enable = true

2020-07-17 19:11:22,940 INFO  [org.apa.kaf.con.jso.JsonConverterConfig] (main) JsonConverterConfig values:
    converter.type = value
    decimal.format = BASE64
    schemas.cache.size = 1000
    schemas.enable = true

2020-07-17 19:11:22,942 INFO  [io.deb.ser.DebeziumServer] (main) Engine executor started
2020-07-17 19:11:22,948 INFO  [org.apa.kaf.con.sto.FileOffsetBackingStore] (pool-3-thread-1) Starting FileOffsetBackingStore with file data/offsets.dat
2020-07-17 19:11:22,989 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) Starting MySqlConnectorTask with configuration:
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    connector.class = io.debezium.connector.mysql.MySqlConnector
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    offset.flush.interval.ms = 0
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    database.user = mysqluser
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    database.dbname = inventory
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    offset.storage.file.filename = data/offsets.dat
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    database.hostname = 127.0.0.1
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    database.id = 12345
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    database.password = ********
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    name = kinesis
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    database.server.name = 127.0.0.1
2020-07-17 19:11:22,990 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    database.port = 3306
2020-07-17 19:11:22,991 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1)    schema.whitelist = inventory
2020-07-17 19:11:23,063 INFO  [io.quarkus] (main) debezium-server-dist 1.2.0.Final on JVM (powered by Quarkus 1.5.0.Final) started in 1.177s. Listening on: http://0.0.0.0:8080
2020-07-17 19:11:23,065 INFO  [io.quarkus] (main) Profile prod activated.
2020-07-17 19:11:23,065 INFO  [io.quarkus] (main) Installed features: [cdi, smallrye-health]
2020-07-17 19:11:23,276 ERROR [io.deb.rel.his.KafkaDatabaseHistory] (pool-3-thread-1) The 'database.history.kafka.topic' value is invalid: A value is required
2020-07-17 19:11:23,276 ERROR [io.deb.rel.his.KafkaDatabaseHistory] (pool-3-thread-1) The 'database.history.kafka.bootstrap.servers' value is invalid: A value is required
2020-07-17 19:11:23,277 INFO  [io.deb.con.com.BaseSourceTask] (pool-3-thread-1) Stopping down connector
2020-07-17 19:11:23,277 INFO  [io.deb.con.mys.MySqlConnectorTask] (pool-3-thread-1) Stopping MySQL connector task
2020-07-17 19:11:23,278 INFO  [org.apa.kaf.con.sto.FileOffsetBackingStore] (pool-3-thread-1) Stopped FileOffsetBackingStore
2020-07-17 19:11:23,278 INFO  [io.deb.ser.ConnectorLifecycle] (pool-3-thread-1) Connector completed: success = 'false', message = 'Unable to initialize and start connector's task class 'io.debezium.connector.mysql.MySqlConnectorTask' with config: {name=kinesis, connector.class=io.debezium.connector.mysql.MySqlConnector, database.id=12345, schema.whitelist=inventory, database.port=3306, database.user=username, database.hostname=127.0.0.1, offset.storage.file.filename=data/offsets.dat, database.password=********, offset.flush.interval.ms=0, database.server.name=127.0.0.1, database.dbname=inventory}', error = '{}': org.apache.kafka.connect.errors.ConnectException: Error configuring an instance of KafkaDatabaseHistory; check the logs for details
    at io.debezium.relational.history.KafkaDatabaseHistory.configure(KafkaDatabaseHistory.java:180)
    at io.debezium.connector.mysql.MySqlSchema.<init>(MySqlSchema.java:139)
    at io.debezium.connector.mysql.MySqlTaskContext.<init>(MySqlTaskContext.java:86)
    at io.debezium.connector.mysql.MySqlTaskContext.<init>(MySqlTaskContext.java:52)
    at io.debezium.connector.mysql.MySqlConnectorTask.createAndStartTaskContext(MySqlConnectorTask.java:357)
    at io.debezium.connector.mysql.MySqlConnectorTask.start(MySqlConnectorTask.java:143)
    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:101)
    at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:756)
    at io.debezium.embedded.ConvertingEngineBuilder$2.run(ConvertingEngineBuilder.java:170)
    at io.debezium.server.DebeziumServer.lambda$start$1(DebeziumServer.java:133)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

^C2020-07-18 00:00:44,245 INFO  [io.deb.ser.DebeziumServer] (main) Received request to stop the engine
2020-07-18 00:00:44,245 INFO  [io.deb.emb.EmbeddedEngine] (main) Stopping the embedded engine
2020-07-18 00:00:44,254 INFO  [io.quarkus] (main) debezium-server-dist stopped in 0.028s
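
For reference, the two ERROR lines say that the default KafkaDatabaseHistory could not be configured because 'database.history.kafka.topic' and 'database.history.kafka.bootstrap.servers' have no values. Below is a minimal sketch of the history-related properties I believe are missing from conf/application.properties, assuming a file-based history store is acceptable instead of a Kafka topic (property names are taken from the Debezium MySQL connector options, not from the guide above, and the file path is an assumption):

# Assumption: use a file-based database history instead of the Kafka-backed default
debezium.source.database.history=io.debezium.relational.history.FileDatabaseHistory
# Hypothetical path; any writable location should work
debezium.source.database.history.file.filename=data/history.dat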