PostgreSQL: CDC with WSO2 Streaming Integrator and Postgres DB


I am trying to set up Change Data Capture (CDC) between WSO2 Streaming Integrator and a local Postgres DB.

I have added the Postgres driver (v42.2.5) to SI_HOME/lib, and I am able to read data from the database from a Siddhi application.
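For context, the CDC part of the setup looks roughly like the Siddhi app below. It is shown here through the Siddhi Java API purely for illustration (in practice the app is deployed to Streaming Integrator); the table and stream names match the ones in the log further down, while the URL, credentials and column types are placeholders, and siddhi-io-cdc plus its dependencies have to be on the classpath.

    import io.siddhi.core.SiddhiAppRuntime;
    import io.siddhi.core.SiddhiManager;
    import io.siddhi.core.event.Event;
    import io.siddhi.core.stream.output.StreamCallback;

    public class CdcListeningModeDemo {
        public static void main(String[] args) throws InterruptedException {
            // Siddhi app equivalent to the one deployed in Streaming Integrator.
            // URL, credentials and column types are placeholders.
            String siddhiApp =
                "@App:name('CDCWithListeningMode') " +
                "@source(type = 'cdc', " +
                "        url = 'jdbc:postgresql://localhost:5432/db_name', " +
                "        username = 'user_name', password = 'password', " +
                "        table.name = 'SweetProductionTable', operation = 'insert', " +
                "        @map(type = 'keyvalue')) " +
                "define stream insertSweetProductionStream (name string, amount double);";

            SiddhiManager siddhiManager = new SiddhiManager();
            SiddhiAppRuntime runtime = siddhiManager.createSiddhiAppRuntime(siddhiApp);

            // Print every change event captured from the table.
            runtime.addCallback("insertSweetProductionStream", new StreamCallback() {
                @Override
                public void receive(Event[] events) {
                    for (Event event : events) {
                        System.out.println(event);
                    }
                }
            });

            runtime.start();
            Thread.sleep(60_000); // keep the runtime alive long enough to observe events
            runtime.shutdown();
            siddhiManager.shutdown();
        }
    }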

I followed this example to implement CDC and used pgoutput as the logical decoding plugin. But when I run the application, I get the following log:

    [2020-04-23_19-02-37_460] INFO {org.apache.kafka.connect.json.JsonConverterConfig} - JsonConverterConfig values: 
    converter.type = key
    schemas.cache.size = 1000
    schemas.enable = true

[2020-04-23_19-02-37_461] INFO {org.apache.kafka.connect.json.JsonConverterConfig} - JsonConverterConfig values: 
    converter.type = value
    schemas.cache.size = 1000
    schemas.enable = false

[2020-04-23_19-02-37_461] INFO {io.debezium.embedded.EmbeddedEngine$EmbeddedConfig} - EmbeddedConfig values: 
    access.control.allow.methods = 
    access.control.allow.origin = 
    bootstrap.servers = [localhost:9092]
    header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
    internal.key.converter = class org.apache.kafka.connect.json.JsonConverter
    internal.value.converter = class org.apache.kafka.connect.json.JsonConverter
    key.converter = class org.apache.kafka.connect.json.JsonConverter
    listeners = null
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    offset.flush.interval.ms = 60000
    offset.flush.timeout.ms = 5000
    offset.storage.file.filename = 
    offset.storage.partitions = null
    offset.storage.replication.factor = null
    offset.storage.topic = 
    plugin.path = null
    rest.advertised.host.name = null
    rest.advertised.listener = null
    rest.advertised.port = null
    rest.host.name = null
    rest.port = 8083
    ssl.client.auth = none
    task.shutdown.graceful.timeout.ms = 5000
    value.converter = class org.apache.kafka.connect.json.JsonConverter

[2020-04-23_19-02-37_516] INFO {io.debezium.connector.common.BaseSourceTask} -    offset.storage = io.siddhi.extension.io.cdc.source.listening.InMemoryOffsetBackingStore 
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} -    database.server.name = localhost_5432 
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} -    database.port = 5432 
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} -    table.whitelist = SweetProductionTable 
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} -    cdc.source.object = 1716717434 
[2020-04-23_19-02-37_517] INFO {io.debezium.connector.common.BaseSourceTask} -    database.hostname = localhost 
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} -    database.password = ******** 
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} -    name = CDCWithListeningModeinsertSweetProductionStream 
[2020-04-23_19-02-37_518] INFO {io.debezium.connector.common.BaseSourceTask} -    server.id = 6140 
[2020-04-23_19-02-37_519] INFO {io.debezium.connector.common.BaseSourceTask} -    database.history = io.debezium.relational.history.FileDatabaseHistory 
[2020-04-23_19-02-38_103] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - user 'user_name' connected to database 'db_name' on PostgreSQL 11.5, compiled by Visual C++ build 1914, 64-bit with roles:
    role 'user_name' [superuser: false, replication: true, inherit: true, create role: false, create db: false, can log in: true] (Encoded) 
[2020-04-23_19-02-38_104] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - No previous offset found 
[2020-04-23_19-02-38_104] INFO {io.debezium.connector.postgresql.PostgresConnectorTask} - Taking a new snapshot of the DB and streaming logical changes once the snapshot is finished... 
[2020-04-23_19-02-38_105] INFO {io.debezium.util.Threads} - Requested thread factory for connector PostgresConnector, id = localhost_5432 named = records-snapshot-producer 
[2020-04-23_19-02-38_105] INFO {io.debezium.util.Threads} - Requested thread factory for connector PostgresConnector, id = localhost_5432 named = records-stream-producer 
[2020-04-23_19-02-38_293] INFO {io.debezium.connector.postgresql.connection.PostgresConnection} - Obtained valid replication slot ReplicationSlot [active=false, latestFlushedLSN=null] 
[2020-04-23_19-02-38_704] ERROR {io.siddhi.core.stream.input.source.Source} - Error on 'CDCWithListeningMode'. Connection to the database lost. Error while connecting at Source 'cdc' at 'insertSweetProductionStream'. Will retry in '5 sec'. (Encoded) 
io.siddhi.core.exception.ConnectionUnavailableException: Connection to the database lost.
    at io.siddhi.extension.io.cdc.source.CDCSource.lambda$connect$1(CDCSource.java:424)
    at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:793)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.connect.errors.ConnectException: Cannot create replication connection
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:87)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:38)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$ReplicationConnectionBuilder.build(PostgresReplicationConnection.java:362)
    at io.debezium.connector.postgresql.PostgresTaskContext.createReplicationConnection(PostgresTaskContext.java:65)
    at io.debezium.connector.postgresql.RecordsStreamProducer.<init>(RecordsStreamProducer.java:81)
    at io.debezium.connector.postgresql.RecordsSnapshotProducer.<init>(RecordsSnapshotProducer.java:70)
    at io.debezium.connector.postgresql.PostgresConnectorTask.createSnapshotProducer(PostgresConnectorTask.java:133)
    at io.debezium.connector.postgresql.PostgresConnectorTask.start(PostgresConnectorTask.java:86)
    at io.debezium.connector.common.BaseSourceTask.start(BaseSourceTask.java:45)
    at io.debezium.embedded.EmbeddedEngine.run(EmbeddedEngine.java:677)
    ... 3 more
Caused by: io.debezium.jdbc.JdbcConnectionException: ERROR: could not access file "decoderbufs": No such file or directory
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:145)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.<init>(PostgresReplicationConnection.java:79)
    ... 12 more
Caused by: org.postgresql.util.PSQLException: ERROR: could not access file "decoderbufs": No such file or directory
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
    at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
    at org.postgresql.replication.fluent.logical.LogicalCreateSlotBuilder.make(LogicalCreateSlotBuilder.java:48)
    at io.debezium.connector.postgresql.connection.PostgresReplicationConnection.initReplicationSlot(PostgresReplicationConnection.java:108)
    ... 13 more
Debezium defaults to the decoderbufs plugin, hence "could not access file "decoderbufs": No such file or directory".

According to this, the issue is caused by the configuration of the decoderbufs plugin.
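The failure can also be reproduced outside of Siddhi and Debezium. Below is a rough JDBC sketch (connection details are placeholders) that asks the server to create a logical replication slot with a given output plugin: with decoderbufs it fails with the same "could not access file" error whenever the shared library is not installed, while pgoutput, which ships with PostgreSQL 10+, succeeds, assuming wal_level = logical and a role with the REPLICATION attribute.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class DecodingPluginCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the role needs the REPLICATION
            // attribute and the server must run with wal_level = logical.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/db_name", "user_name", "password")) {
                tryPlugin(conn, "pgoutput");     // built into PostgreSQL 10+
                tryPlugin(conn, "decoderbufs");  // needs decoderbufs.so on the server
            }
        }

        static void tryPlugin(Connection conn, String plugin) {
            // pg_create_logical_replication_slot loads the named output plugin;
            // this is the same step that fails in the Debezium stack trace above.
            try (PreparedStatement create = conn.prepareStatement(
                     "SELECT pg_create_logical_replication_slot('plugin_check_slot', ?)");
                 PreparedStatement drop = conn.prepareStatement(
                     "SELECT pg_drop_replication_slot('plugin_check_slot')")) {
                create.setString(1, plugin);
                create.execute();
                drop.execute();
                System.out.println(plugin + ": OK");
            } catch (SQLException e) {
                // e.g. ERROR: could not access file "decoderbufs": No such file or directory
                System.out.println(plugin + ": " + e.getMessage());
            }
        }
    }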

Details

  • Postgres - 11.4
  • siddhi-io-cdc - 2.0.3
  • Debezium - 0.8.3

How do I configure the embedded Debezium engine to use the pgoutput plugin? Would changing this configuration fix the error?

Please help me resolve this issue. I have not been able to find any resources that address it.

You need to update Debezium to the latest 1.1 version, which will enable you to use the pgoutput plugin via the plugin.name config option. Alternatively, you need to deploy (or possibly build) the decoderbufs.so library for your PostgreSQL database.

I recommend the former, as 0.8.3 is a very old version.
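For illustration, this is roughly what a Debezium PostgreSQL connector configuration selecting pgoutput looks like on Debezium 0.10+/1.x. The property keys are standard Debezium / Kafka Connect embedded-engine options; host, credentials, slot and file names are placeholders, and whether the Siddhi CDC source exposes a way to pass plugin.name through depends on the siddhi-io-cdc version, so treat the wiring as an assumption.

    import java.util.Properties;

    public class PgoutputConnectorConfig {

        // A sketch of the Debezium 1.x connector properties that select the
        // pgoutput logical decoding plugin instead of the default decoderbufs.
        public static Properties build() {
            Properties props = new Properties();
            props.setProperty("name", "CDCWithListeningModeinsertSweetProductionStream");
            props.setProperty("connector.class",
                    "io.debezium.connector.postgresql.PostgresConnector");
            props.setProperty("plugin.name", "pgoutput");   // the relevant option
            props.setProperty("database.hostname", "localhost");
            props.setProperty("database.port", "5432");
            props.setProperty("database.user", "user_name");
            props.setProperty("database.password", "password");
            props.setProperty("database.dbname", "db_name");
            props.setProperty("database.server.name", "localhost_5432");
            props.setProperty("table.whitelist", "SweetProductionTable");
            props.setProperty("slot.name", "siddhi_cdc_slot");
            // Embedded-engine bookkeeping; the Siddhi extension uses its own
            // in-memory offset store, as seen in the log above.
            props.setProperty("offset.storage",
                    "org.apache.kafka.connect.storage.FileOffsetBackingStore");
            props.setProperty("offset.storage.file.filename", "/tmp/cdc-offsets.dat");
            props.setProperty("offset.flush.interval.ms", "60000");
            return props;
        }
    }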

I observed this behaviour in PostgreSQL 12 when I tried to do CDC with the pgoutput logical decoding output plugin. It seems that even though I had configured the database with pgoutput, the Siddhi extension still tried to establish the connection using 'decoderbufs' as the decoding plugin.

When I configured decoderbufs as the logical decoding output plugin at the database level, I was able to use the Siddhi IO extension without any issues.

So as of now, it looks like Siddhi IO CDC only supports the decoderbufs logical decoding output plugin with PostgreSQL.
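One way to confirm which output plugin the extension actually requested is to look at the replication slots on the server. Below is a small JDBC sketch with placeholder connection details; note that if the slot was never created because the plugin failed to load, nothing is listed, which itself points back at the decoderbufs error above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReplicationSlotInspector {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/db_name", "user_name", "password");
                 Statement stmt = conn.createStatement();
                 // pg_replication_slots lists every slot together with the output
                 // plugin it was created for; a 'decoderbufs' entry means the client
                 // asked for decoderbufs regardless of how the database was set up.
                 ResultSet rs = stmt.executeQuery(
                     "SELECT slot_name, plugin, slot_type, active FROM pg_replication_slots")) {
                while (rs.next()) {
                    System.out.printf("%s  plugin=%s  type=%s  active=%s%n",
                            rs.getString("slot_name"), rs.getString("plugin"),
                            rs.getString("slot_type"), rs.getBoolean("active"));
                }
            }
        }
    }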
