
Debezium Postgres Kafka connector heartbeat not committing LSN


I have a Postgres database on AWS RDS and a Kafka Connect connector (Debezium Postgres) listening on a table. The connector's configuration:

{
  "name": "my-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.dbname": "my_db",
    "database.user": "my_user",
    "max.queue.size": "32000",
    "slot.name": "my_slot",
    "tasks.max": "1",
    "publication.name": "my_publication",
    "database.server.name": "postgres",
    "heartbeat.interval.ms": "1000",
    "database.port": "my_port",
    "include.schema.changes": "false",
    "plugin.name": "pgoutput",
    "table.whitelist": "public.my_table",
    "tombstones.on.delete": "false",
    "database.hostname": "my_host",
    "database.password": "my_password",
    "name": "my-connector",
    "max.batch.size": "10000",
    "database.whitelist": "my_db",
    "snapshot.mode": "never"
  },
  "tasks": [
    {
      "connector": "my-connector",
      "task": 0
    }
  ],
  "type": "source"
}
The table is updated far less often than the other tables, which initially showed up as replication slot lag:

SELECT slot_name,
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) as replicationSlotLag,
  pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) as confirmedLag,
  active
FROM pg_replication_slots;
           slot_name           | replicationslotlag | confirmedlag | active
-------------------------------+--------------------+--------------+--------
 my_slot                       | 1664 MB            | 1664 MB      | t
It grows so large that it could eventually exhaust all the disk space.

I added a heartbeat, and if I log into a Kafka broker and start a console consumer like this:

./kafka-console-consumer.sh --bootstrap-server my.broker.address:9092 --topic __debezium-heartbeat.postgres --from-beginning --consumer.config=/etc/kafka/consumer.properties

it dumps all the heartbeat messages and then shows a new one every 1000 ms.

However, the slot keeps growing regardless. If I insert a dummy record into the table, the slot drops back to a small lag, so that works as a workaround.

I want the heartbeat to handle this, though. I don't want to insert periodic dummy records, because that sounds like added complexity. Why isn't the heartbeat reducing the slot size?
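For reference, the dummy write that resets the lag is just any committed change to a table in the publication; a minimal sketch, where the id and note columns are placeholders for whatever the monitored table actually has:

-- Hypothetical dummy write: any committed change on a published table
-- gives the connector an event to process, letting it acknowledge the LSN.
INSERT INTO public.my_table (id, note)
VALUES (-1, 'debezium keepalive')
ON CONFLICT (id) DO UPDATE SET note = EXCLUDED.note;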

Please see


You do indeed need to emit periodic messages: the heartbeat only commits offsets for changes the connector has actually received, so if the captured database produces no events while other databases on the same Postgres instance keep generating WAL, there is nothing for it to acknowledge and the slot falls behind. But there is now help for this -
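If the feature being referred to is the heartbeat.action.query connector option, it makes the connector itself run a small write against the captured database on every heartbeat, so the LSN gets acknowledged without any external job. A sketch of the relevant config fragment, assuming a dedicated public.debezium_heartbeat table has been created in my_db (the table and its columns are assumptions here):

{
  "heartbeat.interval.ms": "1000",
  "heartbeat.action.query": "INSERT INTO public.debezium_heartbeat (id, ts) VALUES (1, now()) ON CONFLICT (id) DO UPDATE SET ts = EXCLUDED.ts"
}

Depending on the Debezium version, the heartbeat table may also need to be added to the publication / table.whitelist so the connector actually receives the resulting events.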

Thanks for the link to that issue. Good to know it's on the team's radar; I'll use the periodic-message solution for now. So we no longer have to send periodic messages ourselves? I saw Jiri's comment on that issue saying the feature has already been released.