SQL Server: increase Debezium throughput
I recently started using Debezium to capture data changes in real time and push them into a target database. I use Azure Event Hubs with Kafka Connect to connect to SQL Server, and the Confluent JDBC sink connector to write the changed data into the target database, which is SQL Server rather than Kafka.

I understand that Debezium captures changes asynchronously and therefore has little impact on database performance, but is there a way to increase the streaming throughput?

I recently provisioned the Event Hubs namespace with a minimum of 10 throughput units, auto-inflating up to 20. I therefore expected Debezium + Kafka Connect + Event Hubs to ingest 10-20 MB per second, with egress of 20-40 MB per second. The actual performance, however, is far worse. I manually inserted 10k records into the source database, less than 6 MB in total, so I expected Debezium and the sink connector to capture the changes and write them to the target database within seconds. Instead, the sink connector does not pick up the data in one batch; it trickles the updates into the target database.

Below is my configuration. Please let me know if anything should be changed to improve performance. Any help would be greatly appreciated.

Kafka Connect:

bootstrap.servers=sqldbcdc.servicebus.windows.net:9093
group.id=connect-cluster-group
# connect internal topic names, auto-created if not exists
config.storage.topic=connect-cluster-configs
offset.storage.topic=connect-cluster-offsets
status.storage.topic=connect-cluster-status
# internal topic replication factors - auto 3x replication in Azure Storage
config.storage.replication.factor=1
offset.storage.replication.factor=1
status.storage.replication.factor=1
rest.advertised.host.name=connect
offset.flush.interval.ms=10000
connections.max.idle.ms=180000
metadata.max.age.ms=180000
auto.register.schemas=false
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
# required EH Kafka security settings
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://sqldbcdc.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=**************************=";
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://sqldbcdc.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=**************************=";
consumer.security.protocol=SASL_SSL
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://sqldbcdc.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=**************************=";
plugin.path=C:\kafka\libs
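
One knob worth checking on the worker itself is client batching: by default the Connect producer sends small batches with no lingering, which caps throughput per request to Event Hubs. A hedged sketch of worker-level overrides that could be appended to the properties file above; the values are illustrative starting points, not recommendations:

```properties
# Let the source-side producer accumulate larger batches before sending.
producer.batch.size=131072
producer.linger.ms=50
# Let the sink-side consumer fetch more data per request instead of
# returning as soon as a single record is available.
consumer.fetch.min.bytes=65536
consumer.fetch.max.wait.ms=500
```

These are standard Kafka client properties passed through by Kafka Connect via the producer./consumer. prefixes; the right values depend on record size and latency tolerance, so they should be tuned against observed batch sizes rather than copied as-is.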
SQL connector:
{
  "name": "sql-server-connection",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "tasks.max": "1",
    "database.hostname": "localhost",
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "******",
    "database.dbname": "demodb",
    "database.server.name": "dbservername",
    "table.whitelist": "dbo.portfolios",
    "database.history": "io.debezium.relational.history.MemoryDatabaseHistory",
    "transforms": "route",
    "transforms.route.type": "org.apache.kafka.connect.transforms.RegexRouter",
    "transforms.route.regex": "([^.]+)\\.([^.]+)\\.([^.]+)",
    "transforms.route.replacement": "$3"
  }
}
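
On the source side, Debezium exposes its own batching controls that are independent of the producer settings: how many change events it reads from the SQL Server CDC tables per poll and how often it polls. A hedged example of properties that could be added to the connector config above; the values are illustrative and must satisfy the constraint that max.queue.size is larger than max.batch.size:

```json
{
  "max.batch.size": "8192",
  "max.queue.size": "32768",
  "poll.interval.ms": "100"
}
```

Raising these mainly helps when the connector is falling behind a burst of changes; for a steady trickle of inserts the defaults are rarely the bottleneck.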
Sink connector:
{
  "name": "jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "portfolios",
    "connection.url": "jdbc:sqlserver://localhost:1433;instance=NEWMSSQLSERVER;databaseName=demodb",
    "connection.user": "sa",
    "connection.password": "*****",
    "batch.size": 2000,
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
    "transforms.unwrap.drop.tombstones": false,
    "transforms.unwrap.delete.handling.mode": "none",
    "auto.create": "true",
    "insert.mode": "upsert",
    "delete.enabled": true,
    "pk.fields": "portfolio_id",
    "pk.mode": "record_key",
    "table.name.format": "replicated_portfolios"
  }
}
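
One likely reason the sink trickles records is that its batch.size of 2000 is an upper bound, not a guarantee: the sink can only write as many rows per batch as the consumer hands it per poll, and the Kafka consumer's max.poll.records defaults to 500. A hedged sketch of a per-connector override (available since Kafka 2.3 and only honored when the worker sets connector.client.config.override.policy=All); the value is illustrative:

```json
{
  "consumer.override.max.poll.records": "2000"
}
```

If the override policy cannot be changed, setting consumer.max.poll.records directly in the worker properties file applies the same limit to all sink connectors on that worker.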