Amazon S3 Kafka Connect transform RegexRouter exiting with unrecoverable exception

I have built a Kafka pipeline that copies a SQL Server table to S3.

In the sink, I am trying to strip the prefix from the topic name with the RegexRouter transform (so that, for example, SQLSERVER-TEST-TABLE_TEST becomes TABLE_TEST):

    "transforms":"dropPrefix",      
    "transforms.dropPrefix.type":"org.apache.kafka.connect.transforms.RegexRouter",  
    "transforms.dropPrefix.regex":"SQLSERVER-TEST-(.*)",  
    "transforms.dropPrefix.replacement":"$1"
The sink fails with the following message:

org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
    at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:188)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
    ... 10 more
If I remove the transform, the pipeline works fine.

The problem can be reproduced with this docker-compose file:

version: '2'
services:

  smtproblem-zookeeper:
    image: zookeeper
    container_name: smtproblem-zookeeper
    ports:
      - "2181:2181"

  smtproblem-kafka:
    image: confluentinc/cp-kafka:5.0.0
    container_name: smtproblem-kafka
    ports:
      - "9092:9092"
    links:
      - smtproblem-zookeeper
      - smtproblem-minio
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: smtproblem-zookeeper:2181/kafka
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://smtproblem-kafka:9092
      KAFKA_CREATE_TOPICS: "_schemas:3:1:compact"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  smtproblem-schema_registry:
    image: confluentinc/cp-schema-registry:5.0.0
    container_name: smtproblem-schema-registry
    ports:
      - "8081:8081"
    links:
      - smtproblem-kafka
      - smtproblem-zookeeper
    environment:
      SCHEMA_REGISTRY_HOST_NAME: http://smtproblem-schema_registry:8081
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://smtproblem-kafka:9092
      SCHEMA_REGISTRY_GROUP_ID: schema_group

  smtproblem-kafka-connect:
    image: confluentinc/cp-kafka-connect:5.0.0
    container_name: smtproblem-kafka-connect
    command: bash -c "wget -P /usr/share/java/kafka-connect-jdbc http://central.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/6.4.0.jre8/mssql-jdbc-6.4.0.jre8.jar && /etc/confluent/docker/run"
    ports:
      - "8083:8083"
    links:
      - smtproblem-zookeeper
      - smtproblem-kafka
      - smtproblem-schema_registry
      - smtproblem-minio
    environment:
      CONNECT_BOOTSTRAP_SERVERS: smtproblem-kafka:9092
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: "connect_group"
      CONNECT_OFFSET_FLUSH_INTERVAL_MS: 1000
      CONNECT_CONFIG_STORAGE_TOPIC: "connect_config"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect_offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "connect_status"

      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1

      CONNECT_KEY_CONVERTER: "io.confluent.connect.avro.AvroConverter"
      CONNECT_VALUE_CONVERTER: "io.confluent.connect.avro.AvroConverter"

      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://smtproblem-schema_registry:8081"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: "http://smtproblem-schema_registry:8081"

      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"

      CONNECT_REST_ADVERTISED_HOST_NAME: "smtproblem-kafka_connect"

      CONNECT_LOG4J_ROOT_LOGLEVEL: INFO
      CONNECT_LOG4J_LOGGERS: org.reflections=ERROR
      CONNECT_PLUGIN_PATH: "/usr/share/java"

      AWS_ACCESS_KEY_ID: localKey
      AWS_SECRET_ACCESS_KEY: localSecret

  smtproblem-minio:
    image: minio/minio:edge
    container_name: smtproblem-minio
    ports:
      - "9000:9000"
    entrypoint: sh
    command: -c 'mkdir -p /data/datalake && minio server /data'
    environment:
      MINIO_ACCESS_KEY: localKey
      MINIO_SECRET_KEY: localSecret
    volumes:
      - "./minioData:/data"

  smtproblem-sqlserver:
    image: microsoft/mssql-server-linux:2017-GA
    container_name: smtproblem-sqlserver
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "Azertyu&"
    ports:
      - "1433:1433"
Create a database in the SQL Server container:

$ sudo docker exec -it smtproblem-sqlserver bash
# /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Azertyu&'
Create a test database:

create database TEST
GO
use TEST
GO
CREATE TABLE TABLE_TEST (id INT, name NVARCHAR(50), quantity INT, cbMarq INT NOT NULL IDENTITY(1,1), cbModification smalldatetime DEFAULT (getdate()))
GO
INSERT INTO TABLE_TEST VALUES (1, 'banana', 150, 1); INSERT INTO TABLE_TEST VALUES (2, 'orange', 154, 2);
GO
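-- Optional sanity check (not in the original post): read the rows back
SELECT * FROM TABLE_TEST
GO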

exit
exit
Create the source connector:

curl -X PUT http://localhost:8083/connectors/sqlserver-TEST-source-bulk/config -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"connection.password": "Azertyu&",
"validate.non.null": "false",
"tasks.max": "3",
"table.whitelist": "TABLE_TEST",
"mode": "bulk",
"topic.prefix": "SQLSERVER-TEST-",
"connection.user": "SA",
"connection.url": "jdbc:sqlserver://smtproblem-sqlserver:1433;database=TEST"
}'
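
As a quick, hypothetical sanity check (not in the original post), the source topic can be inspected with the Avro console consumer shipped in the Schema Registry image:

docker exec smtproblem-schema-registry kafka-avro-console-consumer \
  --bootstrap-server smtproblem-kafka:9092 \
  --topic SQLSERVER-TEST-TABLE_TEST \
  --property schema.registry.url=http://localhost:8081 \
  --from-beginning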
Create the sink connector:

curl -X PUT http://localhost:8083/connectors/sqlserver-TEST-sink/config -H 'Content-Type: application/json' -H 'Accept: application/json' -d '{
"topics": "SQLSERVER-TEST-TABLE_TEST",
"topics.dir": "TABLE_TEST",
"s3.part.size": 5242880,
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"tasks.max": 3,
"schema.compatibility": "NONE",
"s3.region": "us-east-1",
"schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
"connector.class": "io.confluent.connect.s3.S3SinkConnector",
"partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
"format.class": "io.confluent.connect.s3.format.avro.AvroFormat",
"s3.bucket.name": "datalake",
"store.url": "http://smtproblem-minio:9000",
"flush.size": 1,
"transforms":"dropPrefix",      
"transforms.dropPrefix.type":"org.apache.kafka.connect.transforms.RegexRouter",  
"transforms.dropPrefix.regex":"SQLSERVER-TEST-(.*)",  
"transforms.dropPrefix.replacement":"$1"
}'
The error can be seen in the Kafka Connect UI, or with a curl status request:

curl -X GET http://localhost:8083/connectors/sqlserver-TEST-sink/status
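
The response should look roughly like this (a hand-written, trimmed illustration of the Connect REST status format, not verbatim output):

{
  "name": "sqlserver-TEST-sink",
  "connector": { "state": "RUNNING", "worker_id": "smtproblem-kafka_connect:8083" },
  "tasks": [
    {
      "id": 0,
      "state": "FAILED",
      "trace": "org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.\n..."
    }
  ]
}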

Thanks for your help.

So, if we debug, we can see what it is trying to do.

There is a HashMap keyed by the original topic name (SQLSERVER_TEST_TABLE_TEST-0), and the transform has already been applied (TABLE-TEST-0), so when it looks up the "new" topic name, it cannot find the S3 writer for the topic partition.

Therefore, the map returns null, and the subsequent .buffer(record) call throws an NPE.
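
A simplified sketch of the failing pattern (illustrative names and lookups, not the actual connector source):

// Writers are registered under the ORIGINAL TopicPartition when the task opens,
// but put() receives records whose topic the SMT has already renamed.
Map<TopicPartition, TopicPartitionWriter> topicPartitionWriters = new HashMap<>();
topicPartitionWriters.put(new TopicPartition("SQLSERVER-TEST-TABLE_TEST", 0), writer);

// Looking up the renamed topic finds no entry...
TopicPartitionWriter w = topicPartitionWriters.get(new TopicPartition("TABLE_TEST", 0));

w.buffer(record);  // ...so w is null here, and this call throws the NullPointerException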


I had a similar use case before - writing multiple topics into a single S3 path - and I ended up having to write a custom partitioner, e.g. class MyPartitioner extends DefaultPartitioner.

If you build a JAR with custom code like that, put it under usr/share/java/kafka-connect-storage-common, then point partitioner.class at it in the connector config, and it should work fine, as sketched below.
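
For reference, a minimal sketch of what such a partitioner could look like, assuming the kafka-connect-storage-common 5.0.0 API; the package name and the prefix-stripping logic are illustrative, not the exact code I used:

package com.example;  // hypothetical package

import io.confluent.connect.storage.partitioner.DefaultPartitioner;

// Strips the topic prefix from the S3 path instead of renaming the topic
// with an SMT, so the sink's writer map still sees the original topic name.
public class MyPartitioner<T> extends DefaultPartitioner<T> {

    @Override
    public String generatePartitionedPath(String topic, String encodedPartition) {
        // Files land under "TABLE_TEST/..." rather than "SQLSERVER-TEST-TABLE_TEST/..."
        String strippedTopic = topic.replaceFirst("^SQLSERVER-TEST-", "");
        return strippedTopic + "/" + encodedPartition;
    }
}

With the JAR deployed, set "partitioner.class": "com.example.MyPartitioner" in the sink configuration in place of the default partitioner.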

That said, I'm not sure this is a "bug", per se, because backing up the call stack, there is no way to get a reference to the regex transform at the point where the TopicPartitionWriter is declared with the source topic name.

If anything, the storage connector configuration should allow a separate regex transform that can edit the encodedPartition (the path where it writes the files).

Please specify the Confluent version you are using for Kafka and Connect.

Hello, 5.0.0. My docker-compose uses these versions as well. Thanks.