Apache Kafka: problem starting a new SSL-based Kafka Connect connector


I am trying to set up a new Elasticsearch sink job on a Kafka Connect cluster. The cluster has been running smoothly for a couple of months, connecting securely to Kafka via SASL_SSL and to an Elasticsearch instance on host A via HTTPS.

The KC cluster normally runs in Kubernetes, but for testing purposes I am also running it locally with Docker (an image based on Confluent's KC image v6.0.0). Kafka lives in a test environment, and the jobs are started with REST calls (a sketch of such a call is included right after the compose file below).

The docker-compose file used to run it locally looks like this:

version: '3.7'
services:
  connect:
    build:
      dockerfile: Dockerfile.local
      context: ./
    container_name: kafka-connect
    ports:
      - "8083:8083"
    environment:
      KAFKA_OPTS: -Djava.security.krb5.conf=/<path-to>/secrets/krb5.conf 
                  -Djava.security.auth.login.config=/<path-to>/rest-basicauth-jaas.conf
      CONNECT_BOOTSTRAP_SERVERS: <KAFKA-INSTANCE-1>:2181,<KAFKA-INSTANCE-2>:2181,<KAFKA-INSTANCE-3>:2181
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
      CONNECT_REST_PORT: 8083
      CONNECT_REST_EXTENSION_CLASSES: org.apache.kafka.connect.rest.basic.auth.extension.BasicAuthSecurityRestExtension
      CONNECT_GROUP_ID: <kc-group>
      CONNECT_CONFIG_STORAGE_TOPIC: service-assurance.test.internal.connect.configs
      CONNECT_OFFSET_STORAGE_TOPIC: service-assurance.test.internal.connect.offsets
      CONNECT_STATUS_STORAGE_TOPIC: service-assurance.test.internal.connect.status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.converters.IntegerConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_ZOOKEEPER_CONNECT: <KAFKA-INSTANCE-1>:2181,<KAFKA-INSTANCE-2>:2181,<KAFKA-INSTANCE-3>:2181
      CONNECT_SECURITY_PROTOCOL: SASL_SSL
      CONNECT_SASL_KERBEROS_SERVICE_NAME: "kafka"
      CONNECT_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required \
                                useKeyTab=true \
                                storeKey=true \
                                keyTab="/<path-to>/kafka-connect.keytab" \
                                principal="<AD-USER>";
      CONNECT_SASL_MECHANISM: GSSAPI
      CONNECT_SSL_TRUSTSTORE_LOCATION: "/<path-to>/truststore.jks"
      CONNECT_SSL_TRUSTSTORE_PASSWORD: <pwd>
      CONNECT_CONSUMER_SECURITY_PROTOCOL: SASL_SSL
      CONNECT_CONSUMER_SASL_KERBEROS_SERVICE_NAME: "kafka"
      CONNECT_CONSUMER_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required \
                                useKeyTab=true \
                                storeKey=true \
                                keyTab="/<path-to>/kafka-connect.keytab" \
                                principal="<AD-USER>";
      CONNECT_CONSUMER_SASL_MECHANISM: GSSAPI
      CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: "/<path-to>/truststore.jks"
      CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: <pwd>
      CONNECT_PLUGIN_PATH: "/usr/share/java,/etc/kafka-connect/jars"
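For reference, a connector job is created by POSTing its configuration to the Connect REST API on port 8083. The call below is only a rough sketch of what the host A connector looks like; the connector name, topic, Elasticsearch URL and REST basic-auth credentials are placeholders, not the real values:

  curl -u <rest-user>:<rest-pwd> -X POST -H "Content-Type: application/json" \
       http://localhost:8083/connectors \
       --data '{
         "name": "elastic-sink-host-a",
         "config": {
           "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
           "tasks.max": "1",
           "topics": "<topic>",
           "key.ignore": "true",
           "connection.url": "https://<host-A>:9200"
         }
       }'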
I have modified the truststore used by the jobs to include the root CA certificate of host B. I am confident the truststore itself works, because I can use it to connect successfully to both A and B from a small Java snippet (SSLPoke.class, which can be found on an Atlassian support page).
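The check itself was essentially the following (host and port here are placeholders; SSLPoke simply opens a TLS connection using the JVM truststore settings and reports whether the handshake succeeds):

  java -Djavax.net.ssl.trustStore=/<path-to>/truststore.jks SSLPoke <host-A> 9200
  java -Djavax.net.ssl.trustStore=/<path-to>/truststore.jks SSLPoke <host-B> 9200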

Connectors that connect to host A still work with the newly updated truststore, but a connector that connects to host B does not.

I have scoured the internet for clues on how to solve this, and found suggestions to explicitly add the following to the connector configuration:

"elastic.https.ssl.truststore.location": "/<pathto>/truststore.jks",
"elastic.https.ssl.truststore.password": "<pwd>",
Some other pages suggest adding the truststore to the KC configuration via KAFKA_OPTS, like this:

  KAFKA_OPTS: -Djava.security.krb5.conf=/<path-to>/secrets/krb5.conf 
              -Djava.security.auth.login.config=/<path-to>/rest-basicauth-jaas.conf
              -Djavax.net.ssl.trustStore=/<path-to>/truststore.jks
Following these suggestions, I can actually get the connector that connects to host B to start successfully. But here comes the annoying part: adding that extra parameter to KAFKA_OPTS makes the old connectors that connect to A stop working, with the same error! So now I am in a situation where either the connectors to A work, or the connector to B works, but never both at the same time.

Please, I would be very grateful for any suggestions or ideas on how to solve this, as it is driving me crazy.

"elastic.https.ssl.truststore.location": "/<pathto>/truststore.jks",
"elastic.https.ssl.truststore.password": "<pwd>",
  KAFKA_OPTS: -Djava.security.krb5.conf=/<path-to>/secrets/krb5.conf 
              -Djava.security.auth.login.config=/<path-to>/rest-basicauth-jaas.conf
              -Djavax.net.ssl.trustStore=/<path-to>/truststore.jks