Kafka Connect unable to read from Kafka topics over SSL


I am running Kafka Connect in Docker Swarm, with the following compose file:

cp-kafka-connect-node:
    image: confluentinc/cp-kafka-connect:5.1.0
    ports:
      - 28085:28085
    secrets:
      - kafka.truststore.jks
      - source: kafka-connect-aws-credentials
        target: /root/.aws/credentials
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
      CONNECT_LOG4J_ROOT_LEVEL: TRACE
      CONNECT_REST_PORT: 28085
      CONNECT_GROUP_ID: cp-kafka-connect
      CONNECT_CONFIG_STORAGE_TOPIC: dev_cp-kafka-connect-config
      CONNECT_OFFSET_STORAGE_TOPIC: dev_cp-kafka-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: dev_cp-kafka-connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: 'false'
      CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: 'false'
      CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: localhost
      CONNECT_PLUGIN_PATH: /usr/share/java/
      CONNECT_SECURITY_PROTOCOL: SSL
      CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
      CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
      KAFKA_HEAP_OPTS: '-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2'
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.50'
          memory: 4gb
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 2000s

secrets:
  kafka.truststore.jks:
    external: true
  kafka-connect-aws-credentials:
    external: true
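
Both secrets are declared external, so they must already exist on the swarm before the stack is deployed; for reference, they could be created with something like this (the local file paths here are assumptions):

docker secret create kafka.truststore.jks ./kafka.truststore.jks
docker secret create kafka-connect-aws-credentials ./aws-credentials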
The Kafka Connect node starts up successfully, and I can set up tasks and view their status.
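
Those status checks go through the Connect worker's REST API; a sketch, assuming the worker is reachable on localhost via the REST port 28085 mapped above:

# list registered connectors
curl http://localhost:28085/connectors

# status of a specific connector and its tasks
curl http://localhost:28085/connectors/kafka-sink/status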

The connector I have set up is called kafka-sink, and I created it with the following config:

"config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "s3.region": "eu-central-1",
    "flush.size": "1",
    "schema.compatibility": "NONE",
    "tasks.max": "1",
    "topics": "input-topic-name",
    "s3.part.size": "5242880",
    "timezone": "UTC",
    "directory.delim": "/",
    "locale": "UK",
    "s3.compression.type": "gzip",
    "format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
    "partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
    "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
    "name": "kafka-sink",
    "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "s3.bucket.name": "my-s3-bucket",
    "rotate.schedule.interval.ms": "60000"
  }
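
For reference, a config object like the one above can be submitted with a PUT to the worker's REST API, which creates the connector or updates it if it already exists (the file name here is just an assumption):

curl -X PUT -H "Content-Type: application/json" \
  --data @kafka-sink-config.json \
  http://localhost:28085/connectors/kafka-sink/config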
The task now reports that it is running.

When I leave out the SSL configuration, specifically:

  CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
  CONNECT_SECURITY_PROTOCOL: SSL
  CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
  CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
and instead point it at an unsecured bootstrap server:

  CONNECT_BOOTSTRAP_SERVERS: insecurekafka:9092
it works fine: it reads from the expected input topic and writes to the S3 bucket using the default partitioner.

However, when I run it with the SSL configuration against my secured Kafka topics, it logs no errors, throws no exceptions, and simply does nothing, even though data is continuously being pushed to the input topic.
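
One way to rule out the SSL settings themselves is to consume from the topic directly with the console consumer, outside of Connect; a sketch, assuming a client-ssl.properties file containing the same truststore settings as the compose file:

# client-ssl.properties (assumed contents):
#   security.protocol=SSL
#   ssl.truststore.location=/run/secrets/kafka.truststore.jks
#   ssl.truststore.password=********

kafka-console-consumer --bootstrap-server kafka01:9093 \
  --topic input-topic-name \
  --consumer.config client-ssl.properties \
  --from-beginning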

Am I doing something wrong?

This is my first time using Kafka Connect; normally I connect to Kafka from a Spring Boot application, where you only need to specify the truststore location and password in the configuration.


Is there some configuration missing from my compose file or from the task config?

I think you need to add the SSL configuration for the consumer and producer as well. Check here. Something like this:

security.protocol=SSL
ssl.truststore.location=~/kafka.truststore.jks
ssl.truststore.password=<password>
ssl.keystore.location=~/kafka.client.keystore.jks
ssl.keystore.password=<password>
ssl.key.password=<password>

producer.security.protocol=SSL
producer.ssl.truststore.location=~/kafka.truststore.jks
producer.ssl.truststore.password=<password>
producer.ssl.keystore.location=~/kafka.client.keystore.jks
producer.ssl.keystore.password=<password>
producer.ssl.key.password=<password>
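
In the cp-kafka-connect image, client overrides like these are supplied as environment variables: the image turns each CONNECT_* variable into a worker property by stripping the CONNECT_ prefix, lowercasing, and replacing underscores with dots. So, for a truststore-only setup like the one in the compose file above (keystore entries are only needed when the brokers require client authentication), the overrides would look roughly like this:

  CONNECT_CONSUMER_SECURITY_PROTOCOL: SSL
  CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
  CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: ********
  CONNECT_PRODUCER_SECURITY_PROTOCOL: SSL
  CONNECT_PRODUCER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
  CONNECT_PRODUCER_SSL_TRUSTSTORE_PASSWORD: ********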


Brilliant, this solved my problem! I added:

CONNECT_PRODUCER_SECURITY_PROTOCOL: SSL
CONNECT_PRODUCER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_PRODUCER_SSL_TRUSTSTORE_PASSWORD: ********
CONNECT_CONSUMER_SECURITY_PROTOCOL: SSL
CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: ********

and now it works! :)