
Spring application not connecting to Kafka over SSL

Tags: spring, ssl, apache-kafka, kafka-producer-api, spring-kafka

I have a Spring Boot application with a very simple Kafka producer. If I connect to the Kafka cluster without encryption, everything works fine. But if I try to connect to the cluster over SSL, the connection times out. Is there additional configuration needed in the producer, or are there other properties I have to define, for Spring to apply all of the SSL configuration correctly?

I have set the following properties:

    spring.kafka.ssl.key-store-type=jks
    spring.kafka.ssl.trust-store-location=file:/home/ec2-user/truststore.jks
    spring.kafka.ssl.trust-store-password=test1234
    spring.kafka.ssl.key-store-location=file:/home/ec2-user/keystore.jks
    spring.kafka.ssl.key-store-password=test1234
    logging.level.org.apache.kafka=debug
    server.ssl.key-password=test1234
    spring.kafka.ssl.key-password=test1234
    spring.kafka.producer.client-id=sym
    spring.kafka.admin.ssl.protocol=ssl
When the application starts, it prints the following ProducerConfig values:

    o.a.k.clients.producer.ProducerConfig    : ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [broker1.kafka.allypoc.com:9093, broker3.kafka.allypoc.com:9093, broker4.kafka.allypoc.com:9093, broker5.kafka.allypoc.com:9093]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = sym
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 0
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = [hidden]
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = /home/ec2-user/keystore.jks
    ssl.keystore.password = [hidden]
    ssl.keystore.type = jks
    ssl.protocol = ssl
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = /home/ec2-user/truststore.jks
    ssl.truststore.password = [hidden]
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer
My producer is very simple:

    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class Producer {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public Producer(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        void sendMessage(String topic, String message) {
            this.kafkaTemplate.send(topic, message);
        }

        void sendMessage(String topic, String key, String message) {
            this.kafkaTemplate.send(topic, key, message);
        }
    }

security.protocol in the producer configuration should be set to SSL. You could also try setting ssl.endpoint.identification.algorithm="" to disable hostname verification of the certificate, in case that is the problem. Beyond that, it would be useful to see the Kafka broker configuration.
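As a sketch of that suggestion (the paths and passwords are taken from the question's configuration; whether hostname verification is actually the culprit is an assumption), the properties the Kafka client ultimately needs to see are:

    security.protocol=SSL
    ssl.truststore.location=/home/ec2-user/truststore.jks
    ssl.truststore.password=test1234
    ssl.keystore.location=/home/ec2-user/keystore.jks
    ssl.keystore.password=test1234
    ssl.key.password=test1234
    # Only if hostname verification is the problem (an empty value disables it):
    ssl.endpoint.identification.algorithm=

Note that in the ProducerConfig dump above, security.protocol is still PLAINTEXT even though the keystore and truststore were picked up, which matches the timeout behavior against an SSL listener.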

Yes, the security protocol was it. It had to be set via
spring.kafka.properties.security.protocol
rather than
spring.kafka.ssl.protocol
. Thanks for the help!
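For reference, a minimal sketch of the fix described in that comment: Spring Boot passes anything under spring.kafka.properties.* straight through to the Kafka client, whereas spring.kafka.ssl.protocol maps to the client's ssl.protocol setting, not to security.protocol.

    # Maps to the client's ssl.protocol, leaving security.protocol at PLAINTEXT:
    # spring.kafka.ssl.protocol=ssl

    # Passed through verbatim, so the client sees security.protocol=SSL:
    spring.kafka.properties.security.protocol=SSL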
The debug log from the failed connection attempt:

    2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Completed connection to node -4. Fetching API versions.
    2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Initiating API versions fetch from node -4.
    2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Initialize connection to node 10.25.77.13:9093 (id: -3 rack: null) for sending metadata request
    2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Initiating connection to node 10.25.77.13:9093 (id: -3 rack: null) using address /10.25.77.13
    2019-05-29 20:10:25.994 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--3.bytes-sent
    2019-05-29 20:10:25.996 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--3.bytes-received
    2019-05-29 20:10:25.997 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--3.latency
    2019-05-29 20:10:25.998 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector   : [Producer clientId=sym] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -3
    2019-05-29 20:10:26.107 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector   : [Producer clientId=sym] Connection with /10.25.75.151 disconnected

    java.io.EOFException: null
        at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.common.network.Selector.poll(Selector.java:467) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311) ~[kafka-clients-2.1.1.jar!/:na]
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.1.1.jar!/:na]
        at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]

    2019-05-29 20:10:26.108 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Node -1 disconnected.
    2019-05-29 20:10:26.110 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient   : [Producer clientId=sym] Completed connection to node -3. Fetching API versions.