
Java: Apache Kafka with the Strimzi operator on OpenShift - cannot connect


I have set up Kafka on OpenShift with the Strimzi operator by following this tutorial step by step:

But instead of the sample application, I wrote my own very simple Kafka producer. Here is the code:

import java.util.Date;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/kafka")
public class KafkaController {

    @GetMapping
    public void ok(){
        final Properties props = new Properties();
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap-kafka-test.ocapp-pg.domain.com:443");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.location", "src/main/resources/keystore.jks");
        props.put("ssl.keystore.password", "password");
        props.put("ssl.truststore.location", "src/main/resources/keystore.jks");
        props.put("ssl.truststore.password", "password");

        try (final Producer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                final String date = new Date().toString();
                System.out.println("Sending message: " + date);
                producer.send(new ProducerRecord<>("tag-topic", "date", date));
                Thread.sleep(2000);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

It seems like it could be a truststore issue, maybe? But I downloaded the CA cert and imported it into the truststore just like in the blog post. I even tried copying the certificate over manually. Still the same. What am I doing wrong here?
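For reference, the usual way to obtain the Strimzi cluster CA is to extract `ca.crt` from the cluster CA secret and import it into the truststore. A sketch, assuming the cluster is named `my-cluster` and lives in namespace `kafka-test` (inferred from the hostnames in the question; adjust to your setup):

```shell
# Extract the cluster CA certificate that Strimzi generates for the cluster.
oc extract secret/my-cluster-cluster-ca-cert -n kafka-test --keys=ca.crt --to=- > ca.crt

# Import it into the truststore the producer is configured with.
keytool -importcert -trustcacerts -noprompt \
    -alias strimzi-cluster-ca \
    -file ca.crt \
    -keystore truststore.jks \
    -storepass password
```

These commands require access to the running cluster, so they are shown here only as a sketch of the documented workflow.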

I got the same error when my service was misconfigured and wasn't selecting any pods. Check whether your service lists any pods.
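One quick way to perform that check from the CLI. A sketch, assuming Strimzi's usual naming for the cluster in the question (`my-cluster-kafka-bootstrap` service, `kafka-test` namespace, `strimzi.io/cluster` label); your names may differ:

```shell
# An empty ENDPOINTS column means the service is not selecting any pods.
oc get endpoints my-cluster-kafka-bootstrap -n kafka-test

# Cross-check against the broker pods the label selector should match.
oc get pods -n kafka-test -l strimzi.io/cluster=my-cluster
```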


The service lists all 3 pods, all running and receiving traffic.

Were you able to resolve this error? I'm facing the same problem now.
2019-05-16 19:55:13.960 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Initiating connection to node my-cluster-kafka-2-kafka-test.ocapp-pg.domain.com:443 (id: 2 rack: )
2019-05-16 19:55:14.037 DEBUG 21476 --- [ad | producer-1] o.apache.kafka.common.network.Selector   : [Producer clientId=producer-1] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2019-05-16 19:55:14.038 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Completed connection to node 2. Fetching API versions.
2019-05-16 19:55:14.111 DEBUG 21476 --- [ad | producer-1] o.apache.kafka.common.network.Selector   : [Producer clientId=producer-1] Connection with my-cluster-kafka-2-kafka-test.ocapp-pg.domain.com/52.215.40.40 disconnected

java.io.EOFException: EOF during handshake, handshake status is NEED_UNWRAP
    at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:489) ~[kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:337) ~[kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:264) ~[kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:125) ~[kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:489) [kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:427) [kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510) [kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239) [kafka-clients-2.0.1.jar:na]
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163) [kafka-clients-2.0.1.jar:na]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_201]

2019-05-16 19:55:14.112 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Node 2 disconnected.
2019-05-16 19:55:14.112  WARN 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Connection to node 2 terminated during authentication. This may indicate that authentication failed due to invalid credentials.
2019-05-16 19:55:14.112 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Give up sending metadata request since no node is available
2019-05-16 19:55:14.162 DEBUG 21476 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Give up sending metadata request since no node is available
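An `EOF during handshake` like the one above means the remote side closed the connection mid-TLS, which can point at the OpenShift route rather than the client keystore, for example if the route is not configured for TLS passthrough. One way to see what the route actually serves, using the broker hostname from the log:

```shell
# With a passthrough route this prints the broker's certificate chain;
# an immediate disconnect suggests the route, not the client keystore.
openssl s_client \
    -connect my-cluster-kafka-2-kafka-test.ocapp-pg.domain.com:443 \
    -servername my-cluster-kafka-2-kafka-test.ocapp-pg.domain.com </dev/null
```

This is a diagnostic sketch against the question's hostname; it naturally requires network access to that route.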