
Apache Kafka Spring Cloud Stream Kafka binder configuration


I am trying to develop a Spring Cloud application with Kafka.

The configuration used for Kafka is:

spring:
  application:
    name: service-sample

  cloud:
    stream:
      bindings:
        output: 
          destination: lrctms-cloud-dev
          content-type: application/json

      kafka:
        binder:
          brokers: 192.168.11.153
          defaultBrokerPort: 9092
          zkNodes: 192.168.11.153
When I run the application, I can see that these configurations are picked up:

o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values:
      bootstrap.servers = [192.168.11.153:9092]
      client.id =
      connections.max.idle.ms = 300000
      metadata.max.age.ms = 300000
      metric.reporters = []
      metrics.num.samples = 2
      metrics.recording.level = INFO
      metrics.sample.window.ms = 30000
      receive.buffer.bytes = 65536
      reconnect.backoff.max.ms = 1000
      reconnect.backoff.ms = 50
      request.timeout.ms = 120000
      retries = 5
      retry.backoff.ms = 100
      sasl.jaas.config = null
      sasl.kerberos.kinit.cmd = /usr/bin/kinit
      sasl.kerberos.min.time.before.relogin = 60000
      sasl.kerberos.service.name = null
      sasl.kerberos.ticket.renew.jitter = 0.05
      sasl.kerberos.ticket.renew.window.factor = 0.8
      sasl.mechanism = GSSAPI
      security.protocol = PLAINTEXT
      send.buffer.bytes = 131072
      ssl.cipher.suites = null
      ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
      ssl.endpoint.identification.algorithm = null
      ssl.key.password = null
      ssl.keymanager.algorithm = SunX509
      ssl.keystore.location = null
      ssl.keystore.password = null
      ssl.keystore.type = JKS
      ssl.protocol = TLS
      ssl.provider = null
      ssl.secure.random.implementation = null
      ssl.trustmanager.algorithm = PKIX
      ssl.truststore.location = null
      ssl.truststore.password = null
      ssl.truststore.type = JKS
The problem is the following error message:

adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Connection with 127.0.0.1 disconnected
 java.net.ConnectException: Connection refused
         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
         at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
         at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:106)
         at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:458)
         at org.apache.kafka.common.network.Selector.poll(Selector.java:412)
         at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
         at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1006)
         at java.lang.Thread.run(Thread.java:748)
How do I configure this AdminClient and pass it the correct host/IP information? I checked the existing answers but could not find one that applies.

Moving my comment to an answer:


So, according to the logs, the correct configuration is set. However, that is only the initial connection to the broker. The Kafka controller then sends back to your client the advertised.host.name / advertised.listeners list for every broker in the cluster, and in most cases this needs to be configured with the broker's external address, resolvable by external clients. If in your case it is 127.0.0.1, or anything else unreachable, this is the property you need to check.
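As a concrete illustration, a minimal sketch of the broker-side fix (assuming a single broker reachable at 192.168.11.153, as in the question's configuration) would be to set the advertised listener in the broker's server.properties to the externally resolvable address instead of localhost:

```properties
# server.properties on the broker (assumed single-broker setup)

# The interface/port the broker binds to:
listeners=PLAINTEXT://0.0.0.0:9092

# The address the broker advertises back to clients; it must be
# resolvable and reachable from the client machine, not 127.0.0.1:
advertised.listeners=PLAINTEXT://192.168.11.153:9092
```

On older broker versions the legacy advertised.host.name / advertised.port properties serve the same purpose. The broker must be restarted after changing either.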

AdminClientConfig values: bootstrap.servers = [192.168.11.153:9092] ... That looks correct to me. Maybe your actual Kafka advertised-listeners configuration is wrong and is returning 127.0.0.1. @cricket_007 You are right, Kafka's advertised.host.name was set to 127.0.0.1. Thanks!