Java can't connect to Spotify Kafka container, basic connection issue

Tags: java, docker, groovy, apache-kafka, spotify

Stumbling through the basics of Docker and Kafka, and I can't get a client connection.

What I've done so far:

1) Installed Docker for Windows on Windows 10.
2) Opened Kitematic, searched for Kafka, and selected the spotify/kafka image (the wurstmeister image failed to start).
3) The container starts, and I can see the running image in the container logs.
4) The IP and Ports tab reports the Docker port as 9092 and the access port as localhost:32768.

docker ps shows this:

    7bf9f9278e64   spotify/kafka:latest   "supervisord -n"   2 hours ago   Up 57 minutes   0.0.0.0:32769->2181/tcp, 0.0.0.0:32768->9092/tcp   kafka

docker-machine active returns no active host.
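
That mapping means the broker's internal port 9092 should be reachable from the desktop at 192.168.1.89:32768 (or localhost:32768). A quick way to sanity-check the raw TCP mapping, independent of Kafka entirely (a hypothetical test snippet, not part of the original setup):

    // Plain TCP connect to the host port Docker mapped onto the broker's 9092.
    // If this fails, the Docker port mapping is the problem, not the Kafka client.
    new Socket("192.168.1.89", 32768).withCloseable {
        println "TCP connect to mapped broker port OK"
    }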

My Groovy class (more or less cut-and-pasted from an example) sets up the connection like this:

import org.apache.kafka.clients.producer.Producer

class KafkaProducer {

    String topicName = "wills topic"
    Producer<String, String> producer

    def init() {
        Properties props = new Properties()
        props.put("bootstrap.servers", "192.168.1.89:32768")    // local host IP and the externally mapped port (9092 inside the container)
        props.put("acks", "all")                                // acknowledgements required for producer requests
        props.put("retries", 0)                                 // if a request fails, the producer can automatically retry
        props.put("batch.size", 16384)                          // per-partition batch buffer size
        props.put("linger.ms", 1)                               // wait up to 1 ms so sends can be batched
        props.put("buffer.memory", 33554432)                    // total memory available to the producer for buffering
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

        producer = new org.apache.kafka.clients.producer.KafkaProducer<String, String>(props)
    }
    ....
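
For context, a minimal send through this producer would look something like the sketch below (assumed usage, not part of the original class). Note the hypothetical topic name uses a dash instead of a space: Kafka topic names may only contain alphanumerics, '.', '_' and '-', which matters given the INVALID_TOPIC_EXCEPTION visible in the log further down.

    // Minimal usage sketch: send one record and block until it is acked,
    // so any connection problem surfaces as an exception right here.
    // "wills-topic" is a hypothetical legal topic name (no spaces allowed).
    def record = new org.apache.kafka.clients.producer.ProducerRecord<String, String>("wills-topic", "key", "hello")
    producer.send(record).get()
    producer.close()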
When I run this init, I get an error saying it can't resolve the connection, e.g. java.io.IOException: Can't resolve address: 7bf9f9278e64:9092, which is the internal container address. Reading the log below, the bootstrap connection to 192.168.1.89:32768 actually completes; it is the follow-up connection to the address the broker advertises in its metadata (7bf9f9278e64:9092) that fails. (My script is being invoked from a normal desktop IDE environment.)

Kitematic says that is the mapping, so why can't I connect and then send? Also, since I just pulled the image through Kitematic, where would a docker-compose.yml go if you want to change the configuration (see the sketch after the log below)? The full producer debug log:

18:05:41.022 [main] INFO  o.a.k.c.p.ProducerConfig:[.logAll:] > ProducerConfig values: 
    acks = all
    batch.size = 16384
    block.on.buffer.full = false
    bootstrap.servers = [192.168.1.89:32768]
    buffer.memory = 33554432
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 1
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

18:05:41.076 [main] INFO  o.a.k.c.p.ProducerConfig:[.logAll:] > ProducerConfig values: 
    acks = all
    batch.size = 16384
    block.on.buffer.full = false
    bootstrap.servers = [192.168.1.89:32768]
    buffer.memory = 33554432
    client.id = producer-1
    compression.type = none
    connections.max.idle.ms = 540000
    interceptor.classes = null
    key.serializer = class org.apache.kafka.common.serialization.StringSerializer
    linger.ms = 1
    max.block.ms = 60000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.fetch.timeout.ms = 60000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 0
    retry.backoff.ms = 100
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    timeout.ms = 30000
    value.serializer = class org.apache.kafka.common.serialization.StringSerializer

18:05:41.079 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name bufferpool-wait-time
18:05:41.083 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name buffer-exhausted-records
18:05:41.085 [main] DEBUG o.a.k.c.Metadata:[.update:] > Updated cluster metadata version 1 to Cluster(id = null, nodes = [192.168.1.89:32768 (id: -1 rack: null)], partitions = [])
18:05:41.401 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name connections-closed:
18:05:41.401 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name connections-created:
18:05:41.402 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name bytes-sent-received:
18:05:41.402 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name bytes-sent:
18:05:41.406 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name bytes-received:
18:05:41.406 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name select-time:
18:05:41.407 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name io-time:
18:05:41.409 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name batch-size
18:05:41.410 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name compression-rate
18:05:41.410 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name queue-time
18:05:41.410 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name request-time
18:05:41.410 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name produce-throttle-time
18:05:41.411 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name records-per-request
18:05:41.412 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name record-retries
18:05:41.412 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name errors
18:05:41.412 [main] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name record-size-max
18:05:41.414 [main] WARN  o.a.k.c.p.ProducerConfig:[.logUnused:] > The configuration 'key.deserializer' was supplied but isn't a known config.
18:05:41.414 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.p.i.Sender:[.run:] > Starting Kafka producer I/O thread.
18:05:41.414 [main] WARN  o.a.k.c.p.ProducerConfig:[.logUnused:] > The configuration 'value.deserializer' was supplied but isn't a known config.
18:05:41.416 [main] INFO  o.a.k.c.u.AppInfoParser:[.<init>:] > Kafka version : 0.10.1.1
18:05:41.416 [main] INFO  o.a.k.c.u.AppInfoParser:[.<init>:] > Kafka commitId : f10ef2720b03b247
18:05:41.417 [main] DEBUG o.a.k.c.p.KafkaProducer:[.<init>:] > Kafka producer started
18:05:41.430 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.maybeUpdate:] > Initialize connection to node -1 for sending metadata request
18:05:41.430 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.initiateConnect:] > Initiating connection to node -1 at 192.168.1.89:32768.
18:05:41.434 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name node--1.bytes-sent
18:05:41.434 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name node--1.bytes-received
18:05:41.435 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.m.Metrics:[.sensor:] > Added sensor with name node--1.latency
18:05:41.435 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.n.Selector:[.pollSelectionKeys:] > Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
18:05:41.436 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.handleConnections:] > Completed connection to node -1
18:05:41.452 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.maybeUpdate:] > Sending metadata request {topics=[wills topic]} to node -1
18:05:41.476 [kafka-producer-network-thread | producer-1] WARN  o.a.k.c.NetworkClient:[.handleResponse:] > Error while fetching metadata with correlation id 0 : {wills topic=INVALID_TOPIC_EXCEPTION}
18:05:41.477 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.Metadata:[.update:] > Updated cluster metadata version 2 to Cluster(id = 8cjV2Ga6RB6bXfeDWWfTKA, nodes = [7bf9f9278e64:9092 (id: 0 rack: null)], partitions = [])
18:05:41.570 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.maybeUpdate:] > Initialize connection to node 0 for sending metadata request
18:05:41.570 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.initiateConnect:] > Initiating connection to node 0 at 7bf9f9278e64:9092.
18:05:43.826 [kafka-producer-network-thread | producer-1] DEBUG o.a.k.c.NetworkClient:[.initiateConnect:] > Error connecting to node 0 at 7bf9f9278e64:9092:
java.io.IOException: Can't resolve address: 7bf9f9278e64:9092
    at org.apache.kafka.common.network.Selector.connect(Selector.java:180)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:498)
    at org.apache.kafka.clients.NetworkClient.access$400(NetworkClient.java:48)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:645)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:552)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:258)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:236)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:135)
    at java.lang.Thread.run(Thread.java:745)
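
For reference, the spotify/kafka image reads ADVERTISED_HOST and ADVERTISED_PORT environment variables to control the address the broker advertises in its metadata, which is exactly the address the client fails to resolve above. A hedged docker-compose.yml sketch (the file can live in any directory you run docker-compose up from; the host IP is the one from the question and would need to match your machine):

    version: '2'
    services:
      kafka:
        image: spotify/kafka
        ports:
          - "2181:2181"
          - "9092:9092"
        environment:
          ADVERTISED_HOST: 192.168.1.89   # address the broker hands back to clients
          ADVERTISED_PORT: 9092

With the broker advertising a host-resolvable address and the host port pinned to 9092, the client's second connection would target 192.168.1.89:9092 instead of the unresolvable container hostname.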