
How to set up a Kafka cluster using the Confluent Docker images


I'm trying to set up a 3-node Kafka cluster using the Confluent Docker images.

But when I form the cluster, I only see 1 or 2 brokers instead of all 3:

docker run -it --net host confluentinc/cp-kafkacat kafkacat -b localhost:9092 -L

Do I need to set some property when forming the cluster? I couldn't find this case mentioned in the Confluent documentation.

Logs (the docker run commands I used and the full broker log are included at the end of this post):


As you can see, bootstrap.servers is automatically populated with ip1 and ip2, but ip3 is missing.
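One quick way to check how many brokers actually registered (a sketch, assuming the kafka-1 container name and the ip1 placeholder from the commands at the end of this post) is to look at the broker IDs recorded in ZooKeeper:

# Open a ZooKeeper shell from one of the broker containers (the cp-kafka image ships the Kafka CLI tools)
docker exec -it kafka-1 zookeeper-shell ip1:2181
# ...then, at the prompt, list the registered broker IDs; a healthy 3-node cluster shows [1, 2, 3]
ls /brokers/ids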

So, first of all, this already exists in a Compose file.

However, I believe it assumes your host is Linux because, as expected, network: host only works properly on Linux.

In any case, some notes (see the sketch after this list):

  • ip1, for example, doesn't exist... it would have to be the hostname of that container
  • Start simple. Run one broker; you can run multiple brokers against a single Zookeeper
  • You can't map the same port twice on the same host... see each use of -p 2181:2181 -p 2888:2888 -p 3888:3888 (2888 and 3888 don't really need to be exposed to the host), and similarly -p 9092:9092
  • --net host shouldn't be needed for kafkacat if you add all the containers to the same --network (like Docker Compose does)
  • Starting multiple brokers on a single host doesn't add much "benefit"
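As a rough illustration of those notes, here is a minimal single-host sketch. It assumes one ZooKeeper node and one broker, with container and network names of my own choosing (zookeeper, kafka, kafka-net), and that advertising the broker's container hostname is acceptable because every client runs on the same Docker network:

# A user-defined network lets containers resolve each other by container name
docker network create kafka-net

# Single ZooKeeper node
docker run -d --name zookeeper --network kafka-net \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  confluentinc/cp-zookeeper:5.2.1-1

# Single broker; the advertised listener is the container hostname "kafka", not an ip1-style placeholder
docker run -d --name kafka --network kafka-net -p 9092:9092 \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
  -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
  confluentinc/cp-kafka:5.2.1-1

# kafkacat joins the same network instead of using --net host
docker run -it --network kafka-net confluentinc/cp-kafkacat kafkacat -b kafka:9092 -L

Once that works, more brokers can be added against the same ZooKeeper before trying to spread them across machines.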

Comments:

If you use docker -p 9092:9092 for multiple containers, it won't work because you're trying to map the internal port of several containers onto the same host port. Also, I'd suggest using docker-compose to create the cluster…

I'm trying the Kafka cluster on three different EC2 machines, not with docker-compose on a single machine.

OK. Have you tried getting it working without Docker first? After that, just set up the correct network rules with the port mappings from the containers, and don't forget persistent volumes.
docker run -d --restart always --name zk-1 -e zk_id=1 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name zk-2 -e zk_id=2 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name zk-3 -e zk_id=3 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name kafka-1 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip1:9092 -p 9092:9092 confluentinc/cp-kafka:5.2.1-1
docker run -d --restart always --name kafka-2 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -p 9092:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip2:9092 confluentinc/cp-kafka:5.2.1-1
docker run -d --restart always --name kafka-3 -e KAFKA_BROKER_ID=3 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -p 9092:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip3:9092 confluentinc/cp-kafka:5.2.1-1
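Since (per the comments) each broker runs on its own EC2 machine, a variant of the broker command above might look like the sketch below. It only adds the persistent data volume the comments mention and a kafkacat metadata check per broker; ip1/ip2/ip3 remain placeholders and must be addresses the other machines and your clients can actually resolve (e.g. the EC2 private DNS names), and the host path for the volume is just an example:

# Broker on machine 1 (machines 2 and 3 are analogous, with their own broker id and address)
docker run -d --restart always --name kafka-1 \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip1:9092 \
  -v /var/lib/kafka-data:/var/lib/kafka/data \
  -p 9092:9092 \
  confluentinc/cp-kafka:5.2.1-1

# Check what each broker reports; all three should return the same 3-broker metadata
docker run -it --net host confluentinc/cp-kafkacat kafkacat -b ip1:9092 -L
docker run -it --net host confluentinc/cp-kafkacat kafkacat -b ip2:9092 -L
docker run -it --net host confluentinc/cp-kafkacat kafkacat -b ip3:9092 -L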
[2019-06-06 06:12:06,788] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,793] INFO [ExpirationReaper-1-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,801] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-06-06 06:12:06,850] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,868] INFO Stat of the created znode at /brokers/ids/1 is: 565,565,1559801526862,1559801526862,1,0,0,72057595854848009,196,0,565
 (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,870] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(ip1,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 565 (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,871] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-06-06 06:12:06,934] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2019-06-06 06:12:06,943] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,949] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,958] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,960] DEBUG [Controller id=1] Broker 2 has been elected as the controller, so stopping the election process. (kafka.controller.KafkaController)
[2019-06-06 06:12:06,966] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2019-06-06 06:12:06,966] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2019-06-06 06:12:06,970] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-06-06 06:12:06,983] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:9000,blockEndProducerId:9999) by writing to Zk with path version 10 (kafka.coordinator.transaction.ProducerIdManager)
[2019-06-06 06:12:07,004] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-06-06 06:12:07,005] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-06-06 06:12:07,007] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-06-06 06:12:07,031] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-06-06 06:12:07,044] INFO [SocketServer brokerId=1] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2019-06-06 06:12:07,048] INFO Kafka version: 2.2.0-cp2 (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,068] INFO Kafka commitId: 00d486623990ed9d (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,072] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2019-06-06 06:12:07,078] INFO Waiting until monitored service is ready for metrics collection (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,082] INFO Monitored service is now ready (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,082] INFO Attempting to collect and submit metrics (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,108] TRACE [Broker id=1] Cached leader info PartitionState(controllerEpoch=5, leader=2, leaderEpoch=1, isr=[2], zkVersion=1, replicas=[2], offlineReplicas=[]) for partition __confluent.support.metrics-0 in response to UpdateMetadata request sent by controller 2 epoch 6 with correlation id 0 (state.change.logger)
[2019-06-06 06:12:07,322] WARN The replication factor of topic __confluent.support.metrics is 1, which is less than the desired replication factor of 3.  If you happen to add more brokers to this cluster, then it is important to increase the replication factor of the topic to eventually 3 to ensure reliable and durable metrics collection. (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2019-06-06 06:12:07,334] INFO ProducerConfig values: 
    acks = 1
    batch.size = 16384
    bootstrap.servers = [PLAINTEXT://ip1:9092, PLAINTEXT://ip2:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 10000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2019-06-06 06:12:07,366] INFO Kafka version: 2.2.0-cp2 (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,366] INFO Kafka commitId: 00d486623990ed9d (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,399] INFO Cluster ID: 0DmmTlnXQMGD52urD7rxuA (org.apache.kafka.clients.Metadata)
[2019-06-06 06:12:07,449] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-06-06 06:12:07,459] INFO Successfully submitted metrics to Kafka topic __confluent.support.metrics (io.confluent.support.metrics.submitters.KafkaSubmitter)
[2019-06-06 06:12:08,470] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter)