What is the correct usage of a Confluent Kafka client in a docker compose stack on a cloud CI server such as GitLab or Travis?

Tags: docker-compose, gitlab-ci, confluent-platform, confluent-kafka-dotnet

I am new to Kafka and have managed to get a docker compose stack working locally, so that functional tests run successfully from an ASP.NET Core 3.1 test service. The test service runs in the same docker compose stack, on the same network as the Kafka, Zookeeper and REST proxy services.

On startup, the SUT and the tests create the topic if it does not already exist.

When I try to run the same docker compose stack on the remote GitLab.com CI server, the tests hang while creating the topic. The logs (see below) show the .NET client connecting to the correct internal service in the docker compose stack, kafka:19092. There is some activity in the Kafka service as topic creation begins, and then it blocks. I would expect to see a message in the logs confirming that the topic was created.
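For reference, a minimal connectivity check along the following lines (a sketch only, reusing the IAdminClient and logger from the admin service shown below; the helper name is mine) would log the brokers the client can actually reach before attempting to create the topic:

        /// <summary>Hypothetical diagnostic helper, not part of the service below:
        /// fetch cluster metadata with a bounded timeout and log the reachable
        /// brokers instead of waiting silently.</summary>
        private void LogClusterMetadata(IAdminClient client)
        {
            // GetMetadata blocks for at most the given timeout and throws a
            // KafkaException if no broker metadata can be obtained.
            var metadata = client.GetMetadata(TimeSpan.FromSeconds(10));

            foreach (var broker in metadata.Brokers)
            {
                _Logger.LogInformation($"Reachable broker {broker.BrokerId} at {broker.Host}:{broker.Port}");
            }
        }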

.NET client code that creates the Kafka topic

        /// <summary>Dispatch request to Kafka Broker to create Kafka topic from config</summary>
        /// <param name="client">Kafka admin client</param>
        /// <exception cref="CreateTopicsException">Thrown for errors except topic already exists</exception>
        private async Task CreateTopicAsync(IAdminClient client)
        {
            try
            {
                _Logger.LogInformation("Admin service trying to create Kafka Topic...");
                _Logger.LogInformation($"Topic::{_Config.Topic.Name}, ReplicationCount::{_Config.Topic.ReplicationCount}, PartitionCount::{_Config.Topic.PartitionCount}");
                _Logger.LogInformation($"Bootstrap Servers::{_Config.Consumer.BootstrapServers}");

                await client.CreateTopicsAsync(new TopicSpecification[] {
                        new TopicSpecification {
                            Name = _Config.Topic.Name,
                            NumPartitions = _Config.Topic.PartitionCount,
                            ReplicationFactor = _Config.Topic.ReplicationCount
                        }
                    }, null);

                _Logger.LogInformation($"Admin service successfully created topic {_Config.Topic.Name}");
            }
            catch (CreateTopicsException e)
            {
                if (e.Results[0].Error.Code != ErrorCode.TopicAlreadyExists)
                {
                    _Logger.LogInformation($"An error occured creating topic {_Config.Topic.Name}: {e.Results[0].Error.Reason}");
                    throw e;
                }
                else
                {
                    _Logger.LogInformation($"Topic {_Config.Topic.Name} already exists");
                }
            }
        }
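Note that the options argument to CreateTopicsAsync above is null, so the admin client uses its default timeouts. As a sketch (not what the service currently does), explicit timeouts could be supplied so that on CI the call surfaces an error rather than appearing to hang:

                // Sketch: pass CreateTopicsOptions instead of null so a request that
                // never completes fails with an exception after ~30s rather than blocking.
                await client.CreateTopicsAsync(
                    new TopicSpecification[] {
                        new TopicSpecification {
                            Name = _Config.Topic.Name,
                            NumPartitions = _Config.Topic.PartitionCount,
                            ReplicationFactor = _Config.Topic.ReplicationCount
                        }
                    },
                    new CreateTopicsOptions
                    {
                        RequestTimeout = TimeSpan.FromSeconds(30),  // overall request timeout
                        OperationTimeout = TimeSpan.FromSeconds(30) // broker-side completion timeout
                    });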
Docker compose stack with Zookeeper, Kafka, the REST proxy and the ASP.NET Core 3.1 integration test service

---
version: "3.8"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    networks:
      camnet:
        ipv4_address: 172.19.0.11
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:6.0.0
    hostname: kafka
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "9101:9101"
    networks:
      camnet:
        ipv4_address: 172.19.0.21
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_BROKER_ID: 1
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_NUM_PARTITIONS: 3
  rest-proxy:
    image: confluentinc/cp-kafka-rest:6.0.0
    hostname: rest-proxy
    container_name: rest-proxy
    depends_on:
      - kafka
    ports:
      - 8082:8082
    networks:
      camnet:
        ipv4_address: 172.19.0.31
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: "kafka:19092"
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
  # ASP.NET Core 3 integration tests
  # uses the confluent-kafka-dotnet client https://docs.confluent.io/current/clients/dotnet.html
  # to create a topic from the ASP.NET Core test server
  webapp:
    build:
      context: /
      dockerfile: Docker/Test/dockerfile
      target: test
    hostname: webapp
    image: dcs3spp/webapp
    container_name: webapp
    depends_on:
      - kafka
      - rest-proxy
    networks:
      camnet:
        ipv4_address: 172.19.0.61
    environment:
      - ASPNETCORE_ENVIRONMENT=Docker
networks:
  camnet:
    external:
      name: broker
GitLab.com CI Docker network environment

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
2b3286d21fee        bridge              bridge              local
a17bf57d1a86        host                host                local
0252525b2ca4        none                null                local
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "2b3286d21fee076047e78188b67c2912dfd388a170de3e3cf2ba8d5238e1c6c7",
        "Created": "2020-11-16T14:53:35.574299006Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
$ docker network inspect host
[
    {
        "Name": "host",
        "Id": "a17bf57d1a865512bebd3f7f73e0fd761d40b1d4f87765edeac6099e86b94339",
        "Created": "2020-11-16T14:53:35.551372286Z",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
$ docker network inspect none
[
    {
        "Name": "none",
        "Id": "0252525b2ca4b28ddc0f950b472485167cfe18e003c62f3d09ce2a856880362a",
        "Created": "2020-11-16T14:53:35.536741983Z",
        "Scope": "local",
        "Driver": "null",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
$ docker network create --gateway 172.19.0.1 --subnet 172.19.0.0/16 broker
dbd923b4caacca225f52e8a82dfcad184a1652bde1b5976aa07bbddb2919126c
GitLab.com CI server logs

webapp      | A total of 1 test files matched the specified pattern.
webapp      | warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
webapp      |       Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
webapp      | info: webapp.S3.S3Service[0]
webapp      |       Minio client created for endpoint Minio:9000
webapp      | info: webapp.Kafka.ProducerService[0]
webapp      |       ProducerService constructor called
webapp      | info: webapp.Kafka.SchemaRegistry.Serdes.JsonDeserializer[0]
webapp      |       Constructed
webapp      | info: webapp.Kafka.ConsumerService[0]
webapp      |       Kafka consumer listening to camera topics =>
webapp      | info: webapp.Kafka.ConsumerService[0]
webapp      |       Camera topic::shinobi/RHSsYfiV6Z/xi5ncrnk6/trigger
webapp      | info: webapp.Kafka.ConsumerService[0]
webapp      |       Camera topic::shinobi/group/monitor/trigger
webapp      | warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
webapp      |       No XML encryptor configured. Key {47af6978-c38e-429f-9b34-455ca445c2d8} may be persisted to storage in unencrypted form.
webapp      | info: webapp.Kafka.Admin.KafkaAdminService[0]
webapp      |       Admin service trying to create Kafka Topic...
webapp      | info: webapp.Kafka.Admin.KafkaAdminService[0]
webapp      |       Topic::eventbus, ReplicationCount::1, PartitionCount::3
webapp      | info: webapp.Kafka.Admin.KafkaAdminService[0]
webapp      |       Bootstrap Servers::kafka:19092
kafka       | [2020-11-16 14:59:32,335] INFO Creating topic eventbus with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
kafka       | [2020-11-16 14:59:32,543] INFO [Controller id=1] New topics: [Set(eventbus)], deleted topics: [HashSet()], new partition replica assignment [Map(eventbus-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), eventbus-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), eventbus-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))] (kafka.controller.KafkaController)
kafka       | [2020-11-16 14:59:32,546] INFO [Controller id=1] New partition creation callback for eventbus-0,eventbus-1,eventbus-2 (kafka.controller.KafkaController)
kafka       | [2020-11-16 14:59:32,557] INFO [Controller id=1 epoch=1] Changed partition eventbus-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
KafkaAdminService
using System;
using System.Threading;
using System.Threading.Tasks;

using Confluent.Kafka;
using Confluent.Kafka.Admin;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

using KafkaAdmin.Kafka.Config;


namespace KafkaAdmin.Kafka
{
    public delegate IAdminClient KafkaAdminFactory(KafkaConfig config);

    /// <summary>Background Service to make a request from Kafka to create a topic</summary>
    public class KafkaAdminService : BackgroundService, IDisposable
    {
        private KafkaAdminFactory _Factory { get; set; }
        private ILogger<KafkaAdminService> _Logger { get; set; }
        private KafkaConfig _Config { get; set; }


        /// <summary>
        /// Retrieve KafkaConfig from appsettings
        /// </summary>
        /// <param name="config">Config POCO from appsettings file</param>
        /// <param name="clientFactory"><see cref="KafkaAdminFactory"/></param>
        /// <param name="logger">Logger instance</param>
        public KafkaAdminService(
            IOptions<KafkaConfig> config,
            KafkaAdminFactory clientFactory,
            ILogger<KafkaAdminService> logger)
        {
            if (clientFactory == null)
                throw new ArgumentNullException(nameof(clientFactory));

            if (config == null)
                throw new ArgumentNullException(nameof(config));

            _Config = config.Value ?? throw new ArgumentNullException(nameof(config));
            _Factory = clientFactory ?? throw new ArgumentNullException(nameof(clientFactory));
            _Logger = logger ?? throw new ArgumentNullException(nameof(logger));
        }


        /// <summary>
        /// Create a Kafka topic if it does not already exist
        /// </summary>
        /// <param name="stoppingToken">Cancellation token required by IHostedService</param>
        /// <exception cref="CreateTopicsException">
        /// Thrown for exceptions encountered except duplicate topic
        /// </exception>
        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            using (var client = _Factory(_Config))
            {
                try
                {
                    _Logger.LogInformation("Admin service trying to create Kafka Topic...");
                    _Logger.LogInformation($"Topic::{_Config.Topic.Name}, ReplicationCount::{_Config.Topic.ReplicationCount}, PartitionCount::{_Config.Topic.PartitionCount}");
                    _Logger.LogInformation($"Bootstrap Servers::{_Config.Consumer.BootstrapServers}");

                    await client.CreateTopicsAsync(new TopicSpecification[] {
                        new TopicSpecification {
                            Name = _Config.Topic.Name,
                            NumPartitions = _Config.Topic.PartitionCount,
                            ReplicationFactor = _Config.Topic.ReplicationCount
                        }
                    }, null);

                    _Logger.LogInformation($"Admin service successfully created topic {_Config.Topic.Name}");
                }
                catch (CreateTopicsException e)
                {
                    if (e.Results[0].Error.Code != ErrorCode.TopicAlreadyExists)
                    {
                        _Logger.LogInformation($"An error occured creating topic {_Config.Topic.Name}: {e.Results[0].Error.Reason}");
                        throw e;
                    }
                    else
                    {
                        _Logger.LogInformation($"Topic {_Config.Topic.Name} already exists");
                    }
                }
            }

            _Logger.LogInformation("Kafka Consumer thread started");

            await Task.CompletedTask;
        }


        /// <summary>
        /// Call base class dispose
        /// </summary>
        public override void Dispose()
        {
            base.Dispose();
        }
    }
}
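The Startup registration of the KafkaAdminFactory delegate is not reproduced above. A minimal sketch of how such a factory can be wired up with AdminClientBuilder (assuming a "Kafka" configuration section and the KafkaConfig property names implied by the logging in the service) looks like this:

using Confluent.Kafka;
using KafkaAdmin.Kafka;
using KafkaAdmin.Kafka.Config;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace KafkaAdmin
{
    /// <summary>Sketch only: one possible registration of the KafkaAdminFactory
    /// delegate and the hosted service. The "Kafka" section name and the
    /// KafkaConfig property names are assumptions inferred from the logging above.</summary>
    public static class KafkaServiceCollectionExtensions
    {
        public static IServiceCollection AddKafkaAdmin(
            this IServiceCollection services, IConfiguration configuration)
        {
            // Bind the KafkaConfig POCO consumed via IOptions<KafkaConfig>
            services.Configure<KafkaConfig>(configuration.GetSection("Kafka"));

            // The delegate builds a Confluent IAdminClient from the bound config,
            // e.g. BootstrapServers = "kafka:19092" in the Docker environment
            KafkaAdminFactory factory = config =>
                new AdminClientBuilder(new AdminClientConfig
                {
                    BootstrapServers = config.Consumer.BootstrapServers
                }).Build();

            services.AddSingleton(factory);
            services.AddHostedService<KafkaAdminService>();

            return services;
        }
    }
}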