Apache Kafka: topic configuration lost after docker-compose restart
I run Kafka with Docker and store the data on a volume. I set up some source connectors, and their topics are auto-created with cleanup.policy=delete. Using Kafka Manager, I changed the policy to compact.

Problem:
After a docker-compose stop/start, the topics reappear, but cleanup.policy reverts to delete.

Question:
How do I keep the topic configuration across restarts?

Additional info
I restart the Kafka containers with:
rm /kafka/data/1/meta.properties; docker-compose down && docker-compose up -d --no-recreate
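For reference, the per-topic override that Kafka Manager applies can also be set from the command line, which writes it to the same place in Zookeeper; a sketch, assuming the Kafka CLI tools are available inside the broker container and using a hypothetical topic name my_topic:

```shell
# Run inside the kafka container; the topic name and Zookeeper address are assumptions.
bin/kafka-configs.sh --zookeeper zookeeper:2181 \
  --entity-type topics --entity-name my_topic \
  --alter --add-config cleanup.policy=compact

# Verify the override was recorded:
bin/kafka-configs.sh --zookeeper zookeeper:2181 \
  --entity-type topics --entity-name my_topic --describe
```

If the override is written correctly but still disappears after a restart, that points at the Zookeeper data itself not being persisted rather than at how the override was set.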
Docker-compose.yml:
version: '2'
services:
  zookeeper:
    image: debezium/zookeeper:${DEBEZIUM_VERSION}
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    volumes:
      - /kafka/zookeeper_data:/zookeeper/data
      - /kafka/zookeeper_logs:/zookeeper/logs
      - /kafka/zookeeper_conf:/zookeeper/conf
  kafka:
    image: debezium/kafka:${DEBEZIUM_VERSION}
    ports:
      - 9092:9092
    links:
      - zookeeper
    environment:
      - ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_BROKER_ID=1
      - ADVERTISED_HOST_NAME=172.16.10.187
    volumes:
      - /kafka/data:/kafka/data
      - /kafka/config:/kafka/config
  schema-registry:
    image: confluentinc/cp-schema-registry
    ports:
      - 8181:8181
      - 8081:8081
    environment:
      - SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=zookeeper:2181
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_LISTENERS=http://schema-registry:8081
    links:
      - zookeeper
  connect3:
    build:
      context: debezium-jdbc-es
    ports:
      - 8083:8083
    links:
      - kafka
    environment:
      - BOOTSTRAP_SERVERS=kafka:9092
      - GROUP_ID=1
      - CONFIG_STORAGE_TOPIC=my_connect_configs
      - OFFSET_STORAGE_TOPIC=my_connect_offsets
      - STATUS_STORAGE_TOPIC=my_connect_statuses
      - KEY_CONVERTER=io.confluent.connect.avro.AvroConverter
      - VALUE_CONVERTER=io.confluent.connect.avro.AvroConverter
      - INTERNAL_KEY_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - INTERNAL_VALUE_CONVERTER=org.apache.kafka.connect.json.JsonConverter
      - CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL=http://schema-registry:8081
      - KAFKA_OPTS=-javaagent:/kafka/jmx_prometheus_javaagent.jar=8080:/kafka/config.yml
      - CONNECT_REST_ADVERTISED_HOST_NAME=connect3
      - JMX_PORT=1976
  prometheus:
    build:
      context: debezium-prometheus
    ports:
      - 9090:9090
    links:
      - connect3
  grafana:
    build:
      context: debezium-grafana
    ports:
      - 3000:3000
    links:
      - prometheus
    environment:
      - DS_PROMETHEUS=prometheus
  restproxy:
    image: confluentinc/cp-kafka-rest
    environment:
      KAFKA_REST_BOOTSTRAP_SERVERS: "kafka:9092"
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
      KAFKA_REST_HOST_NAME: restproxy
      KAFKA_REST_DEBUG: "true"
      KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
    ports:
      - 8082:8082
  kafka-ui:
    image: landoop/kafka-connect-ui:latest
    ports:
      - 8000:8000
    links:
      - connect3
      - schema-registry
      - zookeeper
    environment:
      - CONNECT_URL=http://connect3:8083/
  kafka-topic-ui:
    image: landoop/kafka-topics-ui
    links:
      - connect3
    ports:
      - 8001:8000
    environment:
      - KAFKA_REST_PROXY_URL=http://restproxy:8082
      - PROXY=true
  kafka_manager:
    image: hlebalbau/kafka-manager:stable
    ports:
      - "9000:9000"
    environment:
      ZK_HOSTS: "zookeeper:2181"
    links:
      - connect3
Kafka topic configuration is stored in Zookeeper. You can inspect it with:
bin/zookeeper-shell zookeeper_ip_or_fqdn:2181 get /config/topics/yourTopic
One potential cause that comes to mind is that the Zookeeper data is not being persisted. I see your Zookeeper container has three volumes, but it may be worth checking whether the dataDir property in zookeeper.properties points to a persisted folder, or whether there is some other reason the Zookeeper data is not persisted.
Hope this helps.
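The check suggested above can be done against the Zookeeper config file inside the container. A sketch of the relevant properties, where the paths are assumptions illustrating the debezium/zookeeper image layout, not verified values:

```properties
# Assumed zoo.cfg / zookeeper.properties fragment; paths are illustrative.
# dataDir holds the snapshots; dataLogDir holds the transaction log.
dataDir=/zookeeper/data
dataLogDir=/zookeeper/txns
```

If dataLogDir points to a directory that is not mounted as a volume, the transaction log (and with it, topic configuration changes not yet captured in a snapshot) is lost when the container is recreated.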
Best regards.

I don't know how Kafka Manager works, but you could change the cleanup property to compact in the Docker image's configuration file, so that every time the container starts, it starts with compact.
@Shubham Yes, but that would apply to the default policy; for some topics I need delete.
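The broker-wide default suggested above could also be expressed in the compose file instead of rebuilding the image; a sketch, under the assumption (worth verifying against the image's documentation) that the debezium/kafka image forwards KAFKA_-prefixed environment variables into the broker configuration:

```yaml
kafka:
  image: debezium/kafka:${DEBEZIUM_VERSION}
  environment:
    # Assumed mapping: KAFKA_LOG_CLEANUP_POLICY -> log.cleanup.policy
    - KAFKA_LOG_CLEANUP_POLICY=compact
```

As the reply notes, this only changes the default; topics that should keep delete would still need per-topic overrides.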
docker commit can be used to create a new image from the changes in a previous container. @Shubham That's a cool idea, but it would still treat the symptom rather than the disease. Thanks anyway!

That was my plan. Thanks, you pointed me in the right direction. The problem was that I had mounted the wrong dir for the logs: in the config it is written as /zookeeper/txns. I added that dir to the docker-compose file, and now everything works!
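The fix described above amounts to one extra mount on the zookeeper service; a sketch, where the host path /kafka/zookeeper_txns is an assumption chosen to match the existing host paths in the compose file:

```yaml
zookeeper:
  image: debezium/zookeeper:${DEBEZIUM_VERSION}
  volumes:
    - /kafka/zookeeper_data:/zookeeper/data
    - /kafka/zookeeper_logs:/zookeeper/logs
    - /kafka/zookeeper_conf:/zookeeper/conf
    # Added: persist the Zookeeper transaction log so topic config survives restarts.
    - /kafka/zookeeper_txns:/zookeeper/txns
```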