Kafka Streams on Confluent Cloud: 'segment.ms' with value '600000' exceeded min limit of 14400000


Running a Kafka Streams application (version 2.1.0) against Confluent Cloud, I get the following error on application startup:

java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.PolicyViolationException: Config property 'segment.ms' with value '600000' exceeded min limit of 14400000.

Full stack trace:

at org.apache.kafka.streams.processor.internals.InternalTopicManager.makeReady(InternalTopicManager.java:143)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.prepareTopic(StreamsPartitionAssignor.java:967)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:525)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:403)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:569)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:95)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:521)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:504)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:870)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:850)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:575)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:389)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:397)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:340)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:341)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:913)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:818)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:777)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:747)
The invalid value comes from RepartitionTopicConfig:

 private static final Map<String, String> REPARTITION_TOPIC_DEFAULT_OVERRIDES;
    static {
        final Map<String, String> tempTopicDefaultOverrides = new HashMap<>();
        tempTopicDefaultOverrides.put(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_DELETE);
        tempTopicDefaultOverrides.put(TopicConfig.SEGMENT_INDEX_BYTES_CONFIG, "52428800");               // 50 MB
        tempTopicDefaultOverrides.put(TopicConfig.SEGMENT_BYTES_CONFIG, "52428800");                     // 50 MB
        tempTopicDefaultOverrides.put(TopicConfig.SEGMENT_MS_CONFIG, "600000");                          // 10 min
        tempTopicDefaultOverrides.put(TopicConfig.RETENTION_MS_CONFIG, String.valueOf(Long.MAX_VALUE));  // Infinity
        REPARTITION_TOPIC_DEFAULT_OVERRIDES = Collections.unmodifiableMap(tempTopicDefaultOverrides);
    }

I finally fixed it by adding
StreamsConfig.topicPrefix(TopicConfig.SEGMENT_MS_CONFIG) -> "14400000"
to the StreamsConfig.
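A minimal sketch of the workaround, assuming a standard Streams properties setup (the application id and broker address below are placeholders). `StreamsConfig.topicPrefix(TopicConfig.SEGMENT_MS_CONFIG)` resolves to the key `"topic.segment.ms"`, and Streams forwards `"topic."`-prefixed settings to the internal topics it creates, so the override can be sketched with plain property keys:

```java
import java.util.Properties;

public class SegmentMsWorkaround {

    /**
     * Builds Streams properties with the segment.ms override applied.
     * "topic.segment.ms" is what StreamsConfig.topicPrefix(TopicConfig.SEGMENT_MS_CONFIG)
     * resolves to in the real API.
     */
    public static Properties streamsProps() {
        final Properties props = new Properties();
        props.put("application.id", "my-streams-app");  // placeholder
        props.put("bootstrap.servers", "broker:9092");  // placeholder
        // Raise segment.ms from the Streams repartition-topic default
        // (600000 ms = 10 min) to Confluent Cloud's minimum of
        // 14400000 ms (4 hours), so internal topic creation passes the policy check.
        props.put("topic.segment.ms", "14400000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(streamsProps().getProperty("topic.segment.ms"));
    }
}
```

With these properties, the internal repartition topics are created with segment.ms = 14400000 instead of the 600000 default, which satisfies the Confluent Cloud topic policy.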

From the Kafka documentation I found this about segment.ms:

This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.

The only place on the Confluent pages where 14400000 appears as a default value is confluent.metrics.reporter.topic.roll.ms, described as "the rolling time of the log for the metrics topic". Possibly the Confluent Metrics Reporter has its own topic settings that conflict with the defaults in RepartitionTopicConfig. I finally fixed it by adding StreamsConfig.topicPrefix(TopicConfig.SEGMENT_MS_CONFIG) -> "14400000" to the StreamsConfig.

You should post the answer to your own question (and accept it). Btw: we are aware of this issue and plan to lift this restriction, so it will no longer be necessary to change the segment.ms config in Streams. We might even consider changing the default in KS; 10 minutes seems rather low, but that's an AK community decision.