Apache Kafka Streams - missing source topic

Tags: apache-kafka, apache-kafka-streams

I am working with a Kafka Streams topology, and sometimes, after changing the applicationId and/or clientId properties, I get an error for a specific Kafka stream: "Missing source topic stream.webshop.products.prices.5 during assignment. Returning error INCOMPLETE_SOURCE_TOPIC_METADATA". I have set the create.topic=true property in server.properties on every Kafka node, but the topic for this stream does not seem to be created.
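A first diagnostic step (assuming a broker reachable at localhost:9092; adjust the address to your cluster — these commands need a running broker) is to check whether the topic actually exists on the brokers:

```shell
# List topics and look for the missing source topic; --describe also shows
# its partition count if it does exist.
kafka-topics.sh --bootstrap-server localhost:9092 --list | grep 'stream.webshop.products.prices.5'
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic stream.webshop.products.prices.5
```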

Here is my Kafka Streams topology:

package ro.orange.eshop.productindexer.domain

import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.kstream.Materialized
import org.apache.kafka.streams.kstream.Printed
import ro.orange.digital.avro.Aggregate
import ro.orange.digital.avro.Key
import ro.orange.digital.avro.Price
import ro.orange.digital.avro.StockQuantity
import ro.orange.eshop.productindexer.infrastructure.configuration.kafka.makeStoreProvider
import java.util.concurrent.CompletableFuture

class SaleProductTopology(
        private val streamNameRepository: IStreamNameRepository,
        private val saleProductMapper: ISaleProductMapper,
        private val productRatingMapper: IProductRatingMapper,
        private val productStockMapper: IProductStockMapper,
        private val lazyKafkaStreams: CompletableFuture<KafkaStreams>
) {
    fun streamsBuilder(): StreamsBuilder {
        val streamsBuilder = StreamsBuilder()
        val productsStream = streamsBuilder.stream<Key, Aggregate>(streamNameRepository.inputWebshopProductsTopic)
        val productPricesStream = streamsBuilder.stream<Key, Price>(streamNameRepository.productsPricesStreamTopic)
        val productsRatingsStream = streamsBuilder.stream<Key, Aggregate>(streamNameRepository.inputProductRatingsTopic)
        val inputProductsStockStream = streamsBuilder.stream<Key, Aggregate>(streamNameRepository.inputProductsStockTopic)

        val productsStockStream = inputProductsStockStream
                .mapValues(productStockMapper::aStockQuantity)
        productsStockStream.to(streamNameRepository.productsStockStreamTopic)

        streamsBuilder.globalTable<Key, StockQuantity>(
                streamNameRepository.productsStockStreamTopic,
                Materialized.`as`(streamNameRepository.productsStockGlobalStoreTopic)
        )

        val quantityProvider = lazyKafkaStreams.makeStoreProvider<StockQuantity>(streamNameRepository.productsStockGlobalStoreTopic)

        val saleProductsTable = productsStream
                .groupByKey()
                .reduce({ _, aggregate -> aggregate }, Materialized.`as`(streamNameRepository.saleProductsStoreTopic))
                .mapValues { aggregate -> saleProductMapper.aSaleProduct(aggregate, quantityProvider) }

        saleProductsTable.toStream().print(Printed.toSysOut())

        val productPricesTable = productPricesStream
                .groupByKey()
                .reduce({ _, price -> price }, Materialized.`as`(streamNameRepository.productsPricesStoreTopic))

        productPricesTable.toStream().print(Printed.toSysOut())

        val productsRatingsTable = productsRatingsStream
                .groupByKey()
                .reduce({ _, aggregate -> aggregate }, Materialized.`as`(streamNameRepository.productsRatingsStoreTopic))
                .mapValues { aggregate -> productRatingMapper.aProductRating(aggregate) }

        productsRatingsTable.toStream().print(Printed.toSysOut())

        val productsStockTable = productsStockStream
                .groupByKey()
                .reduce { _, aggregate -> aggregate }

        saleProductsTable
                .leftJoin(productPricesTable) { saleProduct, price -> saleProductMapper.aPricedSaleProduct(saleProduct, price) }
                .leftJoin(productsRatingsTable) { saleProduct, rating -> saleProductMapper.aRatedSaleProduct(saleProduct, rating) }
                .leftJoin(productsStockTable) { saleProduct, stockQuantity -> saleProductMapper.aQuantifiedSaleProduct(saleProduct, stockQuantity) }
                .mapValues { saleProduct -> AggregateMapper.aSaleProductAggregate(saleProduct) }
                .toStream()
                .to(streamNameRepository.saleProductsTopic)

        return streamsBuilder
    }
}

As @Jacek Laskowski wrote:

"KafkaStreams won't create it as it's a source."

This is by design: if one of the source topics were auto-created, it would get the broker's default number of partitions, while a topic created by the user ahead of time might have a different count. When KStreams/KTables are joined, they must have the same number of partitions; that is a crucial assumption.
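The co-partitioning point can be sketched with a tiny stand-alone example. A plain modulo over an example key hash stands in for Kafka's murmur2-based default partitioner, and the partition counts 6 and 4 are made-up values:

```java
// Why joined topics need the same partition count: the default partitioner
// computes partition = hash(key) % numPartitions, so the same key lands on
// different partition numbers when the counts differ.
public class CopartitionDemo {
    static int partitionFor(int keyHash, int numPartitions) {
        return Math.abs(keyHash) % numPartitions;
    }

    public static void main(String[] args) {
        int keyHash = 10; // example hash of some record key
        // Topic created by the user with 6 partitions:
        System.out.println("6 partitions -> partition " + partitionFor(keyHash, 6)); // 4
        // Same logical data in a topic auto-created with a broker default of 4:
        System.out.println("4 partitions -> partition " + partitionFor(keyHash, 4)); // 2
        // The key maps to different partition numbers in the two topics, so a
        // join task reading partition N from both would pair up unrelated records.
    }
}
```

With equal partition counts the mapping is identical on both sides, which is exactly what a Streams join relies on.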

Users have to consciously create the topics with the proper number of partitions (which also caps the number of stream-processing threads, one of the knobs for controlling the performance of a Kafka Streams application).
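Creating the topic up front might look like this with the kafka-topics CLI; the broker address, partition count, and replication factor are placeholders to adapt, and the command needs a running broker:

```shell
# Create the source topic before starting the Streams application; give it
# the same partition count as the topics it will later be joined with.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create \
  --topic stream.webshop.products.prices.5 \
  --partitions 3 \
  --replication-factor 2
```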



What is stream.webshop.products.prices.5? Is it productsPricesStreamTopic? It should be available before the Kafka Streams application is started. KafkaStreams won't create it as it's a source.

Yes, that stream is created from productsPricesStreamTopic, and it is a source. But why can't Kafka auto-create that topic the way it does for other topics?

@DinaBogdan I ran into the INCOMPLETE_SOURCE_TOPIC_METADATA error even though I created the topic before running the streams application. Do you have any suggestions for me? Thanks.

I wonder whether I got the design of Kafka Streams right there. Still, not auto-creating source topics seems to make sense.