RocksDB exception in Kafka Streams

Tags: lambda, apache-kafka, kafka-consumer-api, apache-kafka-streams

In a simple Kafka Streams program, the following code runs without throwing any error:

      KTable<String, Long> result = source
              .mapValues(textLine -> textLine.toLowerCase())
              .flatMapValues(lowercasedTextLine -> Arrays.asList(lowercasedTextLine.split(" ")))
              .selectKey((ignoredKey, word) -> word)
              .groupByKey()
              .count("Counts");

      result.to(Serdes.String(), Serdes.Long(), "wc-output");
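
For context, a minimal sketch of the driver code this snippet assumes (configuration, StreamsBuilder, and the KafkaStreams instance); the application id, broker address, and input topic name are assumptions inferred from the thread name and topic names that appear later in the question:

      import java.util.Properties;

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.kstream.KStream;

      public class WordCountDriver {

          public static void main(String[] args) {
              // Basic configuration; the application id and broker address are assumptions.
              Properties props = new Properties();
              props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
              props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
              props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
              props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

              StreamsBuilder builder = new StreamsBuilder();
              // The "source" stream that the snippets in the question operate on.
              KStream<String, String> source = builder.stream("wc-input");

              // ... build the word-count topology on `source` as shown above ...

              KafkaStreams streams = new KafkaStreams(builder.build(), props);
              streams.start();

              // Close the instance cleanly when the JVM shuts down.
              Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
          }
      }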
However, when I use the code below, I get the following error:

    KStream<String, String> source = builder.stream("wc-input");
    source.groupBy((key, word) -> word).windowedBy(TimeWindows.of(TimeUnit.SECONDS.toMillis(5000))).count()
            .toStream().map((key, value) -> new KeyValue<>(key.key(), value))
            .to("wc-output", Produced.with(Serdes.String(), Serdes.Long()));
    Exception in thread "streams-wordcount-b160d715-f0e0-42ee-831e-0e4eed7e9424-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=1_0, processor=KSTREAM-SOURCE-0000000006, topic=streams-wordcount-KSTREAM-AGGREGATE-STATE-STORE-0000000002-repartition, partition=0, offset=0
        at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:232)
        at org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:403)
        at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:317)
        at org.apache.kafka.streams.processor.internals.StreamThread.processAndMaybeCommit(StreamThread.java:942)
        at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:822)
        at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
        at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
    Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error opening store KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000 at location \tmp\kafka-streams\streams-wordcount\1_0\KSTREAM-AGGREGATE-STATE-STORE-0000000002\KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:204)
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:174)
        at org.apache.kafka.streams.state.internals.Segment.openDB(Segment.java:40)
        at org.apache.kafka.streams.state.internals.Segments.getOrCreateSegment(Segments.java:89)
        at org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStore.put(RocksDBSegmentedBytesStore.java:81)
        at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:43)
        at org.apache.kafka.streams.state.internals.RocksDBWindowStore$RocksDBWindowBytesStore.put(RocksDBWindowStore.java:34)
        at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:67)
        at org.apache.kafka.streams.state.internals.ChangeLoggingWindowBytesStore.put(ChangeLoggingWindowBytesStore.java:33)
        at org.apache.kafka.streams.state.internals.CachingWindowStore$1.apply(CachingWindowStore.java:100)
        at org.apache.kafka.streams.state.internals.NamedCache.flush(NamedCache.java:141)
        at org.apache.kafka.streams.state.internals.NamedCache.evict(NamedCache.java:232)
        at org.apache.kafka.streams.state.internals.ThreadCache.maybeEvict(ThreadCache.java:245)
        at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:153)
        at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:157)
        at org.apache.kafka.streams.state.internals.CachingWindowStore.put(CachingWindowStore.java:36)
        at org.apache.kafka.streams.state.internals.MeteredWindowStore.put(MeteredWindowStore.java:96)
        at org.apache.kafka.streams.kstream.internals.KStreamWindowAggregate$KStreamWindowAggregateProcessor.process(KStreamWindowAggregate.java:122)
        at org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:46)
        at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
        at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:124)
        at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:85)
        at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:80)
        at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:216)
        ... 6 more
    Caused by: org.rocksdb.RocksDBException: Failed to create dir: H:\tmp\kafka-streams\streams-wordcount\1_0\KSTREAM-AGGREGATE-STATE-STORE-0000000002\KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000: Invalid argument
        at org.rocksdb.RocksDB.open(Native Method)
        at org.rocksdb.RocksDB.open(RocksDB.java:231)
        at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:197)


When you use a windowed aggregation, the store is named differently, and Kafka 1.0.0 contains a bug that affects the Windows OS: the name of the window store contains a character that is not allowed on Windows (the segment directory name includes a colon, as you can see in the path in the stack trace above). This is fixed in 1.0.1 and 1.1.0.
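
To make the offending character concrete: the segment directory in the stack trace is the store name joined to the segment's start timestamp with a colon, and a colon is not a legal character in a Windows path component. The small illustration below rebuilds that name from the values visible in the trace; the concatenation is a simplification for illustration, not the actual Kafka Streams naming code.

      public class SegmentNameIllustration {

          public static void main(String[] args) {
              // Values taken from the stack trace above.
              String storeName = "KSTREAM-AGGREGATE-STATE-STORE-0000000002";
              long segmentStartMs = 1553472000000L;

              // Simplified reconstruction of the 1.0.0 segment directory name; the ':'
              // separator is the character that Windows rejects in a path component.
              String segmentDirName = storeName + ":" + segmentStartMs;

              System.out.println(segmentDirName);
              // Prints: KSTREAM-AGGREGATE-STATE-STORE-0000000002:1553472000000
          }
      }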


Please improve the formatting of your code. That will make it easier to read and answer your question.
It seems Kafka Streams cannot create the state store directory. You could try changing the state store directory to another path and see whether that solves your problem. Which Kafka Streams version are you using?
This still happens on CentOS 7 with Kafka version 2.1.0.
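
For anyone trying the state directory suggestion from the comments, it is a single config entry. A minimal sketch, assuming the chosen path exists and is writable (the path below is only an example); note that on Windows this only relocates the stores and does not remove the colon from the windowed segment name, so the upgrade mentioned in the answer is still needed for the windowed case.

      import java.util.Properties;

      import org.apache.kafka.streams.StreamsConfig;

      public class StateDirExample {

          public static void main(String[] args) {
              Properties props = new Properties();
              // Override the default state directory (/tmp/kafka-streams) with another path.
              // The path here is an arbitrary example; pick one your user can write to.
              props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams-state");
              // ... add application id, bootstrap servers, etc., and pass `props` to the
              // KafkaStreams constructor as in the driver sketch earlier on this page ...
          }
      }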