Java: Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance

I created a topic and had a simple producer publish some messages to it:

 bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic streams-file-input

 bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input
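The messages can be piped into the console producer from a file, for example (the file name below is only an illustration, assuming the input lives in a local text file):

 bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input < file-input.txt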
I am running the simple example below with Kafka Streams, and I hit a strange exception that I cannot get past:

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.3:9092");
            props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            KStreamBuilder builder = new KStreamBuilder();

            builder.stream("streams-file-input").to("streams-pipe-output");

            KafkaStreams streams = new KafkaStreams(builder, props);
            streams.start();

            // usually the stream application would be running forever,
            // in this example we just let it run for some time and stop since the input data is finite.
            Thread.sleep(5000L);

            streams.close();

Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
    at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
    at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
    at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
    at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
    at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
    at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
    at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
    at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
    ... 1 more
Caused by: java.io.FileNotFoundException: C:\tmp\kafka-streams\my-streapplication\0_0\.lock (The system cannot find the path specified)
    at java.io.RandomAccessFile.open0(Native Method)
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.lockStateDirectory(ProcessorStateManager.java:125)
    at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:93)
    at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)


These are my Maven dependencies:

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
            <version>0.10.0.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>0.10.0.0</version>
        </dependency>
No matter what I tried, I kept getting this exception. I am running the Kafka cluster inside VMware on Ubuntu (the version I am using is kafka_2.11-0.10.0.0). Could it be a RAM/CPU issue?

Caused by: java.io.FileNotFoundException: C:\tmp\kafka-streams\my-streapplication\0_0\.lock (The system cannot find the path specified)
This means that the parent directory for the application state, C:\tmp\kafka-streams, does not exist. It is the default directory configured in StreamsConfig. I am not sure why creating it fails on Windows.
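If you want to keep the default location, one workaround is to make sure the directory exists before starting the application. A minimal sketch, assuming the path from the stack trace and that the surrounding code handles IOException:

        // Assumed workaround: pre-create the default state directory so that
        // Kafka Streams can create <state.dir>\<application.id>\0_0\.lock at startup.
        // Uses java.nio.file.Files and java.nio.file.Paths; may throw IOException.
        Files.createDirectories(Paths.get("C:\\tmp\\kafka-streams"));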


Alternatively, you can set StreamsConfig.STATE_DIR_CONFIG to point the state directory at a directory of your choosing.

Thanks to @Muyoo, this is the fix that worked for me:

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG,"my-stremapplication");
        props.put(StreamsConfig.STATE_DIR_CONFIG, "streams-pipe");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.210:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());