Apache Spark: Couldn't find leaders for Set([TOPICNAME,0]) when consuming from Kafka


We are using Apache Spark 1.5.1 with kafka_2.10-0.8.2.1 and the Kafka DirectStream API to fetch data from Kafka in Spark.
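
For context, here is a minimal sketch of how a direct stream is typically set up with this API in Spark 1.5.1; the application name, broker address, batch interval, and the per-batch count are illustrative assumptions, not the asker's actual code:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaDirectStreamExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaDirectStreamExample")
    // Batch interval of 10 seconds is an assumption for illustration.
    val ssc = new StreamingContext(conf, Seconds(10))

    // "metadata.broker.list" is how the direct stream locates the brokers;
    // the address below is a placeholder.
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val topics = Set("normalized-tenant4")

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    // Each batch arrives as (key, value) pairs; just count them here.
    stream.foreachRDD(rdd => println(s"Fetched ${rdd.count()} records"))

    ssc.start()
    ssc.awaitTermination()
  }
}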

We created the topic in Kafka with the following settings:

Replication factor: 1, replicas: 1
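
For reference, with Kafka 0.8.2 a topic with these settings would typically have been created along these lines; the ZooKeeper address and partition count are assumptions:

bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic normalized-tenant4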

When all the Kafka instances are running, the Spark job works fine. However, when one of the Kafka instances in the cluster is down, we get the exception reproduced below. After some time we restarted the downed Kafka instance and tried to finish the Spark job, but Spark had already terminated because of the exception. Because of this, we could not read the remaining messages in the Kafka topics.

ERROR DirectKafkaInputDStream:125 - ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([normalized-tenant4,0]))
ERROR JobScheduler:96 - Error generating jobs for time 1447929990000 ms
org.apache.spark.SparkException: ArrayBuffer(org.apache.spark.SparkException: Couldn't find leaders for Set([normalized-tenant4,0]))
        at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.latestLeaderOffsets(DirectKafkaInputDStream.scala:123)
        at org.apache.spark.streaming.kafka.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:145)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:350)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:349)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:344)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:342)
        at scala.Option.orElse(Option.scala:257)
        at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:339)
        at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:38)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:120)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:120)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
        at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
        at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:120)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:247)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$2.apply(JobGenerator.scala:245)
        at scala.util.Try$.apply(Try.scala:161)
        at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:245)
        at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:181)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:87)
        at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:86)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Thanks in advance. Please help us resolve this issue.

This is expected behaviour. By setting the replication factor to 1, you have asked for each topic to be stored on exactly one machine. When the single machine that happens to store the topic normalized-tenant4 is taken down, the consumer cannot find a leader for the topic.
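
For reference, the leader assignment can be inspected with Kafka's own tooling, and the topic can be created with a higher replication factor so it survives a single broker failure. This is a sketch under assumptions: the ZooKeeper address, partition count, and the availability of at least three brokers are not from the original question.

# While the only broker holding the partition is down, --describe reports
# "Leader: -1", which corresponds to the "Couldn't find leaders" error above.
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic normalized-tenant4

# A topic created with a replication factor greater than 1 can elect a new
# leader when one broker goes down (requires that many brokers in the cluster):
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 3 --partitions 1 --topic normalized-tenant4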



One of the reasons for this type of error, where a leader cannot be found for the specified topic, is a problem with the Kafka server configuration.

Open the Kafka server configuration file:

vim ./kafka/kafka-<your-version>/config/server.properties
I was using the Kafka setup provided with the MapR sandbox and trying to access Kafka through Spark code. I ran into the same error because my configuration was missing the host IP in the listeners property:

listeners=PLAINTEXT://{host-ip}:{host-port}
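
For example, if the broker were reachable at 192.168.33.10 on the default port 9092 (both values are placeholders for illustration), the entry would read:

listeners=PLAINTEXT://192.168.33.10:9092

The broker must be restarted after changing server.properties for the new listener setting to take effect.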