Spark 1.5.1 standalone cluster - Exception in thread "main" akka.actor.ActorNotFound: Actor not found

I am having trouble submitting jobs to the cluster, either via spark-submit or from Java code. The jobs keep failing, and the stderr logged under SPARK_HOME/work/app_id always shows the same error:

15/10/08 23:04:39 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkDriver@masternode:53411] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://sparkDriver@masternode:53411]] Caused by: [Connection refused: masternode/192.168.10.214:53411]
Exception in thread "main" akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka.tcp://sparkDriver@masternode:53411/), Path(/user/MapOutputTracker)]
    at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
    at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
    at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:120)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
    at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
    at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:533)
    at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:569)
    at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:559)
    at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:87)
    at akka.remote.EndpointWriter.postStop(Endpoint.scala:557)
    at akka.actor.Actor$class.aroundPostStop(Actor.scala:477)
    at akka.remote.EndpointActor.aroundPostStop(Endpoint.scala:411)
    at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:210)
    at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
    at akka.actor.ActorCell.terminate(ActorCell.scala:369)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Any idea what could be causing this? Running netstat shows that no process is listening on port 53411.

I suspect the key part of the error message is:

Association with remote system [akka.tcp://sparkDriver@masternode:53411] has failed

which tells you there is a communication problem between the driver and the workers.

I have run into this error before; my suggestions are:

- Make sure your master address is correct, and check the firewall to confirm the relevant ports are not blocked. Spark uses some random port numbers for communication.
- Make sure you have enough memory; insufficient resources can sometimes produce similar errors.
- You can monitor the cluster status on ports 4040 and 18080, which may also give you some useful clues:

http://<server-url>:18080 (Spark history server)

http://<driver-node>:4040 (UI of the running application)
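For the firewall point above, a common workaround is to pin the otherwise random driver port so that a known port can be opened explicitly. A minimal configuration sketch using the Spark 1.x Scala API (`spark.driver.host` and `spark.driver.port` are standard Spark configuration keys; the app name, master URL, and port values here are assumptions for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Pin the driver's host and port instead of letting Spark pick a
// random port, so the firewall can be opened for a known port.
val conf = new SparkConf()
  .setAppName("my-app")                    // placeholder application name
  .setMaster("spark://masternode:7077")    // standalone master URL (placeholder)
  .set("spark.driver.host", "masternode")  // hostname reachable from the workers
  .set("spark.driver.port", "53411")       // fixed port to allow through the firewall

val sc = new SparkContext(conf)
```

The same keys can be passed on the command line instead, e.g. `spark-submit --conf spark.driver.host=masternode --conf spark.driver.port=53411 ...`, if you are not building the SparkConf in code.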