Executors lost when starting pyspark in yarn-client mode on Ubuntu

I was able to run pyspark in yarn-client mode on one laptop, and I am now trying to set it up on another laptop. This time, however, I cannot get it to run.

When I try to start pyspark in yarn-client mode, the errors below appear. I am using dynamic resource allocation and have set SPARK_EXECUTOR_MEMORY to less than the container memory. I am running hadoop 2.6.4, spark 1.6.1, and ubuntu 15.10.
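
For reference, the settings involved are roughly the following (illustrative values only, not my exact configuration; SPARK_EXECUTOR_MEMORY in spark-env.sh is the environment-variable equivalent of spark.executor.memory):

    # spark-defaults.conf (illustrative values)
    spark.master                      yarn-client
    spark.dynamicAllocation.enabled   true
    spark.shuffle.service.enabled     true
    # must fit inside the YARN container limit
    # (yarn.scheduler.maximum-allocation-mb)
    spark.executor.memory             1g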

Could the error be caused by a network problem?

16/06/12 01:49:34 INFO scheduler.DAGScheduler: Executor lost: 1 (epoch 0)
In [1]: 16/06/12 01:49:34 INFO cluster.YarnClientSchedulerBackend: Disabling executor 1.
16/06/12 01:49:34 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
16/06/12 01:49:34 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, 192.168.2.16, 37900)
16/06/12 01:49:34 ERROR client.TransportClient: Failed to send RPC 9123554941984942265 to 192.168.2.16/192.168.2.16:47630: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
16/06/12 01:49:34 INFO storage.BlockManagerMaster: Removed 1 successfully in removeExecutor
16/06/12 01:49:34 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 1 at RPC address 192.168.2.16:47640, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC 9123554941984942265 to 192.168.2.16/192.168.2.16:47630: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
    at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
16/06/12 01:49:34 ERROR cluster.YarnScheduler: Lost executor 1 on 192.168.2.16: Slave lost
16/06/12 01:49:34 INFO cluster.YarnClientSchedulerBackend: Disabling executor 2.
16/06/12 01:49:34 INFO scheduler.DAGScheduler: Executor lost: 2 (epoch 1)
16/06/12 01:49:34 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
16/06/12 01:49:34 ERROR client.TransportClient: Failed to send RPC 8690255566269835148 to 192.168.2.16/192.168.2.16:47630: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
16/06/12 01:49:34 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, 192.168.2.16, 41124)
16/06/12 01:49:34 INFO storage.BlockManagerMaster: Removed 2 successfully in removeExecutor
16/06/12 01:49:34 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 2 at RPC address 192.168.2.16:47644, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC 8690255566269835148 to 192.168.2.16/192.168.2.16:47630: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:239)
    at org.apache.spark.network.client.TransportClient$3.operationComplete(TransportClient.java:226)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:567)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:801)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:699)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1122)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:633)
    at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:32)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:908)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:960)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:893)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
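
The ClosedChannelException above only says that the executor's channel went away; the actual reason is usually recorded in the YARN container logs. One way to pull them once the application has finished (standard YARN CLI; the application id is a placeholder):

    yarn logs -applicationId <applicationId>

If those logs show containers being killed for running beyond physical or virtual memory limits, the usual remedies are raising spark.yarn.executor.memoryOverhead or, for the virtual-memory case, setting yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml.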

Isn't this a duplicate of ()?

@RamPrasadG Thanks for the link. I can run spark now, and strangely I didn't really do anything. I had rebooted ubuntu several times, which didn't help. I gave up, slept, woke up, opened the laptop, tried again, and this time it worked! My last change was switching hadoop from jdk 8 to jdk 7. Maybe spark doesn't support jdk 8?

No.. Spark supports JDK 8. I think this is a problem with your shuffle service, Netty or NIO.. Many posts discuss the shuffle service; when they switched it to the other one, it worked.
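
On the shuffle-service point: dynamic allocation on YARN requires the external shuffle service to be registered as an auxiliary service on every NodeManager. A minimal sketch of that setup, following the standard Spark 1.6 on YARN configuration (not confirmed as the actual fix here):

    <!-- yarn-site.xml on every NodeManager -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle,spark_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
        <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>

    # spark-defaults.conf
    spark.shuffle.service.enabled     true
    spark.dynamicAllocation.enabled   true

The spark-1.6.1-yarn-shuffle.jar also has to be on the NodeManager classpath, and the NodeManagers restarted, before the spark_shuffle service is picked up.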