
Spark job on YARN stuck in client mode (state: ACCEPTED), then spark-submit fails (Spark 1.6.1 on YARN)


I am trying to run the following Spark example job in client mode:

$SPARK_HOME/bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client $SPARK_HOME/examples/target/scala-2.10/spark-examples*.jar 10
When I run the command above, my application gets stuck with the following output:

16/07/13 17:14:28 INFO Client: Application report for application_146842828769910_0002 (state: ACCEPTED)

16/07/13 17:14:28 INFO Client:
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1468430067384
	 final status: UNDEFINED
	 tracking URL:
	 user: nachiket

16/07/13 17:14:29 INFO Client: Application report for application_146842828769910_0002 (state: ACCEPTED)

16/07/13 17:14:30 INFO Client: Application report for application_146842828769910_0002 (state: ACCEPTED)

16/07/13 17:14:31 INFO Client: Application report (state: ACCEPTED)

16/07/13 17:14:32 INFO Client: Application report for application_1468428769910_0002 (state: ACCEPTED)

I have already implemented most of the suggestions mentioned in the following links:

I am still facing the same problem. Is there any other solution besides those links?

Finally, the job fails with the following report:

    client token: N/A
         diagnostics: Application application_1468455134412_0001 failed 2 times due to Error launching appattempt_1468455134412_0001_000002. Got exception: org.apache.hadoop.net.ConnectTimeoutException: Call From sclab103/104.239.213.7 to 104.239.213.7:60640 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=104.239.213.7/104.239.213.7:60640]; For more details see:  http://wiki.apache.org/hadoop/SocketTimeout
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:751)
        at org.apache.hadoop.ipc.Client.call(Client.java:1479)
        at org.apache.hadoop.ipc.Client.call(Client.java:1412)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy82.startContainers(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy83.startContainers(Unknown Source)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:118)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:250)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=104.239.213.7/104.239.213.7:60640]
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
        at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
        at org.apache.hadoop.ipc.Client.call(Client.java:1451)
        ... 16 more
. Failing the application.
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1468455280498
         final status: FAILED
         tracking URL: http://hadoop-master:8088/cluster/app/application_1468455134412_0001
         user: sclab
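
The ConnectTimeoutException above shows the ResourceManager timing out while trying to launch the ApplicationMaster container on 104.239.213.7:60640, an ephemeral port chosen by the NodeManager at startup. One possible mitigation (a sketch only, not a confirmed fix for this cluster) is to pin the NodeManager's RPC address to a fixed, reachable port in yarn-site.xml, so the ResourceManager never has to connect to a random port that a firewall or cloud security group may block; the port 8041 below is just an example value:

```xml
<!-- yarn-site.xml sketch: 8041 is an arbitrary example port, adjust for your cluster -->
<property>
  <name>yarn.nodemanager.address</name>
  <!-- the default is ${yarn.nodemanager.hostname}:0, i.e. a random ephemeral port -->
  <value>0.0.0.0:8041</value>
</property>
```

After changing this, restart the NodeManager and make sure the chosen port is open between the ResourceManager host and every NodeManager host.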

You have to check the memory configuration of your YARN cluster: how much is allocated to the ResourceManager and the NodeManagers.

Hi @nath, I am only running the SparkPi job, and I made sure no other jobs are running on the cluster. These are my yarn-site memory settings: yarn.nodemanager.resource.memory-mb 3072, yarn.scheduler.minimum-allocation-mb 512, yarn.scheduler.maximum-allocation-mb 3072. One important thing I noticed is that I cannot establish a connection to YARN through Spark at all; I tried spark-shell --master yarn-client and it fails with the same error as above.

Please check whether your YARN cluster is actually up, and look at the YARN ResourceManager and NodeManager logs. This looks like a connectivity problem.
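
Since the comments point to a connectivity problem, a quick TCP probe of the failing NodeManager port can confirm it before digging through logs. This is a minimal sketch assuming bash is available; substitute the host and port from your own error message (104.239.213.7:60640 in the trace above):

```shell
#!/usr/bin/env bash
# Probe whether a TCP port is reachable within a timeout, using bash's
# built-in /dev/tcp redirection. Prints "reachable" or "unreachable".
probe() {
  local host=$1 port=$2 secs=${3:-5}
  if timeout "$secs" bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Host and port taken from the ConnectTimeoutException above; replace with yours.
probe 104.239.213.7 60640 3
```

If the probe prints "unreachable" from the ResourceManager host, the problem is network-level (firewall, security group, or a NodeManager bound to the wrong interface), not Spark configuration.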