Scala Spark job on a YARN cluster fails with exitCode=13:


I'm new to Spark/YARN and I'm getting exitCode=13 when I submit a Spark job to a YARN cluster. When the Spark job runs in local mode, everything works fine.

The command I used is:

/usr/hdp/current/spark-client/bin/spark-submit --class com.test.sparkTest --master yarn --deploy-mode cluster --num-executors 40 --executor-cores 4 --driver-memory 17g --executor-memory 22g --files /usr/hdp/current/spark-client/conf/hive-site.xml /home/user/sparkTest.jar
Spark error log:

16/04/12 17:59:30 INFO Client:
         client token: N/A
         diagnostics: Application application_1459460037715_23007 failed 2 times due to AM Container for appattempt_1459460037715_23007_000002 exited with  exitCode: 13
For more detailed output, check application tracking page:http://b-r06f2-prod.phx2.cpe.net:8088/cluster/app/application_1459460037715_23007Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_e40_1459460037715_23007_02_000001
Exit code: 13
Stack trace: ExitCodeException exitCode=13:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
        at org.apache.hadoop.util.Shell.run(Shell.java:487)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)


**Yarn logs**

    16/04/12 23:55:35 INFO mapreduce.TableInputFormatBase: Input split length: 977 M bytes.
16/04/12 23:55:41 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
16/04/12 23:55:51 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
16/04/12 23:56:01 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
16/04/12 23:56:11 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
16/04/12 23:56:11 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x152f0b4fc0e7488
16/04/12 23:56:11 INFO zookeeper.ZooKeeper: Session: 0x152f0b4fc0e7488 closed
16/04/12 23:56:11 INFO zookeeper.ClientCnxn: EventThread shut down
16/04/12 23:56:11 INFO executor.Executor: Finished task 0.0 in stage 1.0 (TID 2). 2003 bytes result sent to driver
16/04/12 23:56:11 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 82134 ms on localhost (2/3)
16/04/12 23:56:17 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x4508c270df09803
16/04/12 23:56:17 INFO zookeeper.ZooKeeper: Session: 0x4508c270df09803 closed
...
    16/04/12 23:56:21 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
16/04/12 23:56:21 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 13, (reason: Timed out waiting for SparkContext.)
16/04/12 23:56:21 INFO spark.SparkContext: Invoking stop() from shutdown hook

It seems you have set the master in your code to local:

SparkConf.setMaster("local[*]")

You have to let the master be unset in the code, and set it later when issuing
spark-submit:

spark-submit --master yarn-client ...

Hope it helps someone.
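To make the fix concrete, here is a minimal sketch of a driver that leaves the master unset. The object name and app name are made up for illustration; this is not the asker's actual code, and it needs Spark on the classpath to compile:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical driver, for illustration only.
object SparkTest {
  def main(args: Array[String]): Unit = {
    // No setMaster("local[*]") here: with a hard-coded local master, the
    // YARN ApplicationMaster never sees a SparkContext register with YARN
    // and fails the application with exitCode 13 after its wait timeout.
    val conf = new SparkConf().setAppName("sparkTest")
    val sc = new SparkContext(conf)
    try {
      // ... job logic ...
    } finally {
      sc.stop()
    }
  }
}
```

The master is then supplied only on the command line, e.g. --master yarn-client, or --master yarn --deploy-mode cluster on Spark 2+.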


Another possibility for this error is when you put your --class parameter in incorrectly. I had exactly the same problem, and the answer above didn't work for me.
Alternatively, everything worked fine when I ran it with
spark-submit --deploy-mode client.

I got the same error while running a SparkSQL job in cluster mode. None of the other solutions worked for me, but looking in the job history server logs in Hadoop, I found this stack trace:

20/02/05 23:01:24 INFO hive.metastore: Connected to metastore.
20/02/05 23:03:03 ERROR yarn.ApplicationMaster: Uncaught exception: 
java.util.concurrent.TimeoutException: Futures timed out after [100000 milliseconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
    at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:468)
    at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$runImpl(ApplicationMaster.scala:305)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:245)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:245)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply(ApplicationMaster.scala:245)
...

Looking at it, you can see that the AM basically times out while waiting for the thread that executes the user class to set the
spark.driver.port
property.

So it could be a transient issue, or you should investigate why your code is timing out.
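The 100000 ms in the log is Spark's default ApplicationMaster wait (spark.yarn.am.waitTime, 100s, applied only in cluster mode). If the driver legitimately needs longer to create its SparkContext, the timeout can be raised at submit time. This reuses the class and jar from the question, and raising the timeout is my suggestion, not something proposed in the thread:

```shell
# Give the AM more time to see the SparkContext initialize
# (spark.yarn.am.waitTime only applies in cluster mode; default is 100s).
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.am.waitTime=300s \
  --class com.test.sparkTest \
  /home/user/sparkTest.jar
```

This only helps when initialization is slow but eventually succeeds; if the SparkContext never initializes (e.g. because of a hard-coded local master), no timeout is long enough.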

Could you also share the YARN logs (not the whole log, just the error messages from the YARN log)? You can get the YARN logs with:
$ yarn logs -applicationId application_1459460037715_18191
Thanks for your reply. So the exitCode 10 was because of a classNotFound problem. After a quick fix, I got this new problem with exit code 13 when the Spark job runs on the YARN cluster, while it runs fine in local mode. I have updated the question and the logs so it won't be confusing :)

Did you set the master in your code? Like doing
SparkConf.setMaster("local[*]")
?

You are absolutely right! :) Thanks. I had posted the same problem in another place before, with exit code 15, so this time when it was 13 I didn't even go back over my code against the logs. So dumb.

If I want to submit with --master yarn --deploy-mode cluster ... it gives an error. What is the error? It shouldn't give one, since that is the new way for spark-submit in version 2+. Does anyone know the reason?

Yes, this solved my problem. I was using
spark-submit --deploy-mode cluster
, but when I changed it to
client
it worked fine. In my case I was executing SQL scripts from Python code, so my code was not "Spark dependent", but I'm not sure what the implications of doing this are when you need multiprocessing.

You saved my day! What is this class parameter? How do you find out your class parameter? It's the main class you want to execute with Spark; this question may help you.