Apache Spark: all executors finish with state KILLED and exitStatus 1


I am trying to set up a local Spark cluster. I am using Spark 2.4.4 on a Windows 10 machine. To start the master and one worker, I run

spark-class org.apache.spark.deploy.master.Master
spark-class org.apache.spark.deploy.worker.Worker 172.17.1.230:7077
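For reference, the same startup sequence with the master URL written in Spark's `spark://` form (the host address here is the one from the question; 7077 and 8080 are the standalone mode defaults, not values confirmed by this setup):

```shell
# Start the standalone master. By default it listens for workers on
# port 7077 and serves a web UI on port 8080.
spark-class org.apache.spark.deploy.master.Master

# Start a worker and register it with the master. Spark documents the
# spark://host:port URL form for the master address.
spark-class org.apache.spark.deploy.worker.Worker spark://172.17.1.230:7077
```

Both commands are normally run from the Spark `bin` directory; the master web UI on port 8080 can be used to confirm the worker registered.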
After I submit an application to the cluster, it completes successfully, but the Spark web admin UI shows the application as killed. That is also what I see in the worker logs. I have tried running my own examples as well as the ones included with the Spark installation. They all get killed with exitStatus 1.

Starting the JavaSparkPi example shipped with the Spark installation:

Spark> spark-submit --master spark://172.17.1.230:7077 --class org.apache.spark.examples.JavaSparkPi .\examples\jars\spark-examples_2.11-2.4.4.jar
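For context, JavaSparkPi estimates π by Monte Carlo sampling: it draws random points and counts how many land inside the unit circle, which is where the "Pi is roughly 3.13814" line in the log below comes from. A minimal non-Spark Java sketch of the same computation (class and method names are illustrative, not from the Spark source):

```java
import java.util.Random;

public class PiEstimate {
    // Estimate pi by sampling points in the unit square and counting
    // how many fall inside the quarter circle of radius 1. The ratio
    // approaches pi/4 as the sample count grows.
    static double estimate(int samples, long seed) {
        Random rnd = new Random(seed);
        int inside = 0;
        for (int i = 0; i < samples; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) {
                inside++;
            }
        }
        return 4.0 * inside / samples;
    }

    public static void main(String[] args) {
        System.out.println("Pi is roughly " + estimate(1_000_000, 42L));
    }
}
```

The Spark version distributes the sampling loop across executors and reduces the per-partition counts on the driver; the arithmetic is the same.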
Part of the log after the computation output:

20/01/19 18:55:11 INFO DAGScheduler: Job 0 finished: reduce at JavaSparkPi.java:54, took 4.183853 s
Pi is roughly 3.13814
20/01/19 18:55:11 INFO SparkUI: Stopped Spark web UI at http://Nikola-PC:4040
20/01/19 18:55:11 INFO StandaloneSchedulerBackend: Shutting down all executors
20/01/19 18:55:11 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
20/01/19 18:55:11 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/01/19 18:55:11 WARN TransportChannelHandler: Exception in connection from /172.17.1.230:58560
java.io.IOException: An existing connection was forcibly closed by the remote host
The stderr log of the completed application ends with this output:

20/01/19 18:55:11 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 910 bytes result sent to driver
20/01/19 18:55:11 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 910 bytes result sent to driver
20/01/19 18:55:11 INFO CoarseGrainedExecutorBackend: Driver commanded a shutdown
Worker log output:

20/01/19 18:55:06 INFO ExecutorRunner: Launch command: "C:\Program Files\Java\jdk1.8.0_231\bin\java" "-cp" "C:\Users\nikol\Spark\bin\..\conf\;C:\Users\nikol\Spark\jars\*" "-Xmx1024M" "-Dspark.driver.port=58484" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@Nikola-PC:58484" "--executor-id" "0" "--hostname" "172.17.1.230" "--cores" "12" "--app-id" "app-20200119185506-0001" "--worker-url" "spark://Worker@172.17.1.230:58069"
20/01/19 18:55:11 INFO Worker: Asked to kill executor app-20200119185506-0001/0
20/01/19 18:55:11 INFO ExecutorRunner: Runner thread for executor app-20200119185506-0001/0 interrupted
20/01/19 18:55:11 INFO ExecutorRunner: Killing process!
20/01/19 18:55:11 INFO Worker: Executor app-20200119185506-0001/0 finished with state KILLED exitStatus 1
I have tried Spark 2.4.4 built for both Hadoop 2.6 and Hadoop 2.7. The problem persists in both cases.


This question is the same as an existing one.

Check the executor logs, you may be getting an OOM error.

If by executor logs you mean the stdout and stderr logs, stdout is empty and stderr shows no problem. I have included the last 3 lines of stderr above.
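If memory were the culprit, executor and driver memory can be raised at submit time. A sketch of the same submit command with explicit memory settings (the 2g values are arbitrary examples, not a recommendation for this workload):

```shell
# spark-submit accepts --executor-memory and --driver-memory flags;
# the jar path matches the Windows layout used in the question.
spark-submit --master spark://172.17.1.230:7077 --class org.apache.spark.examples.JavaSparkPi --executor-memory 2g --driver-memory 2g .\examples\jars\spark-examples_2.11-2.4.4.jar
```

An OOM would normally show up in the executor stderr as a `java.lang.OutOfMemoryError`, which is absent here, so memory does not appear to be the cause in this case.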