Scala Spark standalone mode: workers are not stopped correctly
Using Spark (0.7.0), not all workers stop correctly. More specifically, if I start the cluster with
$SPARK_HOME/bin/start-all.sh
I get:
host1: starting spark.deploy.worker.Worker, logging to [...]
host3: starting spark.deploy.worker.Worker, logging to [...]
host2: starting spark.deploy.worker.Worker, logging to [...]
host5: starting spark.deploy.worker.Worker, logging to [...]
host4: spark.deploy.worker.Worker running as process 8104. Stop it first.
host7: spark.deploy.worker.Worker running as process 32452. Stop it first.
host6: starting spark.deploy.worker.Worker, logging to [...]
On host4 and host7, there is indeed a StandaloneExecutorBackend still running:
$ jps
27703 Worker
27763 StandaloneExecutorBackend
28601 Jps
Simply repeating
$SPARK_HOME/bin/stop-all.sh
unfortunately does not stop the workers either. Spark only tells me that the workers are about to be stopped:
host2: no spark.deploy.worker.Worker to stop
host7: stopping spark.deploy.worker.Worker
host1: no spark.deploy.worker.Worker to stop
host4: stopping spark.deploy.worker.Worker
host6: no spark.deploy.worker.Worker to stop
host5: no spark.deploy.worker.Worker to stop
host3: no spark.deploy.worker.Worker to stop
no spark.deploy.master.Master to stop
However,
$ jps
27703 Worker
27763 StandaloneExecutorBackend
28601 Jps
shows that they are still running.
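As a stopgap, the leftover daemons can be killed by hand on the affected hosts. A minimal sketch, using the sample `jps` output from above as input; the filtering pattern is my own assumption, not part of Spark's tooling:

```shell
#!/bin/sh
# Sample `jps` output from a host with a leftover executor (copied from above).
jps_output="27703 Worker
27763 StandaloneExecutorBackend
28601 Jps"

# Extract the PIDs of the Spark daemons, skipping the Jps process itself.
# On a real host you would feed `jps` directly into awk instead.
pids=$(printf '%s\n' "$jps_output" \
  | awk '$2 == "Worker" || $2 == "StandaloneExecutorBackend" {print $1}')

echo "$pids"
# prints 27703 and 27763, one per line
# On the affected hosts: kill $pids   (or kill -9 as a last resort)
```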
Does anyone know how to make stop-all.sh work correctly?
Thanks.

The cause seemed to be that the attempt to cache the whole dataset led to heavy swapping on the worker machines. In that case, the number of worker machines was simply too small for the dataset.
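If swapping is indeed the culprit, one mitigation in that Spark generation was to cap the per-node JVM heap in conf/spark-env.sh so the cache cannot outgrow physical RAM. A hedged sketch (SPARK_MEM was the 0.7-era setting and the value here is only an example; later versions use spark.executor.memory instead):

```shell
# conf/spark-env.sh -- sourced by the standalone scripts on each node.
# Cap the per-node JVM heap so caching cannot push the machine into swap.
# (SPARK_MEM is the Spark 0.7-era knob; 4g is an example value, not a recommendation.)
export SPARK_MEM=4g
```

Alternatively, caching less of the dataset (or adding worker machines, as noted above) removes the swapping pressure at the source.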