Apache Spark: unable to run SparkPi on an Apache Spark cluster

Below is my Spark master UI, which shows one registered worker. I am trying to run the SparkPi application on the cluster using the following submit script:

 ./bin/spark-submit \
   --class org.apache.spark.examples.SparkPi \
   --master spark://159.8.201.251:7077 \
   /opt/Spark/spark-1.2.1-bin-cdh4/lib/spark-examples-1.2.1-hadoop2.0.0-mr1-cdh4.2.0.jar \
   1
But it keeps printing the following warning and never finishes executing:

 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
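One common cause of this warning (worth ruling out here, given the 1-core/1 GB workers mentioned in the comments below) is the job requesting more memory or cores than any registered worker offers. A sketch of the same submission with executor resources capped explicitly, using the standard spark-submit flags for standalone mode:

 ./bin/spark-submit \
   --class org.apache.spark.examples.SparkPi \
   --master spark://159.8.201.251:7077 \
   --executor-memory 512M \
   --total-executor-cores 1 \
   /opt/Spark/spark-1.2.1-bin-cdh4/lib/spark-examples-1.2.1-hadoop2.0.0-mr1-cdh4.2.0.jar \
   1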
I started the master using

 ./sbin/start-master.sh

and connected the worker using

 ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://x.x.x.x:7077
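For reference, a minimal sketch of the standalone-cluster settings that usually accompany these scripts in conf/spark-env.sh on each node (the values below are illustrative assumptions, reusing the master address from the submit script, not taken from the original post):

 # conf/spark-env.sh -- sourced by the start scripts on each node
 export SPARK_MASTER_IP=159.8.201.251   # address workers use to reach the master (illustrative)
 export SPARK_WORKER_CORES=1            # CPU cores this worker offers to executors
 export SPARK_WORKER_MEMORY=1g          # memory this worker offers to executors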

Logs on the master (repeating continuously):

 15/05/01 01:16:48 INFO AppClient$ClientActor: Executor added: app-20150501005353-0000/40 on worker-20150501004757-spark-worker30-04-2015-23-11-51-1.abc.com-48624 (spark-worker30-04-2015-23-11-51-1.abc.com:48624) with 1 cores
 15/05/01 01:16:48 INFO SparkDeploySchedulerBackend: Granted executor ID app-20150501005353-0000/40 on hostPort spark-worker30-04-2015-23-11-51-1.abc.com:48624 with 1 cores, 512.0 MB RAM
 15/05/01 01:16:48 INFO AppClient$ClientActor: Executor updated: app-20150501005353-0000/40 is now RUNNING
 15/05/01 01:16:48 INFO AppClient$ClientActor: Executor updated: app-20150501005353-0000/40 is now LOADING
 15/05/01 01:16:55 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
 15/05/01 01:17:10 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
 15/05/01 01:17:23 INFO AppClient$ClientActor: Executor updated: app-20150501005353-0000/40 is now EXITED (Command exited with code 1)
 15/05/01 01:17:23 INFO SparkDeploySchedulerBackend: Executor app-20150501005353-0000/40 removed: Command exited with code 1
 15/05/01 01:17:23 ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 40

Logs on the worker (repeating continuously):

 15/05/01 01:13:56 INFO Worker: Executor app-20150501005353-0000/34 finished with state EXITED message Command exited with code 1 exitStatus 1
 15/05/01 01:13:56 INFO Worker: Asked to launch executor app-20150501005353-0000/35 for Spark Pi
 Spark assembly has been built with Hive, including Datanucleus jars on classpath
 15/05/01 01:13:58 INFO ExecutorRunner: Launch command: "java" "-cp" "::/opt/Spark/spark-1.2.1-bin-cdh4/conf:/opt/Spark/spark-1.2.1-bin-cdh4/lib/spark-assembly-1.2.1-hadoop2.0.0-mr1-cdh4.2.0.jar:/opt/Spark/spark-1.2.1-bin-cdh4/lib/datanucleus-core-3.2.10.jar:/opt/Spark/spark-1.2.1-bin-cdh4/lib/datanucleus-rdbms-3.2.9.jar:/opt/Spark/spark-1.2.1-bin-cdh4/lib/datanucleus-api-jdo-3.2.6.jar" "-Dspark.driver.port=48714" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://sparkDriver@spark-master-node30-04-2015-23-01-40.abc.com:48714/user/CoarseGrainedScheduler" "35" "spark-worker30-04-2015-23-11-51-1.abc.com" "1" "app-20150501005353-0000" "akka.tcp://sparkWorker@spark-worker30-04-2015-23-11-51-1.abc.com:48624/user/Worker"
 15/05/01 01:14:31 INFO Worker: Executor app-20150501005353-0000/35 finished with state EXITED message Command exited with code 1 exitStatus 1
 15/05/01 01:14:31 INFO Worker: Asked to launch executor app-20150501005353-0000/36 for Spark Pi

Comments:

"How do you initialize the Spark cluster? Please add the commands." "I have edited the question to include the commands."
"Did you try running the command on your own PC?" "No, they are all VMs with 1 core and 1 GB of RAM, which is not enough for the SparkPi application."
"Could you provide an example hosts file, please?"

Answer:

The cause of this error: the workers could not actually connect to the master node, because the IP address and hostname of the Spark master were missing from each worker's /etc/hosts file. For the cluster to work, every node must have an entry for every other node of the cluster in its /etc/hosts file. For example:

 127.0.0.1 localhost.localdomain localhost
 10.0.2.12 master.example.com master
 10.0.2.13 worker1.example.com worker1
 10.0.2.14 worker2.example.com worker2
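A quick way to confirm the fix, assuming the example hostnames above: from each worker, check that the master's name resolves through /etc/hosts and that the standalone master port is reachable:

 # run on each worker node (and, symmetrically, on the master for each worker)
 getent hosts master.example.com   # should print the IP taken from /etc/hosts
 ping -c 1 master.example.com      # basic network reachability
 nc -vz master.example.com 7077    # is the Spark master port open? (requires netcat)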