Apache Spark: how to correctly submit a Spark job on a standalone cluster


I just built a Spark 2.0 standalone single-node cluster on Ubuntu 14 and am trying to submit a pyspark job:

~/spark/spark-2.0.0$ bin/spark-submit --driver-memory 1024m --executor-memory 1024m  --executor-cores 1 --master spark://ip-10-180-191-14:7077 examples/src/main/python/pi.py
Spark gives me this message:

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Here is the full output:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/07/27 17:45:18 INFO SparkContext: Running Spark version 2.0.0
16/07/27 17:45:18 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/27 17:45:18 INFO SecurityManager: Changing view acls to: ubuntu
16/07/27 17:45:18 INFO SecurityManager: Changing modify acls to: ubuntu
16/07/27 17:45:18 INFO SecurityManager: Changing view acls groups to:
16/07/27 17:45:18 INFO SecurityManager: Changing modify acls groups to:
16/07/27 17:45:18 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(ubuntu); groups with view permissions: Set(); users  with modify permissions: Set(ubuntu); groups with modify permissions: Set()
16/07/27 17:45:19 INFO Utils: Successfully started service 'sparkDriver' on port 36842.
16/07/27 17:45:19 INFO SparkEnv: Registering MapOutputTracker
16/07/27 17:45:19 INFO SparkEnv: Registering BlockManagerMaster
16/07/27 17:45:19 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-e25f3ae9-be1f-4ea3-8f8b-b3ff3ec7e978
16/07/27 17:45:19 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
16/07/27 17:45:19 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/27 17:45:19 INFO log: Logging initialized @1986ms
16/07/27 17:45:19 INFO Server: jetty-9.2.16.v20160414
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@4674e929{/jobs,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@1adab7c7{/jobs/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@26296937{/jobs/job,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@7ef4a753{/jobs/job/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@1f282405{/stages,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@5083cca8{/stages/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@3d8e675e{/stages/stage,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@661b8183{/stages/stage/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@384d9949{/stages/pool,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@7665e464{/stages/pool/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@381fc961{/storage,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@2325078{/storage/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@566116a6{/storage/rdd,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@f7e9eca{/storage/rdd/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@496c0a85{/environment,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@59cd2240{/environment/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@747dbf9{/executors,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@7c349d15{/executors/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@55259834{/executors/threadDump,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@65ca7ff2{/executors/threadDump/json,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@5c6be8a1{/static,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@4ef1a0c{/,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@7df2d69d{/api,null,AVAILABLE}
16/07/27 17:45:19 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@4b71033e{/stages/stage/kill,null,AVAILABLE}
16/07/27 17:45:19 INFO ServerConnector: Started ServerConnector@646986bc{HTTP/1.1}{0.0.0.0:4040}
16/07/27 17:45:19 INFO Server: Started @2150ms
16/07/27 17:45:19 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/27 17:45:19 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.180.191.14:4040
16/07/27 17:45:19 INFO Utils: Copying /home/ubuntu/spark/spark-2.0.0/examples/src/main/python/pi.py to /tmp/spark-ee1ceb06-a7c4-4b18-8577-adb02f97f31e/userFiles-565d5e0b-5879-40d3-8077-d9d782156818/pi.py
16/07/27 17:45:19 INFO SparkContext: Added file file:/home/ubuntu/spark/spark-2.0.0/examples/src/main/python/pi.py at spark://10.180.191.14:36842/files/pi.py with timestamp 1469641519759
16/07/27 17:45:19 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://ip-10-180-191-14:7077...
16/07/27 17:45:19 INFO TransportClientFactory: Successfully created connection to ip-10-180-191-14/10.180.191.14:7077 after 25 ms (0 ms spent in bootstraps)
16/07/27 17:45:20 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20160727174520-0006
16/07/27 17:45:20 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 39047.
16/07/27 17:45:20 INFO NettyBlockTransferService: Server created on 10.180.191.14:39047
16/07/27 17:45:20 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.180.191.14, 39047)
16/07/27 17:45:20 INFO BlockManagerMasterEndpoint: Registering block manager 10.180.191.14:39047 with 366.3 MB RAM, BlockManagerId(driver, 10.180.191.14, 39047)
16/07/27 17:45:20 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.180.191.14, 39047)
16/07/27 17:45:20 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@2bc4029c{/metrics/json,null,AVAILABLE}
16/07/27 17:45:20 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/07/27 17:45:20 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@60378632{/SQL,null,AVAILABLE}
16/07/27 17:45:20 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@6491578b{/SQL/json,null,AVAILABLE}
16/07/27 17:45:20 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@9ae3f78{/SQL/execution,null,AVAILABLE}
16/07/27 17:45:20 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@3c80379{/SQL/execution/json,null,AVAILABLE}
16/07/27 17:45:20 INFO ContextHandler: Started o.e.j.s.ServletContextHandler@245146b3{/static/sql,null,AVAILABLE}
16/07/27 17:45:20 INFO SharedState: Warehouse path is 'file:/home/ubuntu/spark/spark-2.0.0/spark-warehouse'.
16/07/27 17:45:20 INFO SparkContext: Starting job: reduce at /home/ubuntu/spark/spark-2.0.0/examples/src/main/python/pi.py:43
16/07/27 17:45:20 INFO DAGScheduler: Got job 0 (reduce at /home/ubuntu/spark/spark-2.0.0/examples/src/main/python/pi.py:43) with 2 output partitions
16/07/27 17:45:20 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at /home/ubuntu/spark/spark-2.0.0/examples/src/main/python/pi.py:43)
16/07/27 17:45:20 INFO DAGScheduler: Parents of final stage: List()
16/07/27 17:45:20 INFO DAGScheduler: Missing parents: List()
16/07/27 17:45:20 INFO DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at reduce at /home/ubuntu/spark/spark-2.0.0/examples/src/main/python/pi.py:43), which has no missing parents
16/07/27 17:45:20 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 4.6 KB, free 366.3 MB)
16/07/27 17:45:21 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 3.0 KB, free 366.3 MB)
16/07/27 17:45:21 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.180.191.14:39047 (size: 3.0 KB, free: 366.3 MB)
16/07/27 17:45:21 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1012
16/07/27 17:45:21 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (PythonRDD[1] at reduce at /home/ubuntu/spark/spark-2.0.0/examples/src/main/python/pi.py:43)
16/07/27 17:45:21 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/07/27 17:45:36 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
16/07/27 17:45:51 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
I am not running Spark on Hadoop or YARN, just standalone.
How can I get Spark to actually run these jobs?

Try setting the master to local so that the job runs in local mode:

~/spark/spark-2.0.0$ bin/spark-submit --driver-memory 1024m --executor-memory 1024m  --executor-cores 1 --master local[2] examples/src/main/python/pi.py
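With --master local[2], Spark runs the driver and two worker threads inside a single JVM on the submitting machine, so no standalone workers need to be registered with a master. That sidesteps the "Initial job has not accepted any resources" warning rather than fixing the cluster itself.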
You may also need to use the

--py-files

option as well, if your job depends on extra Python modules.
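For example, a sketch of shipping extra modules alongside the job (deps.zip and mymodule/ are hypothetical names, not from the question):

# Package any extra Python modules the job imports, then ship the
# archive to the executors with --py-files (deps.zip is illustrative):
~/spark/spark-2.0.0$ zip -r deps.zip mymodule/
~/spark/spark-2.0.0$ bin/spark-submit --master local[2] --py-files deps.zip examples/src/main/python/pi.py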

As mentioned above, setting the master to local only makes your program run in local mode - fine for beginners and small loads on a single machine - but it does not configure it to run on a cluster. To run your program on a real cluster (possibly spanning multiple machines), you need to start the master and the slaves with the scripts shipped in Spark's sbin directory:

sbin/start-master.sh

Your slaves (there must be at least one) should be started with:

sbin/start-slave.sh spark://<master-hostname>:7077
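A minimal sketch for this single-node setup, assuming the master hostname from the question's spark-submit command (ip-10-180-191-14) and the default port; your hostname will likely differ:

# On the master host: start the standalone master (UI on port 8080)
~/spark/spark-2.0.0$ sbin/start-master.sh

# On each worker host (here the same machine): register a worker with the master
~/spark/spark-2.0.0$ sbin/start-slave.sh spark://ip-10-180-191-14:7077

# Then resubmit the job against the cluster master as before
~/spark/spark-2.0.0$ bin/spark-submit --driver-memory 1024m --executor-memory 1024m --executor-cores 1 --master spark://ip-10-180-191-14:7077 examples/src/main/python/pi.py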

This way you will be able to run in real cluster mode - the UI will show your workers, jobs, and so on. You will see the master UI on port 8080 of the master host. Port 4040 on the machine running the driver shows the application UI. Port 8081 shows the worker UI (if you run several slaves on the same machine, the first gets port 8081, the second 8082, and so on).

You can run as many slaves as you want, from as many machines as you want, and give each slave some number of cores (you can run several slaves on the same machine - just give them appropriate amounts of cores/RAM so you do not confuse the scheduler).
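A sketch of how that capping might look; --cores and --memory are standard start-slave.sh worker options in Spark 2.0, and SPARK_WORKER_INSTANCES controls how many worker instances the script launches, but the values here (2 workers, 2 cores and 2 GB each) are illustrative assumptions:

# Two workers on one machine, each capped at 2 cores and 2 GB of RAM,
# so together they cannot oversubscribe the host:
~/spark/spark-2.0.0$ SPARK_WORKER_INSTANCES=2 sbin/start-slave.sh spark://ip-10-180-191-14:7077 --cores 2 --memory 2g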